DepR: Depth Guided Single-view Scene Reconstruction with Instance-level Diffusion

ICCV 2025
* Equal contribution    † Corresponding author
Project done while Qingcheng Zhao interned at UC San Diego.
¹ShanghaiTech University   ²UC San Diego   ³Lambda, Inc.   ⁴Stanford University

Abstract

We propose DepR, a depth-guided single-view scene reconstruction framework that integrates instance-level diffusion within a compositional paradigm. Instead of reconstructing the entire scene holistically, DepR generates individual objects and subsequently composes them into a coherent 3D layout.
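
As a concrete illustration of this compositional paradigm, the sketch below reconstructs each instance independently and then composes the results; generate_object and estimate_pose are hypothetical stand-ins for DepR's diffusion model and layout optimizer, not its released API.

  def reconstruct_scene(image, depth, instance_masks, generate_object, estimate_pose):
      # Reconstruct each detected instance on its own, then compose the
      # shapes into a single scene via their estimated poses.
      scene = []
      for mask in instance_masks:
          shape = generate_object(image, depth, mask)  # per-object 3D shape
          pose = estimate_pose(shape, depth, mask)     # placement in camera space
          scene.append((shape, pose))
      return scene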

Unlike previous methods that use depth solely for object layout estimation during inference, and therefore fail to fully exploit its rich geometric information, DepR leverages depth throughout both training and inference. Specifically, we introduce depth-guided conditioning to effectively encode shape priors into diffusion models. During inference, depth further guides DDIM sampling and layout optimization, enhancing alignment between the reconstruction and the input image. Despite being trained on limited synthetic data, DepR achieves state-of-the-art performance and demonstrates strong generalization in single-view scene reconstruction, as shown through evaluations on both synthetic and real-world datasets.
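
The depth guidance during sampling can be sketched as a classifier-guidance-style DDIM step: the gradient of a depth loss, evaluated on the predicted clean latent, nudges each denoising update. Here eps_model, alpha_bar, and depth_loss_fn are assumed interfaces for illustration, not DepR's actual modules.

  import torch

  @torch.enable_grad()
  def depth_guided_ddim_step(x_t, t, t_prev, eps_model, alpha_bar,
                             depth_loss_fn, scale=1.0):
      # One deterministic DDIM step (eta = 0) with a gradient nudge from a
      # differentiable depth loss on the predicted clean latent.
      x_t = x_t.detach().requires_grad_(True)
      a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
      eps = eps_model(x_t, t)
      x0 = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()       # predicted clean latent
      grad = torch.autograd.grad(depth_loss_fn(x0), x_t)[0]  # d(depth loss) / d(x_t)
      eps = eps + scale * (1 - a_t).sqrt() * grad            # shift the noise estimate
      x0 = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()       # re-derive x0 from guided eps
      return a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps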

Method

DepR Pipeline

Overview of DepR. Depth is utilized in three key stages: 1) to back-project image features into 3D, conditioning the latent tri-plane diffusion model that generates complete object shapes; 2) to guide the DDIM sampling process via gradients from a depth loss; and 3) to optimize object poses via a layout loss for accurate scene composition.
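
Stage 1's lifting step can be sketched as standard pinhole back-projection, assuming a metric depth map, per-pixel image features, and intrinsics K; DepR's actual conditioning additionally splats the lifted features onto tri-planes.

  import torch

  def backproject_features(depth, feats, K):
      # depth: (H, W) metric depth; feats: (C, H, W) image features;
      # K: (3, 3) pinhole intrinsics. Returns camera-space points and
      # the feature attached to each point.
      H, W = depth.shape
      v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
      uv1 = torch.stack([u, v, torch.ones_like(u)], dim=-1)  # homogeneous pixels (H, W, 3)
      rays = uv1 @ torch.linalg.inv(K).T                     # K^{-1} [u, v, 1]^T
      points = rays * depth.unsqueeze(-1)                    # scale each ray by its depth
      return points.reshape(-1, 3), feats.permute(1, 2, 0).reshape(-1, feats.shape[0])

Stage 3 can likewise be sketched as gradient descent on per-object pose parameters against the observed depth, with render_depth standing in for a differentiable renderer of the composed scene.

  def optimize_layout(poses_init, render_depth, observed_depth, iters=200, lr=1e-2):
      # poses_init: (N, D) pose parameters for N objects; render_depth(poses)
      # is assumed to return a differentiable (H, W) depth map.
      poses = poses_init.clone().requires_grad_(True)
      opt = torch.optim.Adam([poses], lr=lr)
      for _ in range(iters):
          opt.zero_grad()
          loss = (render_depth(poses) - observed_depth).abs().mean()  # L1 depth discrepancy
          loss.backward()
          opt.step()
      return poses.detach()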

Comparisons on 3D-FRONT

Qualitative comparisons on the synthetic 3D-FRONT dataset

Comparisons on Pix3D and Our Own Images

Qualitative comparisons on the Pix3D dataset and our own images

BibTeX


  @article{zhao2025depr,
    title={DepR: Depth Guided Single-view Scene Reconstruction with Instance-level Diffusion},
    author={Zhao, Qingcheng and Zhang, Xiang and Xu, Haiyang and Chen, Zeyuan and Xie, Jianwen and Gao, Yuan and Tu, Zhuowen},
    journal={arXiv preprint arXiv:2507.22825},
    year={2025}
  }