

Poster

Zero-Shot Scene Reconstruction from Single Images with Deep Prior Assembly

Junsheng Zhou · Yu-Shen Liu · Zhizhong Han


Abstract:

Large language and vision models have been leading a revolution in visual computing. By greatly scaling up data and model parameters, large models learn deep priors that lead to remarkable performance across various tasks. In this work, we present deep prior assembly, a novel framework that assembles diverse deep priors from large models for scene generation from single images in a zero-shot manner. We show that this challenging task can be accomplished without extra knowledge, simply by generalizing each deep prior to one sub-task. To this end, we introduce novel methods for pose and scale estimation and for occlusion parsing, which are key to enabling the deep priors to work together robustly. Deep prior assembly requires no 3D or 2D data-driven training for this task and demonstrates superior performance in generalizing priors to open-world scenes. We conduct evaluations on large-scale datasets and report analyses, along with numerical and visual comparisons against the latest methods, to demonstrate the advantages of our approach.
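To make the assembly idea more concrete, below is a minimal, hypothetical sketch of a zero-shot pipeline in this spirit. Every function name (segment_instances, inpaint_occlusions, generate_3d_from_patch, fit_pose_and_scale) is an illustrative placeholder rather than the authors' interface, and each pre-trained prior is replaced by a trivial stub; the sketch only shows how per-instance priors might be chained and composed into one scene.

```python
# Illustrative sketch only: the pre-trained "deep priors" are replaced by
# trivial stubs, and all names are hypothetical placeholders, not the
# authors' actual implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class Instance:
    patch: str          # stand-in for a cropped, occlusion-parsed image region
    label: str


@dataclass
class PlacedObject:
    mesh: str           # stand-in for a generated 3D object
    pose: tuple         # illustrative (x, y, z) translation
    scale: float


def segment_instances(image: str) -> List[Instance]:
    """Stub for an open-vocabulary segmentation prior that finds object instances."""
    return [Instance(patch=f"{image}:chair", label="chair"),
            Instance(patch=f"{image}:table", label="table")]


def inpaint_occlusions(inst: Instance) -> Instance:
    """Stub for a generative inpainting prior that completes occluded regions."""
    return Instance(patch=inst.patch + ":completed", label=inst.label)


def generate_3d_from_patch(inst: Instance) -> str:
    """Stub for a single-image-to-3D prior applied to one completed object patch."""
    return f"mesh({inst.label})"


def fit_pose_and_scale(mesh: str, inst: Instance) -> PlacedObject:
    """Stub for estimating a pose and scale that place the object back into the scene."""
    return PlacedObject(mesh=mesh, pose=(0.0, 0.0, 1.0), scale=1.0)


def assemble_scene(image: str) -> List[PlacedObject]:
    """Chain the priors: segment, complete, lift to 3D, then place in a shared scene."""
    scene = []
    for inst in segment_instances(image):
        completed = inpaint_occlusions(inst)
        mesh = generate_3d_from_patch(completed)
        scene.append(fit_pose_and_scale(mesh, completed))
    return scene


if __name__ == "__main__":
    for obj in assemble_scene("single_view.png"):
        print(obj)
```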
