Content creation, central to applications such as virtual reality, can be tedious and time-consuming. Recent image synthesis methods simplify this task by offering tools to generate new views from as little as a single input image, or by converting a semantic map into a photorealistic image. We propose to push the envelope further, and introduce Generative View Synthesis (GVS), which can synthesize multiple photorealistic views of a scene given a single semantic map. We show that the sequential application of existing techniques, e.g., semantics-to-image translation followed by monocular view synthesis, fails to capture the scene's structure. In contrast, we solve the semantics-to-image translation in concert with the estimation of the 3D layout of the scene, thus producing geometrically consistent novel views that preserve semantic structures. We first lift the input 2D semantic map onto a 3D layered representation of the scene in feature space, thereby preserving the semantic labels of 3D geometric structures. We then project the layered features onto the target views to generate the final novel-view images. We verify the strengths of our method and compare it with several advanced baselines on three different datasets. Our approach also allows for style manipulation and image editing operations, such as the addition or removal of objects, with simple manipulations of the input style images and semantic maps, respectively. For code and additional results, visit the project page at https://gvsnet.github.io.
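To make the pipeline sketched in the abstract concrete, below is a minimal, illustrative PyTorch example of the two stages it describes: lifting the input semantic map into a layered (multi-plane) feature representation, warping each layer toward a target viewpoint, and alpha-compositing the warped layers into a target-view feature map. Everything here is an assumption for illustration rather than the authors' implementation: the module and function names (LayeredSemanticLifting, warp_planes, composite_back_to_front), the plane count, the feature dimensions, the simplified fronto-parallel, translation-only warp, and the omitted RGB decoder.

```python
# Minimal sketch (assumptions flagged): lift a semantic map to layered features,
# warp each layer to a target view, and alpha-composite the warped layers.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LayeredSemanticLifting(nn.Module):
    """Hypothetical module: predict per-plane features and alphas from a semantic map."""

    def __init__(self, num_classes: int, num_planes: int = 8, feat_dim: int = 16):
        super().__init__()
        self.num_planes, self.feat_dim = num_planes, feat_dim
        # A small CNN predicts (features + alpha) for every depth plane; the real
        # network architecture is an assumption here.
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_planes * (feat_dim + 1), 3, padding=1),
        )

    def forward(self, semantics):                       # (B, C, H, W) one-hot labels
        out = self.net(semantics)
        b, _, h, w = out.shape
        out = out.view(b, self.num_planes, self.feat_dim + 1, h, w)
        feats, alpha = out[:, :, :-1], torch.sigmoid(out[:, :, -1:])
        return feats, alpha                             # (B, P, F, H, W), (B, P, 1, H, W)


def warp_planes(feats, alpha, baseline_px, inv_depths):
    """Warp each fronto-parallel plane by its disparity (translation-only camera, an
    assumption; the full method would use the actual target-view projection)."""
    b, p, f, h, w = feats.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h), torch.linspace(-1.0, 1.0, w), indexing="ij")
    warped_f, warped_a = [], []
    for k in range(p):
        # Disparity of plane k, converted to normalized grid coordinates.
        shift = 2.0 * baseline_px * float(inv_depths[k]) / max(w - 1, 1)
        grid = torch.stack((xs + shift, ys), dim=-1).expand(b, h, w, 2)
        warped_f.append(F.grid_sample(feats[:, k], grid, align_corners=True))
        warped_a.append(F.grid_sample(alpha[:, k], grid, align_corners=True))
    return torch.stack(warped_f, 1), torch.stack(warped_a, 1)


def composite_back_to_front(feats, alpha):
    """Standard 'over' compositing; planes are assumed ordered farthest -> nearest."""
    out = torch.zeros_like(feats[:, 0])
    for k in range(feats.shape[1]):
        out = feats[:, k] * alpha[:, k] + out * (1.0 - alpha[:, k])
    return out                                          # (B, F, H, W) target-view features


if __name__ == "__main__":
    labels = torch.randint(0, 19, (1, 64, 64))          # e.g. 19 semantic classes
    semantics = F.one_hot(labels, 19).permute(0, 3, 1, 2).float()
    lift = LayeredSemanticLifting(num_classes=19)
    feats, alpha = lift(semantics)
    inv_depths = torch.linspace(0.1, 1.0, lift.num_planes)   # far -> near planes
    wf, wa = warp_planes(feats, alpha, baseline_px=4.0, inv_depths=inv_depths)
    target_feats = composite_back_to_front(wf, wa)
    print(target_feats.shape)                           # torch.Size([1, 16, 64, 64])
    # A separate image decoder (omitted) would map target_feats to the novel-view RGB.
```

In the method described in the abstract, the layered features would be projected using the actual target camera pose; the single per-plane horizontal disparity above merely stands in for that projection, and a decoder network would then turn the composited features into the final photorealistic image.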
Author Information
Tewodros Amberbir Habtegebrial (Technische Universität Kaiserslautern)
Varun Jampani (Google)
Orazio Gallo (NVIDIA Research)
Didier Stricker (DFKI)
More from the Same Authors
- 2021 Spotlight: ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction
  Gengshan Yang · Deqing Sun · Varun Jampani · Daniel Vlasic · Forrester Cole · Ce Liu · Deva Ramanan
- 2022 Poster: LASSIE: Learning Articulated Shapes from Sparse Image Ensemble via 3D Part Discovery
  Chun-Han Yao · Wei-Chih Hung · Yuanzhen Li · Michael Rubinstein · Ming-Hsuan Yang · Varun Jampani
- 2022 Poster: SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections
  Mark Boss · Andreas Engelhardt · Abhishek Kar · Yuanzhen Li · Deqing Sun · Jonathan Barron · Hendrik PA Lensch · Varun Jampani
- 2022 Poster: Subsidiary Prototype Alignment for Universal Domain Adaptation
  Jogendra Nath Kundu · Suvaansh Bhambri · Akshay R Kulkarni · Hiran Sarkar · Varun Jampani · Venkatesh Babu R
- 2022 Poster: Polynomial Neural Fields for Subband Decomposition and Manipulation
  Guandao Yang · Sagie Benaim · Varun Jampani · Kyle Genova · Jonathan Barron · Thomas Funkhouser · Bharath Hariharan · Serge Belongie
- 2021 Poster: Robust Visual Reasoning via Language Guided Neural Module Networks
  Arjun Akula · Varun Jampani · Soravit Changpinyo · Song-Chun Zhu
- 2021 Poster: Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition
  Mark Boss · Varun Jampani · Raphael Braun · Ce Liu · Jonathan Barron · Hendrik PA Lensch
- 2021 Poster: ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction
  Gengshan Yang · Deqing Sun · Varun Jampani · Daniel Vlasic · Forrester Cole · Ce Liu · Deva Ramanan
- 2021 Poster: Non-local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation
  Jogendra Nath Kundu · Siddharth Seth · Anirudh Jamkhandi · Pradyumna YM · Varun Jampani · Anirban Chakraborty · Venkatesh Babu R
- 2021 Poster: Aligning Silhouette Topology for Self-Adaptive 3D Human Pose Recovery
  Ramesha Rakesh Mugaludi · Jogendra Nath Kundu · Varun Jampani · Venkatesh Babu R