Richly segmented 3D scene reconstructions are an integral basis for many high-level scene understanding tasks, such as robotics, motion planning, or augmented reality. Existing works in 3D perception from a single RGB image tend to focus either on geometric reconstruction alone, or on geometric reconstruction combined with semantic or instance segmentation. Inspired by 2D panoptic segmentation, we propose to unify the tasks of geometric reconstruction, 3D semantic segmentation, and 3D instance segmentation into the task of panoptic 3D scene reconstruction: from a single RGB image, predicting the complete geometric reconstruction of the scene within the camera frustum of the image, along with semantic and instance segmentations. We propose a new approach for holistic 3D scene understanding from a single RGB image, which learns to lift and propagate 2D features from the input image to a 3D volumetric scene representation. Our panoptic 3D reconstruction metric evaluates both geometric reconstruction quality and panoptic segmentation. Our experiments demonstrate that our approach for panoptic 3D scene reconstruction outperforms alternative approaches for this task.
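The abstract's core idea of lifting 2D image features into a 3D volumetric frustum representation can be illustrated with a minimal sketch. This is not the authors' released implementation; the function name `lift_features_to_frustum`, its parameters, and the simple back-project-and-bilinearly-sample strategy are illustrative assumptions, written with PyTorch.

```python
# Hypothetical sketch: lift 2D image features into a 3D camera-frustum volume
# by projecting each voxel center into the image and sampling the feature map.
import torch
import torch.nn.functional as F

def lift_features_to_frustum(feat2d, intrinsics, grid_min, voxel_size, grid_dims):
    """feat2d: (1, C, H, W) feature map from a 2D image encoder.
    intrinsics: (3, 3) pinhole camera matrix.
    grid_min: (3,) origin of the voxel grid in camera coordinates (meters).
    voxel_size: scalar voxel edge length in meters.
    grid_dims: (X, Y, Z) number of voxels per axis.
    Returns a (1, C, X, Y, Z) volume of lifted features."""
    _, C, H, W = feat2d.shape
    X, Y, Z = grid_dims
    # Voxel-center coordinates in camera space.
    xs = grid_min[0] + (torch.arange(X) + 0.5) * voxel_size
    ys = grid_min[1] + (torch.arange(Y) + 0.5) * voxel_size
    zs = grid_min[2] + (torch.arange(Z) + 0.5) * voxel_size
    gx, gy, gz = torch.meshgrid(xs, ys, zs, indexing="ij")
    pts = torch.stack([gx, gy, gz], dim=-1).reshape(-1, 3)        # (N, 3)
    # Perspective projection of voxel centers into the image plane.
    uvz = pts @ intrinsics.T                                      # (N, 3)
    uv = uvz[:, :2] / uvz[:, 2:].clamp(min=1e-6)                  # pixel coords
    # Normalize to [-1, 1] and bilinearly sample the 2D features.
    u = uv[:, 0] / (W - 1) * 2 - 1
    v = uv[:, 1] / (H - 1) * 2 - 1
    grid = torch.stack([u, v], dim=-1).view(1, 1, -1, 2)          # (1, 1, N, 2)
    sampled = F.grid_sample(feat2d, grid, align_corners=True)     # (1, C, 1, N)
    vol = sampled.view(1, C, X, Y, Z)
    # Zero out voxels behind the camera or projecting outside the image.
    valid = (uvz[:, 2] > 0) & (u.abs() <= 1) & (v.abs() <= 1)
    return vol * valid.view(1, 1, X, Y, Z)
```

In such a setup, the resulting feature volume would typically be passed to a 3D convolutional backbone that predicts occupancy/geometry along with per-voxel semantic and instance labels; the sketch only covers the 2D-to-3D lifting step described in the abstract.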
Author Information
Manuel Dahnert (Technical University of Munich)
Ji Hou (Technical University of Munich)
Matthias Niessner (Technical University of Munich)
Angela Dai (Technical University of Munich)
More from the Same Authors
- 2022 Poster: PatchComplete: Learning Multi-Resolution Patch Priors for 3D Shape Completion on Unseen Categories »
  Yuchen Rao · Yinyu Nie · Angela Dai
- 2022 Spotlight: Lightning Talks 6A-3 »
  Junyu Xie · Chengliang Zhong · Ali Ayub · Sravanti Addepalli · Harsh Rangwani · Jiapeng Tang · Yuchen Rao · Zhiying Jiang · Yuqi Wang · Xingzhe He · Gene Chou · Ilya Chugunov · Samyak Jain · Yuntao Chen · Weidi Xie · Sumukh K Aithal · Carter Fendley · Lev Markhasin · Yiqin Dai · Peixing You · Bastian Wandt · Yinyu Nie · Helge Rhodin · Felix Heide · Ji Xin · Angela Dai · Andrew Zisserman · Bi Wang · Xiaoxue Chen · Mayank Mishra · ZHAO-XIANG ZHANG · Venkatesh Babu R · Justus Thies · Ming Li · Hao Zhao · Venkatesh Babu R · Jimmy Lin · Fuchun Sun · Matthias Niessner · Guyue Zhou · Xiaodong Mu · Chuang Gan · Wenbing Huang
- 2022 Spotlight: PatchComplete: Learning Multi-Resolution Patch Priors for 3D Shape Completion on Unseen Categories »
  Yuchen Rao · Yinyu Nie · Angela Dai
- 2022 Spotlight: Neural Shape Deformation Priors »
  Jiapeng Tang · Lev Markhasin · Bi Wang · Justus Thies · Matthias Niessner
- 2022 Spotlight: 3DILG: Irregular Latent Grids for 3D Generative Modeling »
  Biao Zhang · Matthias Niessner · Peter Wonka
- 2022 Poster: Neural Shape Deformation Priors »
  Jiapeng Tang · Lev Markhasin · Bi Wang · Justus Thies · Matthias Niessner
- 2022 Poster: 3DILG: Irregular Latent Grids for 3D Generative Modeling »
  Biao Zhang · Matthias Niessner · Peter Wonka
- 2022 Poster: The Unreasonable Effectiveness of Fully-Connected Layers for Low-Data Regimes »
  Peter Kocsis · Peter Súkeník · Guillem Braso · Matthias Niessner · Laura Leal-Taixé · Ismail Elezi
- 2021 Poster: TransformerFusion: Monocular RGB Scene Reconstruction using Transformers »
  Aljaz Bozic · Pablo Palafox · Justus Thies · Angela Dai · Matthias Niessner
- 2020: Angela Dai - Self-supervised generation of 3D shapes and scenes »
  Angela Dai
- 2020 Poster: Neural Non-Rigid Tracking »
  Aljaz Bozic · Pablo Palafox · Michael Zollhöfer · Angela Dai · Justus Thies · Matthias Niessner