Generating 3D shapes and scenes is fundamental to comprehensively perceiving and understanding real-world environments. Recently, we have seen impressive progress in 3D shape generation and promising results in generating 3D scenes, largely relying on the availability of large-scale synthetic 3D datasets. However, applying these methods to real-world scenes remains challenging due to the domain gap between synthetic and real 3D data. In this talk, I will discuss a self-supervised approach to 3D scene generation from partial RGB-D observations, and propose new techniques for self-supervised training that generate both the geometry and the color of scenes.
Bio: Angela Dai is an Assistant Professor at the Technical University of Munich. Her research focuses on understanding how the 3D world around us can be modeled and semantically understood. She received her PhD in computer science from Stanford in 2018 and her BSE in computer science from Princeton in 2013. Her research has been recognized with a ZD.B Junior Research Group Award, an ACM SIGGRAPH Outstanding Doctoral Dissertation Award Honorable Mention, and a Stanford Graduate Fellowship.