Poster
SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections
Mark Boss · Andreas Engelhardt · Abhishek Kar · Yuanzhen Li · Deqing Sun · Jonathan Barron · Hendrik PA Lensch · Varun Jampani

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #522

Inverse rendering of an object under entirely unknown capture conditions is a fundamental challenge in computer vision and graphics. Neural approaches such as NeRF have achieved photorealistic results on novel view synthesis, but they require known camera poses. Solving this problem with unknown camera poses is highly challenging, as it requires joint optimization over shape, radiance, and pose. The problem is exacerbated when the input images are captured in the wild with varying backgrounds and illuminations. Standard pose estimation techniques fail on such in-the-wild image collections because too few reliable correspondences can be estimated across images. Furthermore, NeRF cannot relight a scene under novel illumination, as it operates on radiance (the product of reflectance and illumination). We propose a joint optimization framework to estimate the shape, BRDF, and per-image camera pose and illumination. Our method works on in-the-wild online image collections of an object and produces relightable 3D assets for use cases such as AR/VR. To our knowledge, our method is the first to tackle this severely unconstrained task with minimal user interaction.
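The distinction the abstract draws can be made precise with the standard rendering equation (textbook notation, not taken from the paper itself): NeRF models the outgoing radiance \(L_o\) directly, whereas inverse rendering factors it into a BRDF \(f_r\) and incident illumination \(L_i\), and it is this factorization that permits relighting.

```latex
% Outgoing radiance at surface point x in direction w_o, with surface
% normal n, hemisphere of incoming directions Omega, BRDF f_r, and
% incident illumination L_i (standard rendering-equation notation):
\[
L_o(\mathbf{x}, \omega_o)
  = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,
    (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
\]
% NeRF learns only the left-hand side L_o; recovering f_r and L_i
% separately (as the proposed framework does) allows substituting a
% new L_i to relight the recovered object.
```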

Author Information

Mark Boss (Unity Technologies)
Andreas Engelhardt (University of Tuebingen)
Abhishek Kar (UC Berkeley)

Abhishek Kar is a 5th-year graduate student in Jitendra Malik's lab at UC Berkeley. He received his B.Tech in Computer Science from IIT Kanpur in 2012. Abhishek received the CVPR 2015 Best Student Paper award for his work on category-specific shape reconstruction. His research interests lie in 3D computer vision, deep learning, and computational photography. He has also spent time at Microsoft Research, working on viewing large imagery on mobile devices, and at Fyusion, capturing "3D photos" with mobile devices and developing deep learning models for them. Features he has shipped or worked on at Fyusion include 3D visual search, creation of user-generated AR/VR content, and real-time style transfer on mobile devices.

Yuanzhen Li (Massachusetts Institute of Technology)
Deqing Sun (Google)
Jonathan Barron (Google Research)
Hendrik PA Lensch (University of Tübingen)
Varun Jampani (Google Research)
