Poster
Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer
Wenzheng Chen · Huan Ling · Jun Gao · Edward Smith · Jaakko Lehtinen · Alec Jacobson · Sanja Fidler

Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #92

Many machine learning models operate on images, but ignore the fact that images are 2D projections formed by 3D geometry interacting with light, in a process called rendering. Enabling ML models to understand image formation might be key to generalization. However, due to an essential rasterization step involving discrete assignment operations, rendering pipelines are non-differentiable and thus largely inaccessible to gradient-based ML techniques. In this paper, we present DIB-R, a novel rendering framework through which gradients can be analytically computed. Key to our approach is to view rasterization as a weighted interpolation, allowing image gradients to back-propagate through various standard vertex shaders within a single framework. Our approach supports optimizing over vertex positions, colors, normals, light directions and texture coordinates, and allows us to incorporate various well-known lighting models from graphics. We showcase our approach in two ML applications: single-image 3D object prediction, and 3D textured object generation, both trained using exclusively 2D supervision.
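The sketch below illustrates the core idea the abstract describes, not the authors' DIB-R implementation: if a foreground pixel's value is computed as a barycentric-weighted interpolation of vertex attributes, then gradients flow analytically to both the attributes and the 2D vertex positions. All function and variable names here are hypothetical, and the sketch omits the paper's handling of background pixels and lighting.

```python
# Minimal, illustrative sketch (assumed names, not the DIB-R codebase):
# rasterizing one foreground pixel as a barycentric-weighted interpolation
# of per-vertex colors, which is differentiable w.r.t. vertices and colors.
import torch

def barycentric_weights(p, v0, v1, v2):
    """Barycentric coordinates of 2D point p w.r.t. triangle (v0, v1, v2)."""
    d = (v1[1] - v2[1]) * (v0[0] - v2[0]) + (v2[0] - v1[0]) * (v0[1] - v2[1])
    w0 = ((v1[1] - v2[1]) * (p[0] - v2[0]) + (v2[0] - v1[0]) * (p[1] - v2[1])) / d
    w1 = ((v2[1] - v0[1]) * (p[0] - v2[0]) + (v0[0] - v2[0]) * (p[1] - v2[1])) / d
    w2 = 1.0 - w0 - w1
    return torch.stack([w0, w1, w2])

# 2D vertex positions and per-vertex colors; gradients reach both.
verts = torch.tensor([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]], requires_grad=True)
colors = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
                      requires_grad=True)

pixel = torch.tensor([0.25, 0.25])                # pixel center inside the triangle
w = barycentric_weights(pixel, verts[0], verts[1], verts[2])
pixel_color = w @ colors                          # weighted interpolation of attributes

loss = pixel_color.sum()                          # placeholder image loss
loss.backward()                                   # analytic gradients, no discrete assignment
print(verts.grad, colors.grad)
```

Because the pixel value is a smooth function of the vertex data, any 2D image loss can back-propagate through this step; the paper extends this view to texture coordinates, normals, and lighting within the same framework.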

Author Information

Wenzheng Chen (University of Toronto)
Huan Ling (University of Toronto, NVIDIA)
Jun Gao (University of Toronto)
Edward Smith (McGill University)
Jaakko Lehtinen (NVIDIA Research; Aalto University)
Alec Jacobson (University of Toronto)
Sanja Fidler (University of Toronto)
