

Poster

From Transparent to Opaque: Rethinking Neural Implicit Surfaces with $\alpha$-NeuS

Haoran Zhang · Junkai Deng · Xuhui Chen · Fei Hou · Wencheng Wang · Hong Qin · Chen Qian · Ying He

East Exhibit Hall A-C #1507
[ Project Page ]
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: Traditional 3D shape reconstruction techniques from multi-view images, such as structure from motion and multi-view stereo, primarily focus on opaque surfaces. Similarly, recent advances in neural radiance fields and their variants also primarily address opaque objects, encountering difficulties with the complex lighting effects caused by transparent materials. This paper introduces $\alpha$-NeuS, a new method for simultaneously reconstructing thin transparent objects and opaque objects based on neural implicit surfaces (NeuS). Our method leverages the observation that transparent surfaces induce local extreme values in the learned signed distance fields during neural volumetric rendering, in contrast with opaque surfaces, which align with zero level sets. Traditional iso-surfacing algorithms such as Marching Cubes, which rely on fixed iso-values, are ill-suited for this data. We address this by converting the distance field from signed to unsigned and developing an optimization method that extracts level sets corresponding to both local minima and zero iso-values. We prove that the reconstructed surfaces are unbiased for both transparent and opaque objects. To validate our approach, we construct a benchmark that includes both real-world and synthetic scenes, demonstrating its practical utility and effectiveness. Our data and code will be publicly available on GitHub.
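The core idea of the abstract, converting the learned signed distance field to an unsigned one so that both zero crossings (opaque surfaces) and near-zero local minima (thin transparent surfaces) can be extracted, can be illustrated with a minimal sketch. The snippet below is an assumption: it uses a plain grid sample of the field and scikit-image's marching cubes at a single small iso-value, not the optimization-based, unbiased extraction described in the paper; the function name `extract_surface_from_sdf` and the parameter `epsilon` are hypothetical.

```python
import numpy as np
from skimage import measure  # scikit-image marching cubes


def extract_surface_from_sdf(sdf_grid, epsilon=1e-3, spacing=(1.0, 1.0, 1.0)):
    """Illustrative sketch only: turn a sampled signed distance grid into an
    unsigned field and extract a near-zero level set, so that surfaces at
    zero crossings and at small positive local minima are both captured.

    sdf_grid : (D, H, W) array of signed distance values sampled on a grid.
    epsilon  : small positive iso-value applied to the unsigned field
               (must exceed the grid's minimum unsigned value).
    """
    udf_grid = np.abs(sdf_grid)  # signed -> unsigned distance field
    verts, faces, normals, _ = measure.marching_cubes(
        udf_grid, level=epsilon, spacing=spacing
    )
    return verts, faces, normals
```

In this simplified setting the fixed iso-value epsilon is a stand-in for the paper's optimization over local minima; choosing it too large thickens opaque surfaces, which is exactly the bias the authors' method is designed to avoid.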
