

Poster

4D Gaussian Splatting in the Wild with Uncertainty-Aware Regularization

Mijeong Kim · Jongwoo Lim · Bohyung Han


Abstract:

Reconstructing dynamic scenes from monocular videos is very challenging, particularly in scenarios with rapid object motion or self-occlusion. While much progress has been made in dynamic scene synthesis based on neural radiance fields, existing methods suffer from slow inference or are limited to artificially captured videos. To address these challenges, we introduce UA-4DGS, a novel approach that renders clear free-view images of arbitrary dynamic scenes from in-the-wild monocular videos. To compensate for the information lost to self-occlusion, we apply diffusion priors to unseen views, weighted by the pixel-wise uncertainty score of the rendered image. This is the first locally adaptive regularization technique in the neural radiance field paradigm that avoids under-fitting on training images. Moreover, we are the first to point out the initialization issue of Gaussian Splatting for dynamic scenes with fast motion, where Structure-from-Motion (SfM) techniques designed for static scenes fail to initialize dynamic regions. To tackle this problem, we introduce dynamic region densification, which improves both reconstruction performance and memory efficiency. Furthermore, we demonstrate the versatility of our approach by showing that incorporating uncertainty into other NeRF regularization techniques also enhances their performance.
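The core idea of uncertainty-aware regularization can be sketched as a per-pixel weighting of the prior loss. The snippet below is a minimal illustration under our own assumptions, not the paper's implementation: we assume an uncertainty map in [0, 1] and weight a simple L2 penalty against the diffusion-prior image so that uncertain (e.g., self-occluded) pixels rely more on the prior, while confident pixels remain driven by the training images. The function name and loss form are hypothetical.

```python
import numpy as np

def uncertainty_weighted_prior_loss(rendered, prior, uncertainty):
    """Hypothetical sketch of pixel-wise uncertainty-aware regularization.

    rendered:    (H, W) rendered image from the unseen view
    prior:       (H, W) diffusion-prior target for the same view
    uncertainty: (H, W) per-pixel uncertainty score, assumed in [0, 1]

    Pixels with high uncertainty contribute more to the prior loss,
    so the prior fills in occluded regions without over-regularizing
    well-observed ones (an assumption about the weighting scheme).
    """
    err = (rendered - prior) ** 2          # per-pixel squared error
    return float(np.mean(uncertainty * err))  # uncertainty-weighted mean
```

For example, with a uniform unit error, a fully uncertain map yields a loss of 1.0, while a mostly confident map (uncertainty 0.1 everywhere) yields 0.1, i.e., the prior barely influences confident regions.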
