CL-NeRF: Continual Learning of Neural Radiance Fields for Evolving Scene Representation

Xiuzhe Wu · Peng Dai · Weipeng Deng · Handi Chen · Yang Wu · Yan-Pei Cao · Ying Shan · Xiaojuan Qi

Great Hall & Hall B1+B2 (level 1) #200
Thu 14 Dec 3 p.m. PST — 5 p.m. PST


Existing methods for adapting Neural Radiance Fields (NeRFs) to scene changes require extensive data capture and model retraining, which is both time-consuming and labor-intensive. In this paper, we tackle the challenge of efficiently adapting NeRFs to real-world scene changes over time using only a few new images while retaining the memory of unaltered areas, focusing on the continual-learning aspect of NeRFs. To this end, we propose CL-NeRF, which consists of two key components: a lightweight expert adaptor for adapting to new changes and evolving the scene representation, and a conflict-aware knowledge distillation learning objective for memorizing unchanged parts. We also present a new benchmark for evaluating the continual learning of NeRFs with comprehensive metrics. Our extensive experiments demonstrate that CL-NeRF can synthesize high-quality novel views of both changed and unchanged regions with high training efficiency, surpassing existing methods in both reducing forgetting and adapting to changes. Code and benchmark will be made available.
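To make the distillation objective concrete, the following is a minimal NumPy sketch of one plausible form of a conflict-aware distillation loss. The function name, the per-ray conflict mask, and the exact weighting are assumptions for illustration, not the authors' implementation: rays judged to fall in unchanged regions are pulled toward the frozen pre-change (teacher) model's renderings, while rays in changed regions are supervised by the newly captured images.

```python
import numpy as np

def conflict_aware_distillation_loss(student_rgb, teacher_rgb, new_rgb, conflict_mask):
    """Hypothetical conflict-aware distillation objective (illustrative sketch).

    student_rgb   : (N, 3) colors rendered by the adapting (student) NeRF
    teacher_rgb   : (N, 3) colors rendered by the frozen pre-change (teacher) NeRF
    new_rgb       : (N, 3) ground-truth colors from the newly captured images
    conflict_mask : (N,)   1 where a ray hits a changed region, 0 where unchanged
    """
    m = conflict_mask.astype(float)[:, None]  # broadcast mask over RGB channels

    # Unchanged rays: distill the teacher's prediction to prevent forgetting.
    distill = ((1.0 - m) * (student_rgb - teacher_rgb) ** 2).sum()
    distill /= max((1.0 - m).sum() * 3.0, 1.0)

    # Changed rays: fit the new observations to adapt to the scene change.
    adapt = (m * (student_rgb - new_rgb) ** 2).sum()
    adapt /= max(m.sum() * 3.0, 1.0)

    return distill + adapt
```

Masking the distillation term on changed rays is what makes the objective "conflict-aware" in this sketch: without it, the teacher's stale view of altered regions would fight against the new supervision.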
