Spotlight
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
Matthew Tancik · Pratul Srinivasan · Ben Mildenhall · Sara Fridovich-Keil · Nithin Raghavan · Utkarsh Singhal · Ravi Ramamoorthi · Jonathan Barron · Ren Ng

Thu Dec 10 08:00 AM -- 08:10 AM (PST) @ Orals & Spotlights: Graph/Relational/Theory

We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains. These results shed light on recent advances in computer vision and graphics that achieve state-of-the-art results by using MLPs to represent complex 3D objects and scenes. Using tools from the neural tangent kernel (NTK) literature, we show that a standard MLP has impractically slow convergence to high frequency signal components. To overcome this spectral bias, we use a Fourier feature mapping to transform the effective NTK into a stationary kernel with a tunable bandwidth. We suggest an approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs for low-dimensional regression tasks relevant to the computer vision and graphics communities.
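The mapping described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' reference code: it assumes the common Gaussian variant, where a random frequency matrix `B` is drawn from a zero-mean normal distribution and the scale `sigma` acts as the tunable bandwidth mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(v, B):
    """Map input points v of shape (n, d) to the Fourier features
    [cos(2*pi*B v), sin(2*pi*B v)], giving shape (n, 2*m)."""
    proj = 2.0 * np.pi * v @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# Hypothetical settings for a 2D regression task (e.g. pixel coordinates):
d, m, sigma = 2, 256, 10.0          # input dim, number of frequencies, bandwidth scale
B = rng.normal(0.0, sigma, size=(m, d))

v = rng.uniform(size=(100, d))      # 100 points in [0, 1]^2
features = fourier_features(v, B)   # fed to the MLP in place of raw coordinates
print(features.shape)
```

The featurized points, rather than the raw low-dimensional coordinates, are passed to a standard MLP; larger `sigma` biases the network toward higher-frequency content.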

Author Information

Matthew Tancik (UC Berkeley)
Pratul Srinivasan (Google Research)
Ben Mildenhall (UC Berkeley)
Sara Fridovich-Keil (UC Berkeley)
Nithin Raghavan (UC Berkeley)
Utkarsh Singhal (UC Berkeley)
Ravi Ramamoorthi (University of California San Diego)
Jon Barron (Google Research)
Ren Ng (University of California, Berkeley)
