On the Frequency-bias of Coordinate-MLPs

Sameera Ramasinghe · Lachlan E. MacDonald · Simon Lucey

Hall J #429

Keywords: [ Coordinate Networks ] [ Implicit neural representations ] [ implicit regularization ]

Thu 1 Dec 9 a.m. PST — 11 a.m. PST


We show that typical implicit regularization assumptions for deep neural networks (for regression) do not hold for coordinate-MLPs, a family of MLPs that are now ubiquitous in computer vision for representing high-frequency signals. The lack of such an implicit bias disrupts smooth interpolation between training samples and hampers generalization across signal regions with different spectra. We investigate this behavior through a Fourier lens and uncover that, as the bandwidth of a coordinate-MLP is increased, lower frequencies tend to be suppressed unless a suitable prior is provided explicitly. Based on these insights, we propose a simple regularization technique that mitigates this problem and can be incorporated into existing networks without any architectural modifications.
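To make the setup concrete, the following is a minimal sketch of a coordinate-MLP: a network that maps input coordinates (e.g. pixel locations) to signal values, here using a random Fourier feature encoding in which a scale parameter `sigma` controls the bandwidth the abstract refers to. The tiny architecture, the function names, and all parameter values are illustrative assumptions, not the paper's actual model or regularizer.

```python
import numpy as np

def fourier_features(x, num_freqs=8, sigma=4.0, rng=None):
    # Random Fourier feature encoding of coordinates.
    # Larger sigma samples higher frequencies -> wider bandwidth.
    rng = np.random.default_rng(0) if rng is None else rng
    B = rng.normal(0.0, sigma, size=(num_freqs, x.shape[-1]))
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

def coordinate_mlp(x, hidden=32, rng=None):
    # Toy coordinate-MLP: encode coordinates, then a two-layer
    # randomly initialized ReLU network (illustrative only).
    rng = np.random.default_rng(1) if rng is None else rng
    feats = fourier_features(x, rng=rng)
    W1 = rng.normal(0.0, feats.shape[-1] ** -0.5, (feats.shape[-1], hidden))
    W2 = rng.normal(0.0, hidden ** -0.5, (hidden, 1))
    return np.maximum(feats @ W1, 0.0) @ W2

# 1D coordinates in [0, 1], one scalar output per coordinate.
coords = np.linspace(0.0, 1.0, 64).reshape(-1, 1)
out = coordinate_mlp(coords)
print(out.shape)  # (64, 1)
```

In practice such a network is trained by regressing `out` onto signal samples at `coords`; the abstract's observation concerns how the choice of bandwidth (here `sigma`) biases which frequencies the fitted network represents.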
