Recent advances have shown that the implicit bias of gradient descent on over-parameterized models enables the recovery of low-rank matrices from linear measurements, even with no prior knowledge of the intrinsic rank. In contrast, for {\em robust} low-rank matrix recovery from {\em grossly corrupted} measurements, over-parameterization leads to overfitting without prior knowledge of both the intrinsic rank and the sparsity of the corruption. This paper shows that with a {\em double over-parameterization} for both the low-rank matrix and the sparse corruption, gradient descent with {\em discrepant learning rates} provably recovers the underlying matrix even without prior knowledge of either the rank of the matrix or the sparsity of the corruption. We further extend our approach to the robust recovery of natural images by over-parameterizing images with deep convolutional networks. Experiments show that our method handles different test images and varying corruption levels with a single learning pipeline, where the network width and termination conditions do not need to be adjusted on a case-by-case basis. Underlying this success is again the implicit bias of discrepant learning rates on different over-parameterized parameters, which may bear on broader applications.
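The idea in the abstract can be sketched in a few lines of NumPy. This is a minimal illustrative example, not the paper's experimental setup: the synthetic data, matrix sizes, learning rates, and iteration count are all assumptions. The low-rank component is over-parameterized as U U^T with full width (no rank knowledge), the sparse corruption as the elementwise difference g∘g − h∘h (no sparsity knowledge), and plain gradient descent is run with a different step size on each set of parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic robust-recovery instance (illustrative assumptions) ---
n, r_true = 20, 2
B = rng.standard_normal((n, r_true))
X_star = B @ B.T                       # ground-truth low-rank matrix
S_star = np.zeros((n, n))
mask = rng.random((n, n)) < 0.05       # ~5% grossly corrupted entries
S_star[mask] = 5.0 * rng.standard_normal(mask.sum())
Y = X_star + S_star                    # observed corrupted measurements

# --- double over-parameterization, no rank/sparsity knowledge ---
U = 1e-3 * rng.standard_normal((n, n)) # X = U U^T, width n >> r_true
g = 1e-3 * np.ones((n, n))             # s = g*g - h*h (elementwise)
h = 1e-3 * np.ones((n, n))

eta_u = 2e-3                           # learning rate for U
eta_s = 2e-4                           # discrepant rate for g, h; the ratio
                                       # plays the role of the sparse/low-rank
                                       # trade-off (values are assumptions)

def loss(U, g, h):
    R = U @ U.T + g * g - h * h - Y
    return 0.25 * np.sum(R * R)

loss0 = loss(U, g, h)
for _ in range(3000):
    R = U @ U.T + g * g - h * h - Y    # residual
    U -= eta_u * (R + R.T) @ U         # gradient of 1/4||R||_F^2 in U (up to const)
    g -= eta_s * R * g                 # gradient in g
    h += eta_s * R * h                 # gradient in h (enters with minus sign)

print(loss(U, g, h) < loss0)           # the fit improves over initialization
```

The multiplicative dynamics of g and h keep the fitted corruption effectively sparse early in training, while the small initialization of U biases U U^T toward low rank; the learning-rate ratio eta_u/eta_s controls how these two implicit biases trade off.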
Author Information
Chong You (University of California, Berkeley)
Zhihui Zhu (Johns Hopkins University)
Qing Qu (New York University)
Yi Ma (UC Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization
  Wed Dec 9th 03:10 -- 03:20 AM, Orals & Spotlights: Deep Learning/Theory
More from the Same Authors
- 2020 Poster: Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
  Chaobing Song · Yong Jiang · Yi Ma
- 2020 Poster: Optimistic Dual Extrapolation for Coherent Non-monotone Variational Inequalities
  Chaobing Song · Zhengyuan Zhou · Yichao Zhou · Yong Jiang · Yi Ma
- 2020 Poster: Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction
  Yaodong Yu · Kwan Ho Ryan Chan · Chong You · Chaobing Song · Yi Ma
- 2019 Poster: Distributed Low-rank Matrix Factorization With Exact Consensus
  Zhihui Zhu · Qiuwei Li · Xinshuo Yang · Gongguo Tang · Michael B Wakin
- 2019 Poster: A Nonconvex Approach for Exact and Efficient Multichannel Sparse Blind Deconvolution
  Qing Qu · Xiao Li · Zhihui Zhu
- 2019 Spotlight: A Nonconvex Approach for Exact and Efficient Multichannel Sparse Blind Deconvolution
  Qing Qu · Xiao Li · Zhihui Zhu
- 2019 Poster: A Linearly Convergent Method for Non-Smooth Non-Convex Optimization on the Grassmannian with Applications to Robust Subspace and Dictionary Learning
  Zhihui Zhu · Tianyu Ding · Daniel Robinson · Manolis Tsakiris · René Vidal
- 2019 Poster: NeurVPS: Neural Vanishing Point Scanning via Conic Convolution
  Yichao Zhou · Haozhi Qi · Jingwei Huang · Yi Ma
- 2018 Poster: Dual Principal Component Pursuit: Improved Analysis and Efficient Algorithms
  Zhihui Zhu · Yifan Wang · Daniel Robinson · Daniel Naiman · René Vidal · Manolis Tsakiris
- 2018 Poster: Dropping Symmetry for Fast Symmetric Nonnegative Matrix Factorization
  Zhihui Zhu · Xiao Li · Kai Liu · Qiuwei Li