Despite strong empirical performance for image classification, deep neural networks are often regarded as "black boxes" and are difficult to interpret. On the other hand, sparse convolutional models, which assume that a signal can be expressed by a linear combination of a few elements from a convolutional dictionary, are powerful tools for analyzing natural images with good theoretical interpretability and biological plausibility. However, such principled models have not demonstrated competitive performance when compared with empirically designed deep networks. This paper revisits sparse convolutional modeling for image classification and bridges the gap between the good empirical performance of deep learning and the good interpretability of sparse convolutional models. Our method uses differentiable optimization layers, defined from convolutional sparse coding, as drop-in replacements for standard convolutional layers in conventional deep neural networks. We show that such models achieve equally strong empirical performance on the CIFAR-10, CIFAR-100, and ImageNet datasets when compared with conventional neural networks. By leveraging the stable recovery property of sparse modeling, we further show that such models can be made much more robust to input corruptions as well as adversarial perturbations at test time through a simple, properly chosen trade-off between the sparse regularization and data reconstruction terms.
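To make the layer construction described above concrete, the sketch below shows one plausible PyTorch implementation of a convolutional sparse coding layer that could serve as a drop-in replacement for a standard convolutional layer, by unrolling a few FISTA iterations on the objective 0.5*||x - Dz||^2 + lam*||z||_1. This is a minimal illustration, not the authors' released code: the class name CSCLayer and the parameters lam, num_steps, and step_size are assumptions introduced here for clarity, with lam playing the role of the sparsity/reconstruction trade-off mentioned at the end of the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CSCLayer(nn.Module):
    """Sparse code over a learned convolutional dictionary (illustrative sketch)."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 lam=0.1, num_steps=3, step_size=0.1):
        super().__init__()
        # D: convolutional dictionary with the shape of a standard conv weight.
        self.weight = nn.Parameter(
            0.05 * torch.randn(out_channels, in_channels, kernel_size, kernel_size))
        self.padding = kernel_size // 2
        self.lam = lam              # sparsity vs. reconstruction trade-off
        self.num_steps = num_steps  # number of unrolled FISTA iterations
        self.step_size = step_size  # fixed gradient step size (could be learned)

    def _analysis(self, x):
        # D^T x: correlate the input with the dictionary (an ordinary convolution).
        return F.conv2d(x, self.weight, padding=self.padding)

    def _synthesis(self, z):
        # D z: reconstruct the input from the code (a transposed convolution).
        return F.conv_transpose2d(z, self.weight, padding=self.padding)

    def forward(self, x):
        # Approximately solve min_z 0.5*||x - Dz||^2 + lam*||z||_1 with FISTA.
        z = torch.zeros_like(self._analysis(x))
        y, t = z, 1.0
        for _ in range(self.num_steps):
            grad = self._analysis(self._synthesis(y) - x)       # gradient of data term
            z_next = F.softshrink(y - self.step_size * grad,
                                  lambd=self.step_size * self.lam)  # proximal L1 step
            t_next = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
            y = z_next + ((t - 1.0) / t_next) * (z_next - z)     # momentum update
            z, t = z_next, t_next
        return z  # sparse feature map used in place of a conv activation


# Usage: swap nn.Conv2d(3, 64, 3, padding=1) for CSCLayer(3, 64, 3).
# Increasing `lam` at test time trades reconstruction accuracy for sparsity,
# the robustness knob described in the abstract.
layer = CSCLayer(3, 64, 3)
features = layer(torch.randn(8, 3, 32, 32))   # -> shape (8, 64, 32, 32)
```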
Author Information
Xili Dai (Hong Kong University of Science and Technology)
Mingyang Li (Tsinghua University)
Pengyuan Zhai (Harvard University)
Shengbang Tong (University of California, Berkeley)
Xingjian Gao (University of California, Berkeley)
Shao-Lun Huang (Tsinghua University)
Zhihui Zhu (The Ohio State University)
Chong You (University of California, Berkeley)
Yi Ma (UC Berkeley)
More from the Same Authors
- 2021 Spotlight: A Geometric Analysis of Neural Collapse with Unconstrained Features »
  Zhihui Zhu · Tianyu Ding · Jinxin Zhou · Xiao Li · Chong You · Jeremias Sulam · Qing Qu
- 2021: On the convergence of stochastic extragradient for bilinear games using restarted iteration averaging »
  Chris Junchi Li · Yaodong Yu · Nicolas Loizou · Gauthier Gidel · Yi Ma · Nicolas Le Roux · Michael Jordan
- 2021: An Empirical Study of Pre-trained Models on Out-of-distribution Generalization »
  Yaodong Yu · Heinrich Jiang · Dara Bahri · Hossein Mobahi · Seungyeon Kim · Ankit Rawat · Andreas Veit · Yi Ma
- 2023 Poster: White-Box Transformers via Sparse Rate Reduction »
  Yaodong Yu · Sam Buchanan · Druv Pai · Tianzhe Chu · Ziyang Wu · Shengbang Tong · Benjamin Haeffele · Yi Ma
- 2023 Poster: Mass-Producing Failures of Multimodal Models »
  Shengbang Tong · Erik Jones · Jacob Steinhardt
- 2023 Poster: Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning »
  Mitsuhiko Nakamoto · Yuexiang Zhai · Anikait Singh · Max Sobol Mark · Yi Ma · Chelsea Finn · Aviral Kumar · Sergey Levine
- 2022: Invited Talk: Yi Ma »
  Yi Ma
- 2022 Poster: Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold »
  Can Yaras · Peng Wang · Zhihui Zhu · Laura Balzano · Qing Qu
- 2022 Poster: Robust Calibration with Multi-domain Temperature Scaling »
  Yaodong Yu · Stephen Bates · Yi Ma · Michael Jordan
- 2022 Poster: Are All Losses Created Equal: A Neural Collapse Perspective »
  Jinxin Zhou · Chong You · Xiao Li · Kangning Liu · Sheng Liu · Qing Qu · Zhihui Zhu
- 2022 Poster: TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels »
  Yaodong Yu · Alexander Wei · Sai Praneeth Karimireddy · Yi Ma · Michael Jordan
- 2022 Poster: Error Analysis of Tensor-Train Cross Approximation »
  Zhen Qin · Alexander Lidiak · Zhexuan Gong · Gongguo Tang · Michael B Wakin · Zhihui Zhu
- 2021: Performance-Guaranteed ODE Solvers with Complexity-Informed Neural Networks »
  Feng Zhao · Xiang Chen · Jun Wang · Zuoqiang Shi · Shao-Lun Huang
- 2021 Poster: A Geometric Analysis of Neural Collapse with Unconstrained Features »
  Zhihui Zhu · Tianyu Ding · Jinxin Zhou · Xiao Li · Chong You · Jeremias Sulam · Qing Qu
- 2021 Poster: Only Train Once: A One-Shot Neural Network Training And Pruning Framework »
  Tianyi Chen · Bo Ji · Tianyu Ding · Biyi Fang · Guanyi Wang · Zhihui Zhu · Luming Liang · Yixin Shi · Sheng Yi · Xiao Tu
- 2021 Poster: Rank Overspecified Robust Matrix Recovery: Subgradient Method and Exact Recovery »
  Lijun Ding · Liwei Jiang · Yudong Chen · Qing Qu · Zhihui Zhu
- 2021 Poster: A Mathematical Framework for Quantifying Transferability in Multi-source Transfer Learning »
  Xinyi Tong · Xiangxiang Xu · Shao-Lun Huang · Lizhong Zheng
- 2021 Poster: Convolutional Normalization: Improving Deep Convolutional Network Robustness and Training »
  Sheng Liu · Xiao Li · Simon Zhai · Chong You · Zhihui Zhu · Carlos Fernandez-Granda · Qing Qu
- 2020 Poster: Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization »
  Chaobing Song · Yong Jiang · Yi Ma
- 2020 Poster: Optimistic Dual Extrapolation for Coherent Non-monotone Variational Inequalities »
  Chaobing Song · Zhengyuan Zhou · Yichao Zhou · Yong Jiang · Yi Ma
- 2020 Poster: Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization »
  Chong You · Zhihui Zhu · Qing Qu · Yi Ma
- 2020 Spotlight: Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization »
  Chong You · Zhihui Zhu · Qing Qu · Yi Ma
- 2020 Poster: Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction »
  Yaodong Yu · Kwan Ho Ryan Chan · Chong You · Chaobing Song · Yi Ma
- 2019 Poster: Distributed Low-rank Matrix Factorization With Exact Consensus »
  Zhihui Zhu · Qiuwei Li · Xinshuo Yang · Gongguo Tang · Michael B Wakin
- 2019 Poster: A Nonconvex Approach for Exact and Efficient Multichannel Sparse Blind Deconvolution »
  Qing Qu · Xiao Li · Zhihui Zhu
- 2019 Spotlight: A Nonconvex Approach for Exact and Efficient Multichannel Sparse Blind Deconvolution »
  Qing Qu · Xiao Li · Zhihui Zhu
- 2019 Poster: A Linearly Convergent Method for Non-Smooth Non-Convex Optimization on the Grassmannian with Applications to Robust Subspace and Dictionary Learning »
  Zhihui Zhu · Tianyu Ding · Daniel Robinson · Manolis Tsakiris · René Vidal
- 2019 Poster: NeurVPS: Neural Vanishing Point Scanning via Conic Convolution »
  Yichao Zhou · Haozhi Qi · Jingwei Huang · Yi Ma
- 2018 Poster: Dual Principal Component Pursuit: Improved Analysis and Efficient Algorithms »
  Zhihui Zhu · Yifan Wang · Daniel Robinson · Daniel Naiman · René Vidal · Manolis Tsakiris
- 2018 Poster: Dropping Symmetry for Fast Symmetric Nonnegative Matrix Factorization »
  Zhihui Zhu · Xiao Li · Kai Liu · Qiuwei Li