Poster
Accelerated Linearized Laplace Approximation for Bayesian Deep Learning
Zhijie Deng · Feng Zhou · Jun Zhu

Laplace approximation (LA) and its linearized variant (LLA) enable effortless adaptation of pretrained deep neural networks into Bayesian neural networks. The generalized Gauss-Newton (GGN) approximation is typically introduced to improve their tractability. However, LA and LLA still face non-trivial inefficiency issues and have to rely on Kronecker-factored, diagonal, or even last-layer approximate GGN matrices in practical use. These approximations are likely to harm the fidelity of learning outcomes. To tackle this issue, inspired by the connections between LLA and neural tangent kernels (NTKs), we develop a Nyström approximation to NTKs to accelerate LLA. Our method benefits from the forward-mode automatic differentiation supported by popular deep learning libraries, and enjoys reassuring theoretical guarantees. Extensive studies demonstrate the merits of the proposed method in terms of both scalability and performance. Our method can even scale up to architectures like vision transformers. We also provide ablation studies to diagnose our method. Code is available at https://github.com/thudzj/ELLA.
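For illustration, below is a minimal PyTorch-style sketch (not the authors' released implementation; see the linked repository for that) of the two ingredients the abstract mentions: a Nyström decomposition of the NTK built from a small set of landmark points, and evaluation of the induced feature map with forward-mode Jacobian-vector products. The names `net` and `landmarks`, the sizes M and K, and the scalar-output assumption are all simplifications made for the sketch.

import torch
from torch.func import functional_call, jacrev, jvp

# Assumptions of this sketch: a pretrained scalar-output network `net`,
# M landmark inputs `landmarks` of shape [M, ...], and K retained eigenpairs.
params = {name: p.detach() for name, p in net.named_parameters()}

def f(p, x):
    # Functional forward pass on a single input x (scalar output assumed).
    return functional_call(net, p, (x.unsqueeze(0),)).squeeze()

# 1) Jacobians at the M landmarks (M is small, so reverse mode is affordable here).
rows = []
for x_m in landmarks:
    jac = jacrev(f)(params, x_m)                        # pytree mirroring `params`
    rows.append(torch.cat([j.reshape(-1) for j in jac.values()]))
J_m = torch.stack(rows)                                 # [M, P]

# 2) Nystrom step: eigendecompose the M x M landmark NTK, keep the top-K pairs.
K_mm = J_m @ J_m.T
evals, evecs = torch.linalg.eigh(K_mm)                  # ascending eigenvalues
evals, evecs = evals[-K:], evecs[:, -K:]

# 3) Parameter-space directions v_k = J_m^T u_k / sqrt(lambda_k).
V = (J_m.T @ evecs) / evals.clamp_min(1e-12).sqrt()     # [P, K]

def ntk_features(x):
    # phi_k(x) = J(x) v_k, computed with one forward-mode JVP per direction,
    # so the full Jacobian at x is never materialized.
    phis = []
    for k in range(V.shape[1]):
        tangent, offset = {}, 0
        for name, p in params.items():
            n = p.numel()
            tangent[name] = V[offset:offset + n, k].view_as(p)
            offset += n
        _, phi_k = jvp(lambda pr: f(pr, x), (params,), (tangent,))
        phis.append(phi_k)
    return torch.stack(phis)                            # [K] Nystrom NTK features of x

The point of the forward-mode JVPs is that each new input requires only K cheap forward passes rather than a full per-example Jacobian, which is what makes the approach scale to large architectures.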

Author Information

Zhijie Deng (Shanghai Jiao Tong University)
Zhijie Deng joined the Qing Yuan Research Institute of Shanghai Jiao Tong University as a tenure-track assistant professor in July 2022. He obtained his Ph.D. from the Department of Computer Science and Technology, Tsinghua University, in June 2022, under the supervision of Prof. Bo Zhang and Prof. Jun Zhu.

Feng Zhou (Renmin University of China)
Jun Zhu (Tsinghua University)