Poster
On the Expressiveness of Approximate Inference in Bayesian Neural Networks
Andrew Foong · David Burt · Yingzhen Li · Richard Turner

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1626

While Bayesian neural networks (BNNs) hold the promise of being flexible, well-calibrated statistical models, inference often requires approximations whose consequences are poorly understood. We study the quality of common variational methods in approximating the Bayesian predictive distribution. For single-hidden layer ReLU BNNs, we prove a fundamental limitation in function-space of two of the most commonly used distributions defined in weight-space: mean-field Gaussian and Monte Carlo dropout. We find there are simple cases where neither method can have substantially increased uncertainty in between well-separated regions of low uncertainty. We provide strong empirical evidence that exact inference does not have this pathology, hence it is due to the approximation and not the model. In contrast, for deep networks, we prove a universality result showing that there exist approximate posteriors in the above classes which provide flexible uncertainty estimates. However, we find empirically that pathologies of a similar form as in the single-hidden layer case can persist when performing variational inference in deeper networks. Our results motivate careful consideration of the implications of approximate inference methods in BNNs.
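To make the setting concrete, below is a minimal illustrative sketch (not the authors' code) of mean-field Gaussian variational inference in a single-hidden-layer ReLU BNN, trained on 1D data with two well-separated clusters so that the predictive standard deviation can be probed in the gap between them. It assumes PyTorch; all names, hyperparameters, and the specific dataset are hypothetical choices for illustration.

import torch

torch.manual_seed(0)

# Two well-separated clusters of noisy observations, with a gap in between.
x = torch.cat([torch.linspace(-2.0, -1.0, 20),
               torch.linspace(1.0, 2.0, 20)]).unsqueeze(1)
y = torch.sin(3.0 * x) + 0.1 * torch.randn_like(x)

H = 50            # hidden units (illustrative)
noise_std = 0.1   # assumed observation noise

# Mean-field Gaussian posterior: an independent mean and log-scale per weight.
shapes = {"w1": (1, H), "b1": (H,), "w2": (H, 1), "b2": (1,)}
mu = {k: torch.randn(s) * 0.1 for k, s in shapes.items()}
rho = {k: torch.full(s, -3.0) for k, s in shapes.items()}
params = list(mu.values()) + list(rho.values())
for p in params:
    p.requires_grad_(True)

def sample_weights():
    # Reparameterization trick: w = mu + softplus(rho) * eps.
    return {k: mu[k] + torch.nn.functional.softplus(rho[k]) * torch.randn(shapes[k])
            for k in shapes}

def forward(x, w):
    # Single-hidden-layer ReLU network.
    h = torch.relu(x @ w["w1"] + w["b1"])
    return h @ w["w2"] + w["b2"]

def kl_to_standard_normal():
    # KL(q || N(0, I)), summed over all weights, for a factorized Gaussian q.
    total = 0.0
    for k in shapes:
        std = torch.nn.functional.softplus(rho[k])
        total = total + 0.5 * (std**2 + mu[k]**2 - 1.0 - 2.0 * torch.log(std)).sum()
    return total

opt = torch.optim.Adam(params, lr=1e-2)
for step in range(3000):
    opt.zero_grad()
    w = sample_weights()  # one-sample Monte Carlo estimate of the ELBO
    nll = 0.5 * ((y - forward(x, w)) ** 2).sum() / noise_std**2
    loss = nll + kl_to_standard_normal()  # negative ELBO (up to constants)
    loss.backward()
    opt.step()

# Compare predictive spread at the cluster centres vs. in the gap between them.
x_test = torch.tensor([[-1.5], [0.0], [1.5]])
with torch.no_grad():
    samples = torch.stack([forward(x_test, sample_weights()) for _ in range(500)])
print("predictive std at x = -1.5, 0.0, 1.5:",
      samples.std(dim=0).squeeze().tolist())

Under the paper's result, one would expect the fitted mean-field posterior to be unable to make the predictive standard deviation at x = 0.0 substantially larger than at the cluster centres; the exact numbers here depend on the illustrative hyperparameters above.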

Author Information

Andrew Foong (University of Cambridge)
David Burt (University of Cambridge)
Yingzhen Li (Microsoft Research Cambridge)
Richard Turner (University of Cambridge)
