Poster
Variational Inference with Tail-adaptive f-Divergence
Dilin Wang · Hao Liu · Qiang Liu

Thu Dec 06 07:45 AM -- 09:45 AM (PST) @ Room 210 #37

Variational inference with α-divergences has been widely used in modern probabilistic machine learning. Compared to the Kullback-Leibler (KL) divergence, a major advantage of using α-divergences (with positive α values) is their mass-covering property. However, estimating and optimizing α-divergences requires the use of importance sampling, which can have extremely large or even infinite variance due to the heavy tails of the importance weights. In this paper, we propose a new class of tail-adaptive f-divergences that adaptively change the convex function f according to the tail of the importance weights, in a way that theoretically guarantees finite moments while simultaneously achieving mass-covering properties. We test our method on Bayesian neural networks, as well as on deep reinforcement learning, where it is applied to improve a recent soft actor-critic (SAC) algorithm (Haarnoja et al., 2018). Our results show that our approach yields significant advantages over existing methods based on classical KL and α-divergences.
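To illustrate the core idea described in the abstract, the sketch below shows a hypothetical rank-based surrogate for importance weights: raw weights are replaced by their empirical tail probabilities raised to a power, which keeps the resulting weights bounded while still emphasizing large-weight (under-covered) regions. The function name tail_adaptive_weights, the exponent beta, and the toy Gaussian target and proposal are all assumptions made for this example, not the authors' exact algorithm.

import numpy as np

def tail_adaptive_weights(log_p, log_q, beta=-1.0):
    """Illustrative rank-based reweighting of importance weights.

    log_p, log_q: log densities of the target and of the variational
    proposal, evaluated at samples drawn from the proposal.
    beta: exponent applied to the empirical tail probabilities
    (an assumed choice for this sketch; see the paper for details).
    """
    log_w = log_p - log_q                      # log importance weights
    # Empirical tail probability of each weight: the fraction of samples
    # whose weight is at least as large as w_i (never smaller than 1/n).
    tail_prob = (log_w[:, None] <= log_w[None, :]).mean(axis=1)
    rho = tail_prob ** beta                    # bounded surrogate weights
    return rho / rho.sum()                     # normalize for a gradient estimate

# Toy usage: a mis-specified Gaussian proposal q = N(0, 1) for target p = N(0, 4).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1000)
log_q = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
log_p = -0.5 * (x / 2.0)**2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)
w_adapt = tail_adaptive_weights(log_p, log_q)
print(w_adapt.max(), w_adapt.min())

Because each surrogate weight is a power of a rank statistic bounded between 1/n and 1, the largest weight can exceed the smallest by at most a factor of n, in contrast to raw importance weights, whose ratio can be unbounded.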

Author Information

Dilin Wang (UT Austin)
Hao Liu (Salesforce, Berkeley)
Qiang Liu (UT Austin)
