Poster

Amortized Bethe Free Energy Minimization for Learning MRFs

Sam Wiseman · Yoon Kim

East Exhibition Hall B + C #145

Keywords: [ Latent Variable Models ] [ Probabilistic Methods -> Belief Propagation ] [ Deep Learning ] [ Generative Models ]


Abstract:

We propose to learn deep undirected graphical models (i.e., MRFs) with a non-ELBO objective for which we can calculate exact gradients. In particular, we optimize a saddle-point objective derived from the Bethe free energy approximation to the partition function. Unlike much recent work in approximate inference, the derived objective requires no sampling, and can be efficiently computed even for very expressive MRFs. We furthermore amortize this optimization with trained inference networks. Experimentally, we find that the proposed approach compares favorably with loopy belief propagation while being faster, and that it attains better held-out log likelihoods than other recent approximate inference schemes.
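For background, a minimal sketch of the quantity the objective builds on, assuming the standard pairwise-MRF form (the symbols ψ, τ, d_i, and the local polytope L(G) below are introduced here for illustration and are not taken from the abstract): for p(x) ∝ ∏_i ψ_i(x_i) ∏_{(i,j)∈E} ψ_{ij}(x_i, x_j), the Bethe free energy over pseudomarginals τ = {τ_i, τ_{ij}} is

\[
F_{\text{Bethe}}(\tau) = -\sum_{(i,j)\in E}\sum_{x_i,x_j} \tau_{ij}(x_i,x_j)\,\log \psi_{ij}(x_i,x_j) \;-\; \sum_{i}\sum_{x_i} \tau_i(x_i)\,\log \psi_i(x_i) \;-\; \sum_{(i,j)\in E} H(\tau_{ij}) \;+\; \sum_{i} (d_i - 1)\, H(\tau_i),
\]

where H(·) denotes entropy and d_i is the degree of node i. Minimizing F_Bethe over the local marginal polytope L(G) (nonnegative, normalized pseudomarginals whose edge marginals agree with the corresponding node marginals) yields the approximation log Z ≈ −min_{τ∈L(G)} F_Bethe(τ), which is exact when the graph is a tree. The saddle-point objective mentioned in the abstract is derived from this approximation to the partition function.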