


Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder

Guy Lorberbom · Andreea Gane · Tommi Jaakkola · Tamir Hazan

Keywords: [ Deep Learning ] [ Variational Inference ] [ Deep Learning -> Generative Models; Probabilistic Methods ]

2019 Poster

Abstract: Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates. In the discrete case, one can perform reparameterization using the Gumbel-Max trick, but the resulting objective relies on an $\arg \max$ operation and is non-differentiable. In contrast to previous works, which resort to \emph{softmax}-based relaxations, we propose to optimize it directly by applying the \emph{direct loss minimization} approach. Our proposal extends naturally to structured discrete latent variable models when evaluating the $\arg \max$ operation is tractable. We demonstrate empirically the effectiveness of the direct loss minimization technique in variational autoencoders with both unstructured and structured discrete latent variables.
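To make the two ingredients named in the abstract concrete, here is a minimal PyTorch sketch for an unstructured categorical latent: the Gumbel-Max trick for exact sampling, and a direct-loss-minimization-style gradient that compares the plain $\arg \max$ with a loss-perturbed $\arg \max$. This is an illustrative reading, not the paper's exact formulation; the function names, the scale `eps`, and the assumption that the decoder loss can be evaluated at every category are all hypothetical.

```python
import torch
import torch.nn.functional as F

def gumbel_max_sample(logits):
    # Gumbel-Max trick: argmax(logits + Gumbel noise) is an exact
    # sample from the categorical distribution softmax(logits).
    u = torch.rand_like(logits).clamp_min(1e-20)  # avoid log(0)
    gumbels = -torch.log(-torch.log(u))
    return torch.argmax(logits + gumbels, dim=-1)

def direct_grad(logits, losses, eps=1.0):
    """Sketch of a direct-loss-minimization gradient for one sample.

    logits: (K,) encoder scores theta for a K-way categorical latent
    losses: (K,) downstream decoder loss f(z) evaluated at each category z
    eps:    perturbation scale (assumed hyperparameter)

    Because the score is linear in theta (phi(z) = theta_z), the gradient
    of phi at a category is its one-hot vector, so the estimator reduces
    to a scaled difference of one-hot indicators.
    """
    u = torch.rand_like(logits).clamp_min(1e-20)
    gumbels = -torch.log(-torch.log(u))            # shared noise for both argmaxes
    z_star = torch.argmax(logits + gumbels)                 # plain Gumbel-Max sample
    z_eps = torch.argmax(logits + gumbels + eps * losses)   # loss-perturbed argmax
    k = logits.shape[-1]
    # Finite-difference-style estimate of d E[f(z)] / d logits.
    return (F.one_hot(z_eps, k).float() - F.one_hot(z_star, k).float()) / eps
```

In a training loop this gradient would be fed back into the encoder, e.g. via `logits.backward(gradient=direct_grad(...))`. For structured latents, the K-way `argmax` over categories would be replaced by a tractable combinatorial $\arg \max$, as the abstract notes.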
