Poster
Bias Correction of Learned Generative Models using Likelihood-Free Importance Weighting
Aditya Grover · Jiaming Song · Ashish Kapoor · Kenneth Tran · Alekh Agarwal · Eric Horvitz · Stefano Ermon

Wed Dec 11 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #136

A learned generative model often produces biased statistics relative to the underlying data distribution. A standard technique to correct this bias is importance sampling, where samples from the model are weighted by the likelihood ratio under the model and true distributions. When the likelihood ratio is unknown, it can be estimated by training a probabilistic classifier to distinguish samples from the two distributions. We show that this likelihood-free importance weighting method induces a new energy-based model and employ it to correct for the bias in existing models. We find that this technique consistently improves standard goodness-of-fit metrics for evaluating the sample quality of state-of-the-art deep generative models, suggesting reduced bias. Finally, we demonstrate its utility on representative applications in a) data augmentation for classification using generative adversarial networks, and b) model-based policy evaluation using off-policy data.
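
The core mechanism described in the abstract can be sketched briefly: train a binary classifier on real versus generated samples, convert its predicted probabilities into density-ratio estimates, and use those ratios as importance weights when computing statistics from model samples. The snippet below is a minimal illustrative sketch of this idea on a synthetic 1-D example; the toy distributions, the logistic-regression classifier, and all variable names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of likelihood-free importance weighting, assuming a 1-D toy
# setup: `real_data` from the true distribution, `model_samples` from a
# (biased) learned generative model. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy example: the "true" data distribution vs. a biased generative model.
real_data = rng.normal(loc=0.0, scale=1.0, size=(5000, 1))
model_samples = rng.normal(loc=0.5, scale=1.2, size=(5000, 1))

# Train a probabilistic classifier to distinguish real (label 1) from generated (label 0).
X = np.vstack([real_data, model_samples])
y = np.concatenate([np.ones(len(real_data)), np.zeros(len(model_samples))])
clf = LogisticRegression().fit(X, y)

# For a well-calibrated classifier with balanced classes,
# p(real | x) / p(generated | x) approximates p_data(x) / p_model(x).
d = clf.predict_proba(model_samples)[:, 1]
weights = d / (1.0 - d)

# Importance-weighted (bias-corrected) estimate of a statistic under the data
# distribution, using only model samples; self-normalized for stability.
f = model_samples[:, 0] ** 2  # example statistic: E[x^2]
naive_estimate = f.mean()
corrected_estimate = np.sum(weights * f) / np.sum(weights)
print(f"naive: {naive_estimate:.3f}, weighted: {corrected_estimate:.3f}, true: 1.000")
```

With a reasonably calibrated classifier, the self-normalized importance-weighted estimate should land closer to the true value of E[x^2] = 1 than the naive average over the biased model samples.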

Author Information

Aditya Grover (Stanford University)
Jiaming Song (Stanford University)

I am a first-year Ph.D. student at Stanford University. I think about problems in machine learning and deep learning under the supervision of Stefano Ermon. I did my undergrad at Tsinghua University, where I was lucky enough to collaborate with Jun Zhu and Lawrence Carin on scalable Bayesian machine learning.

Ashish Kapoor (Microsoft)
Kenneth Tran (Microsoft Research)
Alekh Agarwal (Microsoft Research)
Eric Horvitz (Microsoft Research)
Stefano Ermon (Stanford)
