Bias and Generalization in Deep Generative Models: An Empirical Study
Shengjia Zhao · Hongyu Ren · Arianna Yuan · Jiaming Song · Noah Goodman · Stefano Ermon

Tue Dec 4th 04:05 -- 04:10 PM @ Room 220 E

In high-dimensional settings, density estimation algorithms rely crucially on their inductive bias. Despite recent empirical success, the inductive bias of deep generative models is not well understood. In this paper we propose a framework to systematically investigate bias and generalization in deep generative models of images by probing the learning algorithm with carefully designed training datasets. By measuring properties of the learned distribution, we find consistent patterns of generalization. We verify that these patterns hold across datasets, common models, and architectures.
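The probing idea in the abstract can be sketched in miniature. The snippet below is a hypothetical toy example, not the paper's actual experimental pipeline: it designs a training set in which a single measurable attribute (a stand-in for something like "number of objects per image") takes only the values 2 and 4, fits a simple smoothed density as a stand-in for a trained generative model, and then measures how much probability the learned distribution assigns to the unseen intermediate value 3. All names and the smoothing model are illustrative assumptions; a real study would train a GAN or VAE on images and extract the attribute from generated samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Designed training set: the probed attribute only takes values 2 and 4.
train_attribute = rng.choice([2, 4], size=1000).astype(float)

def fit_smoothed_density(values, bandwidth=0.5, grid=np.arange(0, 9)):
    """Kernel-style smoothing over a discrete grid.

    This is a hypothetical stand-in for a trained deep generative model,
    used only to illustrate 'measure a property of the learned distribution'.
    """
    diffs = grid[None, :] - values[:, None]
    weights = np.exp(-0.5 * (diffs / bandwidth) ** 2).sum(axis=0)
    return grid, weights / weights.sum()

grid, probs = fit_smoothed_density(train_attribute)

# Sample from the "learned" distribution and probe generalization:
# does the model place mass on the unseen intermediate value 3?
samples = rng.choice(grid, size=10_000, p=probs)
print("P(attribute == 3) under the model:", np.mean(samples == 3))
```

The measured probability on the unseen value is one example of the kind of property the paper's framework tracks; comparing it against the training distribution reveals whether the model interpolates, memorizes, or extrapolates.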

Author Information

Shengjia Zhao (Stanford University)
Hongyu Ren (Stanford University)
Arianna Yuan (Stanford University)
Jiaming Song (Stanford University)

I am a first-year Ph.D. student at Stanford University. I think about problems in machine learning and deep learning under the supervision of Stefano Ermon. I did my undergrad at Tsinghua University, where I was lucky enough to collaborate with Jun Zhu and Lawrence Carin on scalable Bayesian machine learning.

Noah Goodman (Stanford University)
Stefano Ermon (Stanford)
