Poster
Learning a 1-layer conditional generative model in total variation
Ajil Jalal · Justin Kang · Ananya Uppal · Kannan Ramchandran · Eric Price
Great Hall & Hall B1+B2 (level 1) #1009
Abstract:
A conditional generative model is a method for sampling from a conditional distribution p(y | x). For example, one may want to sample an image of a cat given the label "cat". A feed-forward conditional generative model is a function g(x, z) that takes the input x and a random seed z, and outputs a sample y from p(y | x). Ideally the distribution of outputs (x, g(x, z)) would be close in total variation to the ideal distribution (x, y).

Generalization bounds for other learning models require assumptions on the distribution of x, even in simple settings like linear regression with Gaussian noise. We show these assumptions are unnecessary in our model, for both linear regression and single-layer ReLU networks. Given samples (x, y), we show how to learn a 1-layer ReLU conditional generative model in total variation. As our result has no assumption on the distribution of inputs x, if we are given access to the internal activations of a deep generative model, we can compose our 1-layer guarantee to progressively learn the deep model using a near-linear number of samples.
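A minimal sketch (not the authors' code) of the feed-forward 1-layer ReLU conditional generative model described above, assuming g(x, z) = ReLU(W [x; z] + b) with the input x and the random seed z concatenated; the dimensions and the weights below are illustrative placeholders rather than learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

d_x, d_z, d_y = 4, 4, 3                 # input, seed, and output dimensions (assumed)
W = rng.normal(size=(d_y, d_x + d_z))   # weights; in the paper these would be learned from samples (x, y)
b = rng.normal(size=d_y)                # bias; likewise learned in practice

def sample(x: np.ndarray) -> np.ndarray:
    """Draw one sample y = g(x, z) using a fresh Gaussian random seed z."""
    z = rng.normal(size=d_z)
    pre_activation = W @ np.concatenate([x, z]) + b
    return np.maximum(pre_activation, 0.0)  # 1-layer ReLU output

x = rng.normal(size=d_x)
print(sample(x))  # one conditional sample for this input x
```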