Learning Representations for Zero-Shot Image Generation without Text
Gautam Singh · Fei Deng · Sungjin Ahn

DALL-E has shown an impressive ability to generate images that are novel (significantly and systematically different from the training distribution) yet realistic. This is possible because it is trained on a dataset of text-image pairs in which the text provides the source of compositionality. Given this result, an important follow-up question is whether such compositionality can still be achieved without conditioning on text. In this paper, we propose a simple but novel slot-based autoencoding architecture, called SLATE, that achieves this text-free DALL-E by learning compositional slot-based representations purely from images, an ability that DALL-E lacks. Unlike existing object-centric representation models, which decode pixels independently for each slot and each pixel location and compose them via mixture-based alpha composition, we propose to condition an Image GPT decoder on the slots, enabling more flexible generation by capturing complex interactions among the pixels and the slots. In experiments, we show that this simple architecture achieves zero-shot generation of novel images without text and better generation quality than models based on mixture decoders.
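To make the described architecture concrete, below is a minimal PyTorch sketch of the two pieces the abstract names: a simplified Slot Attention module that infers slots from image features, and an autoregressive transformer decoder over discrete image tokens that cross-attends to those slots in place of DALL-E's text conditioning. All module sizes, the two-layer decoder, the class names, and the assumption that images have already been encoded into features and discrete tokens by a pretrained tokenizer are illustrative choices for this sketch, not the authors' implementation.

```python
# Minimal SLATE-style sketch, based only on the abstract's high-level
# description. Names and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn


class SlotAttention(nn.Module):
    """Simplified Slot Attention: slots iteratively compete for input features."""

    def __init__(self, num_slots=4, dim=128, iters=3):
        super().__init__()
        self.iters, self.scale = iters, dim ** -0.5
        self.slots_init = nn.Parameter(torch.randn(1, num_slots, dim))
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_in = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)

    def forward(self, x):                         # x: (B, N, dim) input features
        x = self.norm_in(x)
        k, v = self.to_k(x), self.to_v(x)
        slots = self.slots_init.expand(x.size(0), -1, -1)
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            # Softmax over the slot axis makes slots compete for each input.
            attn = (q @ k.transpose(1, 2) * self.scale).softmax(dim=1)
            attn = attn / attn.sum(dim=-1, keepdim=True)
            updates = attn @ v                     # (B, num_slots, dim)
            slots = self.gru(
                updates.reshape(-1, updates.size(-1)),
                slots.reshape(-1, slots.size(-1)),
            ).view_as(slots)
        return slots


class SlateSketch(nn.Module):
    """Autoregressive transformer decoder over discrete image tokens,
    cross-attending to slots instead of text tokens."""

    def __init__(self, vocab_size=512, dim=128, num_slots=4, seq_len=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        self.pos_emb = nn.Parameter(torch.zeros(1, seq_len, dim))
        self.slot_attn = SlotAttention(num_slots=num_slots, dim=dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens, features):
        # tokens:   (B, T) discrete codes from a pretrained tokenizer (e.g. a dVAE)
        # features: (B, N, dim) encoder features from which slots are inferred
        slots = self.slot_attn(features)
        h = self.tok_emb(tokens) + self.pos_emb[:, : tokens.size(1)]
        T = tokens.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.decoder(h, memory=slots, tgt_mask=causal)
        return self.head(h)                        # next-token logits per position


# Shape check: logits for a batch of 2 sequences of 64 tokens.
model = SlateSketch()
logits = model(torch.randint(0, 512, (2, 64)), torch.randn(2, 16, 128))
print(logits.shape)  # torch.Size([2, 64, 512])
```

At generation time, slots inferred from one image (or recombined across images) would condition autoregressive sampling of tokens, which the tokenizer's decoder would then map back to pixels; this is the mechanism by which slot recombination could yield zero-shot novel images under the assumptions above.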

Author Information

Gautam Singh (Rutgers University)

I am starting my second year as a Ph.D. student in the Department of Computer Science at Rutgers University. My focus area is probabilistic generative models. Prior to this, I worked at IBM Research India for three years after completing my undergraduate degree at IIT Guwahati.

Fei Deng (Rutgers University)
Sungjin Ahn (KAIST)