Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
Chitwan Saharia · William Chan · Saurabh Saxena · Lala Li · Jay Whang · Emily Denton · Kamyar Ghasemipour · Raphael Gontijo Lopes · Burcu Karagol Ayan · Tim Salimans · Jonathan Ho · David Fleet · Mohammad Norouzi

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #912

We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g., T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment.
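The key design the abstract describes is conditioning a diffusion model's denoiser on embeddings from a frozen, text-only pretrained language model (e.g., T5), rather than an image-text encoder. The toy sketch below illustrates that data flow only; the encoder, denoiser, and update rule are all hypothetical stand-ins (a hash-based embedding and a linear "epsilon" predictor), not Imagen's actual T5 encoder, cascaded U-Nets, or sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_text_encoder(prompt: str, dim: int = 8) -> np.ndarray:
    """Hypothetical stand-in for a frozen pretrained text encoder (e.g., T5).
    Maps a prompt to a fixed-length embedding; never updated during training."""
    vec = np.zeros(dim)
    for tok in prompt.lower().split():
        vec[hash(tok) % dim] += 1.0            # toy bag-of-words hashing
    return vec / max(np.linalg.norm(vec), 1e-8)

def denoiser(x_t: np.ndarray, t: int, text_emb: np.ndarray) -> np.ndarray:
    """Hypothetical noise predictor: in a real system this is a large U-Net
    that attends to the text embeddings at every resolution."""
    return 0.9 * x_t + 0.1 * text_emb

def sample(prompt: str, steps: int = 10, dim: int = 8) -> np.ndarray:
    """Iteratively denoise from pure Gaussian noise, guided by the text."""
    emb = frozen_text_encoder(prompt, dim)
    x = rng.standard_normal(dim)               # start from noise
    for t in reversed(range(steps)):
        eps = denoiser(x, t, emb)              # predict noise given text
        x = x - eps / steps                    # crude denoising update
    return x

img = sample("a corgi playing a flame-throwing trumpet")
print(img.shape)
```

The point the abstract makes about scaling is orthogonal to this sketch: because the text encoder is frozen and pretrained on text alone, it can be swapped for a much larger language model without retraining it, which the authors found improves fidelity and alignment more than growing the diffusion model itself.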

Author Information

Chitwan Saharia (Google)
William Chan (Carnegie Mellon University)
Saurabh Saxena (Google)
Lala Li (Google)
Jay Whang (University of Texas at Austin)
Emily Denton (Google)
Kamyar Ghasemipour (Robotics @ Google, University of Toronto, Vector Institute)
Raphael Gontijo Lopes (Google Brain)
Burcu Karagol Ayan (Google)

Burcu Karagol Ayan is a software engineer at Google working on language understanding and responsible AI for multimodal generative models. She holds a PhD from the University of Maryland.

Tim Salimans (Google Brain Amsterdam)
Jonathan Ho (Google)
David Fleet (Google Research, Brain Team and University of Toronto)
Mohammad Norouzi (Google Brain)