

Poster

Phased Consistency Model

Fu-Yun Wang · Zhaoyang Huang · Alexander Bergman · Dazhong Shen · Peng Gao · Michael Lingelbach · Keqiang Sun · Weikang Bian · Guanglu Song · Yu Liu · Hongsheng Li · Xiaogang Wang

East Exhibit Hall A-C #2810
[ Project Page ]
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The consistency model (CM) has recently made significant progress in accelerating the generation of diffusion models. However, its application to high-resolution, text-conditioned image generation in the latent space (a.k.a., LCM) remains unsatisfactory. In this paper, we identify three key flaws in the current design of LCM. We investigate the reasons behind these limitations and propose the Phased Consistency Model (PCM), which generalizes the design space and addresses all identified limitations. Our evaluations demonstrate that PCM significantly outperforms LCM across 1-16 step generation settings. While PCM is specifically designed for multi-step refinement, its 1-step generation results are superior or comparable to those of previous state-of-the-art methods designed specifically for 1-step generation. Furthermore, we show that PCM's methodology is versatile and applicable to video generation, enabling us to train a state-of-the-art few-step text-to-video generator.
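The abstract does not spell out the sampling procedure, so the following minimal Python sketch only illustrates the general idea behind multi-step (1-16 step) phased generation: split the denoising trajectory into phases and take one consistency jump per phase, re-noising between phases. All names (`consistency_fn`, `phased_consistency_sample`, `phase_edges`), the placeholder network, and the re-noising rule are illustrative assumptions, not the authors' implementation or API.

```python
import torch

def consistency_fn(x_t, t, cond):
    # Placeholder for a distilled consistency network (a real PCM checkpoint
    # would be a text-conditioned UNet/DiT). Dummy computation for flow only.
    return x_t * (1.0 - t) + cond * t

def phased_consistency_sample(cond, shape, phase_edges, generator=None):
    """Sketch of phased few-step sampling: one consistency jump per phase,
    with a simple (assumed) re-noising step between phase boundaries."""
    x = torch.randn(shape, generator=generator)  # start from pure noise
    for i, t in enumerate(phase_edges):          # e.g. [1.0, 0.75, 0.5, 0.25]
        x0_est = consistency_fn(x, torch.tensor(t), cond)  # jump within phase
        if i + 1 < len(phase_edges):
            t_next = phase_edges[i + 1]
            noise = torch.randn(shape, generator=generator)
            # re-noise the estimate to the next (lower) noise level
            x = (1.0 - t_next) * x0_est + t_next * noise
        else:
            x = x0_est
    return x

# Usage: 4-step generation with four phases (latent-space shapes are illustrative)
sample = phased_consistency_sample(
    cond=torch.zeros(1, 4, 64, 64),
    shape=(1, 4, 64, 64),
    phase_edges=[1.0, 0.75, 0.5, 0.25],
)
print(sample.shape)
```

Fewer phase edges would give the 1-step setting (a single jump from pure noise), while more edges correspond to the multi-step refinement regime the abstract describes.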
