

Poster

VideoTetris: Towards Compositional Text-to-Video Generation

Ye Tian · Ling Yang · Haotian Yang · Yuan Gao · Yufan Deng · Xintao Wang · Zhaochen Yu · Xin Tao · Pengfei Wan · Di ZHANG · Bin CUI

East Exhibit Hall A-C #1806
[ Project Page ]
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Diffusion models have demonstrated great success in text-to-video (T2V) generation. However, existing methods often struggle with complex or long video generation scenarios that involve multiple objects or dynamically changing object counts. To address these limitations, we propose VideoTetris, a novel framework that enables compositional T2V generation. Specifically, we propose spatio-temporal compositional diffusion, which precisely follows complex textual semantics by manipulating and composing the attention maps of the denoising network spatially and temporally. Moreover, we propose a dynamic-aware data processing pipeline and a consistency regularization method to enhance the consistency of auto-regressive video generation. Extensive experiments demonstrate that VideoTetris achieves strong qualitative and quantitative results in compositional T2V generation. Code is available at: https://github.com/YangLing0818/VideoTetris
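
As a rough illustration of the spatial composition idea only (not the authors' implementation), the sketch below blends cross-attention outputs computed separately for each sub-prompt using region masks, and switches the spatial layout halfway through the clip to mimic a composition that changes over time. The function names (compose_cross_attention, region_masks_for_frame) and the two-phase temporal schedule are hypothetical; the paper's method operates on the attention maps of the denoising network as described in the abstract.

import torch

def compose_cross_attention(attn_outputs, region_masks):
    # attn_outputs: list of N tensors, each (B, H*W, C), one per sub-prompt
    #               (e.g. attention computed with "a cat" vs. "a dog").
    # region_masks: list of N tensors, each (H*W, 1) with values in [0, 1],
    #               marking where each sub-prompt should apply.
    # Returns a (B, H*W, C) tensor: a mask-weighted blend, normalized so
    # overlapping regions average their contributions.
    weights = torch.stack(region_masks, dim=0)                      # (N, H*W, 1)
    weights = weights / weights.sum(dim=0, keepdim=True).clamp(min=1e-6)
    outs = torch.stack(attn_outputs, dim=0)                         # (N, B, H*W, C)
    return (weights.unsqueeze(1) * outs).sum(dim=0)                 # (B, H*W, C)

def region_masks_for_frame(frame_idx, num_frames, layouts):
    # Toy temporal schedule (hypothetical): the first half of the video uses
    # one spatial layout, the second half another, so the composition of
    # objects can change over time.
    phase = "early" if frame_idx < num_frames // 2 else "late"
    return layouts[phase]

if __name__ == "__main__":
    B, HW, C, T = 1, 64, 8, 16
    left = torch.zeros(HW, 1); left[:HW // 2] = 1.0                 # left half of the frame
    right = 1.0 - left                                               # right half of the frame
    layouts = {"early": [left, right], "late": [right, left]}
    per_prompt_attn = [torch.randn(B, HW, C), torch.randn(B, HW, C)]
    for t in range(T):
        masks = region_masks_for_frame(t, T, layouts)
        composed = compose_cross_attention(per_prompt_attn, masks)  # (1, 64, 8)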
