

Poster
in
Workshop: Machine Learning for Creativity and Design

Towards Real-Time Text2Video via CLIP-Guided, Pixel-Level Optimization

Peter Schaldenbrand · Zhixuan Liu · Jean Oh


Abstract:

We introduce an approach to generating videos from a series of given language descriptions. Frames of the video are generated sequentially and optimized under guidance from the CLIP image-text encoder: the method iterates through the language descriptions, weighting the current description higher than the others. Rather than optimizing through an image generator model, which tends to be computationally heavy, the proposed approach computes the CLIP loss directly at the pixel level, capturing the general content of each description at a speed suitable for near real-time systems. The approach can generate videos at up to 720p resolution, with variable frame rates and arbitrary aspect ratios, at a rate of 1-2 frames per second. Please visit our website to view videos and access our open-source code: https://pschaldenbrand.github.io/text2video/.
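The optimization scheme described above (sequential frames, a weighted loss over all descriptions with the current one emphasized, and the next frame initialized from the previous one) can be sketched as follows. This is not the authors' code: a real implementation would use CLIP's image-text similarity as the per-description loss; here a simple squared error toward a scalar target per description stands in for the CLIP loss so the example runs without a GPU or the CLIP model, and all names and parameters are illustrative assumptions.

```python
def weighted_loss_grad(pixels, targets, current_idx, current_weight=4.0):
    """Gradient of a weighted sum of per-description losses w.r.t. pixels.

    The current description is weighted higher than the others, mirroring
    the scheme of iterating through descriptions while emphasizing the
    active one. Each loss here is (p - t)^2, a stand-in for the CLIP loss.
    """
    grads = [0.0] * len(pixels)
    total_w = 0.0
    for i, t in enumerate(targets):
        w = current_weight if i == current_idx else 1.0
        total_w += w
        for j, p in enumerate(pixels):
            grads[j] += w * 2.0 * (p - t)  # d/dp of w * (p - t)^2
    return [g / total_w for g in grads]

def generate_video(targets, n_pixels=8, steps_per_frame=50, lr=0.05):
    """Generate one frame per description by pixel-level gradient descent.

    Each frame is initialized from the previous frame's pixels, which
    gives temporal coherence as the active description changes.
    """
    pixels = [0.5] * n_pixels  # initial frame (mid-gray)
    frames = []
    for idx in range(len(targets)):
        for _ in range(steps_per_frame):
            g = weighted_loss_grad(pixels, targets, idx)
            pixels = [p - lr * gj for p, gj in zip(pixels, g)]
        frames.append(list(pixels))
    return frames

# Three "descriptions", each reduced to a scalar target for illustration.
frames = generate_video([0.1, 0.9, 0.4])
```

Because each frame descends on a loss dominated by its own description but still influenced by the others, consecutive frames drift smoothly rather than jumping, which is the property that makes the sequential scheme suitable for video.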
