

Poster in Workshop: NeurIPS 2023 Workshop on Diffusion Models

Motion Flow Matching for Efficient Human Motion Synthesis and Editing

Tao Hu · Wenzhe Yin · Pingchuan Ma · Yunlu Chen · Basura Fernando · Yuki M Asano · Efstratios Gavves · Pascal Mettes · Björn Ommer · Cees Snoek


Abstract:

Human motion synthesis is a fundamental task in computer animation. Recent methods based on diffusion models or GPT-style architectures demonstrate commendable performance but suffer from slow sampling or the accumulation of errors. In this paper, we propose Motion Flow Matching, a novel generative model for human motion generation that features efficient sampling and effectiveness in motion editing applications. Our method reduces the sampling complexity from 1000 steps in previous diffusion models to just 10 steps, while achieving comparable performance on text-to-motion and action-to-motion generation benchmarks. Notably, our approach establishes a new state-of-the-art Fréchet Inception Distance on the KIT-ML dataset. Moreover, we introduce a straightforward motion editing paradigm named trajectory rewriting, which leverages the deterministic ODE formulation of the generative model, and apply it to various editing scenarios including motion prediction, motion in-betweening, motion interpolation, and upper-body editing.
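To make the abstract's key claims concrete, the sketch below shows standard conditional flow matching training (regressing the straight-line velocity between noise and data) and the few-step Euler ODE sampler that the 1000-to-10 step reduction refers to, plus an inpainting-style editing loop that is one plausible illustrative reading of trajectory rewriting. The velocity network `model(x_t, t, cond)`, the tensor shapes, and the `edit_sample` helper are hypothetical stand-ins, not the authors' released code.

```python
import torch


def flow_matching_loss(model, x1, cond):
    """Train the network to regress the constant velocity (x1 - x0)
    along the straight noise-to-data path x_t = (1 - t) x0 + t x1."""
    x0 = torch.randn_like(x1)                      # noise endpoint
    t = torch.rand(x1.shape[0], device=x1.device)  # uniform time in [0, 1]
    t_ = t.view(-1, *([1] * (x1.dim() - 1)))       # broadcast over frames
    xt = (1 - t_) * x0 + t_ * x1                   # point on the linear path
    v_target = x1 - x0                             # target velocity
    return ((model(xt, t, cond) - v_target) ** 2).mean()


@torch.no_grad()
def sample(model, cond, shape, steps=10, device="cpu"):
    """Integrate the learned ODE dx/dt = v(x, t) with a few Euler steps;
    steps=10 mirrors the sampling budget claimed in the abstract."""
    x = torch.randn(shape, device=device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i * dt, device=device)
        x = x + dt * model(x, t, cond)
    return x


@torch.no_grad()
def edit_sample(model, cond, x_ref, known_mask, steps=10):
    """Inpainting-style editing sketch: at each step, frames marked in
    `known_mask` are projected onto the reference motion's own linear
    path, so observed frames are preserved while the rest is regenerated.
    This illustrates the spirit of trajectory rewriting, not the paper's
    exact procedure."""
    x0_ref = torch.randn_like(x_ref)
    x = torch.randn_like(x_ref)
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = torch.where(known_mask, (1 - t) * x0_ref + t * x_ref, x)
        tt = torch.full((x.shape[0],), t, device=x.device)
        x = x + dt * model(x, tt, cond)
    return torch.where(known_mask, x_ref, x)
```

With a motion tensor of shape `(batch, frames, features)`, masking the first frames of `known_mask` corresponds to motion prediction, masking both ends to in-betweening, and masking lower-body feature channels to upper-body editing, matching the editing scenarios listed above.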
