Poster
M³GPT: An Advanced Multimodal, Multitask Framework for Motion Comprehension and Generation
Mingshuang Luo · RuiBing Hou · Zhuo Li · Hong Chang · Zimo Liu · Yaowei Wang · Shiguang Shan
Abstract:
This paper presents M³GPT, an advanced multimodal, multitask framework for motion comprehension and generation. M³GPT operates on three fundamental principles. The first focuses on creating a unified representation space for various motion-relevant modalities. We employ discrete vector quantization for multimodal conditional signals, such as text, music, and motion/dance, enabling seamless integration into a large language model (LLM) with a single vocabulary. The second involves modeling motion generation directly in the raw motion space. This strategy circumvents the information loss associated with a discrete tokenizer, resulting in more detailed and comprehensive motion generation. Third, M³GPT learns to model the connections and synergies among various motion-relevant tasks. Text, the most familiar and well-understood modality for LLMs, is utilized as a bridge to establish connections between different motion tasks, facilitating mutual reinforcement. To our knowledge, M³GPT is the first model capable of comprehending and generating motions based on multiple signals. Extensive experiments highlight M³GPT's superior performance across various motion-relevant tasks and its powerful zero-shot generalization capabilities for extremely challenging tasks. Project page: https://github.com/luomingshuang/M3GPT
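The "single vocabulary" idea in the abstract can be pictured with a minimal sketch: continuous motion (or music) features are vector-quantized into discrete codebook indices, which are then offset so they share one token id space with ordinary text tokens. The module names, vocabulary sizes, and offset scheme below are illustrative assumptions for exposition, not M³GPT's actual implementation.

```python
import torch
import torch.nn as nn

class ToyMotionQuantizer(nn.Module):
    """Nearest-neighbour vector quantizer: maps continuous motion frames to codebook indices."""
    def __init__(self, num_codes: int = 512, dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)  # learned code vectors

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) continuous motion features
        flat = x.reshape(-1, x.shape[-1])                      # (batch*frames, dim)
        dists = torch.cdist(flat, self.codebook.weight)        # distance to every code vector
        return dists.argmin(dim=-1).reshape(x.shape[:-1])      # (batch, frames) discrete code ids

def to_unified_vocab(text_ids, motion_codes, text_vocab_size=32000):
    """Offset motion code ids past the text vocabulary so one LLM embedding table covers both."""
    motion_ids = motion_codes + text_vocab_size                # ids in [32000, 32000 + num_codes)
    return torch.cat([text_ids, motion_ids], dim=-1)           # one mixed token sequence

if __name__ == "__main__":
    vq = ToyMotionQuantizer()
    motion = torch.randn(1, 8, 64)                             # toy clip: 8 frames of features
    codes = vq(motion)                                         # (1, 8) discrete motion tokens
    text = torch.randint(0, 32000, (1, 5))                     # toy text prompt ids
    print(to_unified_vocab(text, codes).shape)                 # torch.Size([1, 13])
```

In such a scheme, the LLM treats motion tokens like any other vocabulary entries, which is what allows text, music, and motion tasks to be mixed in a single sequence model; the paper's second principle (generating in raw motion space) would replace the discrete decoding step rather than this tokenized conditioning.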