Poster

One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos

Zechen Bai · Tong He · Haiyang Mei · Pichao Wang · Ziteng Gao · Joya Chen · Lei Liu · Zheng Zhang · Mike Zheng Shou

East Exhibit Hall A-C #1708
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

We introduce VideoLISA, a video-based multimodal large language model designed to tackle the problem of language-instructed reasoning segmentation in videos. Leveraging the reasoning capabilities and world knowledge of large language models, and augmented by the Segment Anything Model, VideoLISA generates temporally consistent segmentation masks in videos based on language instructions. Existing image-based methods, such as LISA, struggle with video tasks due to the additional temporal dimension, which requires temporal dynamic understanding and consistent segmentation across frames. VideoLISA addresses these challenges by integrating a Sparse Dense Sampling strategy into the video-LLM, which balances temporal context and spatial detail within computational constraints. Additionally, we propose a One-Token-Seg-All approach using a specially designed token, enabling the model to segment and track objects across multiple frames. Extensive evaluations on diverse benchmarks, including our newly introduced ReasonVOS benchmark, demonstrate VideoLISA's superior performance in video object segmentation tasks involving complex reasoning, temporal understanding, and object tracking. While optimized for videos, VideoLISA also shows promising generalization to image segmentation, revealing its potential as a unified foundation model for language-instructed object segmentation. Code and model will be available at: https://github.com/showlab/VideoLISA.
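
To make the abstract's two mechanisms concrete, here is a minimal, hypothetical sketch. Everything in it (the function names `sparse_dense_sample` and `one_token_seg_all`, the `ToyMaskDecoder` stand-in, and all tensor shapes and sizes) is an illustrative assumption, not the authors' implementation; the actual code is in the linked repository.

```python
# Illustrative sketch only -- names, shapes, and sizes are assumptions,
# not the actual VideoLISA implementation.
import torch
import torch.nn.functional as F


def sparse_dense_sample(frames, num_dense=4, sparse_size=112):
    """Split a clip into a few full-resolution 'dense' frames (spatial
    detail) and a low-resolution 'sparse' stream over all frames
    (temporal context), trading detail for context under a token budget.

    frames: (T, C, H, W) video clip.
    """
    T = frames.shape[0]
    # Uniformly pick a handful of frames to keep at full resolution.
    dense_idx = torch.linspace(0, T - 1, num_dense).long()
    dense_frames = frames[dense_idx]
    # Every frame also contributes a heavily downsampled view.
    sparse_frames = F.interpolate(
        frames, size=(sparse_size, sparse_size),
        mode="bilinear", align_corners=False,
    )
    return dense_frames, sparse_frames


class ToyMaskDecoder(torch.nn.Module):
    """Stand-in for a SAM-style mask decoder: correlates one prompt
    embedding with per-pixel features to produce mask logits."""

    def forward(self, feat, prompt):
        # feat: (1, D, h, w), prompt: (1, D) -> (1, h, w) logits.
        return torch.einsum("bdhw,bd->bhw", feat, prompt)


def one_token_seg_all(llm_hidden, seg_pos, frame_features, mask_decoder):
    """Use the hidden state of a single special token emitted by the LLM
    as a shared prompt that segments the target object in every frame.

    llm_hidden:     (L, D) LLM output hidden states.
    seg_pos:        index of the special segmentation token.
    frame_features: (T, D, h, w) per-frame features from a SAM-style encoder.
    """
    prompt = llm_hidden[seg_pos].unsqueeze(0)  # (1, D), one token for all frames
    masks = [mask_decoder(feat.unsqueeze(0), prompt) for feat in frame_features]
    return torch.stack(masks, dim=0)           # (T, 1, h, w) mask logits


# Toy usage with random tensors standing in for real features.
frames = torch.randn(16, 3, 224, 224)   # a 16-frame clip
dense, sparse = sparse_dense_sample(frames)
hidden = torch.randn(32, 256)           # fake LLM hidden states
feats = torch.randn(16, 256, 14, 14)    # fake per-frame mask features
masks = one_token_seg_all(hidden, seg_pos=10,
                          frame_features=feats,
                          mask_decoder=ToyMaskDecoder())
print(dense.shape, sparse.shape, masks.shape)
```

The design point the sketch tries to capture is that a single shared prompt token ties the object's identity together across frames: because every frame is decoded from the same embedding, the masks stay consistent over time without per-frame re-prompting.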
