
Bringing Image Scene Structure to Video via Frame-Clip Consistency of Object Tokens
Elad Ben Avraham · Roei Herzig · Karttikeya Mangalam · Amir Bar · Anna Rohrbach · Leonid Karlinsky · Trevor Darrell · Amir Globerson

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #230

Recent action recognition models have achieved impressive results by integrating objects, their locations, and their interactions. However, obtaining dense structured annotations for each frame is tedious and time-consuming, making these methods expensive to train and less scalable. At the same time, if a small set of annotated images is available, either within or outside the domain of interest, how could we leverage these for a video downstream task? We propose a learning framework, StructureViT (SViT for short), which demonstrates how utilizing the structure of a small number of images available only during training can improve a video model. SViT relies on two key insights. First, as both images and videos contain structured information, we enrich a transformer model with a set of object tokens that can be used across images and videos. Second, the scene representations of individual video frames should "align" with those of still images. This is achieved via a Frame-Clip Consistency loss, which ensures the flow of structured information between images and videos. We explore a particular instantiation of scene structure, namely a Hand-Object Graph, consisting of hands and objects with their locations as nodes, and physical relations of contact/no-contact as edges. SViT shows strong performance improvements on multiple video understanding tasks and datasets, including first place in the Ego4D CVPR'22 Point of No Return Temporal Localization Challenge. For code and pretrained models, visit the project page at https://eladb3.github.io/SViT/.
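To make the Frame-Clip Consistency idea from the abstract concrete, here is a minimal illustrative sketch (not the authors' code): a video model produces a set of object tokens for the full clip, and the same tokens computed from individual frames treated as still images should stay close to the clip-level ones. The function name and the squared-error form of the loss are assumptions for illustration only; the actual SViT loss may differ.

```python
def frame_clip_consistency_loss(clip_tokens, frame_tokens):
    """Mean squared distance between clip-level and per-frame object tokens.

    clip_tokens:  list of object-token vectors from the full video clip.
    frame_tokens: list (one entry per frame) of matching token vectors
                  obtained by running single frames through the model.

    Illustrative sketch only; names and loss form are assumptions.
    """
    total, count = 0.0, 0
    for per_frame in frame_tokens:
        # Compare each clip-level token with its per-frame counterpart.
        for c_vec, f_vec in zip(clip_tokens, per_frame):
            total += sum((c - f) ** 2 for c, f in zip(c_vec, f_vec))
            count += len(c_vec)
    return total / count if count else 0.0
```

Minimizing this term pushes the video model's per-frame scene representation toward the structured representation learned from annotated still images, which is how structure available only at training time can transfer to the video task.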

Author Information

Elad Ben Avraham (Tel Aviv University)
Roei Herzig (Tel Aviv University)
Karttikeya Mangalam (UC Berkeley (BAIR))

I’m a first-year Ph.D. student in Computer Science at the Department of Electrical Engineering & Computer Sciences (EECS) at the University of California, Berkeley, where I’m jointly advised by Prof. Jitendra Malik and Prof. Yi Ma.

Amir Bar (TAU / UC Berkeley)

Amir Bar is a fourth-year Ph.D. candidate at Tel Aviv University and a visiting Ph.D. researcher at UC Berkeley, advised by Amir Globerson and Trevor Darrell. His research centers on self-supervised learning: using large amounts of unlabeled images and videos to enable computers to develop visual understanding. Lately, his focus has been on improving learning algorithms for Masked Image Modeling and Visual Prompting, which adapts computer vision models at test time to novel tasks without changing the model weights or task-specific fine-tuning.

Anna Rohrbach (UC Berkeley)
Leonid Karlinsky (Weizmann Institute of Science)
Trevor Darrell (Electrical Engineering & Computer Science Department)
Amir Globerson (Tel Aviv University, Google)
