Egocentric Video-Language Pretraining
Kevin Qinghong Lin · Jinpeng Wang · Mattia Soldan · Michael Wray · Rui Yan · Eric Z. XU · Difei Gao · Rong-Cheng Tu · Wenzhe Zhao · Weijie Kong · Chengfei Cai · WANG HongFa · Dima Damen · Bernard Ghanem · Wei Liu · Mike Zheng Shou

Thu Dec 08 05:00 PM -- 07:00 PM (PST)

Video-Language Pretraining (VLP), which aims to learn transferable representations to advance a wide range of video-text downstream tasks, has recently received increasing attention. The best-performing works rely on large-scale, 3rd-person video-text datasets, such as HowTo100M. In this work, we exploit the recently released Ego4D dataset to pioneer Egocentric VLP along three directions. (i) We create EgoClip, a 1st-person video-text pretraining dataset comprising 3.8M clip-text pairs well-chosen from Ego4D, covering a large variety of human daily activities. (ii) We propose a novel pretraining objective, dubbed EgoNCE, which adapts video-text contrastive learning to the egocentric domain by mining egocentric-aware positive and negative samples. (iii) We introduce EgoMCQ, a development benchmark that is close to EgoClip and hence can support effective validation and fast exploration of our design decisions in EgoClip and EgoNCE. Furthermore, we demonstrate strong performance on five egocentric downstream tasks across three datasets: video-text retrieval on EPIC-KITCHENS-100; action recognition on Charades-Ego; and natural language query, moment query, and object state change classification on the Ego4D challenge benchmarks. The dataset and code are available at https://github.com/showlab/EgoVLP.
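To make the contrastive setup concrete: the standard video-text objective that EgoNCE builds on treats each matched clip-text pair in a batch as a positive and all other in-batch pairings as negatives (an InfoNCE-style loss). The sketch below shows only this generic baseline, not the paper's egocentric-aware positive/negative mining; the function name and NumPy implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def info_nce(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired video/text embeddings.

    Row i of each matrix is a matched (positive) pair; every other
    in-batch pairing serves as a negative. EgoNCE extends this with
    egocentric-aware positive and negative mining, which is omitted here.
    """
    # L2-normalize so dot products are cosine similarities.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sim = (v @ t.T) / temperature  # (B, B) similarity logits

    def xent_diag(logits):
        # Cross-entropy with the diagonal (matched pair) as the target.
        logits = logits - logits.max(axis=1, keepdims=True)
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))

    # Average the video-to-text and text-to-video directions.
    return 0.5 * (xent_diag(sim) + xent_diag(sim.T))
```

With perfectly aligned embeddings the diagonal dominates the similarity matrix and the loss approaches zero, whereas random pairings give a loss near log of the batch size.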

Author Information

Kevin Qinghong Lin (National University of Singapore)

I am currently a first-year Ph.D. student in Show Lab @ NUS, working with Prof. Mike Shou. Before that, I spent a wonderful year at Tencent as an intern, working with Dr. Wei Liu. I obtained my B.Sc. and M.Sc. degrees at Shenzhen University. My research interests lie in Multi-Modal Learning, especially Vision-Language Pretraining.

Mattia Soldan (KAUST)
Michael Wray (University of Bristol)
Rui Yan (Nanjing University of Science and Technology)
Eric Z. XU (National University of Singapore)
Difei Gao (NUS)
Rong-Cheng Tu (Beijing Institute of Technology)
Wenzhe Zhao (South China University of Technology)
Weijie Kong (Peking University)
Chengfei Cai (Zhejiang University)
WANG HongFa (Chinese Academy of Sciences)
Dima Damen (University of Bristol)

Professor of Computer Vision at the University of Bristol.

Bernard Ghanem (KAUST)
Wei Liu (Tencent)
Mike Zheng Shou (National University of Singapore)
