
Poster
VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer
Zineng Tang · Jaemin Cho · Hao Tan · Mohit Bansal

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Since visual perception can provide rich information beyond text descriptions for world understanding, there has been increasing interest in leveraging visual grounding for language learning. Recently, vokenization (Tan and Bansal, 2020) has attracted attention by using the predictions of a text-to-image retrieval model as labels for language model supervision. Despite its success, the method suffers from the approximation error of using a finite set of image labels and from the limited vocabulary diversity of a small image-text dataset. To overcome these limitations, we present VidLanKD, a video-language knowledge distillation method for improving language understanding. We train a multi-modal teacher model on a video-text dataset, and then transfer its knowledge to a student language model with a text dataset. To avoid this approximation error, we propose to use different knowledge distillation objectives. In addition, the use of a large-scale video-text dataset helps the model learn a more diverse and richer vocabulary. In our experiments, VidLanKD achieves consistent improvements over text-only language models and vokenization models on several downstream language understanding tasks, including GLUE, SQuAD, and SWAG. We also demonstrate the improved world knowledge, physical reasoning, and temporal reasoning capabilities of our model by evaluating on the GLUE-diagnostics, PIQA, and TRACIE datasets. Lastly, we present comprehensive ablation studies as well as visualizations of the learned text-to-video grounding results of our teacher and student language models.
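For readers unfamiliar with the teacher-to-student transfer the abstract describes, here is a minimal sketch of one standard knowledge distillation objective: the soft-label KL divergence of Hinton et al. (2015), in which a student matches the teacher's temperature-softened output distribution. This is purely illustrative background — the paper proposes different KD objectives precisely to avoid the approximation error of this kind of label-based supervision — and the function names below are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over the last axis, with temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_label_distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) between temperature-softened distributions.

    Scaled by T^2, as is conventional, so gradient magnitudes stay
    comparable across temperatures. Illustrative only: VidLanKD uses
    other distillation objectives.
    """
    p = softmax(np.asarray(teacher_logits, dtype=float), temperature)
    q = softmax(np.asarray(student_logits, dtype=float), temperature)
    # Small epsilon guards against log(0) for near-zero probabilities.
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl) * temperature ** 2)
```

In this framing, the loss is zero when the student reproduces the teacher's distribution exactly and grows as the two diverge; a training loop would add it (weighted) to the student's usual language-modeling loss.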

Author Information

Zineng Tang (University of North Carolina, Chapel Hill)
Jaemin Cho (UNC Chapel Hill)
Hao Tan (University of North Carolina, Chapel Hill)
Mohit Bansal (UNC Chapel Hill)
