Poster

A flexible model for training action localization with varying levels of supervision

Guilhem Chéron · Jean-Baptiste Alayrac · Ivan Laptev · Cordelia Schmid

Room 517 AB #149

Keywords: [ Activity and Event Recognition ] [ Video Analysis ] [ Computer Vision ] [ Semi-Supervised Learning ]


Abstract:

Spatio-temporal action detection in videos is typically addressed in a fully-supervised setup, requiring manual annotation of training videos at every frame. Since such annotation is extremely tedious and prohibits scaling, there is a clear need to minimize the amount of manual supervision. In this work we propose a unifying framework that can handle and combine varying types of less demanding weak supervision. Our model is based on discriminative clustering and integrates different types of supervision as constraints on the optimization. We investigate applications of such a model to training setups with alternative supervisory signals, ranging from video-level class labels, through temporal points or sparse action bounding boxes, to full per-frame annotation of action bounding boxes. Experiments on the challenging UCF101-24 and DALY datasets demonstrate competitive performance of our method at a fraction of the supervision used by previous methods. The flexibility of our model enables joint learning from data with different levels of annotation. Experimental results demonstrate a significant gain from adding a few fully supervised examples to otherwise weakly labeled videos.
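To make the core idea concrete, below is a minimal sketch of discriminative clustering with supervision expressed as constraints, in the spirit the abstract describes. It assumes a DIFFRAC-style quadratic objective (eliminating a ridge-regression classifier in closed form), which this line of work builds on; all names (`solve_assignments`, `fixed`, `video_classes`, `lam`) are illustrative, not the paper's API, and the solver is a simple projected-gradient loop rather than the authors' actual optimization.

```python
import numpy as np

def diffrac_cost_matrix(X, lam):
    """Precompute B such that the clustering cost is tr(Y^T B Y).

    DIFFRAC-style relaxation: the optimal ridge-regression classifier W in
    min_W ||Y - XW||^2 + n*lam*||W||^2 has a closed form, leaving a cost
    that is quadratic in the assignment matrix Y. X holds one feature row
    per candidate (e.g., a frame of a human track).
    """
    n, d = X.shape
    inv = np.linalg.inv(X.T @ X + n * lam * np.eye(d))
    return (np.eye(n) - X @ inv @ X.T) / n

def solve_assignments(X, n_classes, fixed=None, video_classes=None,
                      lam=1e-2, steps=200, lr=0.5):
    """Projected gradient on a relaxed assignment matrix Y in [0,1]^{n x k}.

    Weak supervision enters only as constraints on Y:
      * fixed         -- dict {row: class} from fully annotated frames
                         (those rows are clamped to one-hot),
      * video_classes -- class ids allowed by a video-level label
                         (all other columns are zeroed).
    """
    n = X.shape[0]
    B = diffrac_cost_matrix(X, lam)
    Y = np.full((n, n_classes), 1.0 / n_classes)
    for _ in range(steps):
        Y -= lr * (2 * B @ Y)              # gradient of tr(Y^T B Y)
        Y = np.clip(Y, 0.0, 1.0)
        if video_classes is not None:      # video-level label constraint
            mask = np.zeros(n_classes)
            mask[list(video_classes)] = 1.0
            Y *= mask
        if fixed:                          # per-frame box constraint
            for row, cls in fixed.items():
                Y[row] = 0.0
                Y[row, cls] = 1.0
        # Renormalize rows (approximate projection onto the simplex).
        Y /= Y.sum(axis=1, keepdims=True) + 1e-12
    return Y
```

Because every level of supervision is just another constraint set on Y, videos with different annotation types (labels only, temporal points, sparse boxes, full boxes) can be mixed in one optimization, which is what enables the joint learning reported in the abstract.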
