

Poster

More Is Less: Learning Efficient Video Representations by Big-Little Network and Depthwise Temporal Aggregation

Quanfu Fan · Chun-Fu (Richard) Chen · Hilde Kuehne · Marco Pistoia · David Cox

East Exhibition Hall B, C #72

Keywords: [ Applications ] [ Applications -> Activity and Event Recognition ] [ Applications -> Computer Vision ] [ Deep Learning ] [ CNN Architectures ]


Abstract:

Current state-of-the-art models for video action recognition are mostly based on expensive 3D ConvNets, which require large GPU clusters to train and evaluate. To address this problem, we present a lightweight and memory-friendly architecture for action recognition that performs on par with or better than current architectures while using only a fraction of the resources. The proposed architecture combines a deep subnet operating on low-resolution frames with a compact subnet operating on high-resolution frames, allowing for high efficiency and accuracy at the same time. We demonstrate that our approach achieves a 3-4x reduction in FLOPs and a ~2x reduction in memory usage compared to the baseline, which enables training deeper models with more input frames under the same computational budget. To further obviate the need for large-scale 3D convolutions, we propose a temporal aggregation module that models temporal dependencies in a video at a very small additional computational cost. Our models achieve strong performance on several action recognition benchmarks, including Kinetics, Something-Something and Moments-in-Time. The code and models are available at https://github.com/IBM/bLVNet-TAM.
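The abstract's temporal aggregation module adds temporal modeling on top of per-frame 2D features at very small extra cost by convolving each channel across neighboring frames. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, not the released bLVNet-TAM code: the class name, kernel size, residual connection, and tensor layout are illustrative assumptions; see https://github.com/IBM/bLVNet-TAM for the authors' implementation.

```python
import torch
import torch.nn as nn

class TemporalAggregation(nn.Module):
    """Depthwise temporal aggregation over a clip of T frames (illustrative sketch).

    Each channel is aggregated across neighboring frames with its own 1D kernel
    (groups == channels), so the added cost is tiny compared to spatial 2D/3D
    convolutions. Kernel size and the residual connection are assumptions here,
    not the authors' exact settings.
    """
    def __init__(self, channels: int, n_frames: int, kernel_size: int = 3):
        super().__init__()
        self.n_frames = n_frames
        # Depthwise 1D convolution along time: one kernel per channel.
        self.temporal_conv = nn.Conv1d(
            channels, channels, kernel_size,
            padding=kernel_size // 2, groups=channels, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch * n_frames, channels, H, W) as produced by a 2D backbone.
        nt, c, h, w = x.shape
        b = nt // self.n_frames
        # Rearrange so time becomes the convolution axis: (b*H*W, C, T).
        y = x.view(b, self.n_frames, c, h, w).permute(0, 3, 4, 2, 1)
        y = y.reshape(b * h * w, c, self.n_frames)
        y = self.temporal_conv(y)
        # Restore the original (batch * n_frames, channels, H, W) layout.
        y = y.view(b, h, w, c, self.n_frames).permute(0, 4, 3, 1, 2)
        y = y.reshape(nt, c, h, w)
        return x + y  # residual connection keeps per-frame features intact

# Example: two 8-frame clips with 64-channel feature maps of size 28x28.
tam = TemporalAggregation(channels=64, n_frames=8)
feats = torch.randn(2 * 8, 64, 28, 28)   # frames of each clip stacked along batch
out = tam(feats)                          # same shape: (16, 64, 28, 28)
```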
