We introduce a simple yet surprisingly powerful model to incorporate attention in action recognition and human-object interaction tasks. Our proposed attention module can be trained with or without extra supervision, and gives a sizable boost in accuracy while keeping the network size and computational cost nearly the same. It leads to significant improvements over state-of-the-art base architectures on three standard action recognition benchmarks across still images and videos, and establishes a new state of the art on the MPII (12.5% relative improvement) and HMDB (RGB) datasets. We also perform an extensive analysis of our attention module, both empirically and analytically. In terms of the latter, we introduce a novel derivation of bottom-up and top-down attention as low-rank approximations of bilinear pooling methods (typically used for fine-grained classification). From this perspective, our attention formulation suggests a novel characterization of action recognition as a fine-grained recognition problem.
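The low-rank connection described above can be sketched numerically: full bilinear pooling scores each class with an f×f matrix over second-order feature statistics, while a rank-1 factorization of that matrix splits into a class-agnostic (bottom-up) attention map and a class-specific (top-down) linear classifier. The sketch below is illustrative only; shapes and variable names are assumptions, not the paper's code.

```python
import numpy as np

# Illustrative shapes (assumed): n spatial locations, f feature dims, K classes.
n, f, K = 49, 512, 10
rng = np.random.default_rng(0)
X = rng.standard_normal((n, f))       # flattened spatial feature map

# Full bilinear pooling would need an f x f matrix W_k per class.
# A rank-1 approximation W_k = a_k b^T factors it into two vectors:
a = rng.standard_normal((f, K))       # top-down, class-specific weights
b = rng.standard_normal((f, 1))       # bottom-up, class-agnostic saliency

attention = X @ b                     # (n, 1) spatial attention map
pooled = (attention * X).sum(axis=0)  # attention-weighted pooled feature, (f,)
scores = pooled @ a                   # (K,) class scores

# Equivalently, scores[k] = (X a_k)^T (X b): rank-1 bilinear pooling.
scores_bilinear = ((X @ a).T @ (X @ b)).ravel()
assert np.allclose(scores, scores_bilinear)
```

The equivalence check at the end is the point: attention-weighted average pooling followed by a linear classifier computes exactly the rank-1 bilinear score, which is why the attention map can emerge without extra supervision.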
Rohit Girdhar (Carnegie Mellon University)
Deva Ramanan (Carnegie Mellon University)