

Poster

Deep Attentive Tracking via Reciprocative Learning

Shi Pu · Yibing Song · Chao Ma · Honggang Zhang · Ming-Hsuan Yang

Room 517 AB #147

Keywords: [ Computer Vision ] [ Tracking and Motion in Video ]


Abstract:

Visual attention, a concept derived from cognitive neuroscience, facilitates human perception of the most pertinent subset of the sensory data. Recently, significant efforts have been made to exploit attention schemes to advance computer vision systems. For visual tracking, it is often challenging to track target objects undergoing large appearance changes. Attention maps facilitate visual tracking by selectively attending to temporally robust features. Existing tracking-by-detection approaches mainly use additional attention modules to generate feature weights, as the classifiers themselves are not equipped with such mechanisms. In this paper, we propose a reciprocative learning algorithm to exploit visual attention for training deep classifiers. The proposed algorithm consists of feed-forward and backward operations to generate attention maps, which serve as regularization terms coupled with the original classification loss function for training. The deep classifier learns to attend to the regions of target objects that are robust to appearance changes. Extensive experiments on large-scale benchmark datasets show that the proposed attentive tracking method performs favorably against state-of-the-art approaches.
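To make the feed-forward/backward scheme concrete, below is a minimal sketch of the training step the abstract describes: a forward pass scores each image patch, a backward pass of the ground-truth class score with respect to the input yields an attention map, and a regularizer on that map is added to the classification loss. The network, the variance-to-mean regularizer, and all names (`PatchClassifier`, `reciprocative_loss`, `lam`) are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of reciprocative learning for a tracking-by-detection classifier.
# The toy network and the regularizer below are illustrative assumptions;
# only the overall forward/backward attention scheme follows the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchClassifier(nn.Module):
    """Toy CNN scoring a patch as target (class 1) or background (class 0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.fc = nn.Linear(16 * 8 * 8, 2)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def reciprocative_loss(net, patches, labels, lam=1.0):
    patches = patches.requires_grad_(True)
    logits = net(patches)                        # feed-forward pass
    cls_loss = F.cross_entropy(logits, labels)

    # Backward pass: the gradient of each sample's ground-truth class
    # score w.r.t. the input serves as a per-pixel attention map.
    scores = logits.gather(1, labels[:, None]).sum()
    grads = torch.autograd.grad(scores, patches, create_graph=True)[0]
    attention = grads.abs().sum(dim=1)           # (B, H, W), graph kept

    # One plausible regularizer: penalize the variance-to-mean ratio of
    # the attention map, encouraging uniformly high attention on regions
    # that remain discriminative under appearance changes.
    flat = attention.flatten(1)
    reg = (flat.var(dim=1) / (flat.mean(dim=1) + 1e-8)).mean()

    return cls_loss + lam * reg

# Usage: one training step on dummy data.
net = PatchClassifier()
opt = torch.optim.SGD(net.parameters(), lr=1e-3)
x = torch.randn(4, 3, 32, 32)
y = torch.tensor([1, 0, 1, 0])
loss = reciprocative_loss(net, x, y)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the attention map is built with `create_graph=True`, the regularization term is differentiable, so a single optimizer step updates the classifier with respect to both the classification loss and the attention regularizer, requiring no separate attention module.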
