

Oral in Workshop: Gaze Meets ML

Memory-Based Sequential Attention

Jason Stock · Charles Anderson

Sat 16 Dec 12:15 p.m. PST — 12:30 p.m. PST
 
presentation: Gaze Meets ML
Sat 16 Dec 6:15 a.m. PST — 3 p.m. PST

Abstract:

Computational models of sequential attention often use recurrent neural networks, which may lead to information loss over accumulated glimpses and an inability to dynamically reweight glimpses at each step. Addressing the former limitation should result in greater performance, while addressing the latter should enable greater interpretability. In this work, we propose a biologically-inspired model of sequential attention for image classification. Specifically, our algorithm contextualizes the history of observed locations from within an image to inform future gaze points, akin to scanpaths in the biological visual system. We achieve this by using a transformer-based memory module coupled with a reinforcement learning-based training algorithm, improving both task performance and model interpretability. In addition to empirically evaluating our approach on classical vision tasks, we demonstrate the robustness of our algorithm to different initial locations in the image and provide interpretations of sampled locations from within the trajectory.
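The sketch below illustrates the general idea described in the abstract: a glimpse encoder, a transformer encoder acting as memory over the history of glimpses, a stochastic policy head for the next gaze location trained with a REINFORCE-style objective, and a classification head. It is a minimal illustrative sketch in PyTorch, not the authors' implementation; the class and function names (MemorySequentialAttention, extract_glimpse), the patch size, the number of glimpses, and the hybrid loss are all assumptions made for the example.

```python
# Minimal sketch of memory-based sequential attention (not the authors' code).
# Assumes PyTorch; names, sizes, and the training objective are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


def extract_glimpse(images, locs, size=8):
    """Crop a size x size patch around each (x, y) location given in [-1, 1]."""
    B, C, H, W = images.shape
    patches = []
    for img, (x, y) in zip(images, locs):
        cx = int((x.item() + 1) / 2 * (W - size))
        cy = int((y.item() + 1) / 2 * (H - size))
        patches.append(img[:, cy:cy + size, cx:cx + size])
    return torch.stack(patches)  # (B, C, size, size)


class MemorySequentialAttention(nn.Module):
    """Glimpse encoder + transformer memory over the glimpse history, with
    heads for the next gaze location (policy) and the class prediction."""

    def __init__(self, n_classes=10, d_model=128, n_glimpses=6, patch=8, channels=1):
        super().__init__()
        self.n_glimpses, self.patch = n_glimpses, patch
        self.glimpse_enc = nn.Sequential(
            nn.Flatten(), nn.Linear(channels * patch * patch, d_model), nn.ReLU())
        self.loc_enc = nn.Linear(2, d_model)           # encodes where the glimpse came from
        self.memory = nn.TransformerEncoder(           # attends over all past glimpses
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.loc_head = nn.Linear(d_model, 2)          # mean of the next-location policy
        self.cls_head = nn.Linear(d_model, n_classes)  # final classification

    def forward(self, images, loc_std=0.1):
        B = images.size(0)
        loc = torch.zeros(B, 2, device=images.device)  # start at the image center
        tokens, log_probs = [], []
        for _ in range(self.n_glimpses):
            g = extract_glimpse(images, loc, self.patch)
            tokens.append(self.glimpse_enc(g) + self.loc_enc(loc))
            h = self.memory(torch.stack(tokens, dim=1))  # (B, t, d_model)
            mu = torch.tanh(self.loc_head(h[:, -1]))     # policy conditioned on memory
            dist = torch.distributions.Normal(mu, loc_std)
            loc = dist.sample().clamp(-1, 1)             # stochastic next gaze point
            log_probs.append(dist.log_prob(loc).sum(-1))
        logits = self.cls_head(h[:, -1])
        return logits, torch.stack(log_probs, dim=1)     # log-probs for REINFORCE


if __name__ == "__main__":
    model = MemorySequentialAttention()
    x = torch.randn(4, 1, 28, 28)                        # e.g. MNIST-sized inputs
    y = torch.randint(0, 10, (4,))
    logits, log_probs = model(x)
    # Hybrid objective (assumed here): cross-entropy for the classifier plus a
    # REINFORCE term that rewards trajectories whose final prediction is correct.
    reward = (logits.argmax(-1) == y).float().unsqueeze(1)
    loss = F.cross_entropy(logits, y) - (log_probs * reward).mean()
    loss.backward()
```

Because every past glimpse remains a token in the transformer's input, the model can re-attend to (and effectively reweight) earlier observations at each step, which is the property the abstract contrasts with recurrent approaches.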
