Towards Interpretable Reinforcement Learning Using Attention Augmented Agents
Alexander Mott · Daniel Zoran · Mike Chrzanowski · Daan Wierstra · Danilo Jimenez Rezende

Thu Dec 12 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #235

Inspired by recent work in attention models for image captioning and question answering, we present a soft attention model for the reinforcement learning domain. This model bottlenecks the view of an agent by a soft, top-down attention mechanism, forcing the agent to focus on task-relevant information by sequentially querying its view of the environment. The output of the attention mechanism allows direct observation of the information used by the agent to select its actions, enabling easier interpretation of this model than of traditional models. We analyze the different strategies the agents learn and show that a handful of strategies arise repeatedly across different games. We also show that the model learns to query separately about space and content ("where" vs. "what"). We demonstrate that an agent using this mechanism can achieve performance competitive with state-of-the-art models on ATARI tasks while still being interpretable.
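The core operation the abstract describes, a soft, top-down attention read over the agent's view, can be sketched as a query/key dot-product followed by a softmax over spatial locations. This is a minimal illustration, not the authors' exact architecture: the function name, feature shapes, and scaling are assumptions for the sketch.

```python
import numpy as np

def soft_attention_read(features, query):
    """One soft top-down attention read over a spatial feature map.

    features: (H, W, D) array; one key/value vector per location of the
              agent's view (hypothetical shape for illustration).
    query:    (D,) top-down query vector produced by the agent's controller.
    Returns the attention-weighted summary vector and the (H, W) attention
    map, which is what makes the agent's focus directly observable.
    """
    H, W, D = features.shape
    flat = features.reshape(H * W, D)       # one key/value per location
    scores = flat @ query / np.sqrt(D)      # scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over locations
    summary = weights @ flat                # soft-attention readout
    return summary, weights.reshape(H, W)

# Hypothetical usage: an 8x8 view with 16-dim features and a random query.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 8, 16))
q = rng.normal(size=16)
vec, attn_map = soft_attention_read(feats, q)
```

The returned attention map sums to one over locations, so it can be visualized directly on top of the input frame, which is the interpretability mechanism the abstract refers to.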

Author Information

Alexander Mott (DeepMind)
Daniel Zoran (DeepMind)
Mike Chrzanowski (Google Brain)
Daan Wierstra (DeepMind Technologies)
Danilo Jimenez Rezende (Google DeepMind)