Poster

What Did You Think Would Happen? Explaining Agent Behaviour through Intended Outcomes

Herman Yau · Chris Russell · Simon Hadfield

Poster Session 3 #969

Abstract:

We present a novel form of explanation for Reinforcement Learning, based on the notion of intended outcome. These explanations describe the outcome an agent is trying to achieve through its actions. We provide a simple proof that general post-hoc methods for explanations of this nature are impossible in traditional reinforcement learning; instead, the information needed for the explanations must be collected in conjunction with training the agent. We derive approaches for extracting local, intention-based explanations for several variants of Q-function approximation and prove consistency between the explanations and the learned Q-values. We demonstrate our method on multiple reinforcement learning problems and provide code to help researchers introspect their RL environments and algorithms.
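The abstract does not spell out the construction, but one way to picture "collecting the explanation in conjunction with training" is the following minimal sketch for tabular Q-learning on a toy chain MDP. Alongside each Q-value we maintain a contribution vector C[s, a] that receives the same TD update as Q, except that the immediate reward is booked against the state where it was collected; C[s, a] then reads as the discounted reward the agent expects to gather at each state, i.e. its intended outcome. The environment, the decomposition, and all names (C, step, greedy) are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

n_states, n_actions = 6, 2            # chain MDP: action 1 = right, action 0 = left
gamma, alpha, eps = 0.95, 0.1, 0.1
rng = np.random.default_rng(0)

Q = np.zeros((n_states, n_actions))
C = np.zeros((n_states, n_actions, n_states))   # per-state reward contributions

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0   # reward only at the goal
    return s_next, reward, s_next == n_states - 1

def greedy(q_row):
    # Break ties randomly so the untrained agent actually explores the chain.
    return int(rng.choice(np.flatnonzero(q_row == q_row.max())))

for _ in range(1000):
    s = 0
    for _ in range(200):                              # cap episode length
        a = rng.integers(n_actions) if rng.random() < eps else greedy(Q[s])
        s_next, r, done = step(s, a)
        a_star = greedy(Q[s_next])
        Q[s, a] += alpha * (r + (0.0 if done else gamma * Q[s_next, a_star]) - Q[s, a])
        # Same TD rule, with the reward attributed to the state that produced it.
        target = r * np.eye(n_states)[s_next] + (0.0 if done else gamma * C[s_next, a_star])
        C[s, a] += alpha * (target - C[s, a])
        if done:
            break
        s = s_next

# Consistency: each contribution vector sums back to its Q-value.
assert np.allclose(C.sum(axis=-1), Q, atol=1e-6)
print("Intended outcome for (s=0, move right):", C[0, 1].round(3))
```

Because C receives the same update as Q, with the same step size and the same bootstrap action, the equality C.sum(axis=-1) == Q holds as an invariant throughout training rather than only in the limit, mirroring in miniature the kind of consistency between explanations and Q-values that the abstract refers to.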
