Trajectory-based Explainability Framework for Offline RL
Shripad Deshmukh · Arpan Dasgupta · Chirag Agarwal · Nan Jiang · Balaji Krishnamurthy · Georgios Theocharous · Jayakumar Subramanian
Event URL: https://openreview.net/forum?id=p8_QCs0_q-A

Explanation is a key component for the adoption of reinforcement learning (RL) in many real-world decision-making problems. In the literature, explanations are often provided via saliency attribution over the features of the RL agent's state. In this work, we propose a complementary approach to such explanations, particularly for offline RL, where we attribute the policy decisions of a trained RL agent to the trajectories it encountered during training. To do so, we encode the trajectories in the offline training data both individually and collectively (encoding a set of trajectories). We then attribute policy decisions to a set of trajectories in this encoded space by estimating the sensitivity of the decision with respect to that set. Finally, we demonstrate the effectiveness of the proposed approach, both in the quality of its attributions and in practical scalability, across diverse environments with discrete and continuous state and action spaces, including grid-worlds, video games (Atari), and continuous control (MuJoCo).
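The abstract outlines a pipeline of encoding trajectories and then attributing a policy decision to a trajectory set by estimating the decision's sensitivity to it. A minimal sketch of that idea is below; the encoder (a per-step feature average), the cosine-similarity sensitivity proxy, and all names and shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def encode_trajectory(traj):
    # Toy trajectory encoder: average the per-step feature vectors.
    # (A stand-in for the learned individual/collective encoders in the paper.)
    return np.mean(traj, axis=0)

def attribute_decision(decision_embedding, cluster_embeddings):
    # Attribute the decision to the trajectory set it is most "sensitive" to,
    # proxied here by cosine similarity in the shared embedding space.
    sims = []
    for c in cluster_embeddings:
        denom = np.linalg.norm(decision_embedding) * np.linalg.norm(c) + 1e-8
        sims.append(float(np.dot(decision_embedding, c) / denom))
    return int(np.argmax(sims)), sims

# Synthetic offline data: 5 trajectories of 10 steps with 8-dim features.
rng = np.random.default_rng(0)
trajectories = [rng.normal(size=(10, 8)) for _ in range(5)]
cluster_embeddings = [encode_trajectory(t) for t in trajectories]

# A decision embedding lying close to trajectory set 2 should be attributed to it.
decision = cluster_embeddings[2] + 0.01 * rng.normal(size=8)
attributed, _ = attribute_decision(decision, cluster_embeddings)
print(attributed)
```

In the actual framework, the sensitivity estimate would come from the trained policy and learned encoders rather than this similarity heuristic; the sketch only shows the shape of the attribution step.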

Author Information

Shripad Deshmukh (Adobe)
Arpan Dasgupta (International Institute of Information Technology, Hyderabad)
Chirag Agarwal (Harvard University/Adobe)
Nan Jiang (University of Illinois at Urbana-Champaign)
Balaji Krishnamurthy (Adobe Inc)
Georgios Theocharous (Adobe Research)
Jayakumar Subramanian (Adobe Systems)