(When) Are Contrastive Explanations of Reinforcement Learning Helpful?
Sanjana Narayanan · Isaac Lage · Finale Doshi-Velez

Global explanations of a reinforcement learning (RL) agent's expected behavior can make it safer to deploy. However, such explanations are often difficult to understand because many RL policies are complex. Effective human explanations are often contrastive, referencing a known contrast (policy) to reduce redundancy. At the same time, contrastive explanations require the additional effort of referencing that contrast when interpreting them. We conduct a user study to understand whether and when contrastive explanations might be preferable to complete explanations that do not require referencing a contrast. We find that complete explanations are generally more effective when they are the same size or smaller than a contrastive explanation of the same policy, and no worse when they are larger. This suggests that contrastive explanations alone are not sufficient to solve the problem of effectively explaining reinforcement learning policies, and that their use in this context requires further careful study.

Author Information

Sanjana Narayanan (Harvard University)

I graduated from Harvard University in May 2021 with a degree in Computer Science. As an undergraduate at Harvard, I conducted research with Prof. Finale Doshi-Velez on probabilistic models and ML interpretability. For the past year and a half, I have been a Software Engineer at Meta, working on applied ML research for Instagram recommender systems (with a focus on contextual bandits, online reinforcement learning, and offline optimization). I am currently looking for ML engineer positions elsewhere in industry.

Isaac Lage (Harvard)
Finale Doshi-Velez (Harvard)
