A critical problem in the field of post hoc explainability is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some by game theoretic notions, and some by obtaining clean visualizations. This fragmentation of goals causes not only an inconsistent conceptual understanding of explanations but also the practical challenge of not knowing which method to use when.

In this work, we begin to address these challenges by unifying eight popular post hoc explanation methods (LIME, C-LIME, KernelSHAP, Occlusion, Vanilla Gradients, Gradients × Input, SmoothGrad, and Integrated Gradients). We show that these methods all perform local function approximation of the black-box model, differing only in the neighbourhood and loss function used to perform the approximation. This unification enables us to (1) state a no free lunch theorem for explanation methods, demonstrating that no method can perform optimally across all neighbourhoods, and (2) provide a guiding principle for choosing among methods based on their faithfulness to the black-box model. We empirically validate these theoretical results using various real-world datasets, model classes, and prediction tasks.

By bringing diverse explanation methods into a common framework, this work (1) advances the conceptual understanding of these methods, revealing their shared local function approximation objective, properties, and relation to one another, and (2) guides the use of these methods in practice, providing a principled approach to choosing among methods and paving the way for the creation of new ones.
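To make the shared local function approximation view concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): it fits a weighted linear surrogate to a black-box model around a single input, in the spirit of LIME. The Gaussian perturbation neighbourhood, the exponential proximity kernel, and the `model_predict` callable are illustrative assumptions; changing the neighbourhood distribution or the loss weighting is precisely the degree of freedom that, under the unified framework, distinguishes one explanation method from another.

```python
# Minimal sketch of local function approximation around a single input x,
# assuming a generic black-box `model_predict` callable (hypothetical name).
import numpy as np
from sklearn.linear_model import Ridge

def local_linear_explanation(model_predict, x, n_samples=1000, sigma=0.5, kernel_width=1.0):
    """Fit a weighted linear surrogate to model_predict in a neighbourhood of x."""
    # 1. Sample a neighbourhood around x (here: Gaussian perturbations; the
    #    choice of neighbourhood is one axis along which methods differ).
    perturbations = x + sigma * np.random.randn(n_samples, x.shape[0])
    # 2. Query the black-box model on the perturbed inputs.
    y = model_predict(perturbations)
    # 3. Weight samples by proximity to x (the loss weighting is the other
    #    axis along which methods differ).
    distances = np.linalg.norm(perturbations - x, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # 4. Fit an interpretable linear surrogate; its coefficients serve as the
    #    feature attributions for the prediction at x.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbations, y, sample_weight=weights)
    return surrogate.coef_
```

As a usage illustration, passing a trained classifier's probability function as `model_predict` and a single feature vector as `x` returns one attribution score per feature for that prediction.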
Author Information
Tessa Han (Harvard University)
Suraj Srinivas (School of Engineering and Applied Sciences, Harvard University)
Himabindu Lakkaraju (Harvard)
More from the Same Authors
- 2022: TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
  Dylan Slack · Satyapriya Krishna · Himabindu Lakkaraju · Sameer Singh
- 2022: On the Impact of Adversarially Robust Models on Algorithmic Recourse
  Satyapriya Krishna · Chirag Agarwal · Himabindu Lakkaraju
- 2023 Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations
  Jiaqi Ma · Danielle Belgrave · P-R Stark · Daniele Magazzeni · Himabindu Lakkaraju
- 2023 Workshop: XAI in Action: Past, Present, and Future Applications
  Chhavi Yadav · Michal Moshkovitz · Nave Frost · Suraj Srinivas · Bingqing Chen · Valentyn Boreiko · Himabindu Lakkaraju · J. Zico Kolter · Dotan Di Castro · Kamalika Chaudhuri
- 2022 Contributed Talk: TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
  Dylan Slack · Satyapriya Krishna · Himabindu Lakkaraju · Sameer Singh
- 2022 Poster: Data-Efficient Structured Pruning via Submodular Optimization
  Marwa El Halabi · Suraj Srinivas · Simon Lacoste-Julien
- 2022 Poster: OpenXAI: Towards a Transparent Evaluation of Model Explanations
  Chirag Agarwal · Satyapriya Krishna · Eshika Saxena · Martin Pawelczyk · Nari Johnson · Isha Puri · Marinka Zitnik · Himabindu Lakkaraju
- 2022 Poster: Efficient Training of Low-Curvature Neural Networks
  Suraj Srinivas · Kyle Matoba · Himabindu Lakkaraju · François Fleuret