In recent years, post-hoc local instance-level and global dataset-level explainability of black-box models has received a lot of attention. Much less attention has been given to obtaining insights at intermediate or group levels, a need outlined in recent works that study the challenges of realizing the guidelines of the General Data Protection Regulation (GDPR). In this paper, we propose a meta-method that, given a typical local explainability method, can build a multilevel explanation tree. The leaves of this tree correspond to local explanations, the root corresponds to a global explanation, and intermediate levels correspond to explanations for groups of data points that the method clusters automatically. The method can also leverage side information, where users can specify points for which they may want the explanations to be similar. We argue that such a multilevel structure can also be an effective form of communication, where one could obtain a few explanations that characterize the entire dataset by considering an appropriate level of our explanation tree. Explanations for novel test points can be cost-efficiently obtained by associating them with the closest training points. When the local explainability technique is generalized additive (viz. LIME, GAMs), we develop a fast approximate algorithm for building the multilevel tree and study its convergence behavior. We show that we produce high-fidelity, sparse explanations on several public datasets and also validate the effectiveness of the proposed technique based on two human studies -- one with experts and the other with non-expert users -- on real-world datasets.
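The sketch below is one illustrative way (not the paper's algorithm, whose details and approximation guarantees are in the main text) to realize such a structure: it assumes a matrix of per-instance attribution vectors (e.g., LIME coefficients) is already available, hierarchically clusters those vectors so that leaves are local explanations and the root is a global explanation, and averages within a cut of the tree to obtain group-level explanations. The function names and the choice of Ward linkage are assumptions made for illustration only.

```python
# Minimal sketch: build a multilevel explanation tree by hierarchically
# clustering per-instance attribution vectors (e.g., LIME coefficients).
# Leaves = local explanations, root = global explanation, intermediate
# cuts = group-level explanations. Not the paper's exact algorithm.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def build_explanation_tree(local_explanations, method="ward"):
    """Return a linkage matrix encoding the multilevel explanation tree
    over an (n_points x n_features) array of attribution weights."""
    return linkage(local_explanations, method=method)

def group_explanations(local_explanations, tree, n_groups):
    """Cut the tree into `n_groups` clusters and average the local
    explanations within each cluster to get group-level explanations."""
    labels = fcluster(tree, t=n_groups, criterion="maxclust")
    groups = {g: local_explanations[labels == g].mean(axis=0)
              for g in np.unique(labels)}
    return labels, groups

# Example with hypothetical attribution vectors: 100 points, 5 features.
rng = np.random.default_rng(0)
expl = rng.normal(size=(100, 5))
tree = build_explanation_tree(expl)
labels, group_expl = group_explanations(expl, tree, n_groups=4)
global_expl = expl.mean(axis=0)  # root of the tree: one global explanation
```

A novel test point could then be explained cheaply by assigning it to the cluster of its nearest training point, in the spirit of the cost-efficient association described above.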
Author Information
Karthikeyan Natesan Ramamurthy (IBM Research)
Bhanukiran Vinzamuri (IBM Research)
Yunfeng Zhang (IBM Research)
Amit Dhurandhar (IBM Research)
More from the Same Authors
- 2021: Accurate Multi-Endpoint Molecular Toxicity Predictions in Humans with Contrastive Explanations
  Bhanushee Sharma · Vijil Chenthamarakshan · Amit Dhurandhar · James Hendler · Jonathan S. Dordick · Payel Das
- 2023: Causal Markov Blanket Representations for Domain Generalization Prediction
  Naiyu Yin · Hanjing Wang · Tian Gao · Amit Dhurandhar · Qiang Ji
- 2023 Poster: Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning
  Amit Dhurandhar · Karthikeyan Natesan Ramamurthy · Kartik Ahuja · Vijay Arya
- 2023 Poster: Cookie Consent Has Disparate Impact on Estimation Accuracy
  Erik Miehling · Rahul Nair · Elizabeth Daly · Karthikeyan Natesan Ramamurthy · Robert Redmond
- 2023 Poster: The Impact of Positional Encoding on Length Generalization in Transformers
  Amirhossein Kazemnejad · Inkit Padhi · Karthikeyan Natesan Ramamurthy · Payel Das · Siva Reddy
- 2022 Poster: Is this the Right Neighborhood? Accurate and Query Efficient Model Agnostic Explanations
  Amit Dhurandhar · Karthikeyan Natesan Ramamurthy · Karthikeyan Shanmugam
- 2022 Poster: On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach
  Dennis Wei · Rahul Nair · Amit Dhurandhar · Kush Varshney · Elizabeth Daly · Moninder Singh
- 2021 Poster: CoFrNets: Interpretable Neural Architecture Inspired by Continued Fractions
  Isha Puri · Amit Dhurandhar · Tejaswini Pedapati · Karthikeyan Shanmugam · Dennis Wei · Kush Varshney
- 2020: Closing Remarks
  Frederic Chazal · Smita Krishnaswamy · Roland Kwitt · Karthikeyan Natesan Ramamurthy · Bastian Rieck · Yuhei Umeda · Guy Wolf
- 2020 Spotlight: Characterizing the Latent Space of Molecular Generative Models with Persistent Homology Metrics
  Yair Schiff · Payel Das · Vijil Chenthamarakshan · Karthikeyan Natesan Ramamurthy
- 2020 Workshop: Topological Data Analysis and Beyond
  Bastian Rieck · Frederic Chazal · Smita Krishnaswamy · Roland Kwitt · Karthikeyan Natesan Ramamurthy · Yuhei Umeda · Guy Wolf
- 2020: Opening Remarks
  Frederic Chazal · Smita Krishnaswamy · Roland Kwitt · Karthikeyan Natesan Ramamurthy · Bastian Rieck · Yuhei Umeda · Guy Wolf
- 2020 Poster: Finding the Homology of Decision Boundaries with Active Learning
  Weizhi Li · Gautam Dasarathy · Karthikeyan Natesan Ramamurthy · Visar Berisha
- 2020 Poster: Learning Global Transparent Models consistent with Local Contrastive Explanations
  Tejaswini Pedapati · Avinash Balakrishnan · Karthikeyan Shanmugam · Amit Dhurandhar
- 2020 Expo Demonstration: Beyond AutoML: AI Automation & Scaling
  Lisa Amini · Nitin Gupta · Parikshit Ram · Kiran Kate · Bhanukiran Vinzamuri · Nathalie Baracaldo · Martin Korytak · Daniel K Weidele · Dakuo Wang
- 2019: Coffee Break and Poster Session
  Rameswar Panda · Prasanna Sattigeri · Kush Varshney · Karthikeyan Natesan Ramamurthy · Harvineet Singh · Vishwali Mhasawade · Shalmali Joshi · Laleh Seyyed-Kalantari · Matthew McDermott · Gal Yona · James Atwood · Hansa Srinivasan · Yonatan Halpern · D. Sculley · Behrouz Babaki · Margarida Carvalho · Josie Williams · Narges Razavian · Haoran Zhang · Amy Lu · Irene Y Chen · Xiaojie Mao · Angela Zhou · Nathan Kallus
- 2018 Poster: Improving Simple Models with Confidence Profiles
  Amit Dhurandhar · Karthikeyan Shanmugam · Ronny Luss · Peder A Olsen
- 2018 Demonstration: PatentAI: IP Infringement Detection with Enhanced Paraphrase Identification
  Youssef Drissi · Karthikeyan Natesan Ramamurthy · Prasanna Sattigeri
- 2018 Poster: Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
  Amit Dhurandhar · Pin-Yu Chen · Ronny Luss · Chun-Chen Tu · Paishun Ting · Karthikeyan Shanmugam · Payel Das
- 2017 Poster: Optimized Pre-Processing for Discrimination Prevention
  Flavio Calmon · Dennis Wei · Bhanukiran Vinzamuri · Karthikeyan Natesan Ramamurthy · Kush Varshney