Poster
On Locality of Local Explanation Models
Sahra Ghalebikesabi · Lucile Ter-Minassian · Karla DiazOrdaz · Chris C Holmes

Fri Dec 10 08:30 AM -- 10:00 AM (PST)

Shapley values provide model-agnostic feature attributions for a model's outcome at a particular instance by simulating feature absence under a global population distribution. The use of a global population can lead to potentially misleading results when local model behaviour is of interest. Hence we consider the formulation of neighbourhood reference distributions that improve the local interpretability of Shapley values. By doing so, we find that the Nadaraya-Watson estimator, a well-studied kernel regressor, can be expressed as a self-normalised importance sampling estimator. Empirically, we observe that Neighbourhood Shapley values identify meaningful sparse feature relevance attributions that provide insight into local model behaviour, complementing conventional Shapley analysis. They also increase on-manifold explainability and robustness to the construction of adversarial classifiers.
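
The abstract notes that the Nadaraya-Watson kernel regressor can be read as a self-normalised importance sampling estimator towards a neighbourhood reference distribution. The sketch below is our own minimal illustration of that correspondence, not code from the paper; the Gaussian kernel, the bandwidth value, and all function and variable names are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(X, x_ref, bandwidth):
    # Unnormalised Gaussian kernel weights between rows of X and a reference point x_ref
    # (an assumed kernel choice for illustration).
    sq_dist = np.sum((X - x_ref) ** 2, axis=-1)
    return np.exp(-sq_dist / (2.0 * bandwidth ** 2))

def nadaraya_watson(x_ref, X, y, bandwidth):
    # Nadaraya-Watson estimate at x_ref: kernel-weighted average of the responses y.
    w = gaussian_kernel(X, x_ref, bandwidth)
    return np.sum(w * y) / np.sum(w)

def snis_estimate(x_ref, X, y, bandwidth):
    # Self-normalised importance sampling estimate of E[y | X near x_ref]:
    # the global sample plays the role of the proposal, and the kernel weights
    # act as unnormalised importance ratios towards a neighbourhood reference
    # distribution centred at x_ref.
    w = gaussian_kernel(X, x_ref, bandwidth)  # unnormalised importance weights
    w = w / np.sum(w)                         # self-normalisation step
    return np.sum(w * y)

# Synthetic data purely for demonstration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
x0 = np.array([0.5, -0.2])

# The two estimates coincide, illustrating the stated correspondence.
print(nadaraya_watson(x0, X, y, bandwidth=0.5))
print(snis_estimate(x0, X, y, bandwidth=0.5))
```

Running the script prints the same value twice, since self-normalising the kernel weights is algebraically identical to dividing the weighted sum of responses by the sum of weights.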

Author Information

Sahra Ghalebikesabi (University of Oxford)

Sahra Ghalebikesabi is a fourth-year PhD student at the University of Oxford, supervised by Chris Holmes. During her PhD, she interned at DeepMind London and Microsoft Research Cambridge. She is also a Microsoft Research PhD Fellow. Her research focusses on generative modelling for robustness, differential privacy and interpretability.

Lucile Ter-Minassian (University of Oxford)
Karla DiazOrdaz (London School of Hygiene and Tropical Medicine)

My primary methodological research area is causal machine learning, motivated by high-dimensional electronic health records and genomics data. My work on treatment effect heterogeneity and optimal treatment regimes is funded through a Wellcome Trust-Royal Society Sir Henry Dale Fellowship (2020-2025).

Chris C Holmes (University of Oxford)
