Shapley values provide model-agnostic feature attributions for a model's outcome at a particular instance by simulating feature absence under a global population distribution. The use of a global population can lead to potentially misleading results when local model behaviour is of interest. Hence we consider the formulation of neighbourhood reference distributions that improve the local interpretability of Shapley values. In doing so, we find that the Nadaraya-Watson estimator, a well-studied kernel regressor, can be expressed as a self-normalised importance sampling estimator. Empirically, we observe that Neighbourhood Shapley values identify meaningful sparse feature relevance attributions that provide insight into local model behaviour, complementing conventional Shapley analysis. They also increase on-manifold explainability and robustness to the construction of adversarial classifiers.
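The connection the abstract mentions can be made concrete: the Nadaraya-Watson regressor weights each observed response by a kernel evaluated at the query point and then renormalises the weights to sum to one, which is exactly the form of a self-normalised importance sampling estimate of E[y | x]. A minimal sketch (illustrative only; the kernel choice, bandwidth, and toy data are assumptions, not the paper's setup):

```python
import numpy as np

def gaussian_kernel(u):
    # Unnormalised Gaussian kernel; the normalising constant
    # cancels in the self-normalised ratio below.
    return np.exp(-0.5 * u ** 2)

def nadaraya_watson(x_query, X, y, bandwidth=0.5):
    """Nadaraya-Watson estimate of E[y | x = x_query], written as
    self-normalised importance sampling: each y_i gets a weight
    w_i = K((x_query - x_i) / h), and the weights are renormalised
    to sum to one before averaging."""
    weights = gaussian_kernel((x_query - X) / bandwidth)
    weights = weights / weights.sum()  # self-normalisation step
    return np.sum(weights * y)

# Toy data: noise-free observations of y = x^2 on a grid.
X = np.linspace(-2.0, 2.0, 201)
y = X ** 2
# The estimate at x = 1 lies near the true value 1, up to the
# usual kernel-smoothing bias of order h^2.
est = nadaraya_watson(1.0, X, y)
```

The self-normalisation is what makes the estimator valid when the kernel weights are only known up to a constant, mirroring how self-normalised importance sampling handles unnormalised importance weights.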
Author Information
Sahra Ghalebikesabi (University of Oxford)

Sahra Ghalebikesabi is a fourth-year PhD student at the University of Oxford, supervised by Chris Holmes. During her PhD, she interned at DeepMind London and Microsoft Research Cambridge. She is also a Microsoft Research PhD Fellow. Her research focuses on generative modelling for robustness, differential privacy, and interpretability.
Lucile Ter-Minassian (University of Oxford)
Karla DiazOrdaz (London School of Hygiene and Tropical Medicine)
My primary methodological research area is causal machine learning motivated by high-dimensional electronic health records and genomics data. My work on treatment effect heterogeneity and optimal treatment regimes is funded through a Wellcome Trust-Royal Society Sir Henry Dale Fellowship (2020-2025).
Chris C Holmes (University of Oxford)
More from the Same Authors
- 2021: Relaxed-Responsibility Hierarchical Discrete VAEs »
  Matthew Willetts · Xenia Miscouridou · Stephen J Roberts · Chris C Holmes
- 2022 Workshop: I Can't Believe It's Not Better: Understanding Deep Learning Through Empirical Falsification »
  Arno Blaas · Sahra Ghalebikesabi · Javier Antorán · Fan Feng · Melanie F. Pradier · Ian Mason · David Rohde
- 2022 Poster: A Multi-Resolution Framework for U-Nets with Applications to Hierarchical VAEs »
  Fabian Falck · Christopher Williams · Dominic Danks · George Deligiannidis · Christopher Yau · Chris C Holmes · Arnaud Doucet · Matthew Willetts
- 2021: Invited Talk 1 Q&A »
  Chris C Holmes
- 2021: How to train your model when it's wrong: Bayesian nonparametric learning in M-open »
  Chris C Holmes
- 2021 Poster: Multi-Facet Clustering Variational Autoencoders »
  Fabian Falck · Haoting Zhang · Matthew Willetts · George Nicholson · Christopher Yau · Chris C Holmes
- 2021 Poster: Conformal Bayesian Computation »
  Edwin Fong · Chris C Holmes
- 2021 Poster: Neural Ensemble Search for Uncertainty Estimation and Dataset Shift »
  Sheheryar Zaidi · Arber Zela · Thomas Elsken · Chris C Holmes · Frank Hutter · Yee Teh
- 2020: Chris Holmes Q&A »
  Chris C Holmes
- 2020: Bayesian nowcasting of COVID-19 regional test results in England »
  Chris C Holmes
- 2020 Poster: Explicit Regularisation in Gaussian Noise Injections »
  Alexander Camuto · Matthew Willetts · Umut Simsekli · Stephen J Roberts · Chris C Holmes
- 2018 Poster: Nonparametric learning from Bayesian models with randomized objective functions »
  Simon Lyddon · Stephen Walker · Chris C Holmes