
Implications of Model Indeterminacy for Explanations of Automated Decisions
Marc-Etienne Brunet · Ashton Anderson · Richard Zemel

Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #906

There has been a significant research effort focused on explaining predictive models, for example through post-hoc explainability and recourse methods. Most of the proposed techniques operate on a single, fixed predictive model. However, it is well known that for a given dataset and predictive task, there may be a multiplicity of models that solve the problem (nearly) equally well. In this work, we investigate the implications of this kind of model indeterminacy for the post-hoc explanations of predictive models. We show how it can lead to explanatory multiplicity, and we explore the underlying drivers. We show that predictive multiplicity, and the related concept of epistemic uncertainty, are not reliable indicators of explanatory multiplicity. We further illustrate how a set of models showing very similar aggregate performance on a test dataset may produce large variations in their local explanations, i.e., for a specific input. We explore these effects for Shapley-value-based explanations on three risk assessment datasets. Our results indicate that model indeterminacy may have a substantial impact on explanations in practice, leading to inconsistent and even contradictory explanations.
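The phenomenon the abstract describes can be illustrated with a toy sketch (not the paper's method or datasets): when two features are nearly collinear, many weight vectors fit the data almost equally well, so two models trained on different bootstrap resamples can match in aggregate test error yet assign quite different Shapley attributions to a single input. For a linear model with an independent-feature baseline, exact Shapley values reduce to phi_i = w_i * (x_i - E[x_i]), which keeps the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two nearly collinear features: many weight vectors fit almost equally well.
n = 2000
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)   # highly correlated with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=n)

X_train, y_train = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

def fit_ols(X, y, seed):
    """Fit least squares on a bootstrap resample (a stand-in for
    the many sources of indeterminacy in real training pipelines)."""
    idx = np.random.default_rng(seed).integers(0, len(y), size=len(y))
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return w

w_a = fit_ols(X_train, y_train, seed=1)
w_b = fit_ols(X_train, y_train, seed=2)

mse_a = np.mean((X_test @ w_a - y_test) ** 2)
mse_b = np.mean((X_test @ w_b - y_test) ** 2)

# Exact Shapley values for a linear model (independent-feature baseline):
# phi_i = w_i * (x_i - E[x_i]).
mu = X_train.mean(axis=0)
x = X_test[0]
phi_a = w_a * (x - mu)
phi_b = w_b * (x - mu)

print("test MSE:", mse_a, mse_b)          # near-identical aggregate performance
print("attributions:", phi_a, phi_b)      # local explanations can diverge
```

Because the collinearity leaves the individual weights underdetermined, the two models typically split credit between the features very differently even though their predictions, and hence their test errors, nearly coincide.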

Author Information

Marc-Etienne Brunet (University of Toronto / Vector Institute)
Ashton Anderson (University of Toronto)
Richard Zemel (Columbia University)
