

Spotlight in Workshop: eXplainable AI approaches for debugging and diagnosis

[S7] DeDUCE: Generating Counterfactual Explanations At Scale

Benedikt Höltgen · Lisa Schut · Jan Brauner · Yarin Gal


Abstract:

When an image classifier outputs a wrong class label, it can be helpful to see what changes in the image would lead to a correct classification. This is the aim of algorithms generating counterfactual explanations. However, there is no easily scalable method to generate such counterfactuals. We develop a new algorithm that provides counterfactual explanations at low computational cost for large image classifiers trained with spectral normalisation. We empirically compare this algorithm against baselines from the literature; our algorithm consistently finds counterfactuals that are much closer to the original inputs, while their realism remains comparable to the baselines.
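
As an illustration of the generic counterfactual-explanation setup described above (not the DeDUCE algorithm itself), the sketch below perturbs a misclassified image toward a target class while penalising distance from the original input, in the spirit of standard gradient-based counterfactual search. The function name, loss weights, optimiser choice, and stopping rule are assumptions made for illustration only.

```python
# Minimal sketch of gradient-based counterfactual search (illustrative only;
# this is NOT the DeDUCE algorithm from the paper).
import torch
import torch.nn.functional as F

def generate_counterfactual(model, x, target_class, steps=200, lr=0.05, dist_weight=0.1):
    """Perturb image `x` until `model` predicts `target_class`, staying close to `x`."""
    x_cf = x.clone().detach().requires_grad_(True)        # counterfactual candidate
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class])

    for _ in range(steps):
        logits = model(x_cf.unsqueeze(0))                  # model expects a batch dim
        if logits.argmax(dim=1).item() == target_class:
            break                                          # label has flipped; stop early
        class_loss = F.cross_entropy(logits, target)       # push prediction toward target
        dist_loss = dist_weight * (x_cf - x).abs().sum()   # L1 penalty: stay near original
        optimizer.zero_grad()
        (class_loss + dist_loss).backward()
        optimizer.step()

    return x_cf.detach()
```

The distance penalty trades off proximity to the original image against reaching the target class; the paper's contribution is to make this kind of search cheap for large classifiers trained with spectral normalisation.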