

Poster in Workshop: Bayesian Deep Learning

Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks

Ginevra Carbone · Luca Bortolussi · Guido Sanguinetti


Abstract:

We consider the problem of the stability of saliency-based explanations of Neural Network predictions under adversarial attacks in a classification task. Saliency interpretations of deterministic Neural Networks are remarkably brittle even when the attacks fail, i.e. for attacks that do not change the classification label. We empirically show that interpretations provided by Bayesian Neural Networks are considerably more stable under adversarial perturbations of the inputs and even under direct attacks on the explanations. By leveraging recent results, we also provide a theoretical explanation of this stability in terms of the geometry of the data manifold. Additionally, we discuss the stability of the interpretations of high-level representations of the inputs in the internal layers of a Network. Our results demonstrate that Bayesian methods, in addition to being more robust to adversarial attacks, have the potential to provide more stable and interpretable assessments of Neural Network predictions.
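To make the setting concrete, the sketch below illustrates one way to compare gradient-based saliency maps for a deterministic network and for a Bayesian posterior average under a small input perturbation. It is an illustrative assumption-laden example, not the paper's exact setup: plain input gradients stand in for the saliency method, FGSM for the attack, and MC-dropout for the posterior samples.

```python
# Illustrative sketch only: input-gradient saliency for a deterministic network
# vs. an average over approximate posterior samples, and their change under a
# small FGSM perturbation. Model, attack, and posterior approximation
# (MC-dropout) are assumptions, not the paper's method.
import torch
import torch.nn.functional as F

def saliency(model, x, y):
    """Absolute input-gradient saliency map for label y at input x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return x.grad.detach().abs()

def bayesian_saliency(model, x, y, n_samples=20):
    """Saliency averaged over posterior samples (here crudely approximated by MC-dropout passes)."""
    model.train()  # keep dropout active so each forward pass is a different sample
    maps = [saliency(model, x, y) for _ in range(n_samples)]
    model.eval()
    return torch.stack(maps).mean(dim=0)

def fgsm(model, x, y, eps=0.01):
    """Small FGSM perturbation of the input (a simple stand-in attack)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Stability check: how much the explanation drifts under the perturbation.
# x_adv = fgsm(model, x, y)
# drift_det   = (saliency(model, x_adv, y) - saliency(model, x, y)).norm()
# drift_bayes = (bayesian_saliency(model, x_adv, y) - bayesian_saliency(model, x, y)).norm()
```

Under the paper's claim, the drift of the Bayesian (posterior-averaged) saliency would be expected to be smaller than the deterministic one, even when the perturbation does not flip the predicted label.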
