Poster in Workshop: XAI in Action: Past, Present, and Future Applications

Stability Guarantees for Feature Attributions with Multiplicative Smoothing

Anton Xue · Rajeev Alur · Eric Wong


Abstract:

Explanation methods for machine learning models tend not to provide any formal guarantees and may not reflect the underlying decision-making process. In this work, we analyze stability as a property for reliable feature attribution methods. We prove that a relaxed variant of stability is guaranteed if the model is sufficiently Lipschitz with respect to the masking of features. To achieve such a model, we develop a smoothing method called Multiplicative Smoothing (MuS). We show that MuS overcomes the theoretical limitations of standard smoothing techniques and can be integrated with any classifier and feature attribution method. We evaluate MuS on vision and language models with a variety of feature attribution methods, such as LIME and SHAP, and demonstrate that MuS endows feature attributions with non-trivial stability guarantees.
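To make the masking idea concrete, below is a minimal Monte Carlo sketch of a multiplicatively smoothed classifier: the model is averaged over random sub-masks where each attributed feature is kept with probability `lam`. This is an illustrative assumption-laden sketch, not the paper's exact construction (the function name `mus_smooth`, the independent Bernoulli sampling, and all parameter names are hypothetical; the paper uses a de-randomized scheme to obtain its Lipschitz guarantee).

```python
import numpy as np

def mus_smooth(f, x, mask, lam=0.25, n_samples=1000, rng=None):
    """Monte Carlo sketch of multiplicative smoothing.

    Averages the classifier's scores over random sub-masks of `mask`,
    where each selected feature is independently kept with probability
    `lam`. Smaller `lam` yields stronger smoothing.

    f    : callable mapping a masked input (same shape as x) to class scores
    x    : input feature vector, shape (d,)
    mask : binary vector in {0, 1}^d marking the attributed features
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    total = 0.0
    for _ in range(n_samples):
        z = (rng.random(d) < lam).astype(x.dtype)  # Bernoulli(lam) sub-mask
        total = total + f(x * (mask * z))          # multiplicative masking
    return total / n_samples
```

Under this reading, a stability check would compare the smoothed scores on the attributed features alone (`mask`) against those on the full input (all-ones mask): if the top class agrees, the attribution is certified stable to small mask perturbations.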
