Consistent Sufficient Explanations and Minimal Local Rules for explaining the decision of any classifier or regressor
Salim I. Amoukou · Nicolas Brunel
Keywords:
consistency
rule-based models
Learning Theory
tree-based models
Explainable AI
interpretability
Robust and Reliable ML
random forests
Trustworthy ML
2022 Poster
Abstract
To explain the decision of any regression or classification model, we extend the notion of probabilistic sufficient explanations (P-SE). For each instance, this approach selects the minimal subset of features that is sufficient to yield the same prediction with high probability, while removing the other features. The crux of P-SE is computing the conditional probability of maintaining the same prediction. Therefore, we introduce an accurate and fast estimator of this probability via Random Forests for any data $(\boldsymbol{X}, Y)$ and show its efficiency through a theoretical analysis of its consistency. As a consequence, we extend the P-SE to regression problems. In addition, we deal with non-discrete features, without learning the distribution of $\boldsymbol{X}$ or needing the model to make predictions. Finally, we introduce local rule-based explanations for regression/classification based on the P-SE and compare our approaches with other explainable AI methods. These methods are available as a Python package.
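To make the P-SE idea concrete, here is a minimal sketch of the search for a minimal sufficient subset on discrete data. All function names are hypothetical, and a simple empirical matching estimator of the same-decision probability stands in for the paper's Random Forest estimator; the subset search is brute force by increasing size, which only illustrates the definition and does not reflect the paper's actual algorithm.

```python
import numpy as np
from itertools import combinations

def same_decision_prob(X, y_pred, x, target, S):
    # Empirically estimate P(prediction == target | X_S = x_S) over the
    # samples that match x exactly on the features in S (discrete data).
    # The paper instead uses a consistent Random Forest estimator.
    if len(S) == 0:
        mask = np.ones(len(X), dtype=bool)
    else:
        mask = np.all(X[:, S] == x[S], axis=1)
    if not mask.any():
        return 0.0
    return float(np.mean(y_pred[mask] == target))

def minimal_sufficient_explanation(X, y_pred, x, target, pi=0.9):
    # Enumerate feature subsets by increasing size and return the first
    # one whose same-decision probability reaches the threshold pi,
    # i.e. a minimal probabilistic sufficient explanation.
    d = X.shape[1]
    for k in range(d + 1):
        for S in combinations(range(d), k):
            if same_decision_prob(X, y_pred, x, target, list(S)) >= pi:
                return list(S)
    return list(range(d))
```

For instance, if the model's predictions depend only on the first feature, the search returns `[0]`: fixing that single feature already preserves the prediction with probability above the threshold.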