Poster
Training for Stable Explanation for Free
Chao Chen · Chenghua Guo · Rufeng Chen · Guixiang Ma · Ming Zeng · Xiangwen Liao · Xi Zhang · Sihong Xie
East Exhibit Hall A-C #4402
Abstract:
To foster trust in machine learning models, explanations must be faithful and stable so that they yield consistent insights. Existing work relies on the $\ell_p$ distance to assess stability, which diverges from human perception. Moreover, existing adversarial training (AT) approaches demand intensive computation and may lead to an arms race between attackers and defenders. To address these challenges, we introduce a novel metric that assesses the stability of the top-$k$ salient features. We propose R2ET, which trains for stable explanations via an efficient and effective regularizer, and we analyze R2ET through multi-objective optimization to prove the numerical and statistical stability of its explanations. Moreover, theoretical connections between R2ET and certified robustness justify R2ET's stability under all attacks. Extensive experiments across various data modalities and model architectures show that R2ET achieves superior stability against stealthy attacks and generalizes effectively across different explanation methods.
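The abstract does not spell out the top-$k$ stability metric. As a rough illustration only, the sketch below measures how much of the top-$k$ salient-feature set survives an input perturbation; this is one plausible reading of "stability of top-$k$ salient features," not the paper's exact definition, and the function name and toy data are hypothetical.

```python
import numpy as np

def topk_overlap(saliency_orig, saliency_pert, k):
    """Fraction of top-k salient features preserved after perturbation.

    Both inputs are 1-D arrays of per-feature attribution scores; higher
    scores mean more salient. Returns a value in [0, 1], where 1 means the
    top-k feature set is unchanged by the perturbation.
    (Illustrative metric only, not the metric defined in the paper.)
    """
    top_orig = set(np.argsort(-saliency_orig)[:k])
    top_pert = set(np.argsort(-saliency_pert)[:k])
    return len(top_orig & top_pert) / k

# Toy example: a small perturbation reorders only low-ranked features,
# so the top-3 salient features remain the same.
rng = np.random.default_rng(0)
s = rng.normal(size=10)
s_pert = s + 0.01 * rng.normal(size=10)
print(topk_overlap(s, s_pert, k=3))
```

Unlike an $\ell_p$ distance between attribution maps, a set-overlap score of this kind only changes when the perturbation actually displaces features from the top-$k$ ranking, which is closer to how a human reader compares two explanations.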