Robust Attribution Regularization
Jiefeng Chen · Xi Wu · Vaibhav Rastogi · Yingyu Liang · Somesh Jha

Thu Dec 12 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #25

An emerging problem in trustworthy machine learning is to train models that produce robust interpretations for their predictions. We take a step towards solving this problem through the lens of axiomatic attribution of neural networks. Our theory is grounded in Integrated Gradients (IG) [STY17], a recent method for axiomatically attributing a neural network's output change to its input change. We propose training objectives in classic robust optimization models to achieve robust IG attributions. Our objectives give principled generalizations of previous objectives designed for robust predictions, and they naturally degenerate to classic soft-margin training for one-layer neural networks. We also generalize previous theory and prove that the objectives for different robust optimization models are closely related. Experiments demonstrate the effectiveness of our method, and also point to intriguing problems which hint at the need for better optimization techniques or better neural network architectures for robust attribution training.
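As background for the abstract, the IG attribution of [STY17] for feature i is (x_i - x'_i) times the path integral of the gradient along the straight line from a baseline x' to the input x. Below is a minimal sketch of that computation under illustrative assumptions: a toy linear "network" and finite-difference gradients stand in for a real model and autodiff, and are not the paper's setup.

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=64, eps=1e-5):
    """Approximate Integrated Gradients:
    IG_i(x) = (x_i - x'_i) * integral over a in [0,1] of
              dF/dx_i evaluated at x' + a*(x - x'),
    using a midpoint Riemann sum and finite-difference gradients."""
    x, baseline = np.asarray(x, float), np.asarray(baseline, float)
    total = np.zeros_like(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps              # midpoint of the k-th interval
        point = baseline + alpha * (x - baseline)
        grad = np.zeros_like(x)
        for i in range(x.size):                # central finite differences
            d = np.zeros_like(x)
            d[i] = eps
            grad[i] = (f(point + d) - f(point - d)) / (2 * eps)
        total += grad
    return (x - baseline) * total / steps

# Toy model: a fixed linear function F(x) = w . x (hypothetical example).
w = np.array([1.0, -2.0, 3.0])
f = lambda z: float(w @ z)
attr = integrated_gradients(f, x=np.ones(3), baseline=np.zeros(3))
# Completeness axiom: attributions sum to F(x) - F(baseline).
print(attr, attr.sum())
```

For a linear model the attributions reduce to w_i * x_i exactly, which makes the completeness axiom easy to check by hand; the paper's robust-attribution objectives constrain how such attributions change under input perturbations.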

Author Information

Jiefeng Chen (University of Wisconsin-Madison)

I am currently a third-year PhD student in the Computer Science Department at the University of Wisconsin-Madison, co-advised by Prof. Yingyu Liang and Prof. Somesh Jha. I work on trustworthy machine learning, with research questions such as "How to make machine learning models produce stable explanations of their predictions?", "How to train models that produce robust predictions under adversarial perturbations?", and "When and why do some defense mechanisms work?". I obtained my Bachelor's degree in Computer Science from Shanghai Jiao Tong University (SJTU).

Xi Wu (Google)
Vaibhav Rastogi (Google)
Yingyu Liang (University of Wisconsin Madison)
Somesh Jha (University of Wisconsin, Madison)
