We ask whether neural network interpretation methods can be fooled via adversarial model manipulation, which we define as a fine-tuning step that aims to radically alter the explanations without hurting the accuracy of the original models, e.g., VGG19, ResNet50, and DenseNet121. By incorporating the interpretation results directly into the penalty term of the fine-tuning objective, we show that state-of-the-art saliency-map-based interpreters, e.g., LRP, Grad-CAM, and SimpleGrad, can be easily fooled by our model manipulation. We propose two types of fooling, Passive and Active, and demonstrate that such fooling generalizes well to the entire validation set and transfers to other interpretation methods. Our results are validated both by visually showing the fooled explanations and by reporting quantitative metrics that measure the deviation from the original explanations. We argue that the stability of a neural network interpretation method with respect to our adversarial model manipulation is an important criterion to check when developing robust and reliable interpretation methods.
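The core recipe in the abstract — fine-tuning a trained classifier with a penalty on its own saliency maps — can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under stated assumptions, not the paper's exact objective: it uses a tiny stand-in model instead of VGG19/ResNet50/DenseNet121, the SimpleGrad saliency (gradient of the target logit w.r.t. the input), and a hypothetical passive-fooling-style penalty that suppresses saliency mass on a chosen input region; the mask, the weight `lam`, and the toy data are all illustrative choices.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical tiny classifier standing in for the large CNNs in the paper.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(4, 8)                 # toy "images" (4 samples, 8 features)
y = torch.randint(0, 3, (4,))         # toy labels
ce = nn.CrossEntropyLoss()

def simplegrad_saliency(model, x, y):
    # SimpleGrad: gradient of the target-class logit w.r.t. the input.
    # create_graph=True keeps the graph so the penalty can backprop
    # into the model parameters (a second-order gradient).
    x = x.detach().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, y.unsqueeze(1)).sum()
    sal, = torch.autograd.grad(score, x, create_graph=True)
    return sal.abs()

# Illustrative fooling penalty: push saliency away from the first half
# of the input features (an assumed region, not from the paper).
mask = torch.zeros(1, 8)
mask[:, :4] = 1.0
lam = 1.0  # trade-off between keeping accuracy and altering explanations

for _ in range(5):
    opt.zero_grad()
    sal = simplegrad_saliency(model, x, y)
    # Fine-tuning objective: original task loss + interpretation penalty.
    loss = ce(model(x), y) + lam * (sal * mask).sum()
    loss.backward()
    opt.step()
```

The key design point the abstract highlights is that the interpretation result enters the objective directly, so ordinary gradient-based fine-tuning can reshape the explanations while the cross-entropy term keeps the classifier's accuracy essentially intact.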
Juyeon Heo (Sungkyunkwan University)
Sunghwan Joo (Sungkyunkwan University)
Taesup Moon (Sungkyunkwan University (SKKU))
Taesup Moon is currently an associate professor at Sungkyunkwan University (SKKU), Korea. Prior to joining SKKU in 2017, he was an assistant professor at DGIST from 2015 to 2017, a research staff member at Samsung Advanced Institute of Technology (SAIT) from 2013 to 2015, a postdoctoral researcher at UC Berkeley, Statistics from 2012 to 2013, and a research scientist at Yahoo! Labs from 2008 to 2012. He got his Ph.D. and MS degrees in Electrical Engineering from Stanford University, CA USA in 2008 and 2004, respectively, and his BS degree in Electrical Engineering from Seoul National University, Korea in 2002. His research interests are in deep learning, statistical machine learning, data science, signal processing, and information theory.
More from the Same Authors
2020 Poster: Continual Learning with Node-Importance based Adaptive Group Sparse Regularization
Sangwon Jung · Hongjoon Ahn · Sungmin Cha · Taesup Moon
2019 Poster: Uncertainty-based Continual Learning with Adaptive Regularization
Hongjoon Ahn · Sungmin Cha · Donggyu Lee · Taesup Moon
2016 Poster: Neural Universal Discrete Denoiser
Taesup Moon · Seonwoo Min · Byunghan Lee · Sungroh Yoon