Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks. In this regard, the study of powerful attack models sheds light on the sources of vulnerability in these classifiers, hopefully leading to more robust ones. In this paper, we introduce AdvFlow: a novel black-box adversarial attack method on image classifiers that exploits the power of normalizing flows to model the density of adversarial examples around a given target image. We show that the proposed method generates adversaries that closely follow the clean data distribution, a property that makes their detection less likely. Moreover, our experimental results show that the proposed approach performs competitively with existing attack methods on defended classifiers.
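The core idea can be illustrated with a short sketch: a normalizing flow pre-trained on clean data maps latent codes to images, and the attack searches the latent neighbourhood of a clean image for a misclassified one using only black-box queries to the target model. The snippet below is a minimal, hypothetical illustration of that idea, not the authors' implementation: the `flow.forward`/`flow.inverse` and `classifier` interfaces are stand-ins, the NES-style gradient estimator is one common choice for black-box search, and the perturbation-budget projection used in practice is omitted for brevity.

```python
import numpy as np

def margin_loss(logits, y):
    """Untargeted margin loss: positive while the true class still wins."""
    logits = np.asarray(logits, dtype=float)
    top_other = np.max(np.delete(logits, y))  # best competing class score
    return logits[y] - top_other

def nes_latent_attack(x, y, flow, classifier, sigma=0.1, lr=0.01,
                      n_samples=20, n_steps=100):
    """Search the flow's latent space around a clean image x (true label y)
    for an adversarial example, using NES to estimate the loss gradient
    from black-box queries only."""
    z = flow.inverse(x)                        # latent code of the clean image
    for _ in range(n_steps):
        grad = np.zeros_like(z)
        for _ in range(n_samples):
            u = np.random.randn(*z.shape)      # antithetic perturbation pair
            loss_pos = margin_loss(classifier(flow.forward(z + sigma * u)), y)
            loss_neg = margin_loss(classifier(flow.forward(z - sigma * u)), y)
            grad += (loss_pos - loss_neg) * u
        grad /= 2.0 * sigma * n_samples        # NES gradient estimate
        z -= lr * grad                         # descend the adversarial loss
        x_adv = flow.forward(z)
        if np.argmax(classifier(x_adv)) != y:  # success: predicted label flipped
            return x_adv
    return flow.forward(z)
```

Searching in the latent space rather than in pixel space is what keeps the candidate adversaries close to the clean data distribution: the flow only generates images it has learned to consider likely.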
Author Information
Hadi Mohaghegh Dolatabadi (University of Melbourne)
Sarah Erfani (University of Melbourne)
Christopher Leckie (University of Melbourne)
More from the Same Authors
- 2021 Poster: $\alpha$-IoU: A Family of Power Intersection over Union Losses for Bounding Box Regression
  Jiabo He · Sarah Erfani · Xingjun Ma · James Bailey · Ying Chi · Xian-Sheng Hua
- 2021 Poster: Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
  Hanxun Huang · Yisen Wang · Sarah Erfani · Quanquan Gu · James Bailey · Xingjun Ma