Poster
DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain
Fengpeng Li · Kemou Li · Haiwei Wu · Jinyu Tian · Jiantao Zhou
East Exhibit Hall A-C #4500
To protect deep neural networks (DNNs) from adversarial attacks, adversarial training (AT) incorporates adversarial examples (AEs) into model training. Recent studies show that adversarial attacks disproportionately impact the patterns within the phase of a sample's frequency spectrum---which typically carries crucial semantic information---more than those in the amplitude, causing the model to misclassify AEs. We find that, by mixing the amplitudes of training samples' frequency spectra with those of distractor images during AT, the model can be guided to focus on phase patterns unaffected by adversarial perturbations, thereby improving its robustness. Unfortunately, selecting appropriate distractor images remains challenging: they should mix the amplitude without affecting the phase patterns. To this end, in this paper, we propose an optimized \emph{Adversarial Amplitude Generator (AAG)} to achieve a better tradeoff between improving the model's robustness and retaining phase patterns. Based on this generator, together with an efficient AE production procedure, we design a new \emph{Dual Adversarial Training (DAT)} strategy. Experiments on various datasets show that our proposed DAT leads to significantly improved robustness against diverse adversarial attacks. The source code is provided in the supplementary materials.
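The core amplitude mix-up operation described above can be sketched as follows. This is a minimal illustrative implementation of frequency-domain amplitude mixing, not the authors' learned AAG: the function name `amplitude_mixup` and the mixing coefficient `lam` are assumptions for illustration, and a fixed distractor stands in for the generator's output.

```python
import numpy as np

def amplitude_mixup(x, distractor, lam=0.5):
    """Mix the amplitude spectrum of `x` with that of `distractor`,
    while keeping the phase spectrum of `x` untouched.

    x, distractor : real arrays of the same shape (e.g. C x H x W images)
    lam           : mixing weight for the distractor's amplitude
    """
    # Per-channel 2D FFT of both images
    fx = np.fft.fft2(x, axes=(-2, -1))
    fd = np.fft.fft2(distractor, axes=(-2, -1))

    # Decompose into amplitude and phase
    amp_x, pha_x = np.abs(fx), np.angle(fx)
    amp_d = np.abs(fd)

    # Convex combination of amplitudes; the sample's phase is preserved
    amp_mix = (1.0 - lam) * amp_x + lam * amp_d

    # Recombine and invert back to the spatial domain
    mixed = amp_mix * np.exp(1j * pha_x)
    return np.real(np.fft.ifft2(mixed, axes=(-2, -1)))
```

With `lam=0` the operation is the identity, and larger `lam` perturbs only the amplitude, which is the property that encourages the model to rely on phase patterns.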