BatchNorm Layers have an Outsized Effect on Adversarial Robustness
Noam Zeise · Tiffany Vlaar
Abstract
Training different layers differently may affect the resulting adversarial robustness and clean accuracy in adversarial training. We focus on the BatchNorm layers and study their unique role in adversarial training. Through a partial adversarial (pre-)training methodology, we investigate how different optimization strategies for the BatchNorm layers affect adversarial robustness and interplay with other model design choices.
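One way to train BatchNorm layers differently from the rest of the network, as the abstract describes, is to place their parameters in a separate optimizer parameter group. The sketch below is illustrative only: the model, optimizer choice, and learning rates are assumptions, not the paper's actual setup.

```python
import torch
import torch.nn as nn

# Toy model; the architecture is purely illustrative.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)

# Partition parameters: BatchNorm affine parameters vs. everything else.
bn_params, other_params = [], []
for module in model.modules():
    is_bn = isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d))
    for p in module.parameters(recurse=False):
        (bn_params if is_bn else other_params).append(p)

# Apply a different optimization strategy to the BatchNorm group,
# e.g. a smaller learning rate (values here are arbitrary).
optimizer = torch.optim.SGD(
    [{"params": other_params, "lr": 0.1},
     {"params": bn_params, "lr": 0.01}],
    momentum=0.9,
)
```

Setting the BatchNorm group's learning rate to zero (or marking those parameters `requires_grad = False`) would instead freeze the BatchNorm layers entirely, another strategy one might compare in a partial-training study.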