Poster
RAMP: Boosting Adversarial Robustness Against Multiple $l_p$ Perturbations
Enyi Jiang · Gagandeep Singh
East Exhibit Hall A-C #2207
Abstract:
Most existing works focus on improving robustness against adversarial attacks bounded by a single $l_p$ norm using adversarial training (AT). However, the multiple-norm robustness (union accuracy) of these AT models remains low. The tradeoffs between robustness against multiple $l_p$ perturbations, and between accuracy and robustness, make obtaining good union and clean accuracy challenging. By analyzing these tradeoffs through the lens of distribution shifts, we design a logit pairing loss that improves union accuracy. We further connect natural training (NT) with AT via gradient projection to incorporate useful information from NT into AT, which we show empirically and theoretically moderates the accuracy/robustness tradeoff. Combining these contributions, we propose a training framework, \textbf{RAMP}, that boosts robustness against multiple $l_p$ perturbations. We show \textbf{RAMP} can be easily adapted for both robust fine-tuning and full AT. For robust fine-tuning, \textbf{RAMP} obtains a union accuracy of up to $53.5\%$ on CIFAR-10 and $29.5\%$ on ImageNet. When training from scratch, \textbf{RAMP} achieves a union accuracy of $44.6\%$ and a good clean accuracy of $81.2\%$ with ResNet-18 against AutoAttack on CIFAR-10.
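To make the two ingredients named in the abstract concrete, here is a minimal PyTorch sketch of (1) a logit pairing loss between adversarial examples from two different $l_p$ threat models and (2) gradient projection that merges an NT gradient with an AT gradient. The function names, the KL-based pairing, and the projection rule are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch, not RAMP's actual implementation.
import torch
import torch.nn.functional as F


def logit_pairing_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Pair the model's logits on two adversarial views of the same batch
    (e.g., an l_inf-bounded and an l_1-bounded perturbation) via KL divergence,
    encouraging consistent predictions across threat models."""
    return F.kl_div(
        F.log_softmax(logits_a, dim=1),   # log-probabilities of view A
        F.softmax(logits_b, dim=1),       # probabilities of view B
        reduction="batchmean",
    )


def project_grads(g_nt: torch.Tensor, g_at: torch.Tensor) -> torch.Tensor:
    """Merge a natural-training gradient g_nt into an adversarial-training
    gradient g_at: if the two conflict (negative dot product), remove the
    component of g_nt that opposes g_at before summing, so NT information
    is used without undoing the robustness update."""
    dot = torch.dot(g_nt.flatten(), g_at.flatten())
    if dot < 0:
        g_nt = g_nt - (dot / g_at.norm() ** 2) * g_at
    return g_at + g_nt
```

In this sketch the pairing term would be added to the AT objective with some weight, and `project_grads` would be applied per parameter (or to the flattened parameter vector) before the optimizer step; both choices are assumptions for illustration.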