
Workshop: Meta-Learning

Learning to Generate Noise for Multi-Attack Robustness

Divyam Madaan

Abstract: The majority of existing adversarial defense methods are tailored to defend against a single category of adversarial perturbation (e.g., $\ell_\infty$-attack). However, this leaves the system vulnerable, as an attacker can simply adopt a different type of adversary to deceive it. Moreover, training on multiple perturbation types simultaneously significantly increases the computational overhead of training. To address these challenges, we propose a novel meta-learning framework that explicitly learns to generate noise to improve the model's robustness against multiple types of attacks. Its key component is the Meta Noise Generator (MNG), which outputs optimal noise to stochastically perturb a given sample so as to lower the error on diverse adversarial perturbations. Using the samples generated by MNG, we train the model by enforcing label consistency across the multiple perturbations. We validate the robustness of models trained with our scheme on various datasets and against a wide variety of perturbations, demonstrating that it significantly outperforms the baselines across multiple perturbation types at marginal computational cost.
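The abstract's core training signal can be pictured as follows. This is a hedged, minimal sketch and not the authors' implementation: a toy classifier is evaluated on several stochastically perturbed copies of the same sample, and disagreement between those predictions is penalized as a stand-in for the label-consistency term. All names here (`toy_model`, `mng_noise`, `consistency_loss`) are illustrative assumptions, and the real MNG is a learned generator rather than fixed Gaussian noise.

```python
import math
import random

def toy_model(x, w):
    """Toy binary classifier: sigmoid of a dot product (not the paper's model)."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def mng_noise(x, scale, rng):
    """Stand-in for the Meta Noise Generator: stochastic additive noise.
    In the paper, this noise distribution is itself meta-learned."""
    return [xi + rng.gauss(0.0, scale) for xi in x]

def consistency_loss(probs):
    """Penalize disagreement across perturbed views: variance of predictions.
    The paper enforces label consistency; variance is a simple proxy here."""
    mean = sum(probs) / len(probs)
    return sum((p - mean) ** 2 for p in probs) / len(probs)

rng = random.Random(0)
w = [0.5, -0.3]          # toy model weights
x = [1.0, 2.0]           # one input sample

# Predict on several stochastically perturbed copies of the same sample,
# then measure how much the predictions disagree.
views = [mng_noise(x, 0.1, rng) for _ in range(4)]
probs = [toy_model(v, w) for v in views]
loss = consistency_loss(probs)
```

Minimizing such a consistency term alongside the usual classification loss pushes the model toward predictions that are stable under the generator's perturbations, which is the intuition behind training with MNG-generated samples.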
