Poster

Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness

Fanny Yang · Zuowen Wang · Christina Heinze-Deml

East Exhibition Hall B, C #17

Keywords: [ Algorithms ] [ Adversarial Learning ] [ Regularization ] [ Deep Learning -> Optimization for Deep Networks; Theory ]


Abstract:

This work provides theoretical and empirical evidence that invariance-inducing regularizers can increase predictive accuracy under worst-case spatial transformations (spatial robustness). Evaluated on these adversarially transformed examples, standard and adversarial training with such regularizers achieve a relative error reduction of 20% on CIFAR-10 within the same computational budget, even surpassing handcrafted spatially equivariant networks. Furthermore, on SVHN, which is known to have inherent variance in orientation, we observe that robust training also improves standard accuracy on the test set. We prove that this no-trade-off phenomenon holds for adversarial examples drawn from transformation groups.
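The core idea of an invariance-inducing regularizer combined with worst-case spatial transformations can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a linear softmax classifier, a small discrete transformation group (90-degree rotations and one-pixel shifts), and a squared-distance penalty between the predictions on the clean and worst-case transformed inputs, whereas the paper considers continuous rotations/translations and deep networks.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(p, y):
    return -np.log(p[y] + 1e-12)

def predict(W, x):
    # Linear softmax classifier on a flattened image (illustrative stand-in for a network)
    return softmax(W @ x.ravel())

def worst_case_transform(W, x, y, transforms):
    """Pick the group element that maximizes the loss (worst-case spatial attack)."""
    losses = [cross_entropy(predict(W, t(x)), y) for t in transforms]
    return transforms[int(np.argmax(losses))](x)

def regularized_loss(W, x, y, transforms, lam=1.0):
    """Standard loss plus an invariance penalty on the worst-case transformed example."""
    x_adv = worst_case_transform(W, x, y, transforms)
    p, p_adv = predict(W, x), predict(W, x_adv)
    inv_penalty = np.sum((p - p_adv) ** 2)  # squared distance as a simple invariance surrogate
    return cross_entropy(p, y) + lam * inv_penalty

# A small discrete transformation group: 90-degree rotations and +/-1 pixel shifts
transforms = [
    lambda x: x,
    lambda x: np.rot90(x, 1),
    lambda x: np.rot90(x, 2),
    lambda x: np.rot90(x, 3),
    lambda x: np.roll(x, 1, axis=1),
    lambda x: np.roll(x, -1, axis=1),
]

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 8 * 8)) * 0.01  # 10 classes, 8x8 toy images
x = rng.standard_normal((8, 8))
loss = regularized_loss(W, x, y=3, transforms=transforms, lam=1.0)
```

Minimizing `regularized_loss` over `W` (e.g., by gradient descent) pushes the predictions on clean and worst-case transformed inputs together, which is the invariance-inducing effect the abstract describes; since the penalty is non-negative, the regularized loss always upper-bounds the standard loss.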