

Poster

Self-Supervised Adversarial Training via Diverse Augmented Queries and Self-Supervised Double Perturbation

Ruize Zhang · Sheng Tang · Juan Cao

East Exhibit Hall A-C #1305
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Recently, several works have studied self-supervised adversarial training, a learning paradigm that learns robust features without labels. While those works have narrowed the performance gap between self-supervised adversarial training (SAT) and supervised adversarial training (supervised AT), a well-established formulation of SAT and its connection with supervised AT remain under-explored. Based on a simple SAT benchmark, we find that SAT still suffers from a large robust generalization gap and from degraded accuracy on natural samples. We hypothesize this is due to a lack of data complexity and model regularization, and propose a method named DAQ-SDP (Diverse Augmented Queries, Self-supervised Double Perturbation). We first challenge the previous conclusion that complex data augmentations degrade robustness in SAT, by using diversely augmented samples as queries to guide adversarial training. Inspired by previous works in supervised AT, we then incorporate a self-supervised double perturbation scheme into self-supervised learning (SSL), which promotes robustness that transfers to downstream classification. Our method can be seamlessly combined with models pretrained by different SSL frameworks without revising their learning objectives, helps bridge the gap between SAT and supervised AT, and improves both robust and natural accuracy across different SSL frameworks.
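To make the core idea concrete, here is a minimal, hypothetical sketch of self-supervised adversarial training: a PGD-style attack that maximizes a contrastive (InfoNCE) loss rather than a supervised cross-entropy loss, with embeddings of an augmented "query" view guiding the perturbation of the key view. This is an illustration of the general SAT setup under assumed hyperparameters (`eps`, `alpha`, `steps`), not the authors' exact DAQ-SDP algorithm.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q, k, temperature=0.2):
    """Contrastive loss: row i of q should match row i of k; other rows are negatives."""
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.t() / temperature
    labels = torch.arange(q.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

def pgd_on_ssl(encoder, x_query, x_key, eps=8 / 255, alpha=2 / 255, steps=5):
    """PGD attack against a self-supervised objective (illustrative, not DAQ-SDP).

    x_query: an (augmented) query view whose embeddings guide the attack.
    x_key:   the view being perturbed; the attack maximizes the contrastive loss.
    """
    delta = torch.zeros_like(x_key).uniform_(-eps, eps).requires_grad_(True)
    with torch.no_grad():
        q = encoder(x_query)  # fixed query embeddings
    for _ in range(steps):
        loss = info_nce_loss(q, encoder((x_key + delta).clamp(0, 1)))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the SSL loss
            delta.clamp_(-eps, eps)             # stay in the L-inf ball
            delta.grad = None
    return (x_key + delta.detach()).clamp(0, 1)
```

In a SAT loop, the adversarial examples returned here would replace (or accompany) the clean key view when minimizing the same contrastive objective, so no labels are ever required.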
