
Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks
Jianan Zhou · Jianing Zhu · Jingfeng Zhang · Tongliang Liu · Gang Niu · Bo Han · Masashi Sugiyama


Adversarial training (AT) with imperfect supervision is significant but has received limited attention. To push AT towards more practical scenarios, we explore a new yet challenging setting, i.e., AT with complementary labels (CLs), which specify a class that a data sample does not belong to. However, directly combining AT with existing methods for CLs results in consistent failure, whereas a simple two-stage training baseline does not. In this paper, we further explore this phenomenon and identify the underlying challenges of AT with CLs as intractable adversarial optimization and low-quality adversarial examples. To address these problems, we propose a new learning strategy using gradually informative attacks, which consists of two critical components: 1) Warm-up Attack (Warm-up) gently raises the adversarial perturbation budgets to ease the adversarial optimization with CLs; 2) Pseudo-Label Attack (PLA) incorporates the progressively informative model predictions into a corrected complementary loss. Extensive experiments demonstrate the effectiveness of our method on a range of benchmark datasets. The code is publicly available at: https://github.com/RoyalSkye/ATCL.
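The sketch below illustrates, at a high level, how the two components described in the abstract could fit into a PGD-style adversarial training loop with complementary labels. The function names, the linear warm-up schedule, the pseudo-label weighting, and the exact form of the complementary loss are illustrative assumptions, not the authors' implementation; see https://github.com/RoyalSkye/ATCL for the official code.

```python
import torch
import torch.nn.functional as F


def warmup_epsilon(epoch, warmup_epochs, eps_max):
    """Warm-up Attack: gently raise the perturbation budget over the first epochs
    (linear ramp is an assumption; the abstract only states that budgets grow gradually)."""
    return eps_max * min(1.0, epoch / max(1, warmup_epochs))


def complementary_loss(logits, comp_labels, pseudo_labels=None, alpha=0.0):
    """Loss on complementary labels, optionally corrected with model pseudo-labels.

    comp_labels:   the class each sample does NOT belong to.
    pseudo_labels: the model's current predicted class (progressively more informative).
    alpha:         weight on the pseudo-label term (simple convex mix; an assumption).
    """
    probs = F.softmax(logits, dim=1)
    p_comp = probs.gather(1, comp_labels.view(-1, 1)).squeeze(1)
    # "Not this class": push probability mass away from the complementary label.
    comp_term = -torch.log((1.0 - p_comp).clamp(min=1e-6)).mean()
    if pseudo_labels is None or alpha == 0.0:
        return comp_term
    pl_term = F.cross_entropy(logits, pseudo_labels)
    return (1.0 - alpha) * comp_term + alpha * pl_term


def pseudo_label_attack(model, x, comp_labels, eps, step_size, steps,
                        pseudo_labels=None, alpha=0.0):
    """PGD-style attack that maximizes the (corrected) complementary loss."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = complementary_loss(model(x_adv), comp_labels, pseudo_labels, alpha)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        # Project back into the epsilon ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def train_epoch(model, loader, optimizer, epoch, total_epochs=100,
                warmup_epochs=10, eps_max=8 / 255, step_size=2 / 255, steps=10):
    eps = warmup_epsilon(epoch, warmup_epochs, eps_max)      # Warm-up Attack
    alpha = min(1.0, epoch / total_epochs)                   # trust pseudo-labels more over time
    model.train()
    for x, comp_labels in loader:
        with torch.no_grad():
            pseudo = model(x).argmax(dim=1)                  # current predictions for the PLA term
        x_adv = pseudo_label_attack(model, x, comp_labels, eps, step_size, steps, pseudo, alpha)
        loss = complementary_loss(model(x_adv), comp_labels, pseudo, alpha)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```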

Author Information

Jianan Zhou (Nanyang Technological University)
Jianing Zhu (Hong Kong Baptist University)
Tongliang Liu (The University of Sydney)
Gang Niu (RIKEN)
Masashi Sugiyama (RIKEN / University of Tokyo)
