Adversarial training of deep neural networks is known to be significantly more data-hungry than standard training. Furthermore, complex data augmentations such as AutoAugment, which have led to substantial gains in standard training of image classifiers, have not been successful with adversarial training. We first explain this contrasting behavior by viewing augmentation during training as a problem of domain generalization, and further propose Diverse Augmentation-based Joint Adversarial Training (DAJAT) to use data augmentations effectively in adversarial training. We aim to handle the conflicting goals of enhancing the diversity of the training dataset and training with data that is close to the test distribution by using a combination of simple and complex augmentations with separate batch-normalization layers during training. We further utilize the popular Jensen-Shannon divergence loss to encourage the joint learning of the diverse augmentations, thereby allowing simple augmentations to guide the learning of complex ones. Lastly, to improve the computational efficiency of the proposed method, we propose and utilize a two-step defense, Ascending Constraint Adversarial Training (ACAT), which uses an increasing epsilon schedule and weight-space smoothing to prevent gradient masking. The proposed method, DAJAT, achieves a substantially better robustness-accuracy trade-off than existing methods on the RobustBench leaderboard on ResNet-18 and WideResNet-34-10. The code for implementing DAJAT is available at: https://github.com/val-iisc/DAJAT
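The Jensen-Shannon divergence loss mentioned in the abstract couples the model's predictions on differently augmented views of the same image. A minimal NumPy sketch of that consistency term is shown below; this is an illustrative implementation of the standard mixture-based JS divergence (mean KL of each distribution to their mixture), not the authors' code, and the function name and `eps` smoothing constant are assumptions.

```python
import numpy as np

def js_divergence(dists, eps=1e-12):
    """Jensen-Shannon divergence among K probability distributions.

    dists: array-like of shape (K, C) -- e.g. K softmax outputs over C
    classes, one per augmented view of the same image.
    Returns mean_k KL(p_k || M), where M is the uniform mixture of the K
    distributions. It is 0 when all views agree, and at most log(K).
    """
    p = np.asarray(dists, dtype=float)
    m = p.mean(axis=0, keepdims=True)                       # mixture M
    kl = np.sum(p * (np.log(p + eps) - np.log(m + eps)), axis=1)
    return float(kl.mean())
```

As a consistency regularizer, this term is minimized when the predictions on simple and complex augmentations agree, which is one way the simple augmentations can guide the learning of the complex ones.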
Author Information
Sravanti Addepalli (Indian Institute of Science)
Samyak Jain (Indian Institute of Technology (BHU), Varanasi)
Venkatesh Babu R (Indian Institute of Science)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Efficient and Effective Augmentation Strategy for Adversarial Training
  Thu, Dec 1 through Fri, Dec 2, Room Hall J #422
More from the Same Authors
- 2022: Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks
  Sravanti Addepalli · Anshul Nasery · Venkatesh Babu R · Praneeth Netrapalli · Prateek Jain
- 2022 Spotlight: Lightning Talks 6A-3
  Junyu Xie · Chengliang Zhong · Ali Ayub · Sravanti Addepalli · Harsh Rangwani · Jiapeng Tang · Yuchen Rao · Zhiying Jiang · Yuqi Wang · Xingzhe He · Gene Chou · Ilya Chugunov · Samyak Jain · Yuntao Chen · Weidi Xie · Sumukh K Aithal · Carter Fendley · Lev Markhasin · Yiqin Dai · Peixing You · Bastian Wandt · Yinyu Nie · Helge Rhodin · Felix Heide · Ji Xin · Angela Dai · Andrew Zisserman · Bi Wang · Xiaoxue Chen · Mayank Mishra · Zhao-Xiang Zhang · Venkatesh Babu R · Justus Thies · Ming Li · Hao Zhao · Venkatesh Babu R · Jimmy Lin · Fuchun Sun · Matthias Niessner · Guyue Zhou · Xiaodong Mu · Chuang Gan · Wenbing Huang
- 2022 Spotlight: Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data
  Harsh Rangwani · Sumukh K Aithal · Mayank Mishra · Venkatesh Babu R
- 2022 Spotlight: Lightning Talks 1B-3
  Chaofei Wang · Qixun Wang · Jing Xu · Long-Kai Huang · Xi Weng · Fei Ye · Harsh Rangwani · Shrinivas Ramasubramanian · Yifei Wang · Qisen Yang · Xu Luo · Lei Huang · Adrian G. Bors · Ying Wei · Xinglin Pan · Sho Takemori · Hong Zhu · Rui Huang · Lei Zhao · Yisen Wang · Kato Takashi · Shiji Song · Yanan Li · Rao Anwer · Yuhei Umeda · Salman Khan · Gao Huang · Wenjie Pei · Fahad Shahbaz Khan · Venkatesh Babu R · Zenglin Xu
- 2022 Spotlight: Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics
  Harsh Rangwani · Shrinivas Ramasubramanian · Sho Takemori · Kato Takashi · Yuhei Umeda · Venkatesh Babu R
- 2022 Poster: Subsidiary Prototype Alignment for Universal Domain Adaptation
  Jogendra Nath Kundu · Suvaansh Bhambri · Akshay R Kulkarni · Hiran Sarkar · Varun Jampani · Venkatesh Babu R
- 2022 Poster: Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data
  Harsh Rangwani · Sumukh K Aithal · Mayank Mishra · Venkatesh Babu R
- 2022 Poster: Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics
  Harsh Rangwani · Shrinivas Ramasubramanian · Sho Takemori · Kato Takashi · Yuhei Umeda · Venkatesh Babu R
- 2021 Poster: Towards Efficient and Effective Adversarial Training
  Gaurang Sriramanan · Sravanti Addepalli · Arya Baburaj · Venkatesh Babu R
- 2021 Poster: Non-local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation
  Jogendra Nath Kundu · Siddharth Seth · Anirudh Jamkhandi · Pradyumna YM · Varun Jampani · Anirban Chakraborty · Venkatesh Babu R
- 2021 Poster: Aligning Silhouette Topology for Self-Adaptive 3D Human Pose Recovery
  Ramesha Rakesh Mugaludi · Jogendra Nath Kundu · Varun Jampani · Venkatesh Babu R
- 2020 Poster: Your Classifier can Secretly Suffice Multi-Source Domain Adaptation
  Naveen Venkat · Jogendra Nath Kundu · Durgesh Singh · Ambareesh Revanur · Venkatesh Babu R
- 2020 Poster: Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
  Gaurang Sriramanan · Sravanti Addepalli · Arya Baburaj · Venkatesh Babu R
- 2020 Spotlight: Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
  Gaurang Sriramanan · Sravanti Addepalli · Arya Baburaj · Venkatesh Babu R