Understanding and Improving Robustness of Vision Transformers through patch-based Negative Augmentation »
We investigate the robustness of vision transformers (ViTs) through the lens of their distinctive patch-based architecture: they process an image as a sequence of image patches. We find that ViTs are surprisingly insensitive to patch-based transformations, even when the transformation largely destroys the original semantics and makes the image unrecognizable to humans. This indicates that ViTs rely heavily on features that survive such transformations but are generally not indicative of the semantic class to humans. Further investigation shows that these features are useful but non-robust: ViTs trained on them can achieve high in-distribution accuracy but break down under distribution shifts. This understanding leads us to ask: can training the model to rely less on these features improve ViT robustness and out-of-distribution performance? We use images transformed by our patch-based operations as negatively augmented views and introduce losses that regularize training away from these non-robust features. This view is complementary to existing research, which mostly focuses on augmenting inputs with semantics-preserving transformations to enforce model invariance. We show that patch-based negative augmentation consistently improves the robustness of ViTs on ImageNet-based robustness benchmarks across 20+ different experimental settings. Furthermore, we find that patch-based negative augmentation is complementary to traditional (positive) data augmentation techniques and to batch-based negative examples in contrastive learning.
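To make the idea concrete, below is a minimal sketch of patch-based negative augmentation, assuming a PyTorch image classifier. It shows one semantics-destroying transformation (patch shuffling) and an illustrative regularizer that penalizes confident predictions on the shuffled view. The patch size, the uniform-target loss, and the weight `neg_weight` are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch of patch-based negative augmentation (not the paper's exact code).
import torch
import torch.nn.functional as F


def patch_shuffle(images, patch_size=16):
    """Destroy global semantics by permuting non-overlapping patches.

    `images` has shape (B, C, H, W) with H and W divisible by `patch_size`.
    For simplicity, the same permutation is shared across the batch.
    """
    b, c, h, w = images.shape
    gh, gw = h // patch_size, w // patch_size
    # (B, C, H, W) -> (B, num_patches, C, patch_size, patch_size)
    patches = images.reshape(b, c, gh, patch_size, gw, patch_size)
    patches = patches.permute(0, 2, 4, 1, 3, 5).reshape(b, gh * gw, c, patch_size, patch_size)
    perm = torch.randperm(gh * gw, device=images.device)
    patches = patches[:, perm]
    # Reassemble the shuffled patches into an image of the original shape.
    patches = patches.reshape(b, gh, gw, c, patch_size, patch_size)
    return patches.permute(0, 3, 1, 4, 2, 5).reshape(b, c, h, w)


def loss_with_negative_augmentation(model, images, labels, neg_weight=0.3):
    """Standard cross-entropy plus a penalty on confident predictions
    for the negatively augmented (patch-shuffled) view."""
    ce = F.cross_entropy(model(images), labels)
    neg_logits = model(patch_shuffle(images))
    # Push predictions on the shuffled view toward the uniform distribution
    # (cross-entropy with a uniform target, i.e., KL to uniform up to a constant).
    neg_loss = -F.log_softmax(neg_logits, dim=-1).mean()
    return ce + neg_weight * neg_loss
```

Other patch-based operations described in the paper could be swapped in for `patch_shuffle`, and the uniform-target penalty is only one plausible choice of loss for discouraging reliance on features that survive the transformation.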
Author Information
Yao Qin (Google Research)
Chiyuan Zhang (Google Research)
Ting Chen (Google Brain)
Balaji Lakshminarayanan (Google Brain)
Balaji Lakshminarayanan is a research scientist at Google Brain. Prior to that, he was a research scientist at DeepMind. He received his PhD from the Gatsby Unit, University College London where he worked with Yee Whye Teh. His recent research has focused on probabilistic deep learning, specifically, uncertainty estimation, out-of-distribution robustness and deep generative models. Notable contributions relevant to the tutorial include developing state-of-the-art methods for calibration under dataset shift (such as deep ensembles and AugMix) and showing that deep generative models do not always know what they don't know. He has co-organized several workshops on "Uncertainty and Robustness in deep learning" and served as Area Chair for NeurIPS, ICML, ICLR and AISTATS.
Alex Beutel (Google Research)
Xuezhi Wang (Google)
More from the Same Authors
- 2021 : Understanding and Improving Robustness of Vision Transformers through patch-based Negative Augmentation »
  Yao Qin · Chiyuan Zhang · Ting Chen · Balaji Lakshminarayanan · Alex Beutel · Xuezhi Wang
- 2022 : Out-of-Distribution Detection and Selective Generation for Conditional Language Models »
  Jie Ren · Jiaming Luo · Yao Zhao · Kundan Krishna · Mohammad Saleh · Balaji Lakshminarayanan · Peter Liu
- 2022 : Reliability benchmarks for image segmentation »
  Estefany Kelly Buchanan · Michael Dusenberry · Jie Ren · Kevin Murphy · Balaji Lakshminarayanan · Dustin Tran
- 2022 : Pushing the Accuracy-Fairness Tradeoff Frontier with Introspective Self-play »
  Jeremiah Liu · Krishnamurthy Dvijotham · Jihyeon Lee · Quan Yuan · Martin Strobel · Balaji Lakshminarayanan · Deepak Ramachandran
- 2022 : Striving for data-model efficiency: Identifying data externalities on group performance »
  Esther Rolf · Ben Packer · Alex Beutel · Fernando Diaz
- 2022 : Improving Zero-shot Generalization and Robustness of Multi-modal Models »
  Yunhao Ge · Jie Ren · Ming-Hsuan Yang · Yuxiao Wang · Andrew Gallagher · Hartwig Adam · Laurent Itti · Balaji Lakshminarayanan · Jiaping Zhao
- 2022 : Improving the Robustness of Conditional Language Models by Detecting and Removing Input Noise »
  Kundan Krishna · Yao Zhao · Jie Ren · Balaji Lakshminarayanan · Jiaming Luo · Mohammad Saleh · Peter Liu
- 2022 Poster: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models »
  Jason Wei · Xuezhi Wang · Dale Schuurmans · Maarten Bosma · brian ichter · Fei Xia · Ed Chi · Quoc V Le · Denny Zhou
- 2022 Poster: The Privacy Onion Effect: Memorization is Relative »
  Nicholas Carlini · Matthew Jagielski · Chiyuan Zhang · Nicolas Papernot · Andreas Terzis · Florian Tramer
- 2022 Poster: A Unified Sequence Interface for Vision Tasks »
  Ting Chen · Saurabh Saxena · Lala Li · Tsung-Yi Lin · David Fleet · Geoffrey Hinton
- 2022 Poster: Learning to Reason with Neural Networks: Generalization, Unseen Data and Boolean Measures »
  Emmanuel Abbe · Samy Bengio · Elisabetta Cornacchia · Jon Kleinberg · Aryo Lotfi · Maithra Raghu · Chiyuan Zhang
- 2021 Poster: Why Do Better Loss Functions Lead to Less Transferable Features? »
  Simon Kornblith · Ting Chen · Honglak Lee · Mohammad Norouzi
- 2021 Poster: Improving Contrastive Learning on Imbalanced Data via Open-World Sampling »
  Ziyu Jiang · Tianlong Chen · Ting Chen · Zhangyang Wang
- 2021 Poster: Intriguing Properties of Contrastive Losses »
  Ting Chen · Calvin Luo · Lala Li
- 2021 Poster: Deep Learning with Label Differential Privacy »
  Badih Ghazi · Noah Golowich · Ravi Kumar · Pasin Manurangsi · Chiyuan Zhang
- 2021 Poster: Improved Transformer for High-Resolution GANs »
  Long Zhao · Zizhao Zhang · Ting Chen · Dimitris Metaxas · Han Zhang
- 2021 Poster: Improving Calibration through the Relationship with Adversarial Robustness »
  Yao Qin · Xuezhi Wang · Alex Beutel · Ed Chi
- 2021 Poster: Do Vision Transformers See Like Convolutional Neural Networks? »
  Maithra Raghu · Thomas Unterthiner · Simon Kornblith · Chiyuan Zhang · Alexey Dosovitskiy
- 2020 Poster: What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation »
  Vitaly Feldman · Chiyuan Zhang
- 2020 Spotlight: What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation »
  Vitaly Feldman · Chiyuan Zhang
- 2020 Tutorial: (Track2) Practical Uncertainty Estimation and Out-of-Distribution Robustness in Deep Learning Q&A »
  Dustin Tran · Balaji Lakshminarayanan · Jasper Snoek
- 2020 Poster: Fairness without Demographics through Adversarially Reweighted Learning »
  Preethi Lahoti · Alex Beutel · Jilin Chen · Kang Lee · Flavien Prost · Nithum Thain · Xuezhi Wang · Ed Chi
- 2020 Poster: What is being transferred in transfer learning? »
  Behnam Neyshabur · Hanie Sedghi · Chiyuan Zhang
- 2020 Tutorial: (Track2) Practical Uncertainty Estimation and Out-of-Distribution Robustness in Deep Learning »
  Dustin Tran · Balaji Lakshminarayanan · Jasper Snoek
- 2019 Poster: Transfusion: Understanding Transfer Learning for Medical Imaging »
  Maithra Raghu · Chiyuan Zhang · Jon Kleinberg · Samy Bengio
- 2017 Poster: Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles »
  Balaji Lakshminarayanan · Alexander Pritzel · Charles Blundell
- 2017 Spotlight: Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles »
  Balaji Lakshminarayanan · Alexander Pritzel · Charles Blundell