Real-world applications of machine learning require a model to be capable of dealing with domain shifts that might occur at test time due to natural perturbations to the data distribution induced by, for example, changes in the data collection conditions, or synthetic distortions such as adversarial attacks. While a learning system might be simultaneously vulnerable to natural and hand-engineered perturbations, previous work has mainly focused on developing techniques to alleviate the effects of specific types of distribution shifts. In this work, we propose a unified and versatile approach to mitigate both natural and artificial domain shifts via the use of random projections. We show that such projections, implemented as convolutional layers with random weights placed at the input of a model, are capable of increasing the overlap between the different distributions that may appear at training/testing time. We evaluate the proposed approach on settings where different types of distribution shifts occur, and show it provides gains in terms of improved out-of-distribution generalization in the domain generalization setting, as well as increased robustness to two types of adversarial perturbations on the CIFAR-10 dataset without requiring adversarial training.
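The core idea above is a fixed random projection implemented as a convolutional layer with random weights placed at the model's input. The following NumPy sketch is illustrative only, not the authors' implementation: the filter count, kernel size, and Gaussian initialization are assumptions, and the weights are drawn once and never trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(x, n_filters=3, k=3):
    """Apply a fixed random convolutional layer (valid padding, stride 1).

    x: array of shape (C, H, W); returns (n_filters, H-k+1, W-k+1).
    The weights are sampled once and kept frozen, i.e. they are not
    updated during training of the downstream model.
    """
    C, H, W = x.shape
    # Random filters, scaled by fan-in so activations stay well-behaved.
    w = rng.normal(0.0, 1.0 / np.sqrt(C * k * k), size=(n_filters, C, k, k))
    out = np.empty((n_filters, H - k + 1, W - k + 1))
    for f in range(n_filters):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[f, i, j] = np.sum(w[f] * x[:, i:i + k, j:j + k])
    return out

img = rng.normal(size=(3, 32, 32))  # a CIFAR-10-sized input
z = random_projection(img)
print(z.shape)  # (3, 30, 30)
```

In practice the projected output `z` (optionally with a fresh projection sampled per batch) would be fed to the classifier in place of the raw image, so that both clean and perturbed inputs pass through the same randomized front end.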
Author Information
Isabela Albuquerque (Institut National de la Recherche Scientifique)
Joao Monteiro (Institut National de la Recherche Scientifique)
Tiago H Falk (INRS-EMT)
More from the Same Authors
- 2021 : A versatile and efficient approach to summarize speech into utterance-level representations »
  Joao Monteiro · JAHANGIR ALAM · Tiago H Falk
- 2022 : Improving the Robustness of DistilHuBERT to Unseen Noisy Conditions via Data Augmentation, Curriculum Learning, and Multi-Task Enhancement »
  Heitor Guimarães · Arthur Pimentel · Anderson R. Avila · Mehdi Rezaghoizadeh · Tiago H Falk
- 2022 : Discovering Bugs in Vision Models using Off-the-shelf Image Generation and Captioning »
  Olivia Wiles · Isabela Albuquerque · Sven Gowal
- 2022 : Constraining Low-level Representations to Define Effective Confidence Scores »
  Joao Monteiro · Pau Rodriguez · Pierre-Andre Noel · Issam Hadj Laradji · David Vázquez
- 2023 Poster: CADet: Fully Self-Supervised Out-Of-Distribution Detection With Contrastive Learning »
  Charles Guille-Escuret · Pau Rodriguez · David Vazquez · Ioannis Mitliagkas · Joao Monteiro
- 2023 Poster: Group Robust Classification Without Any Group Information »
  Christos Tsirigotis · Joao Monteiro · Pau Rodriguez · David Vazquez · Aaron Courville
- 2021 : [O2] Not too close and not too far: enforcing monotonicity requires penalizing the right points »
  Joao Monteiro · · Hossein Hajimirsadeghi · Greg Mori
- 2018 : Poster spotlight #2 »
  Nicolo Fusi · Chidubem Arachie · Joao Monteiro · Steffen Wolf