

Poster in Workshop: Distribution shifts: connecting methods and applications (DistShift)

Randomly projecting out distribution shifts for improved robustness

Isabela Albuquerque · Joao Monteiro · Tiago H Falk


Abstract:

Real-world applications of machine learning require models to cope with distribution shifts that may occur at test time, whether natural perturbations of the data distribution induced by, for example, changes in data collection conditions, or synthetic distortions such as adversarial attacks. While a learning system may be simultaneously vulnerable to natural and hand-engineered perturbations, prior work has mainly focused on techniques that alleviate the effects of specific types of distribution shift. In this work, we propose a unified and versatile approach that mitigates both natural and artificial domain shifts via random projections. We show that such projections, implemented as convolutional layers with random weights placed at the input of a model, increase the overlap between the different distributions that may appear at training and testing time. We evaluate the proposed approach in settings where different types of distribution shift occur, and show that it improves out-of-distribution generalization in the domain generalization setting and increases robustness to two types of adversarial perturbations on the CIFAR-10 dataset, without requiring adversarial training.
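The core mechanism the abstract describes (a convolutional layer with random, frozen weights placed at the input of the model) can be sketched in a few lines. Below is a minimal PyTorch sketch under stated assumptions: the kernel size, the channel-preserving shape of the projection, and the toy backbone in the usage example are illustrative choices, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class RandomProjectionInput(nn.Module):
    """Prepends a frozen, randomly initialized convolution (a random
    projection) to an arbitrary backbone model.

    Sketch only: kernel size and channel counts are assumptions, not
    the configuration reported in the paper.
    """

    def __init__(self, backbone: nn.Module, in_channels: int = 3, kernel_size: int = 3):
        super().__init__()
        # Channel- and shape-preserving conv, so the backbone's expected
        # input dimensions are unchanged.
        self.projection = nn.Conv2d(
            in_channels, in_channels, kernel_size,
            padding=kernel_size // 2, bias=False,
        )
        # Freeze the random weights: the projection is never trained.
        for p in self.projection.parameters():
            p.requires_grad = False
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Randomly project the input, then apply the trainable backbone.
        return self.backbone(self.projection(x))


if __name__ == "__main__":
    # Hypothetical tiny backbone, standing in for any classifier.
    backbone = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 10),
    )
    model = RandomProjectionInput(backbone)
    out = model(torch.randn(4, 3, 32, 32))  # CIFAR-10-sized inputs
    print(out.shape)  # torch.Size([4, 10])
```

Because the projection is frozen, training proceeds exactly as for the backbone alone; only the input representation changes, which is what allows the same wrapper to be applied regardless of the type of distribution shift being targeted.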
