Poster in Workshop: Distribution shifts: connecting methods and applications (DistShift)

Test Time Robustification of Deep Models via Adaptation and Augmentation

Marvin Zhang · Sergey Levine · Chelsea Finn


Abstract:

We study the problem of test time robustification, i.e., using the test input to improve model robustness. We aim to devise methods that make no assumptions about the model training process and are broadly applicable at test time. We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable: when presented with a test example, perform different data augmentations on the data point, and then adapt (all of) the model parameters by minimizing the entropy of the model's average, or marginal, output distribution across the augmentations. In our experiments, we demonstrate that this approach consistently improves robust ResNet and vision transformer models. We achieve several new state-of-the-art results for test shifts caused by image corruptions (ImageNet-C), renditions of common objects (ImageNet-R), and, among ResNet-50 models, adversarially chosen natural examples (ImageNet-A).
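The procedure described in the abstract can be illustrated with a short sketch. The code below is a minimal, hedged rendering of marginal-entropy minimization at test time, not the authors' released implementation: it assumes a PyTorch classifier `model`, a single test input `x` (a C x H x W tensor), and a user-supplied list of augmentation callables `augmentations`; the function name, hyperparameters, and helper structure are illustrative.

```python
import torch
import torch.nn.functional as F

def adapt_single_test_point(model, x, augmentations, lr=1e-4, steps=1):
    """Adapt all model parameters on one test input by minimizing the entropy
    of the marginal (augmentation-averaged) predictive distribution."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()  # all parameters are updated during adaptation
    for _ in range(steps):
        # Apply each augmentation to the same test point and batch the results.
        batch = torch.stack([aug(x) for aug in augmentations], dim=0)
        probs = F.softmax(model(batch), dim=-1)   # per-augmentation predictions
        marginal = probs.mean(dim=0)              # average over augmentations
        entropy = -(marginal * torch.log(marginal + 1e-12)).sum()
        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        # Predict on the original (unaugmented) input after adaptation.
        return model(x.unsqueeze(0)).argmax(dim=-1)
```

Under these assumptions, each test point is handled independently: the augmentations form a small batch, the marginal prediction is made more confident via the entropy objective, and the adapted model is then used to classify the original input.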
