

Poster in Workshop on Distribution Shifts: New Frontiers with Foundation Models

Turn Down the Noise: Leveraging Diffusion Models for Test-time Adaptation via Pseudo-label Ensembling

Mrigank Raman · Rohan Shah · Akash Kannan · Pranit Chawla

Keywords: [ distribution shifts ] [ diffusion models ] [ test-time adaptation ] [ self-supervised learning ]


Abstract:

The goal of test-time adaptation is to adapt a source-pretrained model to a target domain without relying on any source data. Typically, this is done either by updating the parameters of the model using inputs from the target domain (model adaptation) or by modifying the inputs themselves (input adaptation). However, methods that modify the model suffer from compounding noisy updates, whereas methods that modify the input must adapt to every new data point from scratch and struggle with certain distribution shifts. We introduce D-TAPE (Diffusion-infused Test-time Adaptation via Pseudo-label Ensembling), which leverages a pre-trained diffusion model to project target-domain images closer to the source domain and iteratively updates the model via a pseudo-label ensembling scheme. D-TAPE combines the advantages of model and input adaptation while mitigating their shortcomings. Our experiments on CIFAR-10C demonstrate D-TAPE's superiority: it outperforms the strongest baseline by an average of 1.7% across 15 diverse corruptions and surpasses the strongest input adaptation baseline by an average of 18%.
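The abstract does not specify how the pseudo-label ensembling combines predictions. A minimal sketch of one plausible reading, assuming the classifier's probabilities on the original target image and on its diffusion-projected version are averaged with a hypothetical mixing weight `alpha` before taking the argmax as the pseudo-label (all names here are illustrative, not the authors' actual implementation):

```python
import numpy as np

def ensemble_pseudo_labels(probs_original, probs_projected, alpha=0.5):
    """Hypothetical pseudo-label ensembling sketch.

    probs_original:  classifier softmax outputs on the raw target images
    probs_projected: classifier softmax outputs on the diffusion-projected images
    alpha:           assumed mixing weight between the two prediction sources
    Returns the ensembled pseudo-labels and the mixed probability distribution.
    """
    mixed = alpha * probs_original + (1 - alpha) * probs_projected
    mixed = mixed / mixed.sum(axis=-1, keepdims=True)  # renormalize
    return mixed.argmax(axis=-1), mixed

# Toy example: 2 samples, 3 classes.
p_orig = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.5, 0.3]])
p_proj = np.array([[0.4, 0.4, 0.2],
                   [0.1, 0.7, 0.2]])
labels, mixed = ensemble_pseudo_labels(p_orig, p_proj)
# labels → [0, 1]
```

The resulting pseudo-labels would then supervise the iterative model updates the abstract describes; the specific loss and update schedule are left to the paper.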
