NeurIPS 2023


Workshop

Workshop on Distribution Shifts: New Frontiers with Foundation Models

Rebecca Roelofs · Fanny Yang · Hongseok Namkoong · Masashi Sugiyama · Jacob Eisenstein · Pang Wei Koh · Shiori Sagawa · Tatsunori Hashimoto · Yoonho Lee

Room R06-R09 (level 2)
Fri 15 Dec, 7 a.m. PST

Tagline: This workshop focuses on distribution shifts in the context of foundation models.

Distribution shifts, where a model is deployed on a data distribution different from the one it was trained on, pose significant robustness challenges in real-world ML applications. Such shifts are often unavoidable in the wild and have been shown to substantially degrade model performance across a wide range of applications. For example, models can systematically fail when tested on patients from different hospitals or people from different demographics. Training models that are robust to such distribution shifts is a rapidly growing area of interest in the ML community, and the goal of our workshop is to foster discussions and further research on distribution shifts.

In the context of distribution shifts, our workshop this year focuses on foundation models: large pretrained models that can be adapted to a wide range of tasks. Foundation models open up an exciting new frontier in the study of distribution shifts, raising open research questions such as how pre-training improves robustness, how to fine-tune foundation models for increased robustness, how to leverage foundation models’ generative capabilities for robustness, and how to handle discrepancies between standard pre-training distributions and the downstream distributions of interest. We aim to facilitate discussions around these topics by bringing together researchers working on distribution shifts and foundation models.


Schedule