Workshop

Distribution shifts: connecting methods and applications (DistShift)

Shiori Sagawa · Pang Wei Koh · Fanny Yang · Hongseok Namkoong · Jiashi Feng · Kate Saenko · Percy Liang · Sarah Bird · Sergey Levine

Mon 13 Dec, 9 a.m. PST

Distribution shifts---where a model is deployed on a data distribution different from the one it was trained on---pose significant robustness challenges in real-world ML applications. Such shifts are often unavoidable in the wild and have been shown to substantially degrade model performance in applications such as biomedicine, wildlife conservation, sustainable development, robotics, education, and criminal justice. For example, models can systematically fail when tested on patients from different hospitals or on people from different demographic groups. Despite the ubiquity of distribution shifts in ML applications, work on these types of real-world shifts is currently underrepresented in the ML research community, with prior work generally focusing on synthetic shifts instead. However, recent work has shown that models robust to one kind of shift need not be robust to another, underscoring the importance and urgency of studying the types of distribution shifts that arise in real-world ML deployments. With this workshop, we aim to facilitate deeper exchanges between domain experts in various ML application areas and more methods-oriented researchers, and to ground the development of methods for characterizing and mitigating distribution shifts in real-world application contexts.

Timezone: America/Los_Angeles

Schedule