Workshop: Algorithmic Fairness through the lens of Causality and Robustness

Invited Talk: Lessons from robust machine learning

Aditi Raghunathan

Abstract:

Current machine learning (ML) methods are primarily centered on improving in-distribution generalization, where models are evaluated on new points drawn from nearly the same distribution as the training data. In contrast, robustness and fairness involve reasoning about out-of-distribution performance, such as accuracy on protected groups or perturbed inputs, and reliability even in the presence of spurious correlations. In this talk, I will describe an important lesson from robustness: in order to improve out-of-distribution performance, we often need to question common assumptions in ML. In particular, we will see that ‘more data’, ‘bigger models’, and ‘fine-tuning pretrained features’, which improve in-distribution generalization, often fail out-of-distribution.
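As a minimal illustrative sketch of the in-distribution vs. out-of-distribution gap described above (not from the talk itself; the synthetic data and simple logistic-regression setup are my own assumptions), consider a classifier trained on data where a spurious feature agrees with the label 95% of the time. In-distribution accuracy looks excellent, but on a shifted test set where that correlation is reversed, accuracy collapses:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    """Synthetic binary classification data with a core and a spurious feature."""
    y = rng.integers(0, 2, n)
    # Core feature: weakly but genuinely predictive of the label.
    core = (2 * y - 1) * 0.5 + rng.normal(0.0, 1.0, n)
    # Spurious feature: agrees with the label with probability `spurious_corr`.
    agree = rng.random(n) < spurious_corr
    spur = np.where(agree, 2 * y - 1, -(2 * y - 1)) + rng.normal(0.0, 0.1, n)
    return np.column_stack([core, spur]), y

def train_logreg(X, y, lr=0.1, steps=2000):
    """Plain logistic regression trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y  # gradient of the cross-entropy loss w.r.t. the logits
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return np.mean((X @ w + b > 0) == y)

# Train where the spurious feature tracks the label 95% of the time.
Xtr, ytr = make_data(5000, spurious_corr=0.95)
w, b = train_logreg(Xtr, ytr)

# In-distribution test set (same 95% correlation) vs. a shifted test set
# where the spurious correlation is reversed.
Xid, yid = make_data(5000, spurious_corr=0.95)
Xood, yood = make_data(5000, spurious_corr=0.05)

print(f"in-distribution accuracy:     {accuracy(w, b, Xid, yid):.2f}")
print(f"out-of-distribution accuracy: {accuracy(w, b, Xood, yood):.2f}")
```

The model leans on the spurious feature because it is the easiest route to low training loss, so adding more training data drawn from the same distribution only reinforces the shortcut rather than fixing it.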