Poster in Workshop: Distribution shifts: connecting methods and applications (DistShift)

Boosting worst-group accuracy without group annotations

Vincent Bardenhagen · Alexandru Tifrea · Fanny Yang


Abstract:

Despite having good average test accuracy, classification models can have poor performance on subpopulations that are not well represented in the training set. In this work we introduce a method to improve prediction accuracy on underrepresented groups that does not require any group labels for training or validation, unlike existing approaches. We provide a sound empirical investigation of our procedure and show that it recovers the worst-group performance of methods that use oracle group annotations.
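The abstract does not spell out the algorithm, so the sketch below is only a point of reference: it illustrates a generic annotation-free strategy from this literature (in the spirit of two-stage error-upweighting methods such as JTT), not necessarily the authors' procedure. The synthetic data, the upweighting factor, and the logistic-regression models are all hypothetical choices made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: a large majority group and a small minority group that
# differ in how a spurious feature correlates with the label.
rng = np.random.default_rng(0)

def make_group(n, spurious_sign):
    y = rng.integers(0, 2, size=n)
    core = (2 * y - 1) + rng.normal(scale=1.0, size=n)          # truly predictive
    spurious = spurious_sign * (2 * y - 1) + rng.normal(scale=0.5, size=n)
    return np.column_stack([core, spurious]), y

X_maj, y_maj = make_group(2000, spurious_sign=+1.0)   # well-represented group
X_min, y_min = make_group(100,  spurious_sign=-1.0)   # underrepresented group
X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])

# Stage 1: plain ERM, which tends to rely on the spurious feature and
# therefore performs poorly on the minority group.
erm = LogisticRegression().fit(X, y)

# Stage 2: use ERM's own errors as a proxy for minority-group membership,
# upweight those points, and retrain -- no group labels are used anywhere.
wrong = erm.predict(X) != y
weights = np.where(wrong, 20.0, 1.0)   # upweighting factor is a tunable guess
robust = LogisticRegression().fit(X, y, sample_weight=weights)

for name, model in [("ERM", erm), ("reweighted", robust)]:
    print(f"{name}: majority acc {model.score(X_maj, y_maj):.2f}, "
          f"worst-group acc {model.score(X_min, y_min):.2f}")
```

On this kind of synthetic spurious-correlation data, the reweighted model typically trades a little majority-group accuracy for a higher worst-group accuracy; how closely such heuristics match methods trained with oracle group annotations is exactly the question the paper's empirical study addresses.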
