

Poster in Affinity Workshop: Women in Machine Learning

Improving Robustness to Distribution Shift with Methods from Differential Privacy

Neha Hulkund


Abstract:

As machine learning models become widely used in safety-critical settings, it is important to understand when models may fail after deployment. One cause of model failure is distribution shift, where the training and test data distributions differ. In this paper, we investigate the benefits of training models using methods from differential privacy (DP) to improve model robustness. We compare the performance of DP-trained models to standard empirical risk minimization (ERM) across a variety of possible distribution shifts, specifically covariate shift and label shift. We find that DP-trained models consistently have a lower generalization gap across shift types and severities, as well as higher absolute test performance under label shift.
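The abstract does not specify the training setup. As a rough illustration of the comparison described above, the following is a minimal sketch of DP training via DP-SGD (using the Opacus library, one common implementation, though the abstract does not name the authors' exact method), evaluated on a test split with shifted class priors to simulate label shift. The data, model, and hyperparameters are all hypothetical stand-ins, not the paper's.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

torch.manual_seed(0)

# Synthetic stand-in data: class-conditional Gaussians. The test split
# uses different class priors to simulate label shift.
def make_split(n, p_pos):
    y = (torch.rand(n) < p_pos).long()
    X = torch.randn(n, 20) + y[:, None].float() * 1.5
    return X, y

X_tr, y_tr = make_split(2048, p_pos=0.5)   # balanced training priors
X_te, y_te = make_split(2048, p_pos=0.9)   # label-shifted test priors
loader = DataLoader(TensorDataset(X_tr, y_tr), batch_size=128)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)
criterion = nn.CrossEntropyLoss()

# DP-SGD wrapping: Opacus clips per-sample gradients and adds Gaussian
# noise inside optimizer.step(). Hyperparameters are illustrative.
engine = PrivacyEngine()
model, optimizer, loader = engine.make_private(
    module=model, optimizer=optimizer, data_loader=loader,
    noise_multiplier=1.0, max_grad_norm=1.0,
)

for _ in range(10):
    for xb, yb in loader:
        optimizer.zero_grad()
        criterion(model(xb), yb).backward()
        optimizer.step()

@torch.no_grad()
def accuracy(X, y):
    return (model(X).argmax(1) == y).float().mean().item()

train_acc, test_acc = accuracy(X_tr, y_tr), accuracy(X_te, y_te)
print(f"train acc {train_acc:.3f} | shifted test acc {test_acc:.3f} "
      f"| generalization gap {train_acc - test_acc:.3f}")
print(f"privacy spent: eps = {engine.get_epsilon(delta=1e-5):.2f}")
```

The generalization gap printed here (train accuracy minus shifted test accuracy) is the quantity the abstract reports as smaller for DP-trained models than for ERM; running the same loop without the `make_private` wrapping gives the ERM baseline for comparison.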
