

Poster in Workshop: Trustworthy and Socially Responsible Machine Learning

When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction

Vinith Suriyakumar · Marzyeh Ghassemi · Berk Ustun


Abstract:

Machine learning models often use group attributes to assign personalized predictions. In this work, we show that models that use group attributes can assign unnecessarily inaccurate predictions to specific groups, i.e., that training a model with group attributes can reduce performance for those groups. We propose formal conditions to ensure the "fair use" of group attributes in prediction models: collective preference guarantees that can be checked by training one additional model. We characterize how machine learning models can exhibit fair use violations due to standard practices in specification, training, and deployment. We study the prevalence of fair use violations in clinical prediction models. Our results highlight the inability to resolve fair use violations, underscore the need to measure the gains of personalization for all groups who provide personal data, and illustrate actionable interventions to mitigate harm.
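The check described in the abstract, comparing a personalized model against one additional model trained without group attributes, can be illustrated with a minimal sketch. This is a hedged illustration under assumed details, not the authors' method: the synthetic data, logistic regression models, and the accuracy-based "gain" metric below are hypothetical stand-ins for the paper's collective preference guarantees.

```python
# Minimal sketch of a fair-use check: train one model with a group
# attribute and one additional model without it, then compare
# per-group test performance. All data and metrics are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))            # generic features
g = rng.integers(0, 2, size=n)         # binary group attribute
y = (X[:, 0] + 0.5 * g + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, g_tr, g_te, y_tr, y_te = train_test_split(
    X, g, y, test_size=0.5, random_state=0
)

# Personalized model: uses the group attribute as a feature.
personalized = LogisticRegression().fit(
    np.column_stack([X_tr, g_tr]), y_tr
)
# Generic model: the "one additional model" trained without it.
generic = LogisticRegression().fit(X_tr, y_tr)

for group in (0, 1):
    mask = g_te == group
    acc_p = accuracy_score(
        y_te[mask],
        personalized.predict(np.column_stack([X_te[mask], g_te[mask]])),
    )
    acc_g = accuracy_score(y_te[mask], generic.predict(X_te[mask]))
    gain = acc_p - acc_g
    flag = "possible fair use violation" if gain < 0 else "ok"
    print(f"group {group}: personalized={acc_p:.3f} "
          f"generic={acc_g:.3f} gain={gain:+.3f} [{flag}]")
```

In this sketch, a negative gain for a group means that group would have received more accurate predictions from the generic model, which is the kind of harm the abstract describes.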
