Poster

Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer

David Madras · Toni Pitassi · Richard Zemel

Room 517 AB #131

Keywords: [ Fairness, Accountability, and Transparency ] [ Classification ]


Abstract:

In many machine learning applications, multiple decision-makers are involved, both automated and human. The interaction between these agents often goes unaddressed in algorithmic development. In this work, we explore a simple version of this interaction with a two-stage framework containing an automated model and an external decision-maker. The model can choose to output PASS and hand the decision downstream, as explored in rejection learning. We extend this concept by proposing "learning to defer", which generalizes rejection learning by considering the effect of other agents in the decision-making process. We propose a learning algorithm that accounts for potential biases held by external decision-makers in a system. Experiments demonstrate that learning to defer can make systems not only more accurate but also less biased. Even when working with inconsistent or biased users, we show that deferring models still substantially improve the accuracy and/or fairness of the entire system.
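The two-stage setup described above admits a compact illustration. Below is a minimal sketch of one way a model might learn when to defer, assuming a binary classification task, access to the external decision-maker's (DM's) predictions at training time, and a small fixed cost per deferral; the names (`DeferringClassifier`, `defer_head`, `defer_cost`) are illustrative and not taken from the paper or any released code.

```python
# Minimal learning-to-defer sketch (illustrative, not the paper's code).
# The model jointly learns a prediction and a probability of saying PASS;
# the training objective is the expected loss of the overall system.
import torch
import torch.nn as nn

class DeferringClassifier(nn.Module):
    """Outputs a label probability and a probability of deferring to the DM."""
    def __init__(self, n_features):
        super().__init__()
        self.pred_head = nn.Linear(n_features, 1)   # model's own prediction
        self.defer_head = nn.Linear(n_features, 1)  # probability of PASS

    def forward(self, x):
        return torch.sigmoid(self.pred_head(x)), torch.sigmoid(self.defer_head(x))

def defer_loss(y_prob, defer_prob, dm_probs, y, defer_cost=0.01):
    """Expected system loss: with probability (1 - s) the model's prediction
    is used, with probability s the DM's; a small per-deferral cost keeps
    the model from deferring on every example."""
    bce = nn.BCELoss(reduction="none")
    model_loss = bce(y_prob, y)
    dm_loss = bce(dm_probs, y)
    return ((1 - defer_prob) * model_loss
            + defer_prob * (dm_loss + defer_cost)).mean()

# Toy usage: random features and a DM that is accurate but noisy.
torch.manual_seed(0)
X = torch.randn(256, 10)
y = (X[:, 0] > 0).float().unsqueeze(1)
dm_probs = torch.clamp(y + 0.3 * torch.randn_like(y), 1e-4, 1 - 1e-4)

model = DeferringClassifier(10)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    y_prob, defer_prob = model(X)
    loss = defer_loss(y_prob, defer_prob, dm_probs, y)
    loss.backward()
    opt.step()
```

This sketch covers only the accuracy/deferral trade-off; the paper's full method additionally accounts for fairness, e.g. via a regularizer on the system's joint predictions, which is omitted here.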
