

Poster

Online Learning with an Unknown Fairness Metric

Stephen Gillen · Christopher Jung · Michael Kearns · Aaron Roth

Room 210 #91

Keywords: [ Bandit Algorithms ] [ Online Learning ] [ Similarity and Distance Learning ] [ Fairness, Accountability, and Transparency ] [ Metric Learning ]


Abstract:

We consider the problem of online learning in the linear contextual bandits setting, but with strong individual fairness constraints governed by an unknown similarity metric. These constraints demand that similar actions or individuals be selected with approximately equal probability [DHPRZ12], which may be at odds with optimizing reward, thus modeling settings where profit and social policy are in tension. We assume the unknown Mahalanobis similarity metric must be learned from only weak feedback that identifies fairness violations, but does not quantify their extent. This is intended to represent the interventions of a regulator who "knows unfairness when he sees it" but nevertheless cannot enunciate a quantitative fairness metric over individuals. Our main result is an algorithm in the adversarial context setting whose number of fairness violations depends only logarithmically on T, while obtaining an optimal O(sqrt(T)) regret bound with respect to the best fair policy.
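The two ingredients of the feedback model can be made concrete with a small sketch. The Mahalanobis metric d(x1, x2) = sqrt((x1 - x2)^T A (x1 - x2)) and the weak-feedback rule (the regulator flags which pairs violate the fairness constraint, but not by how much) come from the abstract; the function names, the PSD matrix `A`, and the exact violation test `|p_i - p_j| > d(x_i, x_j)` are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def mahalanobis_distance(x1, x2, A):
    """d(x1, x2) = sqrt((x1 - x2)^T A (x1 - x2)) for a PSD matrix A.
    A is the unknown matrix the learner must infer from feedback."""
    diff = x1 - x2
    return float(np.sqrt(diff @ A @ diff))

def regulator_feedback(probs, contexts, A):
    """Weak feedback: return the pairs (i, j) whose selection
    probabilities differ by more than their similarity distance.
    The magnitude of each violation is deliberately withheld,
    mirroring a regulator who flags unfairness without quantifying it."""
    violations = []
    n = len(contexts)
    for i in range(n):
        for j in range(i + 1, n):
            d = mahalanobis_distance(contexts[i], contexts[j], A)
            if abs(probs[i] - probs[j]) > d:
                violations.append((i, j))
    return violations
```

For instance, two individuals with identical contexts have distance 0, so any gap in their selection probabilities is flagged, while distant contexts tolerate large gaps.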
