

Poster in Affinity Workshop: Women in Machine Learning

Estimating Fairness in the Absence of Ground-Truth Labels

Michelle Bao · Jessica Dai · Keegan Hines · John Dickerson


Abstract:

In a post-deployment setting, in the absence of ground-truth labels and possibly in the presence of distribution shift, how might we estimate fairness performance? We focus on two main questions: first, we evaluate how existing performance estimation methods might extend to fairness metric estimation; and second, we show initial attempts at identifying a method that most effectively estimates fairness performance. For the first question, in addition to extending the implementations of existing methods, we determine criteria for how well these extensions work in a fairness context; for the second question, we apply these criteria to discuss how one method might work better than others.
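The sketch below is not the authors' method; it is a minimal illustration of the general idea of extending a label-free performance estimator to a fairness metric. It assumes one particular estimator (average max-softmax confidence as a proxy for accuracy) and illustrative names (`average_confidence`, `estimated_accuracy_gap`), and turns the per-group accuracy estimates into an accuracy-parity gap.

```python
# A minimal sketch (not the authors' method): extending one label-free accuracy
# estimator -- average max-softmax confidence -- to a group-wise accuracy-gap
# estimate. Function and variable names are illustrative assumptions.
import numpy as np

def average_confidence(probs: np.ndarray) -> float:
    """Estimate accuracy as the mean max-class probability (no labels needed)."""
    return float(np.max(probs, axis=1).mean())

def estimated_accuracy_gap(probs: np.ndarray, groups: np.ndarray) -> float:
    """Estimate an accuracy-parity gap: the spread between the largest and
    smallest per-group accuracy estimates, computed without ground-truth labels."""
    estimates = [average_confidence(probs[groups == g]) for g in np.unique(groups)]
    return max(estimates) - min(estimates)

# Example with random softmax outputs for two demographic groups.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 2))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
groups = rng.integers(0, 2, size=1000)
print(f"Estimated accuracy-parity gap: {estimated_accuracy_gap(probs, groups):.3f}")
```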
