Oral in Workshop: Learning in Presence of Strategic Behavior

Models of fairness in federated learning

Kate Donahue · Jon Kleinberg


Abstract:

In many real-world situations, data is distributed across multiple locations and can't be combined for training. For example, multiple hospitals might have comparable data related to patient outcomes. Federated learning is a novel distributed learning approach that allows multiple federating agents to jointly learn a model. While this approach might reduce the error each agent experiences, it also raises questions of fairness: to what extent can the error experienced by one agent be significantly lower than the error experienced by another agent? In this work, we consider two notions of fairness that each may be appropriate in different circumstances: egalitarian fairness (which aims to bound how dissimilar error rates can be) and proportional fairness (which aims to reward players for contributing more data). For egalitarian fairness, we obtain a tight multiplicative bound on how widely error rates can diverge between agents federating together. For proportional fairness, we show that sub-proportional error (relative to the number of data points contributed) is guaranteed for any individually rational federating coalition.
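
As a rough sketch of how these two notions can be formalized, the display below uses assumed notation (a coalition C of agents, agent i contributing n_i data points and incurring error err_i(C)) and illustrative forms of the bounds; the paper's precise definitions and constants may differ.

```latex
% Illustrative sketch only: assumed forms of the two fairness notions,
% not the paper's exact definitions or constants.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Assumed notation: coalition $C$ of federating agents; agent $i$ contributes
% $n_i$ data points and incurs expected error $\mathrm{err}_i(C)$.

Egalitarian fairness bounds how far error rates within a coalition may diverge,
e.g.\ by a multiplicative factor $\lambda$:
\[
  \max_{i,j \in C} \frac{\mathrm{err}_i(C)}{\mathrm{err}_j(C)} \;\le\; \lambda .
\]

Proportional fairness rewards larger data contributions. Exact proportionality
would mean $\mathrm{err}_i(C) \propto 1/n_i$; sub-proportional error means the
advantage of a larger contributor is at most proportional: for $n_i \ge n_j$,
\[
  \frac{\mathrm{err}_j(C)}{\mathrm{err}_i(C)} \;\le\; \frac{n_i}{n_j} .
\]

\end{document}
```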
