Information Discrepancy in Strategic Learning
Yahav Bechavod · Chara Podimata · Steven Wu · Juba Ziani

We study the effects of information discrepancy across sub-populations on their ability to simultaneously improve their features in strategic learning settings. Specifically, we consider a game where a principal deploys a decision rule in an attempt to optimize the whole population's welfare, and agents strategically adapt to it to receive better scores. Inspired by real-life settings such as loan approvals and college admissions, we remove the typical assumption made in the strategic learning literature that the decision rule is fully known to the agents, and focus on settings where it is inaccessible. Lacking direct knowledge of the rule, individuals try to infer it by learning from their peers (e.g., friends and acquaintances who previously applied for a loan), naturally forming groups in the population, each with a possibly different type and level of information about the decision rule. In our equilibrium analysis, we show that a principal's decision rule that optimizes welfare across subgroups may cause a surprising negative externality: the true quality of some of the subgroups can actually deteriorate. On the positive side, we show that in many natural cases, optimal improvement is guaranteed simultaneously for all subgroups in equilibrium. We also characterize the disparity in improvements across subgroups via a measure of their informational overlap. Finally, we complement our theoretical analysis with experiments on real-world datasets.
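
To make the setup concrete, here is a minimal, hypothetical simulation sketch, not the paper's actual model: subgroups estimate a hidden linear decision rule from noisy peer observations, then best-respond with a fixed effort budget. The function names (subgroup_estimate, best_response), the Gaussian peer-noise model, and the choice of a "true quality" direction are all illustrative assumptions; the sketch only shows how a subgroup's informational overlap with the deployed rule can drive disparities in score gains versus true-quality gains.

import numpy as np

rng = np.random.default_rng(0)

d = 5                                  # feature dimension
w_true = rng.normal(size=d)            # principal's (hidden) linear decision rule
w_true /= np.linalg.norm(w_true)
# Direction of "true quality" improvement; only partially aligned with the
# deployed rule (an assumption made for illustration).
q = 0.6 * w_true + 0.4 * rng.normal(size=d)
q /= np.linalg.norm(q)

def subgroup_estimate(w, n_peers, noise):
    """A subgroup infers the rule by averaging noisy peer observations
    (a stand-in for learning from friends who previously applied)."""
    samples = w + noise * rng.normal(size=(n_peers, len(w)))
    w_hat = samples.mean(axis=0)
    return w_hat / np.linalg.norm(w_hat)

def best_response(w_hat, budget=1.0):
    """Agents spend a fixed effort budget moving their features along
    the estimated rule to maximize their perceived score."""
    return budget * w_hat

# Two subgroups with different amounts and quality of peer information.
groups = {"well-informed": (50, 0.1), "poorly-informed": (3, 2.0)}
for name, (n_peers, noise) in groups.items():
    w_hat = subgroup_estimate(w_true, n_peers, noise)
    delta = best_response(w_hat)
    score_gain = delta @ w_true        # change in assigned score
    quality_gain = delta @ q           # change in true quality
    overlap = w_hat @ w_true           # informational overlap with the rule
    print(f"{name:>15}: overlap={overlap:+.2f}  "
          f"score gain={score_gain:+.2f}  true-quality gain={quality_gain:+.2f}")

With low informational overlap, a subgroup's effort can fail to translate into score gains, and its true-quality gain can even be negative, mirroring the negative externality discussed in the abstract.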

Author Information

Yahav Bechavod (Hebrew University)

Yahav Bechavod is a PhD candidate at the School of Computer Science and Engineering at the Hebrew University of Jerusalem, advised by Amit Daniely and Katrina Ligett. He is an Apple PhD Fellow in AI/ML and a recipient of the Charles Clore Foundation PhD Fellowship. He also holds an MS (Computer Science) and a BS (Mathematics and Computer Science), both from the Hebrew University. Yahav's research explores foundational questions in algorithmic fairness, such as: (1) characterizing the degree of friction between utility and fairness in various settings, (2) designing novel algorithms that guarantee high utility and fairness in the face of limited or partial feedback, and (3) making clever use of human feedback in the learning loop when auditing for unfairness.

Chara Podimata (Harvard University)
Steven Wu (Carnegie Mellon University)

I am an Assistant Professor in the School of Computer Science at Carnegie Mellon University. My broad research interests are in algorithms and machine learning. These days I am excited about:

- Foundations of responsible AI, with emphasis on privacy and fairness considerations.
- Interactive learning, including contextual bandits and reinforcement learning, and its interactions with causal inference and econometrics.
- Economic aspects of machine learning, with a focus on learning in the presence of strategic agents.

Juba Ziani (University of Pennsylvania)
