Bayesian Persuasion for Algorithmic Recourse
Keegan Harris · Valerie Chen · Joon Kim · Ameet S Talwalkar · Hoda Heidari · Steven Wu

When faced with (automated) assessment rules, individuals can strategically modify their observable features to obtain better decisions. In many such settings, decision-makers deliberately keep the underlying assessment rule secret to avoid gaming, which forces decision subjects to rely on incomplete information when making strategic feature modifications. We capture such settings as a game of Bayesian persuasion, in which the decision-maker sends a signal, i.e., an action recommendation, to the decision subject to incentivize them to take desirable actions. We formulate the principal's problem of finding the optimal Bayesian incentive-compatible (BIC) signaling policy as an optimization problem and characterize it via a linear program. Through this characterization, we observe that while finding a BIC strategy can be simplified dramatically, the computational complexity of solving this linear program is closely tied to (1) the relative size of the agent's action space and (2) the number of features utilized by the underlying decision rule.
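As a concrete illustration of the linear-program characterization mentioned above, the following sketch solves a generic Bayesian persuasion problem with obedience (BIC) constraints via `scipy.optimize.linprog`. This is a minimal textbook formulation, not the paper's specific recourse model: the function name `optimal_signaling` and the variable layout `p[s, a] = Pr(recommend action a | state s)` are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_signaling(mu, u_p, u_a):
    """Solve a principal's optimal BIC (obedient) signaling LP.

    mu  : prior over states, shape (S,)
    u_p : principal utility, shape (S, A)
    u_a : agent utility,     shape (S, A)
    Decision variables p[s, a] = Pr(recommend a | state s), flattened row-major.
    """
    S, A = u_p.shape
    n = S * A

    # Objective: maximize sum_{s,a} mu[s] * p[s,a] * u_p[s,a]; linprog minimizes.
    c = -(mu[:, None] * u_p).ravel()

    # Obedience (BIC): for each recommended a and each deviation a' != a,
    #   sum_s mu[s] * p[s,a] * (u_a[s,a'] - u_a[s,a]) <= 0,
    # i.e. following the recommendation is a best response.
    A_ub, b_ub = [], []
    for a in range(A):
        for a2 in range(A):
            if a2 == a:
                continue
            row = np.zeros(n)
            for s in range(S):
                row[s * A + a] = mu[s] * (u_a[s, a2] - u_a[s, a])
            A_ub.append(row)
            b_ub.append(0.0)

    # Each state's recommendation probabilities must sum to 1.
    A_eq = np.zeros((S, n))
    for s in range(S):
        A_eq[s, s * A:(s + 1) * A] = 1.0
    b_eq = np.ones(S)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * n)
    return -res.fun, res.x.reshape(S, A)
```

On the classic prosecutor/judge example (states guilty/innocent with prior (0.3, 0.7), the judge paid for correct verdicts, the prosecutor paid for convictions), the LP recovers the known optimal conviction probability of 0.6. The number of variables is S·A and the number of obedience constraints is A·(A−1), which reflects the abstract's point that the LP's size is driven by the action space and the feature (state) space.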

Author Information

Keegan Harris (Carnegie Mellon University)
Valerie Chen (Carnegie Mellon University)
Joon Kim (Carnegie Mellon University)
Ameet S Talwalkar (Carnegie Mellon University)
Hoda Heidari (Carnegie Mellon University)
Steven Wu (Carnegie Mellon University)
Steven Wu

I am an Assistant Professor in the School of Computer Science at Carnegie Mellon University. My broad research interests are in algorithms and machine learning. These days I am excited about:
- Foundations of responsible AI, with emphasis on privacy and fairness considerations.
- Interactive learning, including contextual bandits and reinforcement learning, and its interactions with causal inference and econometrics.
- Economic aspects of machine learning, with a focus on learning in the presence of strategic agents.
