

Poster in Workshop: Learning and Decision-Making with Strategic Feedback (StratML)

Bayesian Persuasion for Algorithmic Recourse

Keegan Harris · Valerie Chen · Joon Kim · Ameet S Talwalkar · Hoda Heidari · Steven Wu


Abstract:

When faced with (automated) assessment rules, individuals can strategically modify their observable features to obtain better decisions. In many situations, decision-makers deliberately keep the underlying assessment rule secret to avoid gaming. This forces decision subjects to rely on incomplete information when making strategic feature modifications. We capture such settings as a game of Bayesian persuasion, in which the decision-maker sends a signal, i.e., an action recommendation, to the decision subject to incentivize them to take desirable actions. We formulate the principal's problem of finding the optimal Bayesian incentive-compatible (BIC) signaling policy as an optimization problem and characterize it via a linear program. Through this characterization, we observe that while finding the optimal BIC signaling policy can be simplified dramatically, the computational complexity of solving this linear program is closely tied to (1) the relative size of the agent's action space, and (2) the number of features utilized by the underlying decision rule.
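To make the linear-programming characterization concrete, below is a minimal sketch of a standard Bayesian persuasion LP with direct action recommendations, under the usual assumptions of a finite state space (here standing in for the hidden decision rule) and a finite agent action space. The instance sizes, prior, and utility matrices (`U_p`, `U_a`) are hypothetical and chosen only for illustration; this is not the authors' formulation, but it shows why the number of variables and obedience constraints scales with the agent's action space and the state/feature space.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: 2 hidden "decision rule" states, 3 agent actions.
n_states, n_actions = 2, 3
prior = np.array([0.6, 0.4])            # principal's prior over the hidden state

# Hypothetical utilities: U_p[w, a] = principal's payoff, U_a[w, a] = agent's payoff.
U_p = np.array([[1.0, 0.2, 0.0],
                [0.0, 0.8, 1.0]])
U_a = np.array([[0.5, 0.7, 0.1],
                [0.1, 0.6, 0.9]])

# Decision variables x[w, a] = P(state = w, recommend action a), flattened row-major.
idx = lambda w, a: w * n_actions + a
n_vars = n_states * n_actions

# Objective: maximize expected principal utility  =>  minimize its negation.
c = -U_p.flatten()

# Obedience (BIC) constraints: for each recommended action a and deviation a' != a,
#   sum_w x[w, a] * (U_a[w, a] - U_a[w, a']) >= 0   (written as <= 0 for linprog).
A_ub, b_ub = [], []
for a in range(n_actions):
    for a_dev in range(n_actions):
        if a_dev == a:
            continue
        row = np.zeros(n_vars)
        for w in range(n_states):
            row[idx(w, a)] = -(U_a[w, a] - U_a[w, a_dev])
        A_ub.append(row)
        b_ub.append(0.0)

# Consistency with the prior: sum_a x[w, a] = prior[w] for each state w.
A_eq, b_eq = [], []
for w in range(n_states):
    row = np.zeros(n_vars)
    row[w * n_actions:(w + 1) * n_actions] = 1.0
    A_eq.append(row)
    b_eq.append(prior[w])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * n_vars, method="highs")

signaling = res.x.reshape(n_states, n_actions)
print("Optimal joint P(state, recommendation):\n", signaling)
print("Principal's expected utility:", -res.fun)
```

In this sketch the LP has one variable per (state, recommended action) pair and one obedience constraint per ordered pair of distinct actions, which is consistent with the abstract's observation that the cost of solving the program grows with the agent's action space and with the richness of the underlying decision rule.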
