Robust Strategic Classification under Decision-Dependent Cost Uncertainty
Abstract
Humans have been found to manipulate their inputs to algorithmic decision systems in order to receive favorable outcomes. This has motivated a line of work on ``strategic classification,'' wherein algorithmic decision rules are selected to prevent undesirable strategic responses. Prior work typically assumes that the cost of such strategic behavior is fixed and independent of the classifier's decision. In practice, however, manipulation costs depend on past decisions: today's algorithmic decisions influence tomorrow's costs of strategic response. To capture this dependency, we formulate strategic classification as a two-stage robust optimization problem with a decision-dependent uncertainty set. We formalize this problem, develop approximations and reformulations to solve it, and numerically illustrate our algorithm's ability to mitigate gaming of the algorithmic system.
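As a minimal sketch of such a formulation (the notation here is illustrative and not fixed by the abstract: $f$ denotes the decision rule, $c$ a manipulation cost function, $\mathcal{U}(f)$ the decision-dependent uncertainty set of costs, and $x_{\mathrm{br}}$ the strategic best response), one might write:
\[
  \min_{f \in \mathcal{F}} \;
  \max_{c \in \mathcal{U}(f)} \;
  \mathbb{E}_{(x, y) \sim \mathcal{D}}
    \Bigl[ \ell\bigl( f\bigl(x_{\mathrm{br}}(x; f, c)\bigr), y \bigr) \Bigr],
  \qquad
  x_{\mathrm{br}}(x; f, c) \in \operatorname*{arg\,max}_{x'} \; \bigl\{ f(x') - c(x, x') \bigr\}.
\]
Here the outer minimization selects the classifier, the inner maximization ranges over cost functions in an uncertainty set that itself depends on the chosen decision rule, capturing the two-stage, decision-dependent structure described above.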