

Poster

Policy Poisoning in Batch Reinforcement Learning and Control

Yuzhe Ma · Xuezhou Zhang · Wen Sun · Jerry Zhu

East Exhibition Hall B, C #21

Keywords: [ Reinforcement Learning and Planning ] [ Algorithms ] [ Adversarial Learning ]


Abstract:

We study a security threat to batch reinforcement learning and control in which the attacker aims to poison the learned policy. The victim is a reinforcement learner / controller that first estimates the dynamics and the rewards from a batch data set and then solves for the optimal policy with respect to those estimates. The attacker can modify the data set slightly before learning happens and wants to force the learner into learning a target policy chosen by the attacker. We present a unified framework for solving batch policy poisoning attacks and instantiate the attack on two standard victims: a tabular certainty-equivalence learner in reinforcement learning and a linear quadratic regulator in control. We show that both instantiations result in convex optimization problems for which global optimality is guaranteed, and we provide analysis of attack feasibility and attack cost. Experiments show the effectiveness of policy poisoning attacks.
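To make the attack recipe in the abstract concrete, the sketch below shows one way reward poisoning against a tabular certainty-equivalence victim can be posed as a convex program: the victim's estimated transitions stay fixed, the poisoned mean rewards are decision variables, and linear constraints force the attacker's target policy to be optimal by a margin under the poisoned estimates. This is an illustrative reconstruction, not the authors' code; the names (`P_hat`, `R_hat`, `pi_target`, `margin`) and the use of cvxpy are assumptions.

```python
# Minimal sketch (assumed formulation, not the paper's implementation) of
# reward poisoning against a tabular certainty-equivalence (TCE) victim.
import numpy as np
import cvxpy as cp

def poison_rewards(P_hat, R_hat, pi_target, gamma=0.9, margin=0.1):
    """Find poisoned mean rewards R, close to R_hat, such that pi_target is
    optimal (by `margin`) in the estimated MDP (P_hat, R, gamma).

    P_hat:     (S, A, S) estimated transition probabilities (kept fixed)
    R_hat:     (S, A) mean rewards estimated from the clean batch data
    pi_target: length-S array, the attacker's target action in each state
    """
    S, A = R_hat.shape
    R = cp.Variable((S, A))                      # poisoned mean rewards

    # Value of the target policy is linear in R: V = (I - gamma P_pi)^{-1} R_pi
    P_pi = np.stack([P_hat[s, pi_target[s]] for s in range(S)])   # (S, S)
    M = np.linalg.inv(np.eye(S) - gamma * P_pi)                   # constant
    R_pi = cp.hstack([R[s, pi_target[s]] for s in range(S)])
    V = M @ R_pi                                                  # affine in R

    # Q(s, a) = R(s, a) + gamma * P_hat(s, a, :) @ V is also affine in R,
    # so requiring the target action to dominate by `margin` is a linear constraint.
    constraints = []
    for s in range(S):
        q_target = R[s, pi_target[s]] + gamma * P_hat[s, pi_target[s]] @ V
        for a in range(A):
            if a != pi_target[s]:
                q_other = R[s, a] + gamma * P_hat[s, a] @ V
                constraints.append(q_target >= q_other + margin)

    # Minimize the total reward perturbation (1-norm); this is a linear program,
    # so any solution returned is globally optimal, mirroring the convexity
    # claim in the abstract.
    prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(R - R_hat))), constraints)
    prob.solve()
    return R.value, prob.status
```

Because the victim's planned policy depends on the data only through the estimates (P_hat, R_hat), controlling those estimates with small, optimally chosen perturbations is enough to steer the learned policy to the attacker's target; the LQR instantiation in the paper follows the same pattern with a different victim planner.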
