

Poster

Surrogate Objectives for Batch Policy Optimization in One-step Decision Making

Minmin Chen · Ramki Gummadi · Chris Harris · Dale Schuurmans

East Exhibition Hall B + C #215

Keywords: [ Algorithms -> Bandit Algorithms; Algorithms -> Classification; Algorithms -> Regression; Reinforcement Learning and Planning ] [ Reinforcement Learning and Planning ] [ Decision and Control ]


Abstract:

We investigate batch policy optimization for cost-sensitive classification and contextual bandits, two related tasks that obviate exploration but require generalizing from observed rewards to action selections in unseen contexts. When rewards are fully observed, we show that the expected reward objective exhibits suboptimal plateaus and exponentially many local optima in the worst case. To overcome the poor landscape, we develop a convex surrogate that is calibrated with respect to entropy-regularized expected reward. We then consider the partially observed case, where rewards are recorded for only a subset of actions. Here we generalize the surrogate to partially observed data, and uncover novel objectives for batch contextual bandit training. We find that surrogate objectives remain provably sound in this setting and empirically demonstrate state-of-the-art performance.
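To make the fully observed setting concrete, the following is a minimal sketch (not the paper's exact construction): the expected reward objective for a linear softmax policy is non-convex in the parameters, whereas a cross-entropy loss against the softmax of rewards at temperature tau is convex in the logits and targets the entropy-regularized optimal policy. All names, the data-generating process, and the specific surrogate choice here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 5, 4            # contexts, feature dim, actions (illustrative sizes)
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, k))
R = X @ W_true + 0.1 * rng.normal(size=(n, k))  # fully observed rewards r(x, a)
tau = 0.5                      # entropy-regularization temperature (assumed)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_reward(theta):
    """Non-convex objective: mean over contexts of sum_a pi_theta(a|x) r(x, a)."""
    pi = softmax(X @ theta)
    return float((pi * R).sum(axis=1).mean())

def surrogate_loss(theta):
    """Convex-in-logits cross-entropy against the softmax(r / tau) target;
    minimizing it drives the policy toward the entropy-regularized optimum."""
    target = softmax(R / tau)
    logits = X @ theta
    m = logits.max(axis=1, keepdims=True)
    logZ = (np.log(np.exp(logits - m).sum(axis=1, keepdims=True)) + m).ravel()
    return float((logZ - (target * logits).sum(axis=1)).mean())

def surrogate_grad(theta):
    """Gradient of the cross-entropy surrogate: X^T (pi - target) / n."""
    target = softmax(R / tau)
    pi = softmax(X @ theta)
    return X.T @ (pi - target) / n

# Plain gradient descent on the convex surrogate.
theta = np.zeros((d, k))
for _ in range(500):
    theta -= 0.5 * surrogate_grad(theta)

print("surrogate loss:", round(surrogate_loss(theta), 3))
print("expected reward:", round(expected_reward(theta), 3))
```

Because the surrogate is convex in the logits (log-partition minus a linear term), gradient descent cannot stall on the plateaus or local optima that the raw expected-reward objective can exhibit; the trained policy also improves expected reward over the uniform initialization on this synthetic data.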
