
Off-Policy Risk Assessment in Contextual Bandits
Audrey Huang · Liu Leqi · Zachary Lipton · Kamyar Azizzadenesheli

Tue Dec 07 08:30 AM -- 10:00 AM (PST)
Even when unable to run experiments, practitioners can evaluate prospective policies using previously logged data. However, while the bandits literature has adopted a diverse set of objectives, most research on off-policy evaluation to date focuses on the expected reward. In this paper, we introduce Lipschitz risk functionals, a broad class of objectives that subsumes conditional value-at-risk (CVaR), variance, mean-variance, many distorted risks, and CPT risks, among others. We propose Off-Policy Risk Assessment (OPRA), a framework that first estimates a target policy's CDF and then generates plugin estimates for any collection of Lipschitz risks, providing finite-sample guarantees that hold simultaneously over the entire class. We instantiate OPRA with both importance sampling and doubly robust estimators. Our primary theoretical contributions are (i) the first uniform concentration inequalities for both CDF estimators in contextual bandits and (ii) error bounds on our Lipschitz risk estimates, which all converge at a rate of $O(1/\sqrt{n})$.
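The two-stage pipeline described in the abstract can be sketched in a few lines: first form an importance-sampling estimate of the target policy's reward CDF from logged data, then plug that CDF into risk functionals such as VaR and CVaR. The sketch below is illustrative only (the function name, the evaluation grid, and the Riemann-sum integration are our assumptions, not the paper's implementation), and it uses the tail-integral identity $\mathrm{CVaR}_\alpha = \mathrm{VaR}_\alpha - \frac{1}{\alpha}\int_{t_{\min}}^{\mathrm{VaR}_\alpha} F(t)\,dt$ for the plugin step.

```python
import numpy as np

def opra_sketch(rewards, behavior_probs, target_probs, alpha=0.25, grid_size=400):
    """Illustrative sketch of a CDF-based off-policy risk pipeline.

    Step 1: importance-sampling CDF estimate,
            F_hat(t) = (1/n) * sum_i w_i * 1{r_i <= t}, clipped to [0, 1].
    Step 2: plugin lower-tail VaR and CVaR computed from F_hat.
    """
    w = target_probs / behavior_probs              # importance weights
    n = len(rewards)
    ts = np.linspace(rewards.min(), rewards.max(), grid_size)
    F = np.array([(w * (rewards <= t)).sum() / n for t in ts])
    F = np.maximum.accumulate(np.clip(F, 0.0, 1.0))  # enforce a valid, monotone CDF

    # Plugin VaR: smallest grid point t with F_hat(t) >= alpha.
    v_idx = min(int(np.searchsorted(F, alpha)), grid_size - 1)
    var = ts[v_idx]

    # Plugin CVaR via the tail integral of F_hat (left Riemann sum on the grid).
    h = ts[1] - ts[0]
    tail_integral = np.sum(F[:v_idx]) * h
    cvar = var - tail_integral / alpha
    return F, var, cvar
```

As a sanity check, when the behavior and target policies coincide the weights are all one and `F` reduces to the ordinary empirical CDF, so the plugin CVaR at level 0.5 is just the average of the lower half of the rewards.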

Author Information

Audrey Huang (Carnegie Mellon University)
Liu Leqi (Carnegie Mellon University)
Zachary Lipton (Carnegie Mellon University)
Kamyar Azizzadenesheli (Purdue University)
