
Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding
Hongseok Namkoong · Ramtin Keramati · Steve Yadlowsky · Emma Brunskill

Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #539

When observed decisions depend only on observed features, off-policy policy evaluation (OPE) methods for sequential decision problems can estimate the performance of evaluation policies before deploying them. However, this assumption is frequently violated by unobserved confounders: unrecorded variables that affect both the decisions and their outcomes. We assess the robustness of OPE methods under unobserved confounding by developing worst-case bounds on the performance of an evaluation policy. When unobserved confounders can affect every decision in an episode, we demonstrate that even small amounts of per-decision confounding can heavily bias OPE methods. Fortunately, in a number of important settings found in healthcare, policy-making, and technology, unobserved confounders may directly affect only one of the many decisions made, and influence future decisions/rewards only through the directly affected decision. Under this less pessimistic model of one-decision confounding, we propose an efficient loss-minimization-based procedure for computing worst-case bounds, and prove its statistical consistency. On simulated healthcare examples---management of sepsis and interventions for autistic children---where this is a reasonable model, we demonstrate that our method invalidates non-robust results and provides meaningful certificates of robustness, allowing reliable selection of policies under unobserved confounding.
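To make the idea of worst-case bounds concrete, here is a minimal, hypothetical sketch for a single decision: assume the true inverse-propensity weight of each observation lies within a multiplicative factor `gamma` of its nominal value, and compute the lowest and highest importance-sampling estimates consistent with that assumption. The function name, the per-sample relaxation, and the sensitivity model are illustrative assumptions; this is not the paper's loss-minimization procedure, which handles sequential decisions and proves consistency.

```python
import numpy as np

def worst_case_is_bounds(rewards, nominal_weights, gamma):
    """Illustrative worst/best-case importance-sampling bounds (assumption:
    true inverse-propensity weights lie in [w / gamma, gamma * w]).

    gamma = 1 recovers the standard (unconfounded) IS estimate; larger
    gamma allows more unobserved confounding and widens the interval.
    """
    rewards = np.asarray(rewards, dtype=float)
    w = np.asarray(nominal_weights, dtype=float)
    # Worst case: shrink weights on positive rewards, inflate on negative.
    lo_w = np.where(rewards >= 0, w / gamma, w * gamma)
    # Best case: the opposite choice within the same interval.
    hi_w = np.where(rewards >= 0, w * gamma, w / gamma)
    return np.mean(lo_w * rewards), np.mean(hi_w * rewards)

# Toy data: two observations with nominal weight 2.0.
lower, upper = worst_case_is_bounds([1.0, -1.0], [2.0, 2.0], gamma=2.0)
# Widens the naive estimate of 0.0 to the interval [-1.5, 1.5].
```

With `gamma = 1` the two bounds coincide with the usual IS estimate; as `gamma` grows, the interval widens, mirroring the paper's point that even modest confounding can swamp a point estimate unless its structure is restricted.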

Author Information

Hongseok Namkoong (Stanford University)
Ramtin Keramati (Stanford University)
Steve Yadlowsky (Stanford University)
Emma Brunskill (Stanford University)
