Factored DRO: Factored Distributionally Robust Policies for Contextual Bandits
Tong Mu · Yash Chandak · Tatsunori Hashimoto · Emma Brunskill

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #518

While there has been extensive work on learning from offline data in contextual multi-armed bandit settings, existing methods typically assume no environment shift: the learned policy is expected to operate in the same environment that generated the data. In practice, however, distribution shifts are common, which limits the applicability of these methods. In this work we propose Factored Distributionally Robust Optimization (Factored-DRO), which separately handles shifts in the context distribution and shifts in the reward-generating process. Prior work that either ignores potential shifts in the context or considers them jointly can be overly conservative, especially under certain forms of reward feedback. Our Factored-DRO objective mitigates this by treating the two kinds of shift separately, and our proposed estimators are consistent and converge asymptotically. We also introduce a practical algorithm and demonstrate promising empirical results in environments based on real-world datasets, such as voting outcomes and scene classification.
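To make the "factored" idea concrete, the sketch below is an illustrative (not the paper's) construction: a KL-ball distributionally robust worst-case expectation in its standard dual form, applied in two stages with separate robustness radii, one for the reward distribution within each context (`rho_reward`) and one for the context distribution (`rho_context`). All function names, the choice of KL uncertainty sets, and the two-stage composition are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp


def kl_worst_case_mean(samples, rho):
    """Worst-case expectation of `samples` over all distributions within
    KL divergence `rho` of the empirical distribution, via the standard
    dual:  inf_P E_P[r] = -min_{lam>0} lam*rho + lam*log E[exp(-r/lam)]."""
    samples = np.asarray(samples, dtype=float)
    if rho <= 0:
        return samples.mean()  # radius 0: no shift, plain empirical mean
    n = len(samples)

    def dual(lam):
        # lam*rho + lam*log(mean(exp(-r/lam))), computed stably via logsumexp
        return lam * rho + lam * (logsumexp(-samples / lam) - np.log(n))

    res = minimize_scalar(dual, bounds=(1e-6, 1e6), method="bounded")
    return -res.fun


def factored_robust_value(rewards_by_context, rho_context, rho_reward):
    """Illustrative factored robust value: first take the worst case over
    the reward distribution inside each context (radius rho_reward), then
    the worst case over the context distribution (radius rho_context).
    Keeping the two radii separate avoids the extra conservatism of one
    joint uncertainty set over (context, reward)."""
    per_context = np.array(
        [kl_worst_case_mean(r, rho_reward) for r in rewards_by_context]
    )
    return kl_worst_case_mean(per_context, rho_context)


# Toy usage: 5 contexts, 200 reward samples each.
rng = np.random.default_rng(0)
rewards = [rng.normal(1.0, 0.5, 200) for _ in range(5)]
empirical = np.mean([r.mean() for r in rewards])
robust = factored_robust_value(rewards, rho_context=0.1, rho_reward=0.1)
print(empirical, robust)  # the robust value is below the empirical mean
```

The robust value lowers the plain empirical estimate by an amount controlled independently by the two radii; setting either radius to zero recovers the non-robust average along that factor, which is the flexibility a single joint uncertainty set cannot offer.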

Author Information

Tong Mu (OpenAI)
Yash Chandak (Stanford University)
Tatsunori Hashimoto (Stanford)
Emma Brunskill (Stanford University)