Poster in Workshop: Offline Reinforcement Learning

Personalization for Web-based Services using Offline Reinforcement Learning

Pavlos A Apostolopoulos · Zehui Wang · Hanson Wang · Chad Zhou · Kittipat Virochsiri · Norm Zhou · Igor Markov


Abstract:

Large-scale Web-based services present opportunities for improving UI policies based on observed user interactions. We investigate both sequential and non-sequential formulations, highlighting their benefits and drawbacks. In the sequential setting, we address the challenges of learning such policies through model-free offline Reinforcement Learning (RL) with off-policy training. Deployed in a production system for user authentication at a major social network, our approach significantly improves long-term objectives. We articulate practical challenges, compare several ML techniques, provide insights on the training and evaluation of RL models, and discuss generalizations.
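
The abstract does not name the specific algorithm, so for concreteness here is a minimal sketch of what model-free, off-policy learning from a fixed interaction log can look like, written as tabular Q-learning over synthetic data. The state/action sizes, hyperparameters, and logged dataset below are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch of model-free offline RL: off-policy Q-learning on a
# fixed log of (state, action, reward, next_state) tuples. No new
# interactions are collected during training. All sizes and constants
# here are hypothetical, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 5, 3          # hypothetical UI states / policy actions
GAMMA, ALPHA, EPOCHS = 0.9, 0.1, 50  # discount, step size, training passes

# Synthetic logged interactions standing in for production logs.
logged = [
    (rng.integers(N_STATES), rng.integers(N_ACTIONS),
     rng.normal(), rng.integers(N_STATES))
    for _ in range(1000)
]

Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(EPOCHS):
    for s, a, r, s_next in logged:
        # Off-policy TD target: bootstrap from the greedy action,
        # regardless of which action the logging policy actually took.
        target = r + GAMMA * Q[s_next].max()
        Q[s, a] += ALPHA * (target - Q[s, a])

# The learned policy is greedy with respect to Q; in a production setting
# it would be evaluated offline before replacing the existing UI policy.
policy = Q.argmax(axis=1)
print("greedy action per state:", policy)
```

The defining off-policy property here is that the temporal-difference target bootstraps from the greedy action rather than the action the logging policy took, which is what allows improvement over the behavior policy purely from logged data.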
