
Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation
Yunhao Tang · Tadashi Kozuno · Mark Rowland · Remi Munos · Michal Valko

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Model-agnostic meta-reinforcement learning requires estimating the Hessian matrix of value functions. This is challenging from an implementation perspective, as repeatedly differentiating policy gradient estimates may lead to biased Hessian estimates. In this work, we provide a unifying framework for estimating higher-order derivatives of value functions, based on off-policy evaluation. Our framework interprets a number of prior approaches as special cases and elucidates the bias and variance trade-off of Hessian estimates. The framework also opens the door to a new family of estimates, which can be easily implemented with auto-differentiation libraries and lead to performance gains in practice.
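The bias the abstract refers to can be seen in a minimal toy example (not from the paper): a two-armed bandit with a sigmoid policy, where all expectations can be written in closed form. Differentiating the standard score-function surrogate r(a) log π(a) twice drops the squared-score term, so its expectation misses part of the true Hessian; a DiCE-style correction restores it. All names and the setup below are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy two-armed bandit (illustrative, not the paper's setting):
# pi(a=1) = sigma(theta), deterministic rewards r1, r0.
# Value: V(theta) = sigma*r1 + (1 - sigma)*r0.
theta, r1, r0 = 0.3, 1.0, -0.5
s = sigmoid(theta)

# Exact Hessian of V: V'' = sigma''(theta) * (r1 - r0),
# with sigma'' = sigma*(1 - sigma)*(1 - 2*sigma).
true_hessian = s * (1 - s) * (1 - 2 * s) * (r1 - r0)

# Naive estimator: differentiate the surrogate r(a)*log pi(a) twice.
# Its expectation is sum_a pi(a) * r(a) * (log pi(a))'', which drops the
# squared-score term r(a) * ((log pi(a))')^2 and is therefore biased.
# For the sigmoid policy, (log pi(a))'' = -sigma*(1 - sigma) for both actions.
naive_expect = -s * (1 - s) * (s * r1 + (1 - s) * r0)

# DiCE-style correction adds the squared-score term back, so that
# E[r(a) * ((log pi)'' + ((log pi)')^2)] equals the true Hessian.
score1, score0 = (1 - s), -s  # d/dtheta of log pi(1) and log pi(0)
corrected_expect = naive_expect + s * r1 * score1**2 + (1 - s) * r0 * score0**2

print(f"true Hessian      : {true_hessian:+.6f}")
print(f"naive E[estimate] : {naive_expect:+.6f}  (biased)")
print(f"DiCE  E[estimate] : {corrected_expect:+.6f}  (unbiased)")
```

The same mismatch appears in full meta-RL objectives whenever an auto-differentiation library is applied twice to a surrogate built for first-order gradients, which is the implementation pitfall the framework above is designed to address.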

Author Information

Yunhao Tang (Columbia University)

I am a PhD student at Columbia IEOR. My research interests are reinforcement learning and approximate inference.

Tadashi Kozuno (University of Alberta)

Tadashi Kozuno is a postdoc at the University of Alberta. He obtained bachelor's and master's degrees in neuroscience from Osaka University, and a PhD from the Okinawa Institute of Science and Technology. His main interest lies in efficient decision making, from both theoretical and biological perspectives.

Mark Rowland (DeepMind)
Remi Munos (DeepMind)
Michal Valko (DeepMind Paris / Inria / ENS Paris-Saclay)

Michal is a research scientist at DeepMind Paris and in the SequeL team at Inria Lille - Nord Europe, France, led by Philippe Preux and Rémi Munos. He also teaches the course Graphs in Machine Learning at l'ENS Cachan. Michal is primarily interested in designing algorithms that require as little human supervision as possible. This means 1) reducing the "intelligence" that humans need to put into the system and 2) minimising the time that humans need to spend inspecting, classifying, or "tuning" the algorithms. Another important feature of machine learning algorithms should be the ability to adapt to changing environments. That is why he works in domains that can deal with minimal feedback, such as semi-supervised learning, bandit algorithms, and anomaly detection. The common thread of Michal's work has been adaptive graph-based learning and its application to real-world problems such as recommender systems, medical error detection, and face recognition. His industrial collaborators include Intel, Technicolor, and Microsoft Research. He received his PhD in 2011 from the University of Pittsburgh under the supervision of Miloš Hauskrecht, and was subsequently a postdoc with Rémi Munos.
