Poster

Transfer of Value Functions via Variational Methods

Andrea Tirinzoni · Rafael Rodriguez Sanchez · Marcello Restelli

Room 517 AB #126

Keywords: [ Reinforcement Learning ] [ Multitask and Transfer Learning ]


Abstract:

We consider the problem of transferring value functions in reinforcement learning. We propose an approach that uses the given source tasks to learn a prior distribution over optimal value functions, and provides an efficient variational approximation of the corresponding posterior in a new target task. We show our approach to be general, in the sense that it can be combined with complex parametric function approximators and distribution models, and we instantiate it in two practical algorithms based on Gaussians and Gaussian mixtures. We analyze both algorithms theoretically, deriving finite-sample guarantees, and provide a comprehensive empirical evaluation in four different domains.
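The transfer scheme described above can be sketched in a minimal form. This is not the authors' implementation: it assumes value functions are linear in some features, fits a Gaussian prior over the weight vectors of the source tasks' solutions, and computes a Gaussian posterior in the target task under an assumed linear-Gaussian likelihood (in this conjugate case the variational optimum coincides with the exact posterior). All function names and the likelihood model are illustrative assumptions.

```python
import numpy as np

def fit_gaussian_prior(source_weights):
    """Fit a Gaussian prior (mean, covariance) over value-function
    weights from the solutions of the source tasks.

    source_weights: array-like of shape (n_tasks, d)."""
    W = np.asarray(source_weights, dtype=float)
    mu = W.mean(axis=0)
    # Small diagonal jitter keeps the covariance invertible.
    cov = np.cov(W, rowvar=False) + 1e-6 * np.eye(W.shape[1])
    return mu, cov

def variational_posterior(mu0, cov0, Phi, y, noise_var=1.0):
    """Gaussian posterior over weights for targets y ~ N(Phi @ w, noise_var).

    With a Gaussian prior and a Gaussian likelihood, the best Gaussian
    variational approximation is the exact Bayesian linear-regression
    posterior, so we can compute it in closed form."""
    prec0 = np.linalg.inv(cov0)                    # prior precision
    prec = prec0 + Phi.T @ Phi / noise_var         # posterior precision
    cov = np.linalg.inv(prec)
    mu = cov @ (prec0 @ mu0 + Phi.T @ y / noise_var)
    return mu, cov
```

With richer models (e.g. the paper's Gaussian-mixture variant or nonlinear approximators), the posterior is no longer available in closed form and the variational parameters would instead be optimized by gradient ascent on an evidence lower bound.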
