

Poster

Transfer from Multiple MDPs

Alessandro Lazaric · Marcello Restelli


Abstract:

Transfer reinforcement learning (RL) methods leverage the experience collected on a set of source tasks to speed up RL algorithms. A simple and effective approach is to transfer samples from source tasks and include them in the training set used to solve a target task. In this paper, we investigate the theoretical properties of this transfer method and introduce novel algorithms that adapt the transfer process to the similarity between source and target tasks. Finally, we report illustrative experimental results on a continuous chain problem.
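As a rough illustration of the sample-transfer idea described above, the sketch below pools transitions collected on a source task with a small target-task batch and solves the target task on the union. This is only a minimal sketch of the plain "transfer all samples" baseline, not the adaptive algorithms the paper introduces; it uses a discrete chain (the paper's experiments are on a continuous chain), and all function names, task parameters, and sample sizes are hypothetical.

```python
# Hypothetical sketch of sample transfer between MDPs (not the authors' algorithms):
# transitions from a source task are pooled with target-task transitions and fed
# to a single batch solver (tabular fitted Q-iteration here).
import numpy as np

def collect_samples(transition_fn, reward_fn, n, rng, n_states=10, n_actions=2):
    """Collect (s, a, r, s') tuples from one discrete chain MDP."""
    samples = []
    for _ in range(n):
        s = int(rng.integers(n_states))
        a = int(rng.integers(n_actions))
        s_next = transition_fn(s, a, rng)
        r = reward_fn(s, a)
        samples.append((s, a, r, s_next))
    return samples

def fitted_q_iteration(samples, n_states=10, n_actions=2, gamma=0.95, iters=50):
    """Tabular fitted Q-iteration on the pooled batch of transitions."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        Q_new = np.zeros_like(Q)
        counts = np.zeros_like(Q)
        for s, a, r, s_next in samples:
            Q_new[s, a] += r + gamma * Q[s_next].max()
            counts[s, a] += 1
        # Average the backed-up targets; keep old values where no samples exist.
        Q = np.where(counts > 0, Q_new / np.maximum(counts, 1), Q)
    return Q

rng = np.random.default_rng(0)

# Target chain: action 1 moves right with prob 0.9; reward only at the last state.
def target_T(s, a, rng):
    step = 1 if a == 1 else -1
    if rng.random() < 0.1:
        step = -step
    return int(np.clip(s + step, 0, 9))

target_R = lambda s, a: 1.0 if s == 9 else 0.0

# Source chain: same structure, noisier dynamics (a "similar but different" task).
def source_T(s, a, rng):
    step = 1 if a == 1 else -1
    if rng.random() < 0.2:
        step = -step
    return int(np.clip(s + step, 0, 9))

target_samples = collect_samples(target_T, target_R, n=50, rng=rng)   # scarce
source_samples = collect_samples(source_T, target_R, n=500, rng=rng)  # abundant

# Plain sample transfer: solve the target task on the union of the two batches.
Q = fitted_q_iteration(target_samples + source_samples)
print(Q.argmax(axis=1))  # greedy action per state
```

The paper's contribution goes beyond this baseline: rather than transferring all source samples indiscriminately, the proposed algorithms modulate how much is transferred according to the estimated similarity between source and target tasks.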
