

Poster

Adaptive Auxiliary Task Weighting for Reinforcement Learning

Xingyu Lin · Harjatin Baweja · George Kantor · David Held

East Exhibition Hall B + C #191

Keywords: [ Reinforcement Learning and Planning ] [ Reinforcement Learning ] [ Algorithms ] [ Online Learning ]


Abstract:

Reinforcement learning is known to be sample inefficient, which prevents its application to many real-world problems, especially those with high-dimensional observations such as images. Transferring knowledge from auxiliary tasks is a powerful tool for improving learning efficiency. However, the use of auxiliary tasks has so far been limited by the difficulty of selecting and combining different auxiliary tasks. In this work, we propose a principled online learning algorithm that dynamically combines different auxiliary tasks to speed up training for reinforcement learning. Our method is based on the idea that auxiliary tasks should provide gradient directions that, in the long term, help to decrease the loss of the main task. We show in various environments that our algorithm can effectively combine a variety of auxiliary tasks and achieves significant speedup compared to previous heuristic approaches for adapting auxiliary task weights.
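To make the core idea concrete, below is a minimal, hedged sketch of gradient-alignment-based auxiliary task weighting. It is not the authors' exact algorithm; the function names, the single-step weight update rule, and the learning rate lr_w are illustrative assumptions. The sketch only shows the general principle implied by the abstract: auxiliary tasks whose gradients point in a direction that also decreases the main-task loss receive larger weights, while conflicting tasks are down-weighted.

```python
# Illustrative sketch only (assumed names and update rule), not the paper's method.
import torch


def flat_grad(loss, params):
    """Concatenate d(loss)/d(params) into a single flat vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    return torch.cat([
        (g if g is not None else torch.zeros_like(p)).reshape(-1)
        for g, p in zip(grads, params)
    ])


def update_aux_weights(main_loss, aux_losses, params, weights, lr_w=1e-2):
    """Adapt one weight per auxiliary task from gradient alignment.

    A positive dot product between an auxiliary gradient and the main-task
    gradient means the auxiliary update also reduces the main loss, so its
    weight is increased; a negative dot product decreases the weight.
    """
    g_main = flat_grad(main_loss, params)
    new_weights = []
    for w, aux_loss in zip(weights, aux_losses):
        g_aux = flat_grad(aux_loss, params)
        alignment = torch.dot(g_main, g_aux).item()
        new_weights.append(max(0.0, w + lr_w * alignment))
    return new_weights
```

In a training loop, the weighted sum of auxiliary losses would be added to the main RL loss before the optimizer step, and the weights would be refreshed periodically with a call such as `weights = update_aux_weights(main_loss, aux_losses, list(model.parameters()), weights)`.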
