Adaptive Auxiliary Task Weighting for Reinforcement Learning
Xingyu Lin · Harjatin Baweja · George Kantor · David Held

Tue Dec 10 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #191

Reinforcement learning is known to be sample inefficient, preventing its application to many real-world problems, especially with high-dimensional observations such as images. Transferring knowledge from auxiliary tasks is a powerful tool for improving learning efficiency. However, the use of auxiliary tasks has so far been limited by the difficulty of selecting and combining different auxiliary tasks. In this work, we propose a principled online learning algorithm that dynamically combines different auxiliary tasks to speed up training for reinforcement learning. Our method is based on the idea that auxiliary tasks should provide gradient directions that, in the long term, help to decrease the loss of the main task. We show in various environments that our algorithm can effectively combine a variety of different auxiliary tasks and achieves significant speedup compared to previous heuristic approaches of adapting auxiliary task weights.
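The core idea in the abstract — weighting each auxiliary task by whether its gradient helps decrease the main-task loss — can be illustrated with a minimal sketch. This is not the authors' exact algorithm, only an assumed simplification: auxiliary weights are nudged in proportion to the inner product between each auxiliary gradient and the main-task gradient, and all function names are hypothetical.

```python
import numpy as np

def update_aux_weights(weights, main_grad, aux_grads, lr=0.1):
    """Increase the weight of auxiliary tasks whose gradients align with
    the main-task gradient (positive inner product), decrease otherwise.
    Weights are clipped at zero so a harmful task is at worst ignored."""
    new_weights = []
    for w, g_aux in zip(weights, aux_grads):
        alignment = float(np.dot(main_grad, g_aux))
        new_weights.append(max(0.0, w + lr * alignment))
    return new_weights

def combined_gradient(weights, main_grad, aux_grads):
    """Gradient actually applied to the shared parameters:
    main-task gradient plus the weighted auxiliary gradients."""
    g = main_grad.copy()
    for w, g_aux in zip(weights, aux_grads):
        g += w * g_aux
    return g

# Toy example: one helpful auxiliary task (gradient roughly aligned with
# the main task) and one harmful task (opposing gradient).
main = np.array([1.0, 0.0])
aux = [np.array([0.9, 0.1]), np.array([-1.0, 0.0])]
w = update_aux_weights([0.5, 0.5], main, aux)
print(w)  # the helpful task's weight grows; the harmful task's shrinks
```

After a few updates, auxiliary tasks that consistently conflict with the main objective are driven toward zero weight, while aligned tasks contribute more strongly to the combined gradient.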

Author Information

Xingyu Lin (Carnegie Mellon University)
Harjatin Baweja (CMU)
George Kantor (CMU)
David Held (CMU)