
Learning Representations for Reinforcement Learning with Hierarchical Forward Models
Trevor McInroe · Lukas Schäfer · Stefano Albrecht
Event URL: https://openreview.net/forum?id=gVrMhmYo7k
Learning control from pixels is difficult for reinforcement learning (RL) agents because representation learning and policy learning are intertwined. Previous approaches remedy this issue with auxiliary representation learning tasks, but they either ignore the temporal aspect of the problem or consider only single-step transitions, which may miss relevant information if important environmental changes take many steps to manifest. We propose Hierarchical $k$-Step Latent (HKSL), an auxiliary task that learns representations via a hierarchy of forward models operating at varying magnitudes of step skipping, while also learning to communicate between levels in the hierarchy. We evaluate HKSL on a suite of 30 robotic control tasks, with and without distractors, as well as a task of our own creation. We find that HKSL either converges to higher episodic returns or reaches optimal returns more quickly than several alternative representation learning approaches. Furthermore, HKSL's representations capture task-relevant details accurately across timescales (even in the presence of distractors), and the communication channels between hierarchy levels organize information based on both sides of the communication process; both properties improve sample efficiency.
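The core idea, a hierarchy of forward models where level $n$ advances $k^n$ environment steps per prediction and receives top-down communication from the level above, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dimensions, the random linear maps standing in for learned networks, and the averaging of skipped actions are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's actual architecture and dimensions differ.
OBS_DIM, LATENT_DIM, ACT_DIM = 8, 4, 2
N_LEVELS, K = 3, 2   # level n advances K**n environment steps per forward pass
HORIZON = 8          # trajectory length in environment steps

# One encoder and one forward model per hierarchy level; random linear maps
# stand in for the learned networks in this illustration.
encoders = [rng.normal(size=(OBS_DIM, LATENT_DIM)) for _ in range(N_LEVELS)]
forwards = [0.1 * rng.normal(size=(LATENT_DIM + ACT_DIM, LATENT_DIM))
            for _ in range(N_LEVELS)]
# Top-down communication: maps a latent from the level above into this level.
comms = [0.1 * rng.normal(size=(LATENT_DIM, LATENT_DIM))
         for _ in range(N_LEVELS - 1)]

obs = rng.normal(size=(HORIZON + 1, OBS_DIM))   # o_0 .. o_T
acts = rng.normal(size=(HORIZON, ACT_DIM))      # a_0 .. a_{T-1}

def run_hierarchy():
    """Roll each level's forward model over the trajectory, coarsest level first,
    so finer levels can condition on the latents produced above them."""
    latents = {}
    for level in reversed(range(N_LEVELS)):
        skip = K ** level
        z = obs[0] @ encoders[level]
        preds = {0: z}
        for t in range(0, HORIZON, skip):
            if level < N_LEVELS - 1:
                # Top-down communication: inject the most recent latent
                # from the level above (an additive scheme, for illustration).
                above = latents[level + 1]
                src = above[max(s for s in above if s <= t)]
                z = z + src @ comms[level]
            # Summarize the `skip` skipped actions by averaging (an assumption).
            a = acts[t:t + skip].mean(axis=0)
            z = np.concatenate([z, a]) @ forwards[level]
            preds[t + skip] = z
        latents[level] = preds
    return latents
```

A training objective would then compare each level's predicted latents against encodings of the true future observations at the matching timesteps; the sketch only shows the rollout structure.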

Author Information

Trevor McInroe (The University of Edinburgh)

Trevor McInroe is a PhD student at The University of Edinburgh, advised by Amos Storkey. His interests include deep reinforcement learning, representation learning, and world models.

Lukas Schäfer (University of Edinburgh)
Stefano Albrecht (University of Edinburgh)
