Learning Representations for Reinforcement Learning with Hierarchical Forward Models
Trevor McInroe · Lukas Schäfer · Stefano Albrecht
Event URL: https://openreview.net/forum?id=gVrMhmYo7k
Learning control from pixels is difficult for reinforcement learning (RL) agents because representation learning and policy learning are intertwined. Previous approaches remedy this issue with auxiliary representation learning tasks, but they either do not consider the temporal aspect of the problem or only consider single-step transitions, which may miss relevant information if important environmental changes take many steps to manifest. We propose Hierarchical $k$-Step Latent (HKSL), an auxiliary task that learns representations via a hierarchy of forward models operating at varying magnitudes of step skipping, while also learning to communicate between levels in the hierarchy. We evaluate HKSL in a suite of 30 robotic control tasks with and without distractors, plus a task of our creation. We find that HKSL either reaches higher episodic returns or converges to optimal returns more quickly than several alternative representation learning approaches. Furthermore, we find that HKSL's representations capture task-relevant details accurately across timescales (even in the presence of distractors) and that communication channels between hierarchy levels organize information based on both sides of the communication process, both of which improve sample efficiency.
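To make the core idea concrete, here is a minimal toy sketch of a hierarchy of latent forward models rolled out at different step-skipping rates. This is an illustration only, not the authors' implementation: the linear models stand in for learned networks, the skip schedule `[1, 2, 4]` is an assumption, and the learned communication channel between levels is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8   # illustrative latent size
ACTION_DIM = 2   # illustrative action size

def linear_forward_model(dim_in, dim_out, rng):
    """A toy linear 'forward model' mapping (latent, action) -> next latent."""
    W = rng.normal(scale=0.1, size=(dim_out, dim_in))
    return lambda x: W @ x

# One forward model per hierarchy level; level with skip k predicts
# k environment steps ahead per model application.
skips = [1, 2, 4]
models = [linear_forward_model(LATENT_DIM + ACTION_DIM, LATENT_DIM, rng)
          for _ in skips]

def rollout(z0, actions):
    """Roll every level forward over the same action sequence; coarser
    levels consume only every k-th action, so they cover the same
    horizon with fewer, longer-range predictions."""
    preds = {}
    for k, model in zip(skips, models):
        z = z0
        traj = []
        for t in range(0, len(actions), k):
            z = model(np.concatenate([z, actions[t]]))
            traj.append(z)
        preds[k] = traj
    return preds

z0 = rng.normal(size=LATENT_DIM)
actions = [rng.normal(size=ACTION_DIM) for _ in range(8)]
preds = rollout(z0, actions)
# Over 8 actions: level k=1 makes 8 predictions, k=2 makes 4, k=4 makes 2,
# so coarser levels capture slower-manifesting environmental changes.
```

In the paper's setting each level's predicted latents would be trained against encoded future observations at the matching offsets, which is what makes this an auxiliary representation learning task rather than a planner.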
Author Information
Trevor McInroe (The University of Edinburgh)

Trevor McInroe is a PhD student at The University of Edinburgh, advised by Amos Storkey. His interests include deep reinforcement learning, representation learning, and world models.
Lukas Schäfer (University of Edinburgh)
Stefano Albrecht (University of Edinburgh)
More from the Same Authors
2021: Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks
Georgios Papoudakis · Filippos Christianos · Lukas Schäfer · Stefano Albrecht

2021: Robust On-Policy Data Collection for Data-Efficient Policy Evaluation
Rujie Zhong · Josiah Hanna · Lukas Schäfer · Stefano Albrecht

2022: Enhancing Transfer of Reinforcement Learning Agents with Abstract Contextual Embeddings
Guy Azran · Mohamad Hosein Danesh · Stefano Albrecht · Sarah Keren

2022: Verifiable Goal Recognition for Autonomous Driving with Occlusions
Cillian Brewitt · Massimiliano Tamborski · Stefano Albrecht

2022: Sample Relationships through the Lens of Learning Dynamics with Label Information
Shangmin Guo · Yi Ren · Stefano Albrecht · Kenny Smith

2022: Temporal Disentanglement of Representations for Improved Generalisation in Reinforcement Learning
Mhairi Dunion · Trevor McInroe · Kevin Sebastian Luck · Josiah Hanna · Stefano Albrecht

2023 Poster: Conditional Mutual Information for Disentangled Representations in Reinforcement Learning
Mhairi Dunion · Trevor McInroe · Kevin Sebastian Luck · Josiah Hanna · Stefano Albrecht

2022 Poster: Robust On-Policy Sampling for Data-Efficient Policy Evaluation in Reinforcement Learning
Rujie Zhong · Duohan Zhang · Lukas Schäfer · Stefano Albrecht · Josiah Hanna

2021 Poster: Agent Modelling under Partial Observability for Deep Reinforcement Learning
Georgios Papoudakis · Filippos Christianos · Stefano Albrecht

2020 Poster: Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning
Filippos Christianos · Lukas Schäfer · Stefano Albrecht