
Workshop: Goal-Conditioned Reinforcement Learning

Multi-Resolution Skill Discovery for Hierarchical Reinforcement Learning

Shashank Sharma · Vinay Namboodiri · Janina A. Hoffmann

Keywords: [ Hierarchical Reinforcement Learning ] [ Offline Reinforcement Learning ] [ Skill Discovery ] [ Reinforcement Learning ]


Learning abstract actions can benefit goal-conditioned reinforcement learning, and offline discovery of primitives has effectively leveraged large static datasets. While agents using abstract skills have performed well, they usually lack finesse in motion. Humans and animals, in contrast, learn motor skills at different temporal resolutions: fine-grained skills such as piano playing, or gross skills such as running. We propose a solution to the problem of representing multiple temporal resolutions to enhance skill abstraction: we encode skills at multiple temporal resolutions and learn an appropriate choice mechanism with an actor-critic framework. Our work builds on the recent Director agent and improves its performance. We evaluate the method on the DeepMind Control Suite task 'walker_walk', obtaining qualitative and quantitative performance gains.
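The abstract describes two components: skill banks encoded at several temporal resolutions, and a choice mechanism that selects which resolution and skill to execute. The following is a minimal illustrative sketch of that selection loop, not the paper's implementation; the names (`RESOLUTIONS`, `skill_banks`, `select_skill`) and the fixed-length action-sequence view of a skill are assumptions for illustration, and the learned actor-critic choice is replaced by an argmax over given value estimates.

```python
import numpy as np

# Hypothetical setup: a bank of skills per temporal resolution.
# A "skill" here is a fixed sequence of primitive actions whose length
# equals its resolution (fine skills: 4 steps, gross skills: 16 steps).
RESOLUTIONS = [4, 16]

rng = np.random.default_rng(0)
skill_banks = {
    k: rng.standard_normal((8, k, 2))  # 8 skills, k steps, 2-D actions
    for k in RESOLUTIONS
}

def select_skill(q_values):
    """Choice mechanism: pick the (resolution, skill) pair with the
    highest value estimate. In the paper this choice is learned by an
    actor-critic; here we simply take an argmax over supplied values."""
    flat = np.concatenate([q_values[k] for k in RESOLUTIONS])
    idx = int(np.argmax(flat))
    # Map the flat index back to a (resolution, skill id) pair.
    for k in RESOLUTIONS:
        n = skill_banks[k].shape[0]
        if idx < n:
            return k, idx
        idx -= n

def rollout_skill(resolution, skill_id):
    """Decode the chosen skill into its primitive-action sequence."""
    return skill_banks[resolution][skill_id]

# Usage: value estimates for every skill at every resolution.
q = {k: rng.standard_normal(skill_banks[k].shape[0]) for k in RESOLUTIONS}
res, sid = select_skill(q)
actions = rollout_skill(res, sid)  # shape (res, 2)
```

The key design point the sketch illustrates is that the high-level policy commits to a skill for a resolution-dependent number of low-level steps, so fine and gross behaviors coexist in one action space.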
