

Poster

DAC: The Double Actor-Critic Architecture for Learning Options

Shangtong Zhang · Shimon Whiteson

East Exhibition Hall B + C #195

Keywords: [ Hierarchical RL; Reinforcement Learning ] [ Reinforcement Learning and Planning -> Decision and Control ] [ Reinforcement Learning and Planning ]


Abstract:

We reformulate the option framework as two parallel augmented MDPs. Under this novel formulation, all policy optimization algorithms can be used off the shelf to learn intra-option policies, option termination conditions, and a master policy over options. We apply an actor-critic algorithm on each augmented MDP, yielding the Double Actor-Critic (DAC) architecture. Furthermore, we show that, when state-value functions are used as critics, one critic can be expressed in terms of the other, and hence only one critic is necessary. We conduct an empirical study on challenging robot simulation tasks. In a transfer learning setting, DAC outperforms both its hierarchy-free counterpart and previous gradient-based option learning algorithms.
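The abstract's key idea can be illustrated with a toy tabular sketch: the high MDP's augmented state is (state, previous option) and its actor is the master policy over options; the low MDP's augmented state is (state, current option) and its actor is the intra-option policy. A single state-value critic over (state, option) drives both actor updates, reflecting the observation that only one critic is necessary. Everything below (the chain-world environment, softmax-tabular policies, one-step TD updates, and all hyperparameters) is an illustrative assumption, not the paper's exact algorithm.

```python
# Minimal sketch of the Double Actor-Critic idea on a toy 4-state chain.
# All environment details and update rules are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_options, n_actions = 4, 2, 2

# High-MDP actor: master policy pi_h(o | s, o_prev), augmented state (s, o_prev).
logits_h = np.zeros((n_states, n_options, n_options))
# Low-MDP actor: intra-option policy pi_l(a | s, o), augmented state (s, o).
logits_l = np.zeros((n_states, n_options, n_actions))
# Single shared critic V(s, o): per the abstract, the other critic
# can be expressed in terms of this one, so one critic suffices.
V = np.zeros((n_states, n_options))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(s, a):
    # Toy dynamics: a=1 moves right, a=0 moves left; reward at the last state.
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

gamma, lr = 0.9, 0.1
s, o_prev = 0, 0
for t in range(200):
    # High-MDP actor: pick the current option given (s, previous option).
    p_o = softmax(logits_h[s, o_prev])
    o = rng.choice(n_options, p=p_o)
    # Low-MDP actor: pick a primitive action given (s, current option).
    p_a = softmax(logits_l[s, o])
    a = rng.choice(n_actions, p=p_a)
    s2, r = step(s, a)
    # One TD error from the single critic updates both actors.
    td = r + gamma * V[s2, o] - V[s, o]
    V[s, o] += lr * td
    gh = -p_o; gh[o] += 1.0          # grad of log pi_h w.r.t. its logits
    logits_h[s, o_prev] += lr * td * gh
    gl = -p_a; gl[a] += 1.0          # grad of log pi_l w.r.t. its logits
    logits_l[s, o] += lr * td * gl
    s, o_prev = s2, o
```

Because both augmented MDPs are ordinary MDPs, the two policy-gradient updates above could be replaced by any off-the-shelf policy optimization algorithm (e.g. PPO, as the transfer experiments in the paper use actor-critic methods on simulated robot tasks).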
