We say an algorithm is batch size-invariant if changes to the batch size can largely be compensated for by changes to other hyperparameters. Stochastic gradient descent is well-known to have this property at small batch sizes, via the learning rate. However, some policy optimization algorithms (such as PPO) do not have this property, because of how they control the size of policy updates. In this work we show how to make these algorithms batch size-invariant. Our key insight is to decouple the proximal policy (used for controlling policy updates) from the behavior policy (used for off-policy corrections). Our experiments help explain why these algorithms work, and additionally show how they can make more efficient use of stale data.
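To make the key idea concrete, here is a minimal sketch of a PPO-style clipped surrogate loss in which the proximal policy is decoupled from the behavior policy, as the abstract describes. This is not the authors' reference code: the function and argument names (decoupled_ppo_loss, logp_prox, logp_behav, clip_eps) are illustrative, and the exact form of the objective is an assumption based on the abstract, with the off-policy correction taken against the behavior policy and the clipping measured against a separate proximal policy.

```python
import torch

def decoupled_ppo_loss(logp, logp_prox, logp_behav, adv, clip_eps=0.2):
    """Illustrative clipped surrogate loss with a decoupled proximal policy.

    logp       -- log-probs of taken actions under the current policy pi_theta
    logp_prox  -- log-probs under the proximal policy (controls update size)
    logp_behav -- log-probs under the behavior policy (off-policy correction)
    adv        -- advantage estimates
    """
    # Off-policy correction: importance weight of the proximal policy
    # relative to the behavior policy that collected the data.
    behav_weight = torch.exp(logp_prox - logp_behav).detach()
    # Policy ratio that gets clipped, measured against the proximal policy
    # rather than the behavior policy.
    ratio = torch.exp(logp - logp_prox.detach())
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    # PPO-style pessimistic objective; negate to obtain a loss to minimize.
    return -(behav_weight * torch.min(unclipped, clipped)).mean()
```

In standard PPO these two roles are played by the same "old" policy; separating them is what allows other hyperparameters to compensate for changes in batch size.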
Author Information
Jacob Hilton (OpenAI)
Karl Cobbe (OpenAI)
John Schulman (OpenAI)
More from the Same Authors
- 2022: John Schulman
- 2022 Poster: Training language models to follow instructions with human feedback
  Long Ouyang · Jeffrey Wu · Xu Jiang · Diogo Almeida · Carroll Wainwright · Pamela Mishkin · Chong Zhang · Sandhini Agarwal · Katarina Slama · Alex Ray · John Schulman · Jacob Hilton · Fraser Kelton · Luke Miller · Maddie Simens · Amanda Askell · Peter Welinder · Paul Christiano · Jan Leike · Ryan Lowe
- 2020: Introduction to the Procgen Benchmark
  Karl Cobbe
- 2019: Environments and Data Sets
  Karl Cobbe · Gianni De Fabritiis · Denys Makoviichuk
- 2017 Poster: #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
  Haoran Tang · Rein Houthooft · Davis Foote · Adam Stooke · OpenAI Xi Chen · Yan Duan · John Schulman · Filip DeTurck · Pieter Abbeel