

Poster in Workshop: Broadening Research Collaborations

Democratizing RL Research by Reusing Prior Computation

Rishabh Agarwal


Abstract:

Learning tabula rasa, that is, without any prior knowledge, is the prevalent workflow in reinforcement learning (RL) research. Unfortunately, the inefficiency of deep RL typically excludes researchers without access to industrial-scale resources from tackling computationally demanding problems. Furthermore, as RL research moves toward more complex benchmarks, the computational barrier to entry will only increase. To address these issues, we present reincarnating RL (RRL) as an alternative workflow, or class of problem settings, in which prior computational work (e.g., learned policies) is reused or transferred between design iterations of an RL agent, or from one RL agent to another. RRL can democratize research by allowing the broader community to tackle complex RL problems without requiring excessive computational resources. To demonstrate this, we present a case study on Atari games showing how superhuman Atari agents can be trained in only a few hours, as opposed to a few days, on a single GPU. Finally, we address reproducibility and generalizability concerns in this research workflow. Overall, this work argues for an alternative approach to RL research, which we believe could significantly improve real-world RL adoption and help democratize it further.
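To make the idea of reusing prior computation concrete, below is a minimal sketch of one possible RRL recipe: kickstarting a new "student" agent from a frozen pre-trained "teacher" policy via a distillation term added to the student's own RL loss. This is an illustrative assumption, not the poster's exact method; the network sizes, hyperparameters, checkpoint path, and dummy observations are all hypothetical.

```python
# Hypothetical sketch: distillation-based reuse of a pre-trained policy (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_ACTIONS = 8, 4  # illustrative dimensions

def make_policy():
    return nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

# Teacher stands in for prior computation; in practice you would load a checkpoint,
# e.g. teacher.load_state_dict(torch.load("teacher.pt")) (hypothetical path).
teacher = make_policy()
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is frozen; only the student learns

student = make_policy()
optimizer = torch.optim.Adam(student.parameters(), lr=3e-4)

def kickstart_loss(obs, rl_loss, distill_coef):
    """Student RL loss plus KL(teacher || student) on the given observations.

    The distillation coefficient is typically annealed toward zero so the
    student eventually improves beyond the teacher on its own.
    """
    with torch.no_grad():
        teacher_logp = F.log_softmax(teacher(obs), dim=-1)
    student_logp = F.log_softmax(student(obs), dim=-1)
    distill = F.kl_div(student_logp, teacher_logp,
                       log_target=True, reduction="batchmean")
    return rl_loss + distill_coef * distill

# One dummy update step on random observations; a real loop would use
# environment rollouts and a proper policy-gradient or Q-learning objective
# in place of the zero placeholder below.
obs = torch.randn(32, OBS_DIM)
rl_loss = torch.tensor(0.0)  # placeholder for the student's own RL objective
loss = kickstart_loss(obs, rl_loss, distill_coef=1.0)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The key design choice this sketch illustrates is that the student never starts from scratch: the teacher's action distribution shapes early learning, which is what makes few-hours training on a single GPU plausible relative to tabula rasa training.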
