Novelty Search in Representational Space for Sample Efficient Exploration
Ruo Yu Tao, Vincent Francois-Lavet, Joelle Pineau
Oral presentation: Orals & Spotlights Track 04: Reinforcement Learning
on 2020-12-07, 18:15-18:30 PST
Abstract: We present a new approach for efficient exploration which leverages a low-dimensional encoding of the environment learned with a combination of model-based and model-free objectives. Our approach uses intrinsic rewards based on the distance to nearest neighbors in the low-dimensional representational space to gauge novelty. We then leverage these intrinsic rewards for sample-efficient exploration with planning routines in representational space for hard exploration tasks with sparse rewards. One key element of our approach is the use of information-theoretic principles to shape our representations so that our novelty reward goes beyond pixel similarity. We test our approach on a number of maze tasks, as well as a control problem, and show that our exploration approach is more sample-efficient than strong baselines.
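The core novelty signal the abstract describes, an intrinsic reward based on distances to nearest neighbors in the learned low-dimensional representation, can be sketched as follows. This is a minimal illustration, assuming Euclidean distance in latent space and the mean distance over k neighbors as the bonus; the paper's exact metric, neighbor count, and aggregation may differ, and the learned encoder is left abstract here.

```python
import numpy as np

def novelty_bonus(z, visited_z, k=5):
    """Sketch of a nearest-neighbor novelty reward: the mean Euclidean
    distance from the encoded state z (shape [d]) to its k nearest
    neighbors among previously visited encodings visited_z (shape [N, d]).
    Larger values mean z lies far from past experience, i.e. is novel."""
    if len(visited_z) == 0:
        return 0.0  # no history yet; neutral bonus (assumption)
    dists = np.linalg.norm(visited_z - z, axis=1)  # distance to each visited encoding
    k = min(k, len(dists))
    nearest = np.partition(dists, k - 1)[:k]       # k smallest distances, unordered
    return float(nearest.mean())

# Example: states far from the visited set earn a larger intrinsic reward.
visited = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
print(novelty_bonus(np.array([0.05, 0.05]), visited, k=2))  # small bonus
print(novelty_bonus(np.array([2.0, 2.0]), visited, k=2))    # large bonus
```

In the paper, this kind of bonus operates on representations shaped by combined model-based and model-free objectives, so that latent distances reflect environment structure rather than raw pixel similarity.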