

Poster

Simple random search of static linear policies is competitive for reinforcement learning

Horia Mania · Aurelia Guy · Benjamin Recht

Room 517 AB #111

Keywords: [ Reinforcement Learning and Planning ] [ Non-Convex Optimization ] [ Decision and Control ] [ Reinforcement Learning ]


Abstract:

Model-free reinforcement learning aims to offer off-the-shelf solutions for controlling dynamical systems without requiring models of the system dynamics. We introduce a model-free random search algorithm for training static, linear policies for continuous control problems. Under common evaluation methodology, our method matches state-of-the-art sample efficiency on the benchmark MuJoCo locomotion tasks. Nonetheless, more rigorous evaluation reveals that the assessment of performance on these benchmarks is optimistic. We evaluate the performance of our method over hundreds of random seeds and many different hyperparameter configurations for each benchmark task. This extensive evaluation is possible because of the small computational footprint of our method. Our simulations highlight high variability in performance on these benchmark tasks, indicating that commonly used estimates of sample efficiency do not adequately evaluate the performance of RL algorithms. Our results stress the need for new baselines, benchmarks, and evaluation methodology for RL algorithms.
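For concreteness, the core update behind such a random search method can be sketched in a few lines. The code below is a minimal illustration, not the authors' exact implementation: it perturbs a linear policy in random symmetric directions and scales the step by the standard deviation of the collected rewards, as in the paper's augmented variant; the `rollout` helper, the hyperparameter values, and the pre-0.26 Gym-style environment API are illustrative assumptions, and the full method also normalizes states and can restrict the update to the top-performing directions.

```python
import numpy as np

def rollout(env, M, horizon=1000):
    """Run one episode with the deterministic linear policy a = M @ s.

    Assumes a Gym-style (pre-0.26) API: reset() returns an observation,
    step() returns (obs, reward, done, info).
    """
    total_reward, s = 0.0, env.reset()
    for _ in range(horizon):
        s, r, done, _ = env.step(M @ s)
        total_reward += r
        if done:
            break
    return total_reward

def random_search(env, n_iters=100, n_dirs=8, step=0.02, noise=0.03):
    """Sketch of random search over a static linear policy."""
    p = env.action_space.shape[0]       # action dimension
    n = env.observation_space.shape[0]  # observation dimension
    M = np.zeros((p, n))                # the paper initializes the policy at zero
    for _ in range(n_iters):
        deltas = [np.random.randn(p, n) for _ in range(n_dirs)]
        # Evaluate symmetric perturbations of the current policy.
        r_plus = [rollout(env, M + noise * d) for d in deltas]
        r_minus = [rollout(env, M - noise * d) for d in deltas]
        # Scale the step by the std of the rewards, as in the augmented variant.
        sigma = np.std(r_plus + r_minus) + 1e-8
        M += step / (n_dirs * sigma) * sum(
            (rp - rm) * d for rp, rm, d in zip(r_plus, r_minus, deltas))
    return M
```

Because each update requires only 2 * n_dirs rollouts and a single matrix of parameters, many seeds and hyperparameter configurations can be evaluated cheaply, which is what enables the extensive evaluation described above.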
