

Poster

A Method for Evaluating Hyperparameter Sensitivity in Reinforcement Learning

Jacob Adkins · Michael Bowling · Adam White

West Ballroom A-D #6500
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

The number of hyperparameters used in deep reinforcement learning algorithms has expanded rapidly. Hyperparameters often have complex nonlinear interactions, significantly impact performance, and are difficult to tune across sets of environments. This creates a challenge for practitioners who wish to apply reinforcement learning algorithms to new domains. Several methods have been proposed to study the relationship between algorithms and their hyperparameters, but the community lacks a widely accepted measure for characterizing hyperparameter sensitivity across sets of environments. We propose an empirical methodology for studying the relationship between an algorithm’s hyperparameters and its performance over sets of environments. Our methodology enables practitioners to better understand the degree to which an algorithm's reported performance is attributable to per-environment hyperparameter tuning. We use our empirical methodology to assess how several commonly used normalization variants affect the hyperparameter sensitivity of PPO. The results suggest that the evaluated normalization variants, which improve performance, also increase hyperparameter sensitivity, indicating that several algorithmic performance improvements may be a result of an increased reliance on hyperparameter tuning.
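The abstract frames sensitivity as the degree to which reported performance depends on tuning hyperparameters separately for each environment. The sketch below is an illustrative example of that idea, not the authors' exact measure: it assumes a hypothetical `scores[env][config]` table of normalized final returns and compares per-environment tuning against a single configuration tuned across all environments.

```python
"""Illustrative sketch (not the paper's exact methodology): one way to
quantify how much performance depends on per-environment tuning.

Assumed inputs (not specified in the abstract): scores[env][config] holds a
positive, normalized final return for each hyperparameter configuration on
each environment, with every configuration evaluated on every environment."""

from statistics import mean


def sensitivity(scores: dict[str, dict[str, float]]) -> float:
    """Return a value in [0, 1]; larger means more of the performance
    is attributable to per-environment hyperparameter tuning."""
    envs = list(scores)
    configs = list(next(iter(scores.values())))

    # Performance when each environment gets its own best configuration.
    per_env_tuned = mean(max(scores[e][c] for c in configs) for e in envs)

    # Performance of the single configuration that is best on average
    # across all environments (cross-environment tuning).
    best_shared = max(configs, key=lambda c: mean(scores[e][c] for e in envs))
    cross_env_tuned = mean(scores[e][best_shared] for e in envs)

    # Relative gap: 0 when one configuration works well everywhere,
    # approaching 1 when performance relies heavily on per-env tuning.
    return (per_env_tuned - cross_env_tuned) / per_env_tuned


if __name__ == "__main__":
    # Toy example: the best learning rate differs by environment.
    toy = {
        "env_a": {"lr=1e-3": 0.90, "lr=1e-4": 0.40},
        "env_b": {"lr=1e-3": 0.30, "lr=1e-4": 0.85},
    }
    print(f"sensitivity = {sensitivity(toy):.2f}")
```

Under this reading, an algorithmic change (such as a normalization variant) that raises per-environment tuned performance while widening the gap to the best shared configuration would register as both better-performing and more hyperparameter-sensitive, which is the pattern the abstract reports for PPO's normalization variants.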
