Hyperparameters in Contextual RL are Highly Situational
Theresa Eimer · Carolin Benjamins · Marius Lindauer

Although Reinforcement Learning (RL) has shown impressive results in games and simulation, real-world applications of RL suffer from its instability under changing environment conditions and hyperparameters. We give a first impression of the extent of this instability by showing that the hyperparameters found by automatic hyperparameter optimization (HPO) methods depend not only on the problem at hand, but even on how well the state describes the environment dynamics. Specifically, we show that agents in contextual RL require different hyperparameters depending on whether they are shown how environmental factors change. In addition, finding adequate hyperparameter configurations is not equally easy in both settings, further highlighting the need for research into how hyperparameters influence learning and generalization in RL.
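To make the two settings compared in the abstract concrete, the following is a minimal sketch (not the authors' code; the environment, its dynamics, and the `hide_context` flag are illustrative assumptions) of a contextual environment that either exposes or hides its context variable in the observation:

```python
# Illustrative sketch only: a toy contextual environment in which a single
# context value (here, a gravity-like factor) alters the transition dynamics.
# With hide_context=True the agent observes only the raw state; with
# hide_context=False the context is appended to the observation. The two
# settings therefore present different observation spaces, which is one
# reason tuned hyperparameters need not transfer between them.

class ToyContextualEnv:
    def __init__(self, context: float = 9.8, hide_context: bool = True):
        self.context = context          # environmental factor, e.g. gravity
        self.hide_context = hide_context
        self.state = 0.0

    def _obs(self):
        if self.hide_context:
            return (self.state,)                    # state only
        return (self.state, self.context)           # state + visible context

    def reset(self):
        self.state = 0.0
        return self._obs()

    def step(self, action: float):
        # The context shapes the dynamics even when it is hidden from the agent.
        self.state += self.context * action * 0.01
        reward = -abs(self.state)
        return self._obs(), reward


hidden = ToyContextualEnv(hide_context=True)
visible = ToyContextualEnv(hide_context=False)
print(len(hidden.reset()), len(visible.reset()))  # observation sizes: 1 2
```

In the hidden setting the agent must infer the changing factor from transitions alone, while in the visible setting the larger observation carries it explicitly; the paper's point is that these two learning problems favor different hyperparameter configurations.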

Author Information

Theresa Eimer (Leibniz University Hannover)
Carolin Benjamins (Leibniz University Hannover)
Marius Lindauer (Leibniz University Hannover)