Efficient Restarts in Non-Stationary Model-Free Reinforcement Learning
Hiroshi Nonaka · Simon Ambrozak · Sofia Miskala-Dinc · Amedeo Ercole · Aviva Prins
Abstract
In this work, we propose three efficient restart paradigms for model-free non-stationary reinforcement learning (RL). We identify two core issues with the restart design of Mao et al. (2022)'s RestartQ-UCB algorithm: (1) complete forgetting, where all information learned about the environment is lost after a restart, and (2) scheduled restarts, in which restarts occur only at predefined times, regardless of how incompatible the current policy has become with the environment dynamics. We introduce three approaches, which we call partial, adaptive, and selective restarts, to modify the algorithms RestartQ-UCB and RANDOMIZEDQ (Wang et al., 2025). We find near-optimal empirical performance in multiple different environments, decreasing dynamic regret by up to $91\%$ relative to RestartQ-UCB.
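To make the three paradigms concrete, the following is a minimal illustrative sketch of how such restart styles could look for a tabular Q-learner. This is not the authors' implementation: the class, method names, the blend weight `keep`, and the TD-error spike trigger are all illustrative assumptions.

```python
import numpy as np

class RestartableQ:
    """Toy tabular Q-table supporting several restart styles (illustrative only)."""

    def __init__(self, n_states, n_actions, q_init=1.0):
        self.q_init = q_init  # optimistic initial value
        self.Q = np.full((n_states, n_actions), q_init)
        self.visits = np.zeros((n_states, n_actions), dtype=int)

    def full_restart(self):
        # Complete forgetting: discard all learned values and counts.
        self.Q[:] = self.q_init
        self.visits[:] = 0

    def partial_restart(self, keep=0.5):
        # Partial restart (assumed form): retain a fraction of the learned
        # values instead of forgetting everything.
        self.Q = keep * self.Q + (1 - keep) * self.q_init
        self.visits[:] = 0

    def selective_restart(self, stale_mask):
        # Selective restart (assumed form): reset only the (state, action)
        # pairs flagged as stale.
        self.Q[stale_mask] = self.q_init
        self.visits[stale_mask] = 0

def adaptive_restart_trigger(td_errors, window=20, threshold=2.0):
    # Adaptive restart (assumed form): trigger a restart when recent TD-error
    # magnitude spikes relative to its history, rather than on a fixed schedule.
    if len(td_errors) < 2 * window:
        return False
    recent = np.mean(np.abs(td_errors[-window:]))
    baseline = np.mean(np.abs(td_errors[:-window])) + 1e-8
    return recent > threshold * baseline
```

The contrast with scheduled, complete restarts is that here learned structure can survive a restart (partial/selective), and the restart timing can respond to observed environment drift (adaptive).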