Poster
Regularized Anderson Acceleration for Off-Policy Deep Reinforcement Learning
Wenjie Shi · Shiji Song · Hui Wu · Ya-Chu Hsu · Cheng Wu · Gao Huang

Wed Dec 11 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #213

Model-free deep reinforcement learning (RL) algorithms have been widely used for a range of complex control tasks. However, slow convergence and sample inefficiency remain challenging problems in RL, especially when handling continuous and high-dimensional state spaces. To tackle these problems, we propose a general acceleration method for model-free, off-policy deep RL algorithms by drawing on the idea underlying regularized Anderson acceleration (RAA), an effective approach to accelerating the solution of fixed-point problems with perturbations. Specifically, we first explain how Anderson acceleration can be applied directly to policy iteration. We then extend RAA to the deep RL setting by introducing a regularization term that controls the impact of the perturbations induced by function approximation errors. We further propose two strategies, namely progressive update and adaptive restart, to enhance performance. The effectiveness of our method is evaluated on a variety of benchmark tasks, including Atari 2600 and MuJoCo. Experimental results show that our approach substantially improves both the learning speed and final performance of state-of-the-art deep RL algorithms.
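
As a concrete illustration of the underlying idea (not the authors' deep-RL implementation), the NumPy sketch below applies regularized Anderson acceleration to a generic fixed-point map: the mixing coefficients are chosen by a least-squares fit of the recent residuals with an added L2 penalty, and the next iterate is the resulting weighted combination of the mapped iterates. The map g, window size m, regularization weight lam, and the toy linear contraction are illustrative assumptions.

import numpy as np

def raa_fixed_point(g, x0, m=5, lam=1e-8, iters=50):
    # Solve x = g(x) via Anderson acceleration with an L2-regularized
    # least-squares step on the mixing coefficients alpha (a sketch, not the paper's code).
    xs, gs = [np.asarray(x0, dtype=float)], [g(x0)]
    x = gs[-1]
    for _ in range(iters):
        xs.append(x)
        gs.append(g(x))
        xs, gs = xs[-(m + 1):], gs[-(m + 1):]           # keep a sliding window of iterates
        R = np.stack([gi - xi for gi, xi in zip(gs, xs)], axis=1)   # residual matrix
        k = R.shape[1]
        # alpha minimizes ||R a||^2 + lam * ||a||^2 subject to sum(a) = 1;
        # the closed form is a = z / sum(z) with z = (R^T R + lam I)^{-1} 1.
        z = np.linalg.solve(R.T @ R + lam * np.eye(k), np.ones(k))
        alpha = z / z.sum()
        x = np.stack(gs, axis=1) @ alpha                 # extrapolated next iterate
    return x

# Toy usage: a linear contraction x = 0.9 * P x + b, standing in for the role
# the Bellman operator plays in policy evaluation.
rng = np.random.default_rng(0)
P = rng.random((4, 4)); P /= P.sum(axis=1, keepdims=True)
b = rng.random(4)
g = lambda x: 0.9 * P @ x + b
x_star = raa_fixed_point(g, np.zeros(4))
print(np.max(np.abs(g(x_star) - x_star)))                # residual, close to 0 at the fixed point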

Author Information

Wenjie Shi (Tsinghua University)

Wenjie Shi received the B.S. degree from Huazhong University of Science and Technology, Wuhan, China, in 2016. He is currently pursuing the Ph.D. degree in control science and engineering with the Department of Automation, Tsinghua University, Beijing, China. His current research interests include deep reinforcement learning and robot control.

Shiji Song (Department of Automation, Tsinghua University)
Hui Wu (Tsinghua University)
Ya-Chu Hsu (Tsinghua University)
Cheng Wu (Tsinghua University)
Gao Huang (Tsinghua University)
