Poster

Continuous Deep Q-Learning in Optimal Control Problems: Normalized Advantage Functions Analysis

Anton Plaksin · Stepan Martyanov

Hall J #814

Keywords: [ normalized advantage functions ] [ continuous reinforcement learning ] [ optimal control problems ] [ deep q-learning ]

Thu 1 Dec 9 a.m. PST — 11 a.m. PST
 
Spotlight presentation: Lightning Talks 1B-4
Tue 6 Dec 10:30 a.m. PST — 10:45 a.m. PST

Abstract:

One of the most effective continuous deep reinforcement learning algorithms is normalized advantage functions (NAF). The main idea of NAF is to approximate the Q-function by functions that are quadratic with respect to the action variable. This idea makes it possible to apply the algorithm to continuous reinforcement learning problems, but it also raises the question of which classes of problems admit such an approximation. The present paper describes one such class. We consider reinforcement learning problems obtained by the discretization of certain optimal control problems. Building on the idea of NAF, we present a new family of quadratic functions and prove its suitable approximation properties. Taking these properties into account, we provide several ways to improve NAF. The experimental results confirm the efficiency of our improvements.
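To illustrate the quadratic parameterization the abstract refers to, here is a minimal sketch of the standard NAF advantage form, Q(s, a) = V(s) − ½ (a − μ(s))ᵀ P(s) (a − μ(s)) with P(s) = L(s) L(s)ᵀ positive definite. The function names and the toy numbers are illustrative assumptions, not taken from the paper; they only show why the greedy continuous action is μ(s) in closed form.

```python
import numpy as np

def naf_q_value(a, V, mu, L):
    """Quadratic NAF-style Q-value (illustrative sketch):
    Q(s, a) = V(s) - 0.5 * (a - mu(s))^T P(s) (a - mu(s)),
    where P(s) = L(s) L(s)^T is positive semidefinite, so
    Q is maximized over actions exactly at a = mu(s)."""
    P = L @ L.T                 # positive semidefinite precision matrix
    diff = a - mu
    return V - 0.5 * diff @ P @ diff

# Toy 2-D action example (numbers are hypothetical)
V = 1.0
mu = np.array([0.5, -0.3])
L = np.array([[1.0, 0.0],
              [0.2, 0.8]])     # lower-triangular Cholesky factor

q_at_mu = naf_q_value(mu, V, mu, L)        # equals V: the maximum
q_off = naf_q_value(mu + 0.1, V, mu, L)    # strictly smaller than V
```

Because the quadratic term is nonpositive, the continuous argmax needed for the Q-learning target is available analytically as μ(s), which is what makes the approximation attractive in continuous action spaces.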
