NIPS 2006


Workshop

Towards a New Reinforcement Learning?

Jan Peters · Stefan Schaal · Drew Bagnell

Sutcliffe B

During the last decade, many areas of statistical machine learning have reached a high level of maturity, with novel, efficient, and theoretically well-founded algorithms that have increasingly removed the need for the heuristics and manual parameter tuning that dominated the early days of neural networks. Reinforcement learning (RL) has also made major progress in theory and algorithms, but it still lags behind the success stories of classification, supervised learning, and unsupervised learning. Beyond the long-standing question of scaling RL to larger, real-world problems, even simple scenarios require a significant amount of manual tuning and human insight to achieve good performance, as exemplified by issues such as eligibility traces, learning rates, and the choice of function approximators and their basis functions for policy and/or value functions. Some of the progress in other statistical learning disciplines stems from connections to well-established fundamental learning approaches, such as maximum likelihood with EM, Bayesian statistics, linear regression, linear and quadratic programming, graph theory, and function space analysis. The main question of this workshop is therefore how such statistical learning techniques may be used to develop new RL approaches with properties including higher numerical robustness, ease of use in terms of open parameters, probabilistic and Bayesian interpretations, better scalability, the inclusion of prior knowledge, etc.
