NIPS 2013


Workshop

Advances in Machine Learning for Sensorimotor Control

Thomas Walsh · Alborz Geramifard · Marc Deisenroth · Jonathan How · Jan Peters

Harvey's Emerald Bay 1

Closed-loop control of systems based on sensor readings in uncertain domains is a hallmark of research in the Control, Artificial Intelligence, and Neuroscience communities. Various sensorimotor frameworks have been effective at controlling physical and biological systems, from flying airplanes to moving artificial limbs, but many techniques rely on accurate models or other concrete domain knowledge to derive useful policies. In systems where such specifications are not available, the task of generating usable models, or of deriving controllers directly from data, often falls within the purview of machine learning algorithms.

Advances in machine learning, including non-parametric Bayesian modeling/inference and reinforcement learning, have increased the range, accuracy, and speed of deriving models and policies from data. However, incorporating modern machine learning techniques into real-world sensorimotor control systems can still be challenging due to the learner's underlying assumptions, the need to model uncertainty, and the scale of such problems. More specifically, many advanced machine learning algorithms rely either on strong distributional assumptions or on random access to all possible data points, neither of which may be guaranteed when the learner is coupled to a specific control algorithm on a physical or biological system. In addition, planners need to consider, and learners need to indicate, uncertainty in the learned model/policy, since some parameters may initially be uncertain but become known over time. Finally, most real-world sensorimotor control situations take place in continuous or high-dimensional environments and require real-time interaction, all of which are problematic for classical learning techniques. To overcome these difficulties, the modeling, learning, and planning components of a fully adaptive decision-making system may need significant modifications.

This workshop will bring together researchers from machine learning, control, and neuroscience who bridge the gap between effective planning and learning systems to produce better sensorimotor control. The workshop will be particularly concerned with the integration of machine learning and control components, and with the challenges of learning from limited data, modeling uncertainty, real-time execution, and the use of real-world data in complex sensorimotor environments. In addition to applications for mechanical systems, recent developments in biological motor control that may transfer to mechanical control systems will also be a focus of the workshop. The workshop's domains of interest include a range of biological and physical systems with multiple sensors, including autonomous robots and vehicles, as well as complex real-world systems, such as neural control, prosthetics, or healthcare, where actions may take place over a longer timescale.


High-level questions to be addressed (from a theoretical and practical perspective) include, but are not limited to:

- How can we scale learning and planning techniques to the domain sizes encountered in real physical and biological systems?

- How can online machine learning be used in high-frequency control of real-world systems?

- How should planners use uncertainty measurements from approximate learned models for better exploration or to produce better plans in general?

- How can successful supervised or unsupervised learning techniques be ported to sensorimotor control problems?

- How can prior knowledge, including expert knowledge, user demonstrations, or distributional assumptions, be incorporated into the learning/planning framework?

- How can safety and risk-sensitivity be incorporated into a planning/learning architecture?

- How do biological systems deal with modeling, planning, and control under uncertainty?

- How can we transfer biological insights to mechanical systems?

- Do engineering insights have a biological explanation?

- What lessons can be learned across disciplines between the control, neuroscience, and reinforcement learning communities, especially in their use of learning models?

Website: http://acl.mit.edu/amlsc
