Neural Dynamic Policies for End-to-End Sensorimotor Learning
Shikhar Bahl, Mustafa Mukadam, Abhinav Gupta, Deepak Pathak
Spotlight presentation: Orals & Spotlights Track 31: Reinforcement Learning
on 2020-12-10T07:10:00-08:00 - 2020-12-10T07:20:00-08:00
Abstract: The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces such as torque, joint angle, or end-effector position. This forces the agent to make decisions at each point in training, and hence limits scalability to continuous, high-dimensional, and long-horizon tasks. In contrast, research in classical robotics has long exploited dynamical systems as a policy representation to learn robot behaviors from demonstrations. These techniques, however, lack the flexibility and generalizability provided by deep learning or deep reinforcement learning and have remained under-explored in such settings. In this work, we begin to close this gap and embed the structure of dynamical systems into deep neural network-based policies by reparameterizing action spaces with differential equations. We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space, as opposed to prior policy learning methods where actions represent the raw control space. The embedded structure allows us to perform end-to-end policy learning under both reinforcement and imitation learning setups. We show that NDPs achieve better or comparable performance to state-of-the-art approaches on many robotic control tasks using both reward-based training and demonstrations. Project video and code are available at: https://shikharbahl.github.io/neural-dynamic-policies/.
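
The abstract's core idea, a network that predicts the parameters of a differential equation which is then unrolled into a trajectory of actions, can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch example assuming a second-order dynamic movement primitive (DMP) as the dynamical system; the class name NDPSketch and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a policy that outputs parameters of a
# dynamical system (a dynamic movement primitive) instead of raw actions, then
# integrates that system to produce a trajectory. All names and hyperparameters
# here are illustrative assumptions.
import torch
import torch.nn as nn

class NDPSketch(nn.Module):
    def __init__(self, state_dim, act_dim, n_basis=10, T=30, dt=0.01,
                 alpha=25.0, beta=6.25, ax=1.0):
        super().__init__()
        self.act_dim, self.n_basis, self.T, self.dt = act_dim, n_basis, T, dt
        self.alpha, self.beta, self.ax = alpha, beta, ax
        # The network predicts the DMP goal g and forcing-function weights w.
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim * (n_basis + 1)),
        )
        # Fixed radial-basis centers/widths over the canonical phase x in (0, 1].
        self.register_buffer(
            "centers", torch.exp(-ax * torch.linspace(0, 1, n_basis)))
        self.register_buffer(
            "widths", torch.full((n_basis,), float(n_basis) ** 1.5))

    def forward(self, state, y0):
        """Map a state to a trajectory of actions by unrolling the DMP.
        state: (B, state_dim); y0: (B, act_dim) current robot configuration."""
        B = state.shape[0]
        params = self.net(state).view(B, self.act_dim, self.n_basis + 1)
        w, g = params[..., :-1], params[..., -1]       # basis weights, goal
        y, yd = y0, torch.zeros_like(y0)               # position, velocity
        x = torch.ones(B, 1, device=state.device)      # canonical phase
        traj = []
        for _ in range(self.T):
            # Radial-basis forcing term, gated by phase x (vanishes as x -> 0).
            psi = torch.exp(-self.widths * (x - self.centers) ** 2)  # (B, n_basis)
            f = (psi.unsqueeze(1) * w).sum(-1) / (psi.sum(-1, keepdim=True) + 1e-8)
            f = f * x                                  # (B, act_dim)
            # Second-order attractor dynamics toward goal g, shaped by f.
            ydd = self.alpha * (self.beta * (g - y) - yd) + f
            yd = yd + ydd * self.dt
            y = y + yd * self.dt
            x = x + (-self.ax * x) * self.dt           # canonical system decay
            traj.append(y)
        return torch.stack(traj, dim=1)                # (B, T, act_dim)
```

Because the Euler integration above is differentiable, an imitation or reinforcement learning loss on the resulting trajectory can be backpropagated into the network, which is what makes end-to-end policy learning possible in this trajectory-space representation.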