Demonstration
Deep Reinforcement Learning for Robotics in DIANNE
Steven Bohez · Elias De Coninck · Sam Leroux · Tim Verbelen
Area 5 + 6 + 7 + 8
While deep RL has seen major progress in recent years, especially in robotics, integrating learning frameworks with physical and simulated systems remains non-trivial. This demo shows a practical application of deep RL in robotics using the DIANNE framework. A KUKA YouBot is tasked with finding and retrieving certain objects (e.g. soda cans) within a confined area, relying on a combination of (high-dimensional) sensor inputs. Sensors are attached both to the robot itself and at fixed positions in the environment. For efficiency (and safety), initial training and exploration is performed in a simulated environment using V-REP, in which a virtual YouBot gathers experience in order to learn and improve a deep neural network policy. Once sufficiently trained, this policy is then transferred to a physical YouBot and fine-tuned to the real setup. To assist the physical YouBot in evaluating the deep policy, it is equipped with an NVIDIA Jetson TX1 embedded GPU. Under the hood, this setup is automated using the DIANNE framework (http://dianne.intec.ugent.be, http://hdl.handle.net/1854/LU-8080319), which on the one hand facilitates designing and training deep learning models, and on the other hand easily integrates with e.g. ROS and V-REP to set up environments for reinforcement learning. DIANNE can automatically collect experience from RL agents, use that experience to train RL policies and models, and finally update the agent to the newest policy parameters.
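The collect-train-update cycle described above can be sketched in miniature. The snippet below is a hypothetical stand-in, not DIANNE code: a toy one-dimensional "retrieval" environment, an experience replay buffer, and a tabular Q-learning update play the roles of the simulated YouBot environment, the collected experience, and the deep policy training, respectively. All names (`ToyEnv`, `ReplayBuffer`, `train`) are illustrative assumptions.

```python
import random

class ToyEnv:
    """1-D grid: the agent must reach position size-1 (the 'soda can')."""
    def __init__(self, size=5):
        self.size = size
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: 0 = left, 1 = right
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.size - 1
        return self.pos, (1.0 if done else -0.01), done

class ReplayBuffer:
    """Stores (s, a, r, s', done) transitions collected by the agent."""
    def __init__(self, capacity=1000):
        self.data = []
        self.capacity = capacity

    def add(self, transition):
        self.data.append(transition)
        if len(self.data) > self.capacity:
            self.data.pop(0)

    def sample(self, n):
        return random.sample(self.data, min(n, len(self.data)))

def train(q, buffer, alpha=0.5, gamma=0.9, batch=32):
    """Update the (tabular) policy parameters from sampled experience."""
    for s, a, r, s2, done in buffer.sample(batch):
        target = r if done else r + gamma * max(q[s2])
        q[s][a] += alpha * (target - q[s][a])

random.seed(0)
env = ToyEnv()
q = [[0.0, 0.0] for _ in range(env.size)]  # the 'policy parameters'
buffer = ReplayBuffer()

for episode in range(200):
    s, done = env.reset(), False
    for _ in range(100):  # cap episode length
        # epsilon-greedy exploration while collecting experience
        a = random.randrange(2) if random.random() < 0.3 else max((0, 1), key=lambda i: q[s][i])
        s2, r, done = env.step(a)
        buffer.add((s, a, r, s2, done))
        s = s2
        if done:
            break
    train(q, buffer)  # agent is then updated to the newest parameters
```

In the actual demo the same cycle runs with a deep neural network policy instead of a table, and the environment is first V-REP and later the physical YouBot.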