Poster Session
Workshop: Acting and Interacting in the Real World: Challenges in Robot Learning
Spotlights:
Deep Object-Centric Representations for Generalizable Robot Learning <Coline Devin>
Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping
Learning Deep Composable Maximum-Entropy Policies for Real-World Robotic Manipulation
SE3-Pose-Nets: Structured Deep Dynamics Models for Visuomotor Control
Learning Flexible and Reusable Locomotion Primitives for a Microrobot
Policy Search using Robust Bayesian Optimization
Learning Robotic Assembly from CAD
Learning Robot Skill Embeddings
Self-Supervised Visual Planning with Temporal Skip Connections
Overcoming Exploration in Reinforcement Learning with Demonstrations
Deep Reinforcement Learning for Vision-Based Robotic Grasping
Soft Value Iteration Networks for Planetary Rover Path Planning

Posters:
One-Shot Visual Imitation Learning via Meta-Learning
One-Shot Reinforcement Learning for Robot Navigation with Interactive Replay <Jake Bruce; Niko Suenderhauf; Piotr Mirowski; Raia Hadsell; Michael Milford>
Bayesian Active Edge Evaluation on Expensive Graphs <Sanjiban Choudhury>
Sim-to-Real Transfer of Accurate Grasping with Eye-In-Hand Observations and Continuous Control <Mengyuan Yan; Iuri Frosio*; Stephen Tyree; Jan Kautz>
Learning Robotic Manipulation of Granular Media <Connor Schenck*; Jonathan Tompson; Dieter Fox; Sergey Levine>
End-to-End Learning of Semantic Grasping <Eric Jang>
Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation
Efficient Robot Task Learning and Transfer via Informed Search in Movement Parameter Space <Nemanja Rakicevic*; Petar Kormushev>
Metrics for Deep Generative Models based on Learned Skills
Unsupervised Hierarchical Video Prediction <Nevan Wichers*; Dumitru Erhan; Honglak Lee>
Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation
Domain Randomization and Generative Models for Robotic Grasping
Learning to Grasp from Vision and Touch
Neural Network Dynamics Models for Control of Under-actuated Legged Millirobots
On the Importance of Uncertainty for Control with Deep Dynamics Models
Increasing Sample-Efficiency via Online Meta-Learning
Stochastic Variational Video Prediction

(Author information copied from CMT; please contact the workshop organisers at nips17robotlearning@gmail.com for any changes.)