Predictive-State Decoders: Encoding the Future into Recurrent Networks
Arun Venkatraman · Nicholas Rhinehart · Wen Sun · Lerrel Pinto · Martial Hebert · Byron Boots · Kris Kitani · J. Bagnell

Tue Dec 05 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #112

Recurrent neural networks (RNNs) are a vital modeling technique that relies on internal states learned indirectly by optimization of a supervised, unsupervised, or reinforcement training loss. RNNs are used to model dynamic processes that are characterized by underlying latent states whose form is often unknown, precluding their analytic representation inside an RNN. In the Predictive-State Representation (PSR) literature, latent state processes are modeled by an internal state representation that directly models the distribution of future observations, and most recent work in this area has relied on explicitly representing and targeting sufficient statistics of this probability distribution. We seek to combine the advantages of RNNs and PSRs by augmenting existing state-of-the-art recurrent neural networks with Predictive-State Decoders (PSDs), which add supervision to the network's internal state representation to target predicting future observations. PSDs are simple to implement and easily incorporated into existing training pipelines via additional loss regularization. We demonstrate the effectiveness of PSDs with experimental results in three different domains: probabilistic filtering, imitation learning, and reinforcement learning. In each, our method improves statistical performance of state-of-the-art recurrent baselines and does so with fewer iterations and less data.
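To make the idea concrete, the sketch below shows the shape of the PSD objective as the abstract describes it: a decoder maps the RNN's internal state at each step to the next k observations, and the resulting prediction error is added to the primary training loss as a regularizer. This is a minimal NumPy illustration, not the authors' implementation; the dimensions, the linear decoder F, and the weight lam are hypothetical choices for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only).
obs_dim, hidden_dim, horizon = 3, 8, 2  # horizon = k future observations

# Simple RNN cell parameters.
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_x = rng.normal(scale=0.1, size=(hidden_dim, obs_dim))
# Predictive-State Decoder: maps internal state h_t to the next k observations.
F = rng.normal(scale=0.1, size=(horizon * obs_dim, hidden_dim))

def rnn_states(observations):
    """Roll the RNN forward, returning the internal state after each step."""
    h = np.zeros(hidden_dim)
    states = []
    for x in observations:
        h = np.tanh(W_h @ h + W_x @ x)
        states.append(h)
    return states

def psd_loss(observations):
    """Mean squared error between decoded states and the true next-k observations."""
    states = rnn_states(observations)
    total, count = 0.0, 0
    for t, h in enumerate(states):
        if t + horizon >= len(observations):  # need k future observations
            break
        future = np.concatenate(observations[t + 1 : t + 1 + horizon])
        total += np.mean((F @ h - future) ** 2)
        count += 1
    return total / count

# Training objective: primary task loss plus the PSD regularizer.
lam = 0.5       # regularization weight (a tunable hyperparameter)
obs = [rng.normal(size=obs_dim) for _ in range(6)]
task_loss = 1.0  # placeholder for the supervised / imitation / RL loss
total_loss = task_loss + lam * psd_loss(obs)
```

In practice both the RNN parameters and the decoder would be trained jointly by backpropagating through this combined loss; the point of the sketch is only that the PSD term is an additive regularizer on the existing objective, which is why it drops into existing training pipelines easily.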

Author Information

Arun Venkatraman (Carnegie Mellon University)
Nick Rhinehart (Carnegie Mellon University)

Nick Rhinehart is a Postdoctoral Scholar in the Electrical Engineering and Computer Science Department at the University of California, Berkeley, where he works with Sergey Levine. His work focuses on fundamental and applied research in machine learning and computer vision for behavioral forecasting and control in complex environments, with an emphasis on imitation learning, reinforcement learning, and deep learning methods. Applications of his work include autonomous vehicles and first-person video. He received a Ph.D. in Robotics from Carnegie Mellon University with Kris Kitani, and B.S. and B.A. degrees in Engineering and Computer Science from Swarthmore College. Nick's work has been honored with a Best Paper Award at the ICML 2019 Workshop on AI for Autonomous Driving and a Best Paper Honorable Mention Award at ICCV 2017. His work has been published at a variety of top-tier venues in machine learning, computer vision, and robotics, including AAMAS, CoRL, CVPR, ECCV, ICCV, ICLR, ICML, ICRA, NeurIPS, and PAMI. Nick co-organized the workshop on Machine Learning in Autonomous Driving at NeurIPS 2019, the workshop on Imitation, Intent, and Interaction at ICML 2019, and the Tutorial on Inverse RL for Computer Vision at CVPR 2018.

Wen Sun (Carnegie Mellon University)
Lerrel Pinto
Martial Hebert (Carnegie Mellon University)
Byron Boots (Georgia Tech / Google Brain)
Kris Kitani (Carnegie Mellon University)
J. Bagnell (Carnegie Mellon University)