Workshop
NIPS Workshop on Machine Learning for Intelligent Transportation Systems 2018
Li Erran Li · Anca Dragan · Juan Carlos Niebles · Silvio Savarese

Sat Dec 08 05:00 AM -- 03:30 PM (PST) @ Room 514
Event URL: https://sites.google.com/site/nips2018mlits/home

Our transportation systems are poised for a transformation as we make progress on autonomous vehicles, vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) communication infrastructures, and smart road infrastructures (like smart traffic lights). But many challenges stand in the way of this transformation. For example, how do we make perception accurate and robust enough to accomplish safe autonomous driving? How do we generate policies that equip autonomous cars with adaptive human negotiation skills when merging, overtaking, or yielding? How do we decide when a system is safe enough to deploy? And how do we optimize efficiency through intelligent traffic management and control of fleets?

To meet these requirements in safety, efficiency, control, and capacity, the systems must be automated with intelligent decision making, and machine learning will be an essential component of that. Machine learning has made rapid progress in the self-driving domain (e.g., in real-time perception and prediction of traffic scenes); it has started to be applied on ride-sharing platforms such as Uber (e.g., demand forecasting) and by crowd-sourced video scene analysis companies such as Nexar (e.g., understanding and avoiding accidents). But to address the challenges arising in our future transportation system, we need to consider the transportation system as a whole, from prediction to behavior to infrastructure, rather than solving problems in isolation.

The goal of this workshop is to bring together researchers and practitioners from all areas of intelligent transportation systems to address core challenges with machine learning. These challenges include, but are not limited to:
pedestrian detection, intent recognition, and negotiation
coordination with human-driven vehicles
machine learning for object tracking
unsupervised representation learning for autonomous driving
deep reinforcement learning for learning driving policies
cross-modal and simulator-to-real-world transfer learning
scene classification, real-time perception and prediction of traffic scenes
uncertainty propagation in deep neural networks
efficient inference with deep neural networks
predictive modeling of risk and accidents through telematics
modeling, simulation, and forecasting of demand and mobility patterns in large-scale urban transportation systems
machine learning approaches for control and coordination of traffic leveraging V2V and V2X infrastructures

The workshop will include invited speakers, panels, presentations of accepted papers, and posters. We invite papers in the form of short, long, and position papers addressing the core challenges mentioned above. We encourage researchers and practitioners working on self-driving cars, transportation systems, and ride-sharing platforms to participate. Since this is a topic of broad and current interest, we expect at least 150 participants, including leading university researchers and practitioners from auto companies and ride-sharing companies.

This will be the 3rd NIPS workshop in this series. Previous workshops have been very successful and have attracted large numbers of participants from both academia and industry.

Sat 5:45 a.m. - 6:00 a.m.
Opening Remarks
Li Erran Li, Anca Dragan
Sat 6:00 a.m. - 6:30 a.m.

Title: Prediction and Planning Under Uncertainty: The Case of Autonomous Driving

Abstract: In order to achieve a well-specified goal, an agent may use two distinct approaches: trial and error, or careful planning. In the first case, the agent has to fail multiple times before learning a task (e.g. playing a card game); in the second, we leverage knowledge of the environment to avoid any fatal failure (e.g. a vehicle collision).

Autonomous driving relies on accurate planning, which requires a good model of the world that also considers other vehicles' future responses to our own actions. Effectively learning to predict such responses, which are stochastic by nature, is the key to successful planning under uncertainty.
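
To make the idea concrete, here is a minimal sketch of sampling-based planning with a learned stochastic forward model. It is an illustration of the general technique only, not the speaker's method: `forward_model` and `cost` are hypothetical stand-ins for a learned latent forward model of the scene and a driving cost, and candidate action sequences are ranked by their expected cost over sampled rollouts.

```python
import numpy as np

# Minimal sketch of planning under uncertainty with a stochastic forward model.
# `forward_model` and `cost` are hypothetical stand-ins, not a real driving model.

def forward_model(state, action, rng):
    # Stand-in for a learned stochastic model: the action moves the state,
    # and noise represents the uncertain response of surrounding vehicles.
    return state + action + rng.normal(scale=0.1, size=state.shape)

def cost(state):
    # Stand-in cost: distance from a goal located at the origin.
    return float(np.linalg.norm(state))

def expected_cost(state, action_seq, n_samples=32):
    """Average rollout cost of an action sequence under the stochastic model."""
    rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_samples):
        s = state.copy()
        for a in action_seq:
            s = forward_model(s, a, rng)
            total += cost(s)
    return total / n_samples

def plan(state, candidate_action_seqs):
    """Pick the candidate action sequence with the lowest expected cost."""
    return min(candidate_action_seqs, key=lambda seq: expected_cost(state, seq))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    state = np.array([2.0, 1.0])
    candidates = [rng.uniform(-1.0, 1.0, size=(5, 2)) for _ in range(64)]
    best = plan(state, candidates)
    print("expected cost of best plan:", expected_cost(state, best))
```

In practice the candidate sequences would come from an optimizer rather than random sampling, but the structure is the same: sample futures from the model, score them, and pick the best.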

Bio: Alfredo Canziani is a Post-Doctoral Deep Learning Research Scientist and Lecturer at NYU Courant Institute of Mathematical Sciences, under the supervision of professors KyungHyun Cho and Yann LeCun. His research mainly focuses on Machine Learning for Autonomous Driving. He has been exploring deep policy networks, action uncertainty estimation and failure detection, and long-term planning based on latent forward models, which nicely deal with the stochasticity and multimodality of the surrounding environment. Alfredo obtained both his Bachelor's (2009) and Master's (2011) degrees in EEng cum laude at Trieste University, his MSc (2012) at Cranfield University, and his PhD (2017) at Purdue University. In his spare time, Alfredo is a professional musician, dancer, and cook, and keeps expanding his free online video course on Deep Learning, Torch, and PyTorch.

Alfredo Canziani
Sat 6:30 a.m. - 7:00 a.m.

Bio: James A. (Drew) Bagnell is Chief Technology Officer of Aurora (aurora.tech), where he works with an amazing team to develop and deliver self-driving technology safely, quickly, and broadly. Dr. Bagnell has worked for 19 years at the intersection of machine learning and robotics, with expertise in self-driving cars, imitation and reinforcement learning, planning, and computational perception. Aurora was founded in 2017 to enable autonomous driving solutions that will make roads safer, improve lives, revitalize cities, and expand transportation access.

Dr. Bagnell is also an adjunct professor at Carnegie Mellon University’s Robotics Institute and Machine Learning Department. His interests in artificial intelligence range from algorithmic and basic theoretical development to delivering fielded learning-based systems. Bagnell and his research group have received over a dozen research awards for publications in both the robotics and machine learning communities including best paper awards at ICML, RSS, and ICRA.

He received the 2016 Ryan Award, Carnegie Mellon University’s award for Meritorious Teaching, and served as the founding director of the Robotics Institute Summer Scholars program, a summer research experience that has enabled hundreds of undergraduates throughout the world to leap into robotics research.

James Bagnell
Sat 7:00 a.m. - 7:30 a.m.

Title: On the generalization of autonomous driving technologies

Abstract: Most L4 autonomous driving companies are working on solutions for one or two cities. Generalizing from one city to ten cities is a very challenging problem, in particular when the ten cities are very different and even span different countries. This demands much greater generalization ability from the algorithms, including the deep learning algorithms. In this talk, I will discuss the challenge of generalization in autonomous driving and the interesting problems we encountered when testing the system in different countries.

Yimeng Zhang
Sat 7:30 a.m. - 8:00 a.m.
Coffee break (morning)
Sat 8:00 a.m. - 8:30 a.m.
Invited Talk: Nathaniel Fairfield, Waymo
Nathaniel Fairfield
Sat 8:30 a.m. - 9:00 a.m.

Title: Guardian Research Challenges and Opportunities

Abstract: This talk will describe research underway at Toyota Research Institute and its partner universities to create the Toyota Guardian system for increasing the safety of human driving by exploiting advanced navigation, perception, prediction, and planning capabilities that are becoming available. The objective of Guardian is to create a highly automated driving system that can act as a safety net for the human driver to help prevent an accident, with three primary aims: (1) stay on the road; (2) don't hit things; (3) don't get hit. We will discuss some of the research challenges and opportunities for realizing such a system, spanning a wide range of topics in computer vision, machine learning, and mobile robotics.

Bio: Dr. John J. Leonard is Samuel C. Collins Professor in the MIT Department of Mechanical Engineering and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His research addresses the problems of navigation and mapping for autonomous mobile robots and underwater vehicles. He holds the degrees of B.S.E.E. in Electrical Engineering and Science from the University of Pennsylvania (1987) and D.Phil. in Engineering Science from the University of Oxford (1994). He was team leader for MIT's DARPA Urban Challenge team, which was one of eleven teams to qualify for the Urban Challenge final event and one of six teams to complete the race. He is the recipient of an NSF Career Award (1998) and the King-Sun Fu Memorial Best Transactions on Robotics Paper Award (2006). He is an IEEE Fellow (2014). Professor Leonard has recently been on partial leave from MIT serving as Vice President of Automated Driving Research at the Toyota Research Institute (TRI).

John Leonard
Sat 9:00 a.m. - 10:30 a.m.
Lunch break
Sat 10:30 a.m. - 11:00 a.m.

Title: Altruistic Autonomy: Beating Congestion on Shared Roads

Abstract: The emergence of autonomous cars on public roads has become a reality. Autonomous vehicles on roads shared with human-driven cars create a variety of challenges, including influencing traffic flow when the transportation network is under heterogeneous use, i.e., when cars of differing levels of autonomy co-exist on the same road. In this talk, we will address some of the challenges of mixed-autonomy traffic networks by leveraging the power of autonomous vehicles. Specifically, we will focus on two main approaches that use autonomous cars to positively influence congestion. First, we discuss how local interactions between vehicles can affect the global behavior of a traffic network. We examine a high-level queuing framework to study the capacity of a mixed-autonomy transportation network, and then outline a lower-level control framework that leverages local interactions between cars to achieve more efficient traffic flow via intelligent reordering of the cars. We provide theoretical bounds on the capacity that can be achieved by the network for a given autonomy level. Second, we formalize the notion of altruistic autonomy: autonomous vehicles that are incentivized to take longer routes in order to alleviate congestion on mixed-autonomy roads. We then study the effects of altruistic autonomy on roads shared between human drivers and autonomous vehicles. We develop a formal model of road congestion on shared roads based on the fundamental diagram of traffic, and discuss algorithms that compute optimal equilibria robust to additional unforeseen demand and that plan optimal routings when users have varying degrees of altruism. We find that even with arbitrarily small altruism, total latency can be unboundedly lower than without altruism.
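
For readers unfamiliar with the fundamental diagram of traffic mentioned above, a common triangular form relates flow to vehicle density as shown below; this is a generic textbook parameterization, and the talk's formal congestion model may differ.

```latex
% Triangular fundamental diagram: flow q as a function of density \rho,
% with free-flow speed v_f, congestion wave speed w, critical density \rho_c,
% and jam density \rho_{\max}. Generic form, not necessarily the talk's model.
q(\rho) =
\begin{cases}
  v_f \, \rho & \text{if } 0 \le \rho \le \rho_c \quad \text{(free flow)} \\
  w \, (\rho_{\max} - \rho) & \text{if } \rho_c < \rho \le \rho_{\max} \quad \text{(congested)}
\end{cases}
```

The induced speed is $v(\rho) = q(\rho)/\rho$, from which per-road travel times (latencies) can be derived.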

Dorsa Sadigh
Sat 11:00 a.m. - 11:30 a.m.

Title: On safe and efficient human-robot vehicle interactions via CVAE-based intent modeling and reachability-based safety assurance

Abstract: In this talk I will present a decision-making and control stack for human-robot vehicle interactions. I will first discuss a data-driven approach for learning interaction dynamics between robot-driven and human-driven vehicles, based on recent advances in the theory of conditional variational autoencoders (CVAEs). I will then discuss how to incorporate such a learned interaction model into a real-time, intent-aware decision-making framework, with an emphasis on minimally-interventional strategies rooted in backward reachability analysis for ensuring safety even when other cars defy the robot's predictions. Experiments on a full-scale steer-by-wire platform entailing traffic weaving maneuvers demonstrate how the proposed autonomy stack enables more efficient and anticipative autonomous driving behaviors, while avoiding collisions even when the other cars defy the robot’s predictions and take dangerous actions.
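
As background for the CVAE-based interaction model mentioned above, the sketch below shows the generic structure of a conditional variational autoencoder for trajectory prediction: an encoder infers a latent intent variable from the observed history and the future trajectory, and a decoder predicts the future conditioned on the history and a latent sample. It is a simplified illustration of the general technique, not the architecture from the talk; the class name, layer sizes, and dimensions are invented for the example.

```python
import torch
import torch.nn as nn

# Generic CVAE for trajectory prediction (illustration only, not the talk's model).
# Trajectories are flattened into fixed-size vectors for simplicity.

class TrajectoryCVAE(nn.Module):
    def __init__(self, hist_dim=20, fut_dim=30, latent_dim=8, hidden=64):
        super().__init__()
        # Recognition network q(z | history, future): outputs mean and log-variance.
        self.encoder = nn.Sequential(
            nn.Linear(hist_dim + fut_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),
        )
        # Generative network p(future | history, z).
        self.decoder = nn.Sequential(
            nn.Linear(hist_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, fut_dim),
        )
        self.latent_dim = latent_dim

    def forward(self, history, future):
        mu, logvar = self.encoder(torch.cat([history, future], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        recon = self.decoder(torch.cat([history, z], dim=-1))
        return recon, mu, logvar

    def sample(self, history, n_samples=10):
        """Draw diverse future predictions by sampling the latent intent from the prior."""
        z = torch.randn(n_samples, history.shape[0], self.latent_dim)
        hist = history.unsqueeze(0).expand(n_samples, -1, -1)
        return self.decoder(torch.cat([hist, z], dim=-1))

def cvae_loss(recon, future, mu, logvar):
    # Reconstruction error plus KL divergence to the standard-normal prior.
    recon_err = ((recon - future) ** 2).sum(dim=-1).mean()
    kl = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)).mean()
    return recon_err + kl
```

At decision time, `sample` yields a set of plausible futures for the other vehicle, over which a downstream planner (or, as in the talk, a reachability-based safety layer) can reason.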

Marco Pavone
Sat 11:30 a.m. - 12:00 p.m.

Deep Object Centric Policies for Autonomous Driving, Dequan Wang (presenter), Coline Devin, Qi-Zhi Cai, Fisher Yu, Trevor Darrell

Deep Imitative Models for Flexible Inference, Planning, and Control, Nicholas Rhinehart (presenter), Rowan McAllister, Sergey Levine

Learning to Drive in a Day, Alex Kendall, Jeffrey Hawke, David Janz, Przemyslaw Mazur, Daniele Reda, John-Mark Allen, Vinh-Dieu Lam, Alex Bewley, Amar Shah (presenter)

Nick Rhinehart, Amar Shah
Sat 12:00 p.m. - 12:30 p.m.
Coffee break (afternoon)
Sat 12:30 p.m. - 1:00 p.m.

Title: Driving Autonomy - Machine Learning for Intelligent Transportation Systems

Abstract: Intelligent transportation systems, and autonomous driving in particular, have captured the public imagination. Most of us are excited about the art of the possible. Machine learning clearly has a role to play. In this talk I will argue that a systems view of autonomous driving affords the machine learning community particular opportunities - and poses some interesting challenges - beyond an out-of-the-box deployment of strategies and models developed in related fields.

Bio: Prof. Ingmar Posner leads the Applied Artificial Intelligence Lab (A2I) at Oxford University. He also serves as Deputy Director of the Oxford Robotics Institute, which he co-founded in 2016. Ingmar has a significant track record in designing machine learning approaches (shallow and deep) which address core challenges in AI and machine learning. Ingmar's goal is to enable robots to robustly and effectively operate in complex, real-world environments. His research is guided by a vision to create machines which constantly improve through experience. In doing so Ingmar's work explores a number of intellectual challenges at the heart of robot learning, such as machine introspection in perception and decision making, data efficient learning from demonstration, transfer learning and the learning of complex tasks via a set of less complex ones. All the while Ingmar’s intellectual curiosity remains grounded in real-world robotics applications such as autonomous driving, logistics, manipulation and space exploration. In 2014 Ingmar co-founded Oxbotica, a leading provider of mobile autonomy software solutions.

Ingmar Posner
Sat 1:00 p.m. - 1:30 p.m.

Title: Sensing and simulating the real world for next generation autonomous mobility

Abstract: Zoox is developing the first ground-up, fully autonomous vehicle fleet and the supporting ecosystem required to bring this technology to market. Sitting at the intersection of robotics, machine learning, and design, Zoox aims to provide the next generation of mobility-as-a-service in urban environments. A key part of our design is safety: in addition to providing a great user experience, we aim to design robots that are significantly safer than human drivers. To ensure this, it is critical to maintain accurate perception of objects in the world that our robots need to react to. To that end, Zoox has taken a holistic approach to its sensor choice and placement, computational power, and algorithms. In the first half of our talk we will describe some of the sensors and algorithms we use to ensure that the robot is able to perceive all objects that it needs to react to, and that it is able to do so with sufficiently low latency.

In addition to developing the necessary technology, it is imperative to be able to validate it. Zoox is developing an advanced 3D simulation framework to help verify that our vehicle is safe while also being able to complete its missions successfully. This framework provides the foundation for generating highly realistic simulated data which is used as ground truth for testing algorithms as well as to train machine learning algorithms in cases when sufficient real-world data is not readily available. The second part of the talk will provide an overview of this framework and, in particular, discuss how we quantify the fidelity of the various simulated sensor types used by our robot in its perception stack.
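
As one concrete, hypothetical illustration of quantifying simulated-sensor fidelity, the sketch below compares the distribution of simulated lidar ranges against real ones with a histogram-based Jensen-Shannon divergence. The metric choice and the gamma-distributed stand-in data are assumptions made for this example, not Zoox's actual methodology.

```python
import numpy as np

# Hypothetical sensor-fidelity check: compare real vs. simulated lidar range
# distributions with a Jensen-Shannon divergence over histograms (lower = closer).

def js_divergence(real_ranges, sim_ranges, bins=64, max_range=120.0):
    edges = np.linspace(0.0, max_range, bins + 1)
    p, _ = np.histogram(real_ranges, bins=edges)
    q, _ = np.histogram(sim_ranges, bins=edges)
    # Normalize to probability mass and add a small epsilon to avoid log(0).
    p = p / p.sum() + 1e-12
    q = q / q.sum() + 1e-12
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.gamma(shape=3.0, scale=10.0, size=10_000)  # stand-in real lidar ranges (m)
    sim = rng.gamma(shape=3.2, scale=9.5, size=10_000)    # stand-in simulated ranges (m)
    print(f"JS divergence between real and simulated ranges: {js_divergence(real, sim):.4f}")
```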

Bios: Sarah is the Director of Vision Detection and Tracking at Zoox, where her team focuses on perception for cameras, including detecting and tracking objects of interest reliably and in real time. Sarah has been at Zoox for over three years, and before that she gained almost a decade of experience at NVIDIA across multiple roles. Amongst her many achievements at NVIDIA, she contributed to the implementation of novel real-time simulation and rendering algorithms for video games, managed a team working on profiling and optimizing code for high-performance computing and supercomputers, and served as a technical lead for the computer vision team focusing on self-driving technology.

Ekaterina is a senior research engineer at Zoox. Her goal is to quantify how realistic simulated sensors need to be to enable end-to-end testing of the software stack and to create synthetic training data that helps improve perception models. Before Zoox, she was a postdoc with Tony Jebara and Rafael Yuste at Columbia University, where she developed large-scale graphical models to quantify neural activity in the mouse visual cortex. Ekaterina obtained her PhD with Martial Hebert and Fernando De la Torre at Carnegie Mellon University, where her thesis work was on action classification and segmentation in videos.

Ekaterina Taralova, Sarah Tariq
Sat 1:30 p.m. - 2:15 p.m.

Discussion on key challenges and approaches of AI for autonomous driving

Yimeng Zhang, Alfredo Canziani, Marco Pavone, Dorsa Sadigh, Kurt Keutzer
Sat 2:15 p.m. - 3:00 p.m.

Deep Reinforcement Learning for Intelligent Transportation Systems, Xiao-Yang Liu, Zihan Ding (presenter), Sem Borst, Anwar Walid

GANtruth – an unpaired image-to-image translation method for driving scenarios, Sebastian Bujwid (presenter), Miquel Martí Rabadan, Hossein Azizpour, Alessandro Pieropan

Controlling the Crowd: Inducing Efficient Equilibria in Multi-Agent Systems
David Mguni (presenter), Joel Jennings, Sergio Valcarcel Macua, Sofia Ceppi, Enrique Munoz de Cote

Distributed Fleet Control with Maximum Entropy Deep Reinforcement Learning
Takuma Oda (presenter), Yulia Tachibana (presenter)

Robust Auto-parking: Reinforcement Learning based Real-time Planning Approach with Domain Template
Yuzheng Zhuang (presenter), Qiang Gu, Bin Wang, Jun Luo, Hongbo Zhang, Wulong Liu

Approximate Robust Control of Uncertain Dynamical Systems
Edouard Leurent (presenter), Yann Blanco, Denis Efimov, Odalric-Ambrym Maillard

Towards Comprehensive Maneuver Decisions for Lane Change Using Reinforcement Learning
Chen Chen, Jun Qian, Hengshuai Yao, Jun Luo, Hongbo Zhang, Wulong Liu (presenter)

Investigating performance of neural networks and gradient boosting models approximating microscopic traffic simulations in traffic optimization tasks
Paweł Gora (presenter), Maciej Brzeski, Marcin Możejko, Arkadiusz Klemenko, Adrian Kochański

Taxi Demand-Supply Forecasting: Impact of Spatial Partitioning on the Performance of Neural Networks
Neema Davis (presenter), Gaurav Raina, Krishna Jagannathan

Predicting Motion of Vulnerable Road Users using High-Definition Maps and Efficient ConvNets
Fang-Chieh Chou (presenter), Tsung-Han Lin, Henggang Cui, Vladan Radosavljevic, Thi Nguyen, Tzu-Kuo Huang, Matthew Niedoba, Jeff Schneider, Nemanja Djuric (presenter)

Towards Practical Hierarchical Reinforcement Learning for Multi-lane Autonomous Driving
Masoud S. Nosrati, Elmira Amirloo Abolfathi (presenter), Mohammed Elmahgiubi, Peyman Yadmellat, Jun Luo, Yunfei Zhang, Hengshuai Yao, Hongbo Zhang, Anas Jamil

Risk-averse Behavior Planning for Autonomous Driving under Uncertainty
Mohammad Naghshvar (presenter), Ahmed K. Sadek, Auke J. Wiggers

Zihan Ding, David Mguni, Yuzheng Zhuang, Edouard Leurent, Takuma Oda, Yu Tachibana, Paweł Gora, Neema Davis, Nemanja Djuric, Fang-Chieh Chou, Elmira Amirloo

Author Information

Li Erran Li (Pony.ai)

Li Erran Li is the head of machine learning at Scale and an adjunct professor at Columbia University. Previously, he was chief scientist at Pony.ai. Before that, he was with the perception team at Uber ATG and machine learning platform team at Uber where he worked on deep learning for autonomous driving, led the machine learning platform team technically, and drove strategy for company-wide artificial intelligence initiatives. He started his career at Bell Labs. Li’s current research interests are machine learning, computer vision, learning-based robotics, and their application to autonomous driving. He has a PhD from the computer science department at Cornell University. He’s an ACM Fellow and IEEE Fellow.

Anca Dragan (UC Berkeley)
Juan Carlos Niebles (Stanford University)
Silvio Savarese (Stanford University)
