Reliable Machine Learning in the Wild
Dylan Hadfield-Menell · Adrian Weller · David Duvenaud · Jacob Steinhardt · Percy Liang

Thu Dec 11:00 PM -- 09:30 AM PST @ Room 113
Event URL: https://sites.google.com/site/wildml2016nips/?pli=1

When will a system that has performed well in the past continue to do so in the future? How do we design such systems in the presence of novel and potentially adversarial input distributions? What techniques will let us safely build and deploy autonomous systems at a scale where human monitoring becomes difficult or infeasible? Answering these questions is critical to guaranteeing the safety of emerging high-stakes applications of AI, such as self-driving cars and automated surgical assistants. This workshop will bring together researchers in areas such as human-robot interaction, security, causal inference, and multi-agent systems in order to strengthen the field of reliability engineering for machine learning systems. We are interested in approaches that have the potential to provide assurances of reliability, especially as systems scale in autonomy and complexity. We will focus on four aspects: robustness (to adversaries, distributional shift, model mis-specification, corrupted data); awareness (of when a change has occurred, when the model might be mis-calibrated, etc.); adaptation (to new situations or objectives); and monitoring (allowing humans to meaningfully track the state of the system). Together, these will aid us in designing and deploying reliable machine learning systems.

11:40 PM Opening Remarks (Talk) · Jacob Steinhardt
12:00 AM Rules for Reliable Machine Learning (Invited Talk) · Martin A Zinkevich
12:30 AM What's your ML Test Score? A rubric for ML production systems (Contributed Talk) · D. Sculley
12:45 AM Poster Spotlights I (Spotlight)
01:30 AM Robust Learning and Inference (Invited Talk) · Yishay Mansour
02:00 AM Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition (Invited Talk) · Jennifer Hill
02:30 AM Robust Covariate Shift Classification Using Multiple Feature Views (Contributed Talk) · Angie Liu
02:45 AM Poster Spotlights II (Spotlight)
04:15 AM Invited Talk · Doug Tygar
04:45 AM Adversarial Examples and Adversarial Training (Invited Talk) · Ian Goodfellow
05:15 AM Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning (Contributed Talk) · Octavian Suciu
05:30 AM Poster Spotlights III (Spotlight)
05:45 AM Poster Session
06:30 AM Learning Reliable Objectives (Invited Talk) · Anca Dragan
07:00 AM Building and Validating the AI behind the Next-Generation Aircraft Collision Avoidance System (Invited Talk) · Mykel J Kochenderfer
07:30 AM Online Prediction with Selfish Experts (Contributed Talk) · Okke Schrijvers
07:45 AM TensorFlow Debugger: Debugging Dataflow Graphs for Machine Learning (Contributed Talk) · D. Sculley
08:00 AM What are the challenges to making machine learning reliable in practice? (Panel Discussion)

Author Information

Dylan Hadfield-Menell (UC Berkeley)
Adrian Weller (University of Cambridge)

Adrian Weller is Programme Director for AI at The Alan Turing Institute, the UK national institute for data science and AI, where he is also a Turing Fellow leading work on safe and ethical AI. He is a Senior Research Fellow in Machine Learning at the University of Cambridge and at the Leverhulme Centre for the Future of Intelligence, where he leads the project on Trust and Transparency. His interests span AI, its commercial applications, and helping to ensure beneficial outcomes for society. He serves on several boards, including the Centre for Data Ethics and Innovation. Previously, Adrian held senior roles in finance.

David Duvenaud (University of Toronto)
Jacob Steinhardt (UC Berkeley)
Percy Liang (Stanford University)