Poster spotlights
Hiroshi Kuwajima · Masayuki Tanaka · Qingkai Liang · Matthieu Komorowski · Fanyu Que · Thalita F Drumond · Aniruddh Raghu · Leo Anthony Celi · Christina Göpfert · Andrew Ross · Sarah Tan · Rich Caruana · Yin Lou · Devinder Kumar · Graham Taylor · Forough Poursabzi-Sangdeh · Jennifer Wortman Vaughan · Hanna Wallach

[1] "Network Analysis for Explanation"
[2] "Using prototypes to improve convolutional networks interpretability"
[3] "Accelerated Primal-Dual Policy Optimization for Safe Reinforcement Learning"
[4] "Deep Reinforcement Learning for Sepsis Treatment"
[5] "Analyzing Feature Relevance for Linear Reject Option SVM using Relevance Intervals"
[6] "The Neural LASSO: Local Linear Sparsity for Interpretable Explanations"
[7] "Detecting Bias in Black-Box Models Using Transparent Model Distillation"
[8] "Data masking for privacy-sensitive learning"
[9] "CLEAR-DR: Interpretable Computer Aided Diagnosis of Diabetic Retinopathy"
[10] "Manipulating and Measuring Model Interpretability"

Author Information

Hiroshi Kuwajima (DENSO CORPORATION)

Research staff in artificial intelligence research for company-wide common basic technologies. I focus on quality aspects of artificial intelligence. I direct the company-wide researcher/engineer community and manage and conduct research projects on topics such as techniques for verification and validation (V&V) of systems using machine learning, standardization of an efficient development process for such systems, techniques for quality evaluation (completeness, sufficiency, etc.) of training data sets, and machine learning models transparent to human inspection for safety-critical systems. Before starting research on AI, I worked on formal verification and testing, systems engineering, software product line engineering, model-based development, and AUTOSAR.

Masayuki Tanaka (National Institute of Advanced Industrial Science and Technology)
Qingkai Liang (MIT)

Qingkai Liang is currently a PhD candidate at MIT. His research interests include safe reinforcement learning and multi-agent reinforcement learning, with applications in network control, autonomous systems, and related areas.

Matthieu Komorowski (Imperial College London / MIT)

I hold full board certification in anesthesiology and critical care in both France and the UK. A former medical research fellow at the European Space Agency, I completed a Master of Research in Biomedical Engineering at Imperial College London. I currently pursue a PhD at Imperial College and a research fellowship in intensive care at Charing Cross Hospital in London, supervised by Professor Anthony Gordon and Dr Aldo Faisal. A visiting scholar at the Laboratory for Computational Physiology at MIT, I collaborate with the MIT Critical Data group (Professor Leo Celi) on numerous projects involving secondary analysis of healthcare records. My research brings together my expertise in machine learning and critical care to generate new medical evidence and build decision support systems. My particular interest is sepsis, the number one killer in intensive care and the single most expensive condition treated in hospitals.

Fanyu Que (Boston College)
Thalita F Drumond (INRIA Bordeaux Sud-Ouest)
Aniruddh Raghu (Massachusetts Institute of Technology)
Leo Anthony Celi (Massachusetts Institute of Technology)
Christina Göpfert (Bielefeld University)
Andrew Ross (Harvard University)
Sarah Tan (Cornell University / UCSF)

Research scientist at Facebook working on causal inference and interpretability

Rich Caruana (Microsoft)
Yin Lou (Airbnb)
Devinder Kumar (University of Waterloo)

PhD Student

Graham Taylor (University of Guelph)
Forough Poursabzi-Sangdeh (University of Colorado Boulder)
Jennifer Wortman Vaughan (Microsoft Research)

Jenn Wortman Vaughan is a Senior Principal Researcher at Microsoft Research, New York City. Her research background is in machine learning and algorithmic economics. She is especially interested in the interaction between people and AI, and has often studied this interaction in the context of prediction markets and other crowdsourcing systems. In recent years, she has turned her attention to human-centered approaches to transparency, interpretability, and fairness in machine learning, as a member of MSR's FATE group and as co-chair of Microsoft's Aether Working Group on Transparency. Jenn came to MSR in 2012 from UCLA, where she was an assistant professor in the computer science department. She completed her Ph.D. at the University of Pennsylvania in 2009 and subsequently spent a year as a Computing Innovation Fellow at Harvard. She is the recipient of Penn's 2009 Rubinoff dissertation award for innovative applications of computer technology, a National Science Foundation CAREER award, a Presidential Early Career Award for Scientists and Engineers (PECASE), and several best paper awards. In her "spare" time, Jenn is involved in a variety of efforts to support women in computer science; most notably, she co-founded the Annual Workshop for Women in Machine Learning, which has been held each year since 2006.

Hanna Wallach (MSR NYC)
