Domains where supervised models are deployed often come with task-specific constraints, such as prior expert knowledge on the ground-truth function, or desiderata like safety and fairness. We introduce a novel probabilistic framework for reasoning with such constraints and formulate a prior that enables us to effectively incorporate them into Bayesian neural networks (BNNs), including a variant that can be amortized over tasks. The resulting Output-Constrained BNN (OC-BNN) is fully consistent with the Bayesian framework for uncertainty quantification and is amenable to black-box inference. Unlike typical BNN inference in uninterpretable parameter space, OC-BNNs widen the range of functional knowledge that can be incorporated, especially for model users without expertise in machine learning. We demonstrate the efficacy of OC-BNNs on real-world datasets, spanning multiple domains such as healthcare, criminal justice, and credit scoring.
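The abstract's central idea, placing a prior over network *outputs* so that interpretable functional constraints enter the posterior directly, can be illustrated with a small sketch. This is not the paper's exact formulation: the MLP, the isotropic Gaussian weight prior, the soft-penalty constraint term, and the names `mlp_forward`, `oc_log_prior`, and `tau` are all illustrative assumptions. The sketch shows how a known output bound on a set of constraint points contributes an extra term to an unnormalized log-prior, which a black-box sampler could then use.

```python
import numpy as np

def mlp_forward(w, x, sizes):
    """Forward pass of a tiny MLP whose parameters live in the flat vector w."""
    out, i = x, 0
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = w[i:i + n_in * n_out].reshape(n_in, n_out)
        i += n_in * n_out
        b = w[i:i + n_out]
        i += n_out
        out = out @ W + b
        if n_out != sizes[-1]:          # hidden layers get a nonlinearity
            out = np.tanh(out)
    return out

def oc_log_prior(w, constraint_x, lower, upper, sizes, tau=10.0):
    """Unnormalized log-prior: Gaussian weight prior plus a soft output
    constraint that penalizes predictions outside [lower, upper] at the
    user-specified constraint points."""
    log_p_w = -0.5 * np.sum(w ** 2)                     # isotropic Gaussian on weights
    f = mlp_forward(w, constraint_x, sizes)
    violation = np.maximum(0.0, f - upper) + np.maximum(0.0, lower - f)
    return log_p_w - tau * np.sum(violation ** 2)       # constraint penalty

# Weights whose outputs satisfy the constraint score higher under the prior
# than weights whose outputs violate it, all else equal.
sizes = [1, 3, 1]
w = np.zeros(10)                        # 1*3 + 3 + 3*1 + 1 parameters, f(x) = 0
cx = np.array([[0.0], [1.0]])           # constraint points
in_range = oc_log_prior(w, cx, -1.0, 1.0, sizes)   # f(cx)=0 lies inside [-1, 1]
out_range = oc_log_prior(w, cx, 1.0, 2.0, sizes)   # f(cx)=0 lies below [1, 2]
```

Because the penalty only adds a term to the log-density, any black-box inference method that consumes an unnormalized log-posterior (e.g. HMC or SVGD) can be applied unchanged, which matches the abstract's claim of amenability to black-box inference.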
Author Information
Wanqian Yang (Harvard University)
Lars Lorch (Harvard)
Moritz Graule (Harvard University)
Himabindu Lakkaraju (Harvard)
Hima Lakkaraju is an Assistant Professor at Harvard University focusing on explainability, fairness, and robustness of machine learning models. She has also been working with various domain experts in criminal justice and healthcare to understand the real world implications of explainable and fair ML. Hima has recently been named one of the 35 innovators under 35 by MIT Tech Review, and has received best paper awards at SIAM International Conference on Data Mining (SDM) and INFORMS. She has given invited workshop talks at ICML, NeurIPS, AAAI, and CVPR, and her research has also been covered by various popular media outlets including the New York Times, MIT Tech Review, TIME, and Forbes. For more information, please visit: https://himalakkaraju.github.io/
Finale Doshi-Velez (Harvard)
Related Events (a corresponding poster, oral, or spotlight)
-
2020 Spotlight: Incorporating Interpretable Output Constraints in Bayesian Neural Networks »
Fri. Dec 11th, 03:50–04:00 AM, Room: Orals & Spotlights: Neuroscience/Probabilistic
More from the Same Authors
-
2021 Spotlight: Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Making by Reinforcement Learning »
Kai Wang · Sanket Shah · Haipeng Chen · Andrew Perrault · Finale Doshi-Velez · Milind Tambe -
2021 : Identification of Subgroups With Similar Benefits in Off-Policy Policy Evaluation »
Ramtin Keramati · Omer Gottesman · Leo Celi · Finale Doshi-Velez · Emma Brunskill -
2022 : An Empirical Analysis of the Advantages of Finite vs. Infinite Width Bayesian Neural Networks »
Jiayu Yao · Yaniv Yacoby · Beau Coker · Weiwei Pan · Finale Doshi-Velez -
2022 : Feature-Level Synthesis of Human and ML Insights »
Isaac Lage · Sonali Parbhoo · Finale Doshi-Velez -
2022 : What Makes a Good Explanation?: A Unified View of Properties of Interpretable ML »
Varshini Subhash · Zixi Chen · Marton Havasi · Weiwei Pan · Finale Doshi-Velez -
2022 : What Makes a Good Explanation?: A Unified View of Properties of Interpretable ML »
Zixi Chen · Varshini Subhash · Marton Havasi · Weiwei Pan · Finale Doshi-Velez -
2022 : (When) Are Contrastive Explanations of Reinforcement Learning Helpful? »
Sanjana Narayanan · Isaac Lage · Finale Doshi-Velez -
2022 : Leveraging Human Features at Test-Time »
Isaac Lage · Sonali Parbhoo · Finale Doshi-Velez -
2022 : An Empirical Analysis of the Advantages of Finite vs. Infinite Width Bayesian Neural Networks »
Jiayu Yao · Yaniv Yacoby · Beau Coker · Weiwei Pan · Finale Doshi-Velez -
2022 : A Human-Centric Take on Model Monitoring »
Murtuza Shergadwala · Himabindu Lakkaraju · Krishnaram Kenthapadi -
2022 Poster: Chroma-VAE: Mitigating Shortcut Learning with Generative Classifiers »
Wanqian Yang · Polina Kirichenko · Micah Goldblum · Andrew Wilson -
2022 Poster: Addressing Leakage in Concept Bottleneck Models »
Marton Havasi · Sonali Parbhoo · Finale Doshi-Velez -
2022 Poster: Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare »
Shengpu Tang · Maggie Makar · Michael Sjoding · Finale Doshi-Velez · Jenna Wiens -
2022 : Invited talk (Dr Hima Lakkaraju) - "A Brief History of Explainable AI: From Simple Rules to Large Pretrained Models" »
Himabindu Lakkaraju -
2021 : Retrospective Panel »
Sergey Levine · Nando de Freitas · Emma Brunskill · Finale Doshi-Velez · Nan Jiang · Rishabh Agarwal -
2021 : Panel II: Machine decisions »
Anca Dragan · Karen Levy · Himabindu Lakkaraju · Ariel Rosenfeld · Maithra Raghu · Irene Y Chen -
2021 : Q/A Session »
Alexander Feldman · Himabindu Lakkaraju -
2021 : [IT3] Towards Reliable and Robust Model Explanations »
Himabindu Lakkaraju -
2021 : LAF | Panel discussion »
Aaron Snoswell · Jake Goldenfein · Finale Doshi-Velez · Evi Micha · Ivana Dusparic · Jonathan Stray -
2021 : LAF | The Role of Explanation in RL Legitimacy, Accountability, and Feedback »
Finale Doshi-Velez -
2021 : Invited talk #2: Finale Doshi-Velez »
Finale Doshi-Velez -
2021 : Invited Talk: Towards Reliable and Robust Model Explanations »
Himabindu Lakkaraju -
2021 Poster: Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Making by Reinforcement Learning »
Kai Wang · Sanket Shah · Haipeng Chen · Andrew Perrault · Finale Doshi-Velez · Milind Tambe -
2020 : Batch RL Models Built for Validation »
Finale Doshi-Velez -
2020 : Panel »
Emma Brunskill · Nan Jiang · Nando de Freitas · Finale Doshi-Velez · Sergey Levine · John Langford · Lihong Li · George Tucker · Rishabh Agarwal · Aviral Kumar -
2020 : Q & A and Panel Session with Tom Mitchell, Jenn Wortman Vaughan, Sanjoy Dasgupta, and Finale Doshi-Velez »
Tom Mitchell · Jennifer Wortman Vaughan · Sanjoy Dasgupta · Finale Doshi-Velez · Zachary Lipton -
2020 Workshop: I Can’t Believe It’s Not Better! Bridging the gap between theory and empiricism in probabilistic machine learning »
Jessica Forde · Francisco Ruiz · Melanie Fernandez Pradier · Aaron Schein · Finale Doshi-Velez · Isabel Valera · David Blei · Hanna Wallach -
2020 Tutorial: (Track2) Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities Q&A »
Himabindu Lakkaraju · Julius Adebayo · Sameer Singh -
2020 Poster: Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses »
Kaivalya Rawal · Himabindu Lakkaraju -
2020 Poster: Model-based Reinforcement Learning for Semi-Markov Decision Processes with Neural ODEs »
Jianzhun Du · Joseph Futoma · Finale Doshi-Velez -
2020 Tutorial: (Track2) Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities »
Himabindu Lakkaraju · Julius Adebayo · Sameer Singh -
2020 : Discussion Panel: Hugo Larochelle, Finale Doshi-Velez, Devi Parikh, Marc Deisenroth, Julien Mairal, Katja Hofmann, Phillip Isola, and Michael Bowling »
Hugo Larochelle · Finale Doshi-Velez · Marc Deisenroth · Devi Parikh · Julien Mairal · Katja Hofmann · Phillip Isola · Michael Bowling -
2019 : Panel - The Role of Communication at Large: Aparna Lakshmiratan, Jason Yosinski, Been Kim, Surya Ganguli, Finale Doshi-Velez »
Aparna Lakshmiratan · Finale Doshi-Velez · Surya Ganguli · Zachary Lipton · Michela Paganini · Anima Anandkumar · Jason Yosinski -
2019 : Invited talk #4 »
Finale Doshi-Velez -
2019 : Finale Doshi-Velez: Combining Statistical methods with Human Input for Evaluation and Optimization in Batch Settings »
Finale Doshi-Velez -
2018 : Finale Doshi-Velez »
Finale Doshi-Velez -
2018 : Panel on research process »
Zachary Lipton · Charles Sutton · Finale Doshi-Velez · Hanna Wallach · Suchi Saria · Rich Caruana · Thomas Rainforth -
2018 : Finale Doshi-Velez »
Finale Doshi-Velez -
2018 Poster: Human-in-the-Loop Interpretability Prior »
Isaac Lage · Andrew Ross · Samuel J Gershman · Been Kim · Finale Doshi-Velez -
2018 Spotlight: Human-in-the-Loop Interpretability Prior »
Isaac Lage · Andrew Ross · Samuel J Gershman · Been Kim · Finale Doshi-Velez -
2018 Poster: Representation Balancing MDPs for Off-policy Policy Evaluation »
Yao Liu · Omer Gottesman · Aniruddh Raghu · Matthieu Komorowski · Aldo Faisal · Finale Doshi-Velez · Emma Brunskill -
2017 : Panel Session »
Neil Lawrence · Finale Doshi-Velez · Zoubin Ghahramani · Yann LeCun · Max Welling · Yee Whye Teh · Ole Winther -
2017 : Finale Doshi-Velez »
Finale Doshi-Velez -
2017 : Automatic Model Selection in BNNs with Horseshoe Priors »
Finale Doshi-Velez -
2017 : Coffee break and Poster Session I »
Nishith Khandwala · Steve Gallant · Gregory Way · Aniruddh Raghu · Li Shen · Aydan Gasimova · Alican Bozkurt · William Boag · Daniel Lopez-Martinez · Ulrich Bodenhofer · Samaneh Nasiri GhoshehBolagh · Michelle Guo · Christoph Kurz · Kirubin Pillay · Kimis Perros · George H Chen · Alexandre Yahi · Madhumita Sushil · Sanjay Purushotham · Elena Tutubalina · Tejpal Virdi · Marc-Andre Schulz · Samuel Weisenthal · Bharat Srikishan · Petar Veličković · Kartik Ahuja · Andrew Miller · Erin Craig · Disi Ji · Filip Dabek · Chloé Pou-Prom · Hejia Zhang · Janani Kalyanam · Wei-Hung Weng · Harish Bhat · Hugh Chen · Simon Kohl · Mingwu Gao · Tingting Zhu · Ming-Zher Poh · Iñigo Urteaga · Antoine Honoré · Alessandro De Palma · Maruan Al-Shedivat · Pranav Rajpurkar · Matthew McDermott · Vincent Chen · Yanan Sui · Yun-Geun Lee · Li-Fang Cheng · Chen Fang · Sibt ul Hussain · Cesare Furlanello · Zeev Waks · Hiba Chougrad · Hedvig Kjellstrom · Finale Doshi-Velez · Wolfgang Fruehwirt · Yanqing Zhang · Lily Hu · Junfang Chen · Sunho Park · Gatis Mikelsons · Jumana Dakka · Stephanie Hyland · yann chevaleyre · Hyunwoo Lee · Xavier Giro-i-Nieto · David Kale · Michael Hughes · Gabriel Erion · Rishab Mehra · William Zame · Stojan Trajanovski · Prithwish Chakraborty · Kelly Peterson · Muktabh Mayank Srivastava · Amy Jin · Heliodoro Tejeda Lemus · Priyadip Ray · Tamas Madl · Joseph Futoma · Enhao Gong · Syed Rameel Ahmad · Eric Lei · Ferdinand Legros -
2017 : Contributed talk: Beyond Sparsity: Tree-based Regularization of Deep Models for Interpretability »
Mike Wu · Sonali Parbhoo · Finale Doshi-Velez -
2017 : Invited talk: The Role of Explanation in Holding AIs Accountable »
Finale Doshi-Velez -
2017 Poster: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes »
Taylor Killian · Samuel Daulton · Finale Doshi-Velez · George Konidaris -
2017 Oral: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes »
Taylor Killian · Samuel Daulton · Finale Doshi-Velez · George Konidaris -
2016 : BNNs for RL: A Success Story and Open Questions »
Finale Doshi-Velez -
2015 Workshop: Machine Learning From and For Adaptive User Technologies: From Active Learning & Experimentation to Optimization & Personalization »
Joseph Jay Williams · Yasin Abbasi Yadkori · Finale Doshi-Velez -
2015 : Data Driven Phenotyping for Diseases »
Finale Doshi-Velez -
2015 Poster: Mind the Gap: A Generative Approach to Interpretable Feature Selection and Extraction »
Been Kim · Julie A Shah · Finale Doshi-Velez