Author Information
Himabindu Lakkaraju (Harvard)
Hima Lakkaraju is an Assistant Professor at Harvard University focusing on the explainability, fairness, and robustness of machine learning models. She has also been working with domain experts in criminal justice and healthcare to understand the real-world implications of explainable and fair ML. Hima was recently named one of MIT Tech Review's 35 Innovators Under 35, and has received best paper awards at the SIAM International Conference on Data Mining (SDM) and INFORMS. She has given invited workshop talks at ICML, NeurIPS, AAAI, and CVPR, and her research has been covered by popular media outlets including the New York Times, MIT Tech Review, TIME, and Forbes. For more information, please visit: https://himalakkaraju.github.io/
Julius Adebayo (MIT)
Julius Adebayo is a Ph.D. student at MIT working on developing and understanding approaches that make machine learning-based systems reliable when deployed. More broadly, he is interested in rigorous approaches for developing models that are robust to spurious associations and distribution shifts, and that align with 'human' values. Website: https://juliusadebayo.com/
Sameer Singh (University of California, Irvine)
Sameer Singh is an Assistant Professor at UC Irvine working on the robustness and interpretability of machine learning. Sameer has presented tutorials and invited workshop talks at EMNLP, NeurIPS, NAACL, WSDM, ICLR, ACL, and AAAI, and has received paper awards at KDD 2016, ACL 2018, EMNLP 2019, AKBC 2020, and ACL 2020. Website: http://sameersingh.org/
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Tutorial: (Track 2) Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities
  Mon. Dec 7th through Tue. Dec 8th
More from the Same Authors
- 2021: Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
  Robert Logan · Ivana Balazevic · Eric Wallace · Fabio Petroni · Sameer Singh · Sebastian Riedel
- 2022: Quantifying Social Biases Using Templates is Unreliable
  Preethi Seshadri · Pouya Pezeshkpour · Sameer Singh
- 2022: TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
  Dylan Slack · Satyapriya Krishna · Himabindu Lakkaraju · Sameer Singh
- 2023 Poster: Post Hoc Explanations of Language Models Can Improve Language Models
  Satyapriya Krishna · Jiaqi Ma · Dylan Slack · Asma Ghandeharioun · Sameer Singh · Himabindu Lakkaraju
- 2022: Contributed Talk: TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
  Dylan Slack · Satyapriya Krishna · Himabindu Lakkaraju · Sameer Singh
- 2022: A Human-Centric Take on Model Monitoring
  Murtuza Shergadwala · Himabindu Lakkaraju · Krishnaram Kenthapadi
- 2022: Invited Talk (Dr. Hima Lakkaraju) - "A Brief History of Explainable AI: From Simple Rules to Large Pretrained Models"
  Himabindu Lakkaraju
- 2021: Panel II: Machine decisions
  Anca Dragan · Karen Levy · Himabindu Lakkaraju · Ariel Rosenfeld · Maithra Raghu · Irene Y Chen
- 2021: Q/A Session
  Leilani Gilpin · Julius Adebayo
- 2021: [IT4] Detecting model reliance on spurious signals is challenging for post hoc explanation approaches
  Julius Adebayo
- 2021: Q/A Session
  Alexander Feldman · Himabindu Lakkaraju
- 2021: [IT3] Towards Reliable and Robust Model Explanations
  Himabindu Lakkaraju
- 2021: Invited Talk: Towards Reliable and Robust Model Explanations
  Himabindu Lakkaraju
- 2021: PYLON: A PyTorch Framework for Learning with Constraints
  Kareem Ahmed · Tao Li · Nu Mai Thy Ton · Quan Guo · Kai-Wei Chang · Parisa Kordjamshidi · Vivek Srikumar · Guy Van den Broeck · Sameer Singh
- 2020 Poster: Incorporating Interpretable Output Constraints in Bayesian Neural Networks
  Wanqian Yang · Lars Lorch · Moritz Graule · Himabindu Lakkaraju · Finale Doshi-Velez
- 2020 Spotlight: Incorporating Interpretable Output Constraints in Bayesian Neural Networks
  Wanqian Yang · Lars Lorch · Moritz Graule · Himabindu Lakkaraju · Finale Doshi-Velez
- 2020 Poster: Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses
  Kaivalya Rawal · Himabindu Lakkaraju
- 2019 Workshop: KR2ML - Knowledge Representation and Reasoning Meets Machine Learning
  Veronika Thost · Christian Muise · Kartik Talamadupula · Sameer Singh · Christopher Ré
- 2019 Demonstration: AllenNLP Interpret: Explaining Predictions of NLP Models
  Jens Tuyls · Eric Wallace · Matt Gardner · Junlin Wang · Sameer Singh · Sanjay Subramanian