Poster
Learning with Explanation Constraints
Rattana Pukdee · Dylan Sam · J. Zico Kolter · Maria-Florina Balcan · Pradeep Ravikumar

Thu Dec 14 08:45 AM -- 10:45 AM (PST) @ Great Hall & Hall B1+B2 #1717

As larger deep learning models are hard to interpret, there has been a recent focus on generating explanations of these black-box models. In contrast, we may have a priori explanations of how models should behave. In this paper, we formalize this notion as learning from explanation constraints and provide a learning-theoretic framework to analyze how such explanations can improve the learning of our models. One may naturally ask, "When would these explanations be helpful?" Our first key contribution addresses this question via a class of models that satisfies these explanation constraints in expectation over new data. We provide a characterization of the benefits of these models (in terms of the reduction of their Rademacher complexities) for a canonical class of explanations given by gradient information, in the settings of both linear models and two-layer neural networks. In addition, we provide an algorithmic solution for our framework via a variational approximation that achieves better performance and satisfies these constraints more frequently than simpler augmented Lagrangian methods for incorporating these explanations. We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
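To make the setup concrete, below is a minimal, hypothetical PyTorch sketch of the simpler penalty-based baseline the abstract contrasts against: training under a gradient-based explanation constraint enforced with a quadratic penalty. The toy data, the specific constraint (the model's input gradient with respect to feature 0 should be non-negative), and the penalty weight lam are illustrative assumptions only, not the authors' implementation; the paper's own algorithm uses a variational approximation instead.

    import torch

    torch.manual_seed(0)

    # Toy regression data where y depends positively on feature 0.
    X = torch.randn(256, 5)
    y = 2.0 * X[:, 0] + 0.5 * torch.randn(256)

    model = torch.nn.Sequential(
        torch.nn.Linear(5, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    lam = 1.0  # penalty weight (hypothetical choice)

    for step in range(500):
        opt.zero_grad()
        X_req = X.clone().requires_grad_(True)
        pred = model(X_req).squeeze(-1)
        loss = torch.nn.functional.mse_loss(pred, y)

        # The "explanation": per-example input gradients d(pred)/dx.
        grads = torch.autograd.grad(pred.sum(), X_req, create_graph=True)[0]
        # Penalize violations of the constraint: gradient wrt feature 0 >= 0.
        violation = torch.relu(-grads[:, 0]).pow(2).mean()

        (loss + lam * violation).backward()
        opt.step()

The point of the sketch is only the structure of the objective: a standard prediction loss plus a penalty measuring how badly the model's explanations (here, input gradients) violate the stated constraint on the training data.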

Author Information

Rattana Pukdee (Carnegie Mellon University)
Dylan Sam (Carnegie Mellon University)

Hi, my name is Dylan! I am currently a second-year PhD student in the Machine Learning Department (MLD) at CMU, where I am advised by Professor Zico Kolter. I am interested in developing principled machine learning and deep learning algorithms, specifically for settings with limited labeled data. More broadly, my research interests include weakly supervised learning, semi-supervised learning, and self-supervised learning. I am also interested in distribution shift, ensemble methods, and robustness.

J. Zico Kolter (Carnegie Mellon University / Bosch Center for AI)

Zico Kolter is an Assistant Professor in the School of Computer Science at Carnegie Mellon University, and also serves as Chief Scientist of AI Research for the Bosch Center for Artificial Intelligence. His work focuses on the intersection of machine learning and optimization, with a particular focus on developing more robust, explainable, and rigorous methods in deep learning. In addition, he has worked on a number of application areas, highlighted by work on sustainability and smart energy systems. He is the recipient of the DARPA Young Faculty Award and best paper awards at KDD, IJCAI, and PESGM.

Maria-Florina Balcan (Carnegie Mellon University)
Pradeep Ravikumar (Carnegie Mellon University)
