Workshop
Sat Dec 14 08:00 AM -- 06:00 PM (PST) @ West Ballroom B
Machine Learning with Guarantees
Ben London · Gintare Karolina Dziugaite · Daniel Roy · Thorsten Joachims · Aleksander Madry · John Shawe-Taylor

As adoption of machine learning grows in high-stakes application areas (e.g., industry, government, and health care), so does the need for guarantees: how accurate a learned model will be; whether its predictions will be fair; whether it will divulge information about individuals; or whether it is vulnerable to adversarial attacks. Many of these questions involve unknown or intractable quantities (e.g., risk, regret, or posterior likelihood) and complex constraints (e.g., differential privacy, fairness, and adversarial robustness). Thus, learning algorithms are often designed to yield (and optimize) bounds on the quantities of interest. Beyond providing guarantees, these bounds also shed light on black-box machine learning systems.
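
As a toy illustration of the kind of guarantee at stake, the following sketch (ours, not from the workshop) computes a Hoeffding-style upper bound on a fixed classifier's true risk; the function name and the numbers in the example are illustrative assumptions.

import math

# Minimal sketch (illustrative): with probability at least 1 - delta over
# n i.i.d. samples, and a loss bounded in [0, 1], a classifier chosen
# *before* seeing the data satisfies
#     true_risk <= empirical_risk + sqrt(ln(1/delta) / (2n))
# by Hoeffding's inequality.
def hoeffding_risk_bound(empirical_risk, n, delta):
    return empirical_risk + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# Example: 5% empirical error on 10,000 held-out samples, 95% confidence.
print(hoeffding_risk_bound(0.05, 10_000, 0.05))  # ~0.0622

Note that this simple form only covers a model fixed in advance; extending such guarantees to models learned from the same data is exactly the kind of problem the bounds discussed at the workshop address.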

Classical examples include structural risk minimization (Vapnik, 1991) and support vector machines (Cristianini & Shawe-Taylor, 2000), while more recent examples include non-vacuous risk bounds for neural networks (Dziugaite & Roy, 2017, 2018), algorithms that optimize both the weights and structure of a neural network (Cortes et al., 2017), counterfactual risk minimization for learning from logged bandit feedback (Swaminathan & Joachims, 2015; London & Sandler, 2019), robustness to adversarial attacks (Schmidt et al., 2018; Wong & Kolter, 2018), differentially private learning (Dwork et al., 2006; Chaudhuri et al., 2011), and algorithms that ensure fairness (Dwork et al., 2012).
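
To make one of these examples concrete: counterfactual risk minimization (Swaminathan & Joachims, 2015) starts from an inverse-propensity-score (IPS) estimate of a new policy's risk computed from logged bandit feedback. The sketch below is ours, not code from the cited papers; the function name and toy numbers are illustrative assumptions.

import numpy as np

# Minimal IPS sketch (illustrative): estimate a target policy's risk from
# interactions logged under a different policy. Each record carries the
# observed loss, the logging policy's probability of the logged action
# (its propensity), and the target policy's probability of that action.
def ips_risk_estimate(losses, logging_probs, target_probs):
    weights = np.asarray(target_probs) / np.asarray(logging_probs)
    return float(np.mean(np.asarray(losses) * weights))

# Toy example with three logged interactions.
losses = [1.0, 0.0, 1.0]          # observed losses for the logged actions
logging_probs = [0.5, 0.25, 0.5]  # logging policy's propensities
target_probs = [0.1, 0.5, 0.25]   # target policy's probabilities
print(ips_risk_estimate(losses, logging_probs, target_probs))  # ~0.233

The estimate is unbiased when the logged propensities are correct and nonzero, but its variance can be large, which is why counterfactual risk minimization optimizes a variance-penalized bound on this quantity rather than the raw estimate.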

This one-day workshop will bring together researchers in both theoretical and applied machine learning, across areas such as statistical learning theory, adversarial learning, fairness, and privacy, to discuss the problem of obtaining performance guarantees and algorithms to optimize them. The program will include invited and contributed talks, poster sessions, and a panel discussion. We particularly welcome contributions describing fundamentally new problems, novel learning principles, creative bound optimization techniques, and empirical studies of theoretical findings.

Welcome Address (Talk)
Tengyu Ma, "Designing Explicit Regularizers for Deep Models" (Invited Talk)
Vatsal Sharan, "Sample Amplification: Increasing Dataset Size even when Learning is Impossible" (Contributed Talk)
Break / Poster Session 1
Mehryar Mohri, "Learning with Sample-Dependent Hypothesis Sets" (Invited Talk)
James Lucas, "Information-theoretic limitations on novel task generalization" (Contributed Talk)
Lunch Break
Soheil Feizi, "Certifiable Defenses against Adversarial Attacks" (Invited Talk)
Maksym Andriushchenko, "Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks" (Contributed Talk)
Coffee Break / Poster Session 2
Aaron Roth, "Average Individual Fairness" (Invited Talk)
Hussein Mozannar, "Fair Learning with Private Data" (Contributed Talk)
Emma Brunskill, "Some Theory RL Challenges Inspired by Education" (Invited Talk)
Discussion Panel