Workshop: I Can’t Believe It’s Not Better! Bridging the gap between theory and empiricism in probabilistic machine learning

Jessica Forde, Francisco Ruiz, Melanie Fernandez Pradier, Aaron Schein, Finale Doshi-Velez, Isabel Valera, David Blei, Hanna Wallach

Sat, Dec 12th, 2020 @ 12:45 – 22:45 GMT
Abstract: We’ve all been there. A creative spark leads to a beautiful idea. We love the idea, we nurture it, and name it. The idea is elegant: all who hear it fawn over it. The idea is justified: all of the literature we have read supports it. But, lo and behold: once we sit down to implement the idea, it doesn’t work. We check our code for software bugs. We rederive our derivations. We try again and still, it doesn’t work. We Can’t Believe It’s Not Better [1].

In this workshop, we will encourage probabilistic machine learning researchers who Can’t Believe It’s Not Better to share their beautiful idea, tell us why it should work, and hypothesize why it does not in practice. We also welcome work that highlights pathologies or unexpected behaviors in well-established practices. This workshop will stress the quality and thoroughness of the scientific procedure, promoting transparency, deeper understanding, and more principled science.

Focusing on the probabilistic machine learning community will facilitate this endeavor, not only by gathering experts who speak the same language, but also by exploiting the modularity of the probabilistic framework. Probabilistic machine learning separates modeling assumptions, inference, and model checking into distinct phases [2]; this facilitates criticism when the final outcome does not meet prior expectations. We aim to create an open-minded and diverse space for researchers to share unexpected or negative results and help one another improve their ideas.

Schedule

12:45 – 13:00 GMT
Intro
Aaron Schein, Melanie F. Pradier
13:00 – 13:30 GMT
Max Welling Talk
Max Welling
13:30 – 14:00 GMT
Danielle Belgrave Talk
Danielle Belgrave
14:00 – 14:30 GMT
Mike Hughes Talk
Mike Hughes
14:30 – 14:33 GMT
Margot Selosse---A bumpy journey: exploring deep Gaussian mixture models
Margot Selosse
14:33 – 14:36 GMT
Diana Cai---Power posteriors do not reliably learn the number of components in a finite mixture
Diana Cai
14:36 – 14:39 GMT
W Ronny Huang---Understanding Generalization through Visualizations
W. Ronny Huang
14:39 – 14:42 GMT
Udari Madhushani---It Doesn’t Get Better and Here’s Why: A Fundamental Drawback in Natural Extensions of UCB to Multi-agent Bandits
Udari Madhushani
14:42 – 14:45 GMT
Erik Jones---Selective Classification Can Magnify Disparities Across Groups
Erik Jones
14:45 – 14:48 GMT
Yannick Rudolph---Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?
Yannick Rudolph
14:50 – 15:00 GMT
Coffee Break
15:00 – 16:00 GMT
Poster Session (in gather.town)
16:00 – 16:15 GMT
Charline Le Lan---Perfect density models cannot guarantee anomaly detection
Charline Le Lan
16:15 – 16:30 GMT
Fan Bao---Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models
Fan Bao
16:30 – 16:45 GMT
Emilio Jorge---Inferential Induction: A Novel Framework for Bayesian Reinforcement Learning
Emilio Jorge
17:00 – 18:00 GMT
Lunch Break
18:00 – 18:30 GMT
Andrew Gelman Talk
Andrew Gelman
18:30 – 19:00 GMT
Roger Grosse Talk
Roger Grosse
19:00 – 19:30 GMT
Weiwei Pan Talk
Weiwei Pan
19:30 – 19:33 GMT
Vincent Fortuin---Bayesian Neural Network Priors Revisited
Vincent Fortuin
19:33 – 19:36 GMT
Ziyu Wang---Further Analysis of Outlier Detection with Deep Generative Models
Ziyu Wang
19:36 – 19:39 GMT
Siwen Yan---The Curious Case of Stacking Boosted Relational Dependency Networks
Siwen Yan
19:39 – 19:42 GMT
Maurice Frank---Problems using deep generative models for probabilistic audio source separation
Maurice Frank
19:42 – 19:45 GMT
Ramiro Camino---Oversampling Tabular Data with Deep Generative Models: Is it worth the effort?
Ramiro Camino
19:45 – 19:48 GMT
Ângelo Gregório Lovatto---Decision-Aware Model Learning for Actor-Critic Methods: When Theory Does Not Meet Practice
Ângelo Lovatto
19:50 – 20:00 GMT
Coffee Break
20:00 – 20:15 GMT
Tin D. Nguyen---Independent versus truncated finite approximations for Bayesian nonparametric inference
Tin D. Nguyen
20:15 – 20:30 GMT
Ricky T. Q. Chen---Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering
Ricky T. Q. Chen
20:30 – 20:45 GMT
Elliott Gordon-Rodriguez---Uses and Abuses of the Cross-Entropy Loss: Case Studies in Modern Deep Learning
Elliott Gordon-Rodriguez
20:45 – 21:45 GMT
Poster Session (in gather.town)
21:15 – 21:45 GMT
Breakout Discussions (in gather.town)
21:45 – 22:45 GMT
Panel & Closing
Tamara Broderick, Laurent Dinh, Neil Lawrence, Kristian Lum, Hanna Wallach, Sinead Williamson