Workshop: I Can’t Believe It’s Not Better! Bridging the gap between theory and empiricism in probabilistic machine learning

Jessica Forde, Francisco Ruiz, Melanie Fernandez Pradier, Aaron Schein, Finale Doshi-Velez, Isabel Valera, David Blei, Hanna Wallach

2020-12-12T04:45:00-08:00 - 2020-12-12T14:45:00-08:00
Abstract: We’ve all been there. A creative spark leads to a beautiful idea. We love the idea, we nurture it, and name it. The idea is elegant: all who hear it fawn over it. The idea is justified: all of the literature we have read supports it. But, lo and behold: once we sit down to implement the idea, it doesn’t work. We check our code for software bugs. We rederive our derivations. We try again and still, it doesn’t work. We Can’t Believe It’s Not Better [1].

In this workshop, we will encourage probabilistic machine learning researchers who Can’t Believe It’s Not Better to share their beautiful idea, tell us why it should work, and hypothesize why it does not in practice. We also welcome work that highlights pathologies or unexpected behaviors in well-established practices. This workshop will stress the quality and thoroughness of the scientific procedure, promoting transparency, deeper understanding, and more principled science.

Focusing on the probabilistic machine learning community will facilitate this endeavor, not only by gathering experts who speak the same language, but also by exploiting the modularity of the probabilistic framework. Probabilistic machine learning separates modeling assumptions, inference, and model checking into distinct phases [2]; this facilitates criticism when the final outcome does not meet prior expectations. We aim to create an open-minded and diverse space for researchers to share unexpected or negative results and help one another improve their ideas.
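The three-phase workflow mentioned above can be sketched for a toy conjugate Gaussian model; the model, data, and variable names here are illustrative assumptions, not part of any workshop submission:

```python
import random
import statistics

random.seed(0)

# --- Phase 1: modeling assumptions (hypothetical toy model) ---
# Data assumed i.i.d. Normal(mu, 1); prior mu ~ Normal(0, 1).
data = [random.gauss(0.5, 1.0) for _ in range(100)]

# --- Phase 2: inference (exact conjugate update for this model) ---
n = len(data)
post_mean = sum(data) / (n + 1)   # posterior mean of mu
post_var = 1.0 / (n + 1)          # posterior variance of mu

# --- Phase 3: model checking (posterior predictive check) ---
# Simulate replicate datasets and compare their means to the observed mean.
obs_mean = statistics.mean(data)
sim_means = []
for _ in range(1000):
    mu = random.gauss(post_mean, post_var ** 0.5)
    rep = [random.gauss(mu, 1.0) for _ in range(n)]
    sim_means.append(statistics.mean(rep))

# A posterior predictive p-value near 0 or 1 would flag model misfit.
ppp = sum(m >= obs_mean for m in sim_means) / len(sim_means)
```

Because the phases are separate, a failed check in phase 3 points back at a specific modeling assumption or inference approximation, which is exactly the kind of criticism the workshop encourages.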


2020-12-12T04:45:00-08:00 - 2020-12-12T05:00:00-08:00
Aaron Schein, Melanie F. Pradier
2020-12-12T05:00:00-08:00 - 2020-12-12T05:30:00-08:00
Max Welling Talk
Max Welling
2020-12-12T05:30:00-08:00 - 2020-12-12T06:00:00-08:00
Danielle Belgrave Talk
Danielle Belgrave
2020-12-12T06:00:00-08:00 - 2020-12-12T06:30:00-08:00
Mike Hughes Talk
Mike Hughes
2020-12-12T06:30:00-08:00 - 2020-12-12T06:33:00-08:00
Margot Selosse---A bumpy journey: exploring deep Gaussian mixture models
Margot Selosse
2020-12-12T06:33:00-08:00 - 2020-12-12T06:36:00-08:00
Diana Cai---Power posteriors do not reliably learn the number of components in a finite mixture
Diana Cai
2020-12-12T06:36:00-08:00 - 2020-12-12T06:39:00-08:00
W Ronny Huang---Understanding Generalization through Visualizations
W. Ronny Huang
2020-12-12T06:39:00-08:00 - 2020-12-12T06:42:00-08:00
Udari Madhushani---It Doesn’t Get Better and Here’s Why: A Fundamental Drawback in Natural Extensions of UCB to Multi-agent Bandits
Udari Madhushani
2020-12-12T06:42:00-08:00 - 2020-12-12T06:45:00-08:00
Erik Jones---Selective Classification Can Magnify Disparities Across Groups
Erik Jones
2020-12-12T06:45:00-08:00 - 2020-12-12T06:48:00-08:00
Yannick Rudolph---Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?
Yannick Rudolph
2020-12-12T06:50:00-08:00 - 2020-12-12T07:00:00-08:00
Coffee Break
2020-12-12T07:00:00-08:00 - 2020-12-12T08:00:00-08:00
Poster Session
2020-12-12T08:00:00-08:00 - 2020-12-12T08:15:00-08:00
Charline Le Lan---Perfect density models cannot guarantee anomaly detection
Charline Le Lan
2020-12-12T08:15:00-08:00 - 2020-12-12T08:30:00-08:00
Fan Bao---Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models
Fan Bao
2020-12-12T08:30:00-08:00 - 2020-12-12T08:45:00-08:00
Emilio Jorge---Inferential Induction: A Novel Framework for Bayesian Reinforcement Learning
Emilio Jorge
2020-12-12T09:00:00-08:00 - 2020-12-12T10:00:00-08:00
Lunch Break
2020-12-12T10:00:00-08:00 - 2020-12-12T10:30:00-08:00
Andrew Gelman Talk
Andrew Gelman
2020-12-12T10:30:00-08:00 - 2020-12-12T11:00:00-08:00
Roger Grosse Talk
Roger Grosse
2020-12-12T11:00:00-08:00 - 2020-12-12T11:30:00-08:00
Weiwei Pan Talk
Weiwei Pan
2020-12-12T11:30:00-08:00 - 2020-12-12T11:33:00-08:00
Vincent Fortuin---Bayesian Neural Network Priors Revisited
Vincent Fortuin
2020-12-12T11:33:00-08:00 - 2020-12-12T11:36:00-08:00
Ziyu Wang---Further Analysis of Outlier Detection with Deep Generative Models
Ziyu Wang
2020-12-12T11:36:00-08:00 - 2020-12-12T11:39:00-08:00
Siwen Yan---The Curious Case of Stacking Boosted Relational Dependency Networks
Siwen Yan
2020-12-12T11:39:00-08:00 - 2020-12-12T11:42:00-08:00
Maurice Frank---Problems using deep generative models for probabilistic audio source separation
Maurice Frank
2020-12-12T11:42:00-08:00 - 2020-12-12T11:45:00-08:00
Ramiro Camino---Oversampling Tabular Data with Deep Generative Models: Is it worth the effort?
Ramiro Camino
2020-12-12T11:45:00-08:00 - 2020-12-12T11:48:00-08:00
Ângelo Gregório Lovatto---Decision-Aware Model Learning for Actor-Critic Methods: When Theory Does Not Meet Practice
Ângelo Lovatto
2020-12-12T11:50:00-08:00 - 2020-12-12T12:00:00-08:00
Coffee Break
2020-12-12T12:00:00-08:00 - 2020-12-12T12:15:00-08:00
Tin D. Nguyen---Independent versus truncated finite approximations for Bayesian nonparametric inference
Tin D. Nguyen
2020-12-12T12:15:00-08:00 - 2020-12-12T12:30:00-08:00
Ricky T. Q. Chen---Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering
Ricky T. Q. Chen
2020-12-12T12:30:00-08:00 - 2020-12-12T12:45:00-08:00
Elliott Gordon-Rodriguez---Uses and Abuses of the Cross-Entropy Loss: Case Studies in Modern Deep Learning
Elliott Gordon-Rodriguez
2020-12-12T12:45:00-08:00 - 2020-12-12T13:45:00-08:00
Poster Session
2020-12-12T13:15:00-08:00 - 2020-12-12T13:45:00-08:00
Breakout Discussions
2020-12-12T13:45:00-08:00 - 2020-12-12T14:45:00-08:00
Panel & Closing
Tamara Broderick, Laurent Dinh, Neil Lawrence, Kristian Lum, Hanna Wallach, Sinead Williamson