In nearly all machine learning tasks, decisions must be made given current knowledge (e.g., choosing which label to predict). Perhaps surprisingly, always making the best decision is not always the best strategy, particularly while learning. An emerging body of work studies learning rules that deliberately apply perturbations to the decision procedure. These works provide simple and efficient learning rules with improved theoretical guarantees. This workshop will bring together the growing community of researchers interested in different aspects of this area, and it will broaden our understanding of why and how perturbation methods can be useful.
In the last couple of years, at the highly successful NIPS workshops on Perturbations, Optimization, and Statistics, we looked at how injecting perturbations (whether random or adversarial "noise") into learning and inference procedures can be beneficial. The focus was on two angles: first, how stochastic perturbations can be used to construct new types of probability models for structured data; and second, how deterministic perturbations affect the regularization and generalization properties of learning algorithms.
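To make the first angle concrete, here is a minimal sketch (ours, not from the workshop materials) of the Gumbel-max trick, a classical perturbation construction behind much of this line of work: adding independent Gumbel(0, 1) noise to the log-potentials of a discrete distribution and taking the argmax yields an exact sample from the corresponding Gibbs (softmax) distribution.

```python
import numpy as np

def gumbel_max_sample(log_potentials, rng):
    """Draw one exact sample from softmax(log_potentials) by perturbation:
    add i.i.d. Gumbel(0, 1) noise to each log-potential, take the argmax."""
    noise = rng.gumbel(loc=0.0, scale=1.0, size=log_potentials.shape)
    return int(np.argmax(log_potentials + noise))

# Empirical check that perturb-and-argmax matches the softmax probabilities.
rng = np.random.default_rng(0)
theta = np.array([1.0, 0.5, -0.5, 2.0])  # illustrative unnormalized log-potentials
counts = np.bincount(
    [gumbel_max_sample(theta, rng) for _ in range(100_000)],
    minlength=theta.size,
)
print(counts / counts.sum())                # empirical frequencies
print(np.exp(theta) / np.exp(theta).sum())  # softmax probabilities
```

The empirical frequencies printed at the end should match the softmax probabilities, which is precisely the sense in which a perturbed maximization defines a probability model.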
The goal of this workshop is to expand the scope of the previous workshops and to explore further ways of applying perturbations within optimization and statistics to improve machine learning approaches. This year, we would like to look at exciting new developments related to the above core themes.
In particular, we will be interested in understanding the following issues:
* Modeling: which models lend themselves to efficient learning by perturbations?
* Regularization: can randomness be replaced by other mathematical objects while keeping the computational and statistical guarantees?
* Robust optimization: how do stochastic and adversarial perturbations affect the learning outcome?
* Dropout: how does stochastic dropout regularize online learning tasks? (A minimal sketch follows this list.)
* Sampling: how can perturbations be applied to sample from continuous spaces?
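To fix intuition for the dropout question, here is a minimal sketch, assuming a plain NumPy setting with illustrative sizes and drop rate (none of these values come from the workshop): inverted dropout zeroes each activation at random during training and rescales the survivors so that expected activations match test time.

```python
import numpy as np

def dropout_forward(activations, drop_prob, rng, train=True):
    """Inverted dropout: during training, zero each unit with probability
    drop_prob and scale survivors by 1/(1 - drop_prob), so the expected
    output equals the input; at test time the layer is the identity."""
    if not train or drop_prob == 0.0:
        return activations
    keep_prob = 1.0 - drop_prob
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(0)
h = np.ones((4, 8))                          # a batch of hidden activations
print(dropout_forward(h, drop_prob=0.5, rng=rng).mean())  # ~1.0 on average
```

In the workshop's terms, the random mask is a multiplicative perturbation of the decision procedure, which is one way to see why dropout acts as a regularizer.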
Author Information
Tamir Hazan (Technion)
George Papandreou (Toyota Technological Institute at Chicago)
Danny Tarlow (Google Brain)
More from the Same Authors
- 2021 Spotlight: PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair
  Zimin Chen · Vincent J Hellendoorn · Pascal Lamblin · Petros Maniatis · Pierre-Antoine Manzagol · Daniel Tarlow · Subhodeep Moitra
- 2021 Spotlight: Learning Generalized Gumbel-max Causal Mechanisms
  Guy Lorberbom · Daniel D. Johnson · Chris Maddison · Daniel Tarlow · Tamir Hazan
- 2022 Poster: On the Importance of Gradient Norm in PAC-Bayesian Bounds
  Itai Gat · Yossi Adi · Alex Schwing · Tamir Hazan
- 2021 Workshop: Advances in Programming Languages and Neurosymbolic Systems (AIPLANS)
  Breandan Considine · Disha Shrivastava · David Yu-Tung Hui · Chin-Wei Huang · Shawn Tan · Xujie Si · Prakash Panangaden · Guy Van den Broeck · Daniel Tarlow
- 2021 Poster: Structured Denoising Diffusion Models in Discrete State-Spaces
  Jacob Austin · Daniel D. Johnson · Jonathan Ho · Daniel Tarlow · Rianne van den Berg
- 2021 Poster: Learning to Combine Per-Example Solutions for Neural Program Synthesis
  Disha Shrivastava · Hugo Larochelle · Daniel Tarlow
- 2021 Poster: PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair
  Zimin Chen · Vincent J Hellendoorn · Pascal Lamblin · Petros Maniatis · Pierre-Antoine Manzagol · Daniel Tarlow · Subhodeep Moitra
- 2021 Poster: Learning Generalized Gumbel-max Causal Mechanisms
  Guy Lorberbom · Daniel D. Johnson · Chris Maddison · Daniel Tarlow · Tamir Hazan
- 2020 Poster: Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies
  Itai Gat · Idan Schwartz · Alex Schwing · Tamir Hazan
- 2020 Poster: Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces
  Guy Lorberbom · Chris Maddison · Nicolas Heess · Tamir Hazan · Danny Tarlow
- 2019 Poster: Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder
  Guy Lorberbom · Andreea Gane · Tommi Jaakkola · Tamir Hazan
- 2017 Poster: High-Order Attention Models for Visual Question Answering
  Idan Schwartz · Alex Schwing · Tamir Hazan
- 2016 Poster: Constraints Based Convex Belief Propagation
  Yaniv Tenzer · Alex Schwing · Kevin Gimpel · Tamir Hazan
- 2014 Poster: Just-In-Time Learning for Fast and Flexible Inference
  S. M. Ali Eslami · Danny Tarlow · Pushmeet Kohli · John Winn
- 2014 Poster: A* Sampling
  Chris Maddison · Danny Tarlow · Tom Minka
- 2014 Oral: A* Sampling
  Chris Maddison · Danny Tarlow · Tom Minka
- 2013 Workshop: Perturbations, Optimization, and Statistics
  Tamir Hazan · George Papandreou · Sasha Rakhlin · Danny Tarlow
- 2013 Poster: Learning Efficient Random Maximum A-Posteriori Predictors with Non-Decomposable Loss Functions
  Tamir Hazan · Subhransu Maji · Joseph Keshet · Tommi Jaakkola
- 2013 Poster: Learning to Pass Expectation Propagation Messages
  Nicolas Heess · Danny Tarlow · John Winn
- 2013 Poster: On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations
  Tamir Hazan · Subhransu Maji · Tommi Jaakkola
- 2012 Workshop: Perturbations, Optimization, and Statistics
  Tamir Hazan · George Papandreou · Danny Tarlow
- 2012 Poster: Bayesian n-Choose-k Models for Classification and Ranking
  Kevin Swersky · Danny Tarlow · Richard Zemel · Ryan Adams · Brendan J Frey
- 2012 Poster: Globally Convergent Dual MAP LP Relaxation Solvers using Fenchel-Young Margins
  Alex Schwing · Tamir Hazan · Marc Pollefeys · Raquel Urtasun
- 2012 Poster: Cardinality Restricted Boltzmann Machines
  Kevin Swersky · Danny Tarlow · Ilya Sutskever · Richard Zemel · Russ Salakhutdinov · Ryan Adams
- 2010 Poster: Gaussian sampling by local perturbations
  George Papandreou · Alan Yuille
- 2010 Poster: A Primal-Dual Message-Passing Algorithm for Approximated Large Scale Structured Prediction
  Tamir Hazan · Raquel Urtasun
- 2010 Poster: Direct Loss Minimization for Structured Prediction
  David A McAllester · Tamir Hazan · Joseph Keshet
- 2006 Poster: Using Combinatorial Optimization within Max-Product Belief Propagation
  John Duchi · Danny Tarlow · Gal Elidan · Daphne Koller
- 2006 Spotlight: Using Combinatorial Optimization within Max-Product Belief Propagation
  John Duchi · Danny Tarlow · Gal Elidan · Daphne Koller