In nearly all machine learning tasks, we expect there to be randomness, or noise, in the data we observe and in the relationships encoded by the model. Usually, this noise is considered undesirable, and we would eliminate it if possible. However, there is an emerging body of work on perturbation methods, showing the benefits of explicitly adding noise into the modeling, learning, and inference pipelines. This workshop will bring together the growing community of researchers interested in different aspects of this area, and will broaden our understanding of why and how perturbation methods can be useful.
More generally, perturbation methods provide efficient and principled ways to reason about the neighborhood of possible outcomes when trying to make the best decision. For example, some might want to arrive at the best outcome that is robust to small changes in model parameters. Others might want to find the best choice while compensating for their lack of knowledge by averaging over the different outcomes. Recently, several lines of work drawing on diverse fields of research such as statistics, optimization, machine learning, and theoretical computer science have used perturbation methods in similar ways. The goal of this workshop is to explore different techniques in perturbation methods and their consequences for computation, statistics, and optimization. We shall specifically be interested in understanding the following issues:
* Statistical Modeling: What types of statistical models can be defined for structured prediction? How can random perturbations be used to relate computation and statistics?
* Efficient Sampling: What are the computational properties that allow efficient and unbiased sampling? How do perturbations control the geometry of such models and how can we construct sampling methods for these families?
* Approximate Inference: What are the computational and statistical requirements of inference? How can the maximum of random perturbations be used to measure the uncertainty of a system? (A minimal example of this idea appears in the sketch after this list.)
* Learning: How can we probabilistically learn model parameters from training data using random perturbations? What are the connections to max-margin and conditional random field techniques?
* Theory: How does the maximum of a random process relate to its complexity? What statistical and computational properties does it describe for Gaussian free fields over graphs?
* Pseudo-sampling: How do dynamical systems encode randomness? To what extent do perturbations direct us to the “pseudo-randomness” of their underlying dynamics?
* Robust classification: How can classifiers be learned in a robust way, and how can support vector machines be realized in this context? What are the relations between adversarial perturbations and regularization, and what are their extensions to structured prediction?
* Robust reconstructions: How can information be robustly encoded? In what ways can learning be improved by perturbing the input measurements?
* Adversarial Uncertainty: How can structured prediction be performed in a zero-sum game setting? What are the computational properties of such solutions, and do Nash equilibria exist in these cases?
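To make these questions concrete, here is a minimal sketch of the Gumbel-max trick, a basic perturb-and-MAP identity behind several of the topics above: perturbing the unnormalized log-weights of a discrete model with independent Gumbel(0, 1) noise and taking the argmax yields an exact sample from the corresponding Gibbs distribution. The NumPy code and the particular weights below are illustrative assumptions, not part of the workshop materials.

```python
import numpy as np

def gumbel_max_sample(log_weights, rng):
    """Draw one sample from the categorical distribution proportional to
    exp(log_weights): add independent Gumbel(0, 1) noise to each
    unnormalized log-weight and return the index of the maximum."""
    gumbel_noise = rng.gumbel(loc=0.0, scale=1.0, size=len(log_weights))
    return int(np.argmax(log_weights + gumbel_noise))

# Hypothetical unnormalized log-weights of a four-state model.
log_weights = np.array([1.0, 0.5, -0.2, 2.0])
rng = np.random.default_rng(seed=0)

# Repeated perturb-and-argmax draws recover the Gibbs distribution.
samples = [gumbel_max_sample(log_weights, rng) for _ in range(20000)]
empirical = np.bincount(samples, minlength=4) / len(samples)
exact = np.exp(log_weights - log_weights.max())
exact /= exact.sum()
print("empirical frequencies:", np.round(empirical, 3))
print("exact probabilities:  ", np.round(exact, 3))
```

Applying this identity exactly requires perturbing every configuration independently, which is intractable for structured models; much of the work discussed at the workshop concerns low-dimensional or structured perturbations that keep the resulting maximization tractable.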
Target Audience: The workshop should appeal to NIPS attendees interested both in theoretical aspects such as Bayesian modeling, Monte Carlo sampling, optimization, inference, and learning, and in practical applications in computer vision and language modeling.
Author Information
Tamir Hazan (Technion)
George Papandreou (Toyota Technological Institute at Chicago)
Daniel Tarlow (Google Brain)
More from the Same Authors
- 2020 Poster: Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies »
  Itai Gat · Idan Schwartz · Alexander Schwing · Tamir Hazan
- 2020 Poster: Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces »
  Guy Lorberbom · Chris J. Maddison · Nicolas Heess · Tamir Hazan · Daniel Tarlow
- 2019 Poster: Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder »
  Guy Lorberbom · Andreea Gane · Tommi Jaakkola · Tamir Hazan
- 2017 Poster: High-Order Attention Models for Visual Question Answering »
  Idan Schwartz · Alexander Schwing · Tamir Hazan
- 2016 Poster: Constraints Based Convex Belief Propagation »
  Yaniv Tenzer · Alex Schwing · Kevin Gimpel · Tamir Hazan
- 2014 Workshop: Perturbations, Optimization, and Statistics »
  Tamir Hazan · George Papandreou · Daniel Tarlow
- 2014 Poster: Just-In-Time Learning for Fast and Flexible Inference »
  S. M. Ali Eslami · Daniel Tarlow · Pushmeet Kohli · John Winn
- 2014 Poster: A* Sampling »
  Chris Maddison · Daniel Tarlow · Tom Minka
- 2014 Oral: A* Sampling »
  Chris Maddison · Daniel Tarlow · Tom Minka
- 2013 Workshop: Perturbations, Optimization, and Statistics »
  Tamir Hazan · George Papandreou · Sasha Rakhlin · Daniel Tarlow
- 2013 Poster: Learning Efficient Random Maximum A-Posteriori Predictors with Non-Decomposable Loss Functions »
  Tamir Hazan · Subhransu Maji · Joseph Keshet · Tommi Jaakkola
- 2013 Poster: Learning to Pass Expectation Propagation Messages »
  Nicolas Heess · Daniel Tarlow · John Winn
- 2013 Poster: On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations »
  Tamir Hazan · Subhransu Maji · Tommi Jaakkola
- 2012 Poster: Bayesian n-Choose-k Models for Classification and Ranking »
  Kevin Swersky · Daniel Tarlow · Richard Zemel · Ryan Adams · Brendan J Frey
- 2012 Poster: Globally Convergent Dual MAP LP Relaxation Solvers using Fenchel-Young Margins »
  Alex Schwing · Tamir Hazan · Marc Pollefeys · Raquel Urtasun
- 2012 Poster: Cardinality Restricted Boltzmann Machines »
  Kevin Swersky · Daniel Tarlow · Ilya Sutskever · Richard Zemel · Russ Salakhutdinov · Ryan Adams
- 2010 Poster: Gaussian sampling by local perturbations »
  George Papandreou · Alan L Yuille
- 2010 Poster: A Primal-Dual Message-Passing Algorithm for Approximated Large Scale Structured Prediction »
  Tamir Hazan · Raquel Urtasun
- 2010 Poster: Direct Loss Minimization for Structured Prediction »
  David A McAllester · Tamir Hazan · Joseph Keshet
- 2006 Poster: Using Combinatorial Optimization within Max-Product Belief Propagation »
  John Duchi · Daniel Tarlow · Gal Elidan · Daphne Koller
- 2006 Spotlight: Using Combinatorial Optimization within Max-Product Belief Propagation »
  John Duchi · Daniel Tarlow · Gal Elidan · Daphne Koller