Workshop
Optimization for Machine Learning
Suvrit Sra · Stephen Wright · Sebastian Nowozin

Fri Dec 16th 07:30 AM -- 08:00 PM @ Melia Sierra Nevada: Dauro

Dear NIPS Workshop Chairs,

We propose to organize the workshop

OPT2011 "Optimization for Machine Learning."


This workshop builds on the precedent established by our previous, very well-received NIPS workshops, OPT2008--OPT2010 (URLs are cited in the last box).

The OPT workshops enjoyed packed (at times overpacked) attendance, and this enthusiastic reception underscores the strong interest in, and relevance of, optimization within the ML community.

This continued interest is unsurprising, because optimization lies at the heart of ML algorithms. Sometimes classical textbook algorithms suffice, but the majority of problems require tailored methods based on a deeper understanding of ML requirements. In fact, ML applications and researchers are driving some of the most cutting-edge developments in optimization today. This intimate relation of optimization with ML is the key motivation for our workshop, which aims to foster discussion, discovery, and dissemination of the state of the art in optimization.

FURTHER DETAILS
--------------------------------
Optimization is indispensable to many machine learning algorithms. What can we say beyond this obvious realization?

Previous talks at the OPT workshops have covered frameworks for convex programs (D. Bertsekas), the intersection of ML and optimization, especially in the area of SVM training (S. Wright), large-scale learning via stochastic gradient methods and its tradeoffs (L. Bottou, N. Srebro), exploitation of structured sparsity in optimization (L. Vandenberghe), randomized methods for extremely large-scale convex optimization (A. Nemirovski), and complexity-theoretic foundations of convex optimization (Y. Nesterov), among others.

Several important realizations were brought to the fore by these talks, and many of the dominant ideas will appear in our forthcoming book, "Optimization for Machine Learning" (MIT Press, 2011).

Much interest has focused recently on stochastic methods, which can be used in an online setting and in settings where data sets are extremely large and high accuracy is not required. Many aspects of stochastic gradient methods remain to be explored: algorithmic variants, customization to data set structure, convergence analysis, sampling techniques, software, choice of regularization and tradeoff parameters, and distributed and parallel computation. Up-to-date analysis of algorithms for nonconvex problems also remains an important practical issue, one that becomes even more pronounced as ML tackles increasingly complex mathematical models.
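As a concrete reference point for these discussions, a minimal stochastic gradient iteration for regularized least squares can be sketched as follows. This is an illustrative sketch only: the function name, step-size schedule, and synthetic data are our own assumptions, not a method prescribed by any of the workshops.

```python
import numpy as np

def sgd_least_squares(X, y, lam=0.001, epochs=20, seed=0):
    """Plain SGD for (1/2n) * ||X w - y||^2 + (lam/2) * ||w||^2.

    One randomly permuted pass over the rows of X per epoch, with a
    slowly decaying step size (an illustrative choice, not the only one).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 0.1 / (1.0 + t / 1000.0)   # decaying step size
            grad = (X[i] @ w - y[i]) * X[i] + lam * w
            w -= eta * grad
    return w

# Tiny synthetic check: recover a (nearly noiseless) linear model.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.standard_normal(200)
w_hat = sgd_least_squares(X, y)
```

Even this simple sketch exposes the design questions raised above: how to choose and decay the step size, how to sample the data, and how the analysis changes when the objective is nonconvex.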

Finally, we do not wish to ignore the "not particularly large scale" setting, where one does have the time to wield substantial computational resources. In this setting, high-accuracy solutions and a deep understanding of the lessons contained in the data are needed. Examples valuable to ML researchers include the exploration of genetic and environmental data to identify risk factors for disease, and problems where the amount of observed data is not huge but the mathematical model is complex.


PRELIMINARY CFP (which will be circulated) FOLLOWS

------------------------------------------------------------------------------
OPT 2011
(proposed) NIPS Workshop on Optimization for Machine Learning
NIPS2011 Workshop
URL: http://opt.kyb.tuebingen.mpg.de/index.html
------------------------------------------------------------------------------


Abstract
--------

Optimization is a well-established, mature discipline, but the way we use it is undergoing a rapid transformation: the advent of modern data-intensive applications in statistics, scientific computing, data mining, and machine learning is forcing us to drop theoretically powerful methods in favor of simpler but more scalable ones. This changeover exhibits itself most starkly in machine learning, where we often have to process massive datasets; doing so requires not only large-scale optimization techniques but also methods "tuned" to the specific needs of machine learning problems.


Background and Objectives
-------------------------

We build on OPT2008, OPT2009, and OPT2010---the forerunners of this workshop. All three were held as part of NIPS. Beyond this major precedent, there have been other related workshops, such as the "Mathematical Programming in Machine Learning / Data Mining" series (2005 to 2007) and the BigML NIPS 2007 workshop.

Our workshop has the following major aims:

* Provide a platform for increasing the interaction between researchers from optimization, operations research, statistics, scientific computing, and machine learning;
* Identify key problems and challenges that lie at the intersection of optimization and ML;
* Narrow the gap between optimization and ML, to help reduce rediscovery, and thereby accelerate new advances.


Call for Participation
----------------------

This year we invite two types of submissions to the workshop:

(i) contributed talks and/or posters
(ii) open problems

For the latter, we request the authors to prepare a few slides that clearly
present, motivate, and explain an important open problem --- the main aim here
is to foster active discussion. Our call for open problems is modeled after a
similar session that takes place at COLT. The topics of interest for the open
problem session are the same as those for regular submissions; please see
below for details.

In addition to open problems, we invite high-quality submissions for
presentation as talks or posters during the workshop. We are especially
interested in participants who can contribute theory, algorithms,
applications, or implementations with a machine learning focus on the
following topics:

Topics
------

* Stochastic, Parallel, and Online Optimization
- Large-scale learning, massive data sets
- Distributed algorithms
- Optimization on massively parallel architectures
- Optimization using GPUs
- Streaming algorithms
- Decomposition for large-scale, message-passing and online learning
- Stochastic approximation
- Randomized algorithms

* Algorithms and Techniques (application oriented)
- Global and Lipschitz optimization
- Algorithms for non-smooth optimization
- Linear and higher-order relaxations
- Polyhedral combinatorics applications to ML problems

* Nonconvex Optimization
- Nonconvex quadratic programming, including binary QPs
- Convex Concave Decompositions, D.C. Programming, EM
- Training of deep architectures and large hidden variable models
- Approximation Algorithms
- Nonconvex, nonsmooth optimization

* Optimization with Sparsity Constraints
- Combinatorial methods for L0 norm minimization
- L1, Lasso, Group Lasso, sparse PCA, sparse Gaussians
- Rank minimization methods
- Feature and subspace selection

* Combinatorial Optimization
- Optimization in Graphical Models
- Structure learning
- MAP estimation in continuous and discrete random fields
- Clustering and graph-partitioning
- Semi-supervised and multiple-instance learning
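To give one concrete instance from the sparsity topics above, the soft-thresholding (proximal) step underlying many L1 / Lasso solvers can be sketched via a minimal ISTA iteration. This is an illustrative sketch under our own assumptions (function names, parameters, and the synthetic problem are not taken from any submission):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(X, y, lam=0.5, iters=500):
    """ISTA for min_w (1/2) * ||X w - y||^2 + lam * ||w||_1."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - grad / L, lam / L)
    return w

# Recover a 2-sparse vector from noiseless linear measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:2] = [3.0, -2.0]
y = X @ w_true
w_hat = ista(X, y)
```

The shrinkage step is exactly where combinatorial L0 concerns meet the convex L1 relaxation, which is why this family of methods recurs across the sparsity topics listed above.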


Important Dates
---------------

* Deadline for submission of papers: 21st October 2011
* Notification of acceptance: 12th November 2011
* Final version of submission: 24th November 2011


Please note that at least one author of each accepted paper must be available
to present the paper at the workshop. Further details regarding the
submission process are available at the workshop homepage.

Workshop
--------
The workshop will be a one-day event with a morning and afternoon session. In
addition to a lunch break, long coffee breaks will be offered both in the
morning and afternoon.


A new session on open problems is proposed to spur active discussion and
interaction among the participants. A key aim of this session will be to
identify areas and problems of interest to the community.


Invited Speakers
----------------

* Stephen Boyd (Stanford)
* Aharon Ben-Tal (Technion)
* Ben Recht (UW Madison)

Workshop Organizers
-------------------

* Suvrit Sra, Max Planck Institute for Intelligent Systems
* Sebastian Nowozin, Microsoft Research, Cambridge, UK
* Stephen Wright, University of Wisconsin, Madison

------------------------------------------------------------------------------

Author Information

Suvrit Sra (MIT)

Suvrit Sra is a faculty member within the EECS department at MIT, where he is also a core faculty member of IDSS, LIDS, the MIT-ML Group, and the Statistics and Data Science Center. His research spans topics in optimization, matrix theory, differential geometry, and probability theory, which he connects with machine learning --- a key focus of his research is the theme "Optimization for Machine Learning" (http://opt-ml.org).

Stephen Wright (UW-Madison)

Steve Wright is a Professor of Computer Sciences at the University of Wisconsin-Madison. His research interests lie in computational optimization and its applications to science and engineering. Prior to joining UW-Madison in 2001, Wright was a Senior Computer Scientist (1997-2001) and Computer Scientist (1990-1997) at Argonne National Laboratory, and Professor of Computer Science at the University of Chicago (2000-2001). He is the past Chair of the Mathematical Optimization Society (formerly the Mathematical Programming Society), the leading professional society in optimization, and a member of the Board of the Society for Industrial and Applied Mathematics (SIAM). Wright is the author or co-author of four widely used books in numerical optimization, including "Primal-Dual Interior-Point Methods" (SIAM, 1997) and "Numerical Optimization" (with J. Nocedal, Second Edition, Springer, 2006). He has also authored over 85 refereed journal papers on optimization theory, algorithms, software, and applications. He is coauthor of widely used interior-point software for linear and quadratic optimization. His recent research includes algorithms, applications, and theory for sparse optimization (including applications in compressed sensing and machine learning).

Sebastian Nowozin (Microsoft Research)
