Most machine learning (ML) methods are based on concepts from numerical mathematics (NM), from differential equation solvers and dense matrix factorizations to iterative linear system and eigenvalue solvers. For problems of moderate size, NM routines can be invoked in a black-box fashion. However, for a growing number of real-world ML applications, this separation is insufficient and turns out to be a limit on further progress.
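As a minimal sketch (a synthetic illustration, not part of the workshop material), the black-box pattern can be contrasted with the iterative one: a dense LAPACK-backed solve via NumPy versus a hand-rolled conjugate-gradient loop that touches the matrix only through matrix-vector products:

```python
import numpy as np

# Small symmetric positive-definite system A x = b (synthetic example)
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)   # well-conditioned SPD matrix
b = rng.standard_normal(50)

# Black-box dense route: LAPACK factorization behind np.linalg.solve
x_dense = np.linalg.solve(A, b)

# Iterative route: conjugate gradients, using A only via mat-vec products
x = np.zeros(50)
r = b - A @ x          # initial residual
p = r.copy()           # initial search direction
for _ in range(200):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

assert np.allclose(x, x_dense, atol=1e-8)
```

The iterative route scales to matrices far too large to factorize, but choosing and tuning it correctly is exactly the kind of NM awareness the abstract refers to.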
The increasing complexity of real-world ML problems must be met with layered approaches, in which algorithms are long-running, reliable components rather than standalone tools tuned individually to each task at hand. Constructing and justifying dependable reductions requires at least some awareness of NM issues. With more and more basic learning problems being solved sufficiently well at the prototype level, advancing towards real-world practice requires ensuring three key properties: scalability, reliability, and numerical robustness.
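To illustrate what numerical robustness means in practice (a hypothetical example, not from the workshop), consider the log-sum-exp computation ubiquitous in probabilistic ML: the naive formula overflows in double precision, while the standard max-shift trick stays finite:

```python
import numpy as np

def logsumexp(z):
    """Robust log(sum(exp(z))): shift by the maximum before exponentiating."""
    m = np.max(z)
    return m + np.log(np.sum(np.exp(z - m)))

z = np.array([1000.0, 1000.5, 999.0])    # e.g. unnormalized log-probabilities

with np.errstate(over="ignore"):
    naive = np.log(np.sum(np.exp(z)))    # exp(1000) overflows to inf

robust = logsumexp(z)

assert np.isinf(naive)                   # naive route is useless here
assert abs(robust - 1001.1041) < 1e-3    # shifted route gives a finite answer
```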
By inviting numerical mathematics researchers with an interest in both numerical methodology and real problems in applications close to machine learning, we will probe realistic routes out of the prototyping sandbox. Our aim is to strengthen the dialog between NM, signal processing, and ML. Speakers are briefed to provide specific high-level examples of interest to ML and to point out accessible software. We will initiate discussions about how best to bridge the gaps between ML requirements and NM interfaces and terminology.
The workshop will reinforce the community's awakening attention to critical issues of numerical scalability and robustness in algorithm design and implementation. Further progress on most real-world ML problems is conditional on good numerical practice, an understanding of basic robustness and reliability issues, and a wider, better-informed integration of good numerical software. As most real-world applications come with reliability and scalability requirements that are by and large ignored by current ML methodology, the impact of pointing out tractable ways to improve is substantial.
Target audience:
Our workshop is targeted at NIPS practitioners, but is of interest to numerical linear algebra researchers as well.
Author Information
Matthias Seeger (Amazon)
Suvrit Sra (MIT)
Suvrit Sra is a faculty member in the EECS department at MIT, where he is also a core faculty member of IDSS, LIDS, the MIT ML Group, and the Statistics and Data Science Center. His research spans topics in optimization, matrix theory, differential geometry, and probability theory, which he connects with machine learning; a key focus of his research is the theme "Optimization for Machine Learning" (http://optml.org).
More from the Same Authors

2020 Poster: SGD with shuffling: optimal rates without component convexity and large epoch requirements »
Kwangjun Ahn · Chulhee Yun · Suvrit Sra 
2020 Spotlight: SGD with shuffling: optimal rates without component convexity and large epoch requirements »
Suvrit Sra · Chulhee Yun · Kwangjun Ahn 
2020 Poster: Why are Adaptive Methods Good for Attention Models? »
Jingzhao Zhang · Sai Praneeth Karimireddy · Andreas Veit · Seungyeon Kim · Sashank Reddi · Sanjiv Kumar · Suvrit Sra 
2020 Poster: Towards Minimax Optimal Reinforcement Learning in Factored Markov Decision Processes »
Yi Tian · Jian Qian · Suvrit Sra 
2020 Spotlight: Towards Minimax Optimal Reinforcement Learning in Factored Markov Decision Processes »
Suvrit Sra · Jian Qian · Yi Tian 
2019 Poster: Flexible Modeling of Diversity with Strongly Log-Concave Distributions »
Joshua Robinson · Suvrit Sra · Stefanie Jegelka 
2019 Poster: Are deep ResNets provably better than linear predictors? »
Chulhee Yun · Suvrit Sra · Ali Jadbabaie 
2019 Poster: Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning »
Valerio Perrone · Huibin Shen · Matthias Seeger · Cedric Archambeau · Rodolphe Jenatton 
2019 Poster: Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity »
Chulhee Yun · Suvrit Sra · Ali Jadbabaie 
2019 Spotlight: Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity »
Chulhee Yun · Suvrit Sra · Ali Jadbabaie 
2018 Poster: Direct Runge-Kutta Discretization Achieves Acceleration »
Jingzhao Zhang · Aryan Mokhtari · Suvrit Sra · Ali Jadbabaie 
2018 Spotlight: Direct Runge-Kutta Discretization Achieves Acceleration »
Jingzhao Zhang · Aryan Mokhtari · Suvrit Sra · Ali Jadbabaie 
2018 Poster: Exponentiated Strongly Rayleigh Distributions »
Zelda Mariet · Suvrit Sra · Stefanie Jegelka 
2018 Tutorial: Negative Dependence, Stable Polynomials, and All That »
Suvrit Sra · Stefanie Jegelka 
2017 Workshop: OPT 2017: Optimization for Machine Learning »
Suvrit Sra · Sashank J. Reddi · Alekh Agarwal · Benjamin Recht 
2017 Poster: Elementary Symmetric Polynomials for Optimal Experimental Design »
Zelda Mariet · Suvrit Sra 
2017 Poster: Polynomial time algorithms for dual volume sampling »
Chengtao Li · Stefanie Jegelka · Suvrit Sra 
2016 Workshop: OPT 2016: Optimization for Machine Learning »
Suvrit Sra · Francis Bach · Sashank J. Reddi · Niao He 
2016 Poster: Fast Mixing Markov Chains for Strongly Rayleigh Measures, DPPs, and Constrained Sampling »
Chengtao Li · Suvrit Sra · Stefanie Jegelka 
2016 Poster: Kronecker Determinantal Point Processes »
Zelda Mariet · Suvrit Sra 
2016 Poster: Proximal Stochastic Methods for Nonsmooth Nonconvex Finite-Sum Optimization »
Sashank J. Reddi · Suvrit Sra · Barnabas Poczos · Alexander Smola 
2016 Poster: Riemannian SVRG: Fast Stochastic Optimization on Riemannian Manifolds »
Hongyi Zhang · Sashank J. Reddi · Suvrit Sra 
2016 Tutorial: Large-Scale Optimization: Beyond Stochastic Gradient Descent and Convexity »
Suvrit Sra · Francis Bach 
2015 Workshop: Optimization for Machine Learning (OPT2015) »
Suvrit Sra · Alekh Agarwal · Leon Bottou · Sashank J. Reddi 
2015 Poster: Matrix Manifold Optimization for Gaussian Mixtures »
Reshad Hosseini · Suvrit Sra 
2015 Poster: On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants »
Sashank J. Reddi · Ahmed Hefny · Suvrit Sra · Barnabas Poczos · Alexander Smola 
2014 Workshop: OPT2014: Optimization for Machine Learning »
Zaid Harchaoui · Suvrit Sra · Alekh Agarwal · Martin Jaggi · Miro Dudik · Aaditya Ramdas · Jean B Lasserre · Yoshua Bengio · Amir Beck 
2014 Poster: Efficient Structured Matrix Rank Minimization »
Adams Wei Yu · Wanli Ma · Yaoliang Yu · Jaime Carbonell · Suvrit Sra 
2013 Workshop: OPT2013: Optimization for Machine Learning »
Suvrit Sra · Alekh Agarwal 
2013 Poster: Geometric optimisation on positive definite matrices for elliptically contoured distributions »
Suvrit Sra · Reshad Hosseini 
2013 Poster: Reflection methods for user-friendly submodular optimization »
Stefanie Jegelka · Francis Bach · Suvrit Sra 
2012 Workshop: Optimization for Machine Learning »
Suvrit Sra · Alekh Agarwal 
2012 Poster: A new metric on the manifold of kernel matrices with application to matrix geometric means »
Suvrit Sra 
2012 Poster: Scalable nonconvex inexact proximal splitting »
Suvrit Sra 
2011 Workshop: Optimization for Machine Learning »
Suvrit Sra · Stephen Wright · Sebastian Nowozin 
2010 Workshop: Optimization for Machine Learning »
Suvrit Sra · Sebastian Nowozin · Stephen Wright 
2010 Session: Oral Session 6 »
Matthias Seeger 
2009 Workshop: Optimization for Machine Learning »
Sebastian Nowozin · Suvrit Sra · S.V.N. Vishwanathan · Stephen Wright 
2009 Poster: Speeding up Magnetic Resonance Image Acquisition by Bayesian Multi-Slice Adaptive Compressed Sensing »
Matthias Seeger 
2008 Workshop: Optimization for Machine Learning »
Suvrit Sra · Sebastian Nowozin · Vishwanathan S V N 
2008 Poster: Bayesian Experimental Design of Magnetic Resonance Imaging Sequences »
Matthias Seeger · Hannes Nickisch · Rolf Pohmann · Bernhard Schölkopf 
2008 Spotlight: Bayesian Experimental Design of Magnetic Resonance Imaging Sequences »
Matthias Seeger · Hannes Nickisch · Rolf Pohmann · Bernhard Schölkopf 
2008 Poster: Local Gaussian Process Regression for Real Time Online Model Learning »
Duy Nguyen-Tuong · Matthias Seeger · Jan Peters 
2007 Workshop: Approximate Bayesian Inference in Continuous/Hybrid Models »
Matthias Seeger · David Barber · Neil D Lawrence · Onno Zoeter 
2007 Oral: Bayesian Inference for Spiking Neuron Models with a Sparsity Prior »
Sebastian Gerwinn · Jakob H Macke · Matthias Seeger · Matthias Bethge 
2007 Poster: Bayesian Inference for Spiking Neuron Models with a Sparsity Prior »
Sebastian Gerwinn · Jakob H Macke · Matthias Seeger · Matthias Bethge 
2006 Poster: Cross-Validation Optimization for Large Scale Hierarchical Classification Kernel Methods »
Matthias Seeger