Stochastic optimization algorithms with variance reduction have proven successful for minimizing large finite sums of functions. Unfortunately, these techniques are unable to deal with stochastic perturbations of input data, induced for example by data augmentation. In such cases, the objective is no longer a finite sum, and the main candidate for optimization is the stochastic gradient descent method (SGD). In this paper, we introduce a variance reduction approach for these settings when the objective is composite and strongly convex. The resulting convergence rate is that of SGD with a typically much smaller constant factor, which depends only on the variance of gradient estimates arising from the perturbations of a single example.
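To make the setting concrete, the toy Python sketch below illustrates this regime on l2-regularized least squares, where each access to an example draws a fresh random input perturbation standing in for data augmentation. It uses a SAGA-style gradient memory as a stand-in for the paper's own algorithm, which differs in its details; all names and constants here (`grad_i`, `sigma`, the step size) are illustrative assumptions, and the proximal step for a composite term is omitted for brevity.

```python
import numpy as np

# Illustrative setup (assumptions, not from the paper): l2-regularized least
# squares where each access to example i applies a fresh random perturbation
# of its input, mimicking data augmentation.
rng = np.random.default_rng(0)
n, d = 100, 10
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
mu = 0.1      # strong-convexity (regularization) parameter
sigma = 0.05  # scale of the per-example input perturbation

def grad_i(w, i):
    """Stochastic gradient of example i under a fresh input perturbation."""
    x_pert = X[i] + sigma * rng.normal(size=d)  # the "augmented" input
    return (x_pert @ w - y[i]) * x_pert + mu * w

# SAGA-style variance-reduced loop adapted to perturbed examples: the memory
# cancels the across-example variance, leaving only the noise from the
# perturbation on the sampled example.
w = np.zeros(d)
memory = np.zeros((n, d))       # last stored gradient per example
mem_avg = memory.mean(axis=0)   # running average of the memory table
step = 0.01
for t in range(50_000):
    i = rng.integers(n)
    g = grad_i(w, i)
    w -= step * (g - memory[i] + mem_avg)  # variance-reduced estimate
    mem_avg += (g - memory[i]) / n         # keep the average in sync
    memory[i] = g

obj = 0.5 * np.mean((X @ w - y) ** 2) + 0.5 * mu * w @ w
print(f"final (unperturbed) objective: {obj:.4f}")
```

Because `memory[i]` was computed under a different random perturbation than the fresh gradient `g`, the residual variance of each update comes only from the single-example perturbation, which is the regime behind the constant factor described in the abstract.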
Author Information
Alberto Bietti (Inria)
Julien Mairal (Inria)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Spotlight: Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure
  Wed. Dec 6th, 11:25 -- 11:30 PM, Room Hall C
More from the Same Authors
- 2022 Poster: Non-Convex Bilevel Games with Critical Point Selection Maps
  Michael Arbel · Julien Mairal
- 2021 Spotlight: Beyond Tikhonov: faster learning with self-concordant losses, via iterative regularization
  Gaspard Beugnot · Julien Mairal · Alessandro Rudi
- 2021 Poster: On the Sample Complexity of Learning under Geometric Stability
  Alberto Bietti · Luca Venturi · Joan Bruna
- 2021 Poster: A Trainable Spectral-Spatial Sparse Coding Model for Hyperspectral Image Restoration
  Theo Bodrito · Alexandre Zouaoui · Jocelyn Chanussot · Julien Mairal
- 2021 Poster: Beyond Tikhonov: faster learning with self-concordant losses, via iterative regularization
  Gaspard Beugnot · Julien Mairal · Alessandro Rudi
- 2021 Poster: On the Universality of Graph Neural Networks on Large Random Graphs
  Nicolas Keriven · Alberto Bietti · Samuel Vaiter
- 2020 Poster: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
  Mathilde Caron · Ishan Misra · Julien Mairal · Priya Goyal · Piotr Bojanowski · Armand Joulin
- 2020 Poster: A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding
  Bruno Lecouat · Jean Ponce · Julien Mairal
- 2020 Discussion Panel: Hugo Larochelle, Finale Doshi-Velez, Devi Parikh, Marc Deisenroth, Julien Mairal, Katja Hofmann, Phillip Isola, and Michael Bowling
  Hugo Larochelle · Finale Doshi-Velez · Marc Deisenroth · Devi Parikh · Julien Mairal · Katja Hofmann · Phillip Isola · Michael Bowling
- 2019 Poster: On the Inductive Bias of Neural Tangent Kernels
  Alberto Bietti · Julien Mairal
- 2019 Poster: Recurrent Kernel Networks
  Dexiong Chen · Laurent Jacob · Julien Mairal
- 2019 Poster: A Generic Acceleration Framework for Stochastic Composite Optimization
  Andrei Kulunchakov · Julien Mairal
- 2018 Poster: Unsupervised Learning of Artistic Styles with Archetypal Style Analysis
  Daan Wynen · Cordelia Schmid · Julien Mairal
- 2017 Poster: Learning Neural Representations of Human Cognition across Many fMRI Studies
  Arthur Mensch · Julien Mairal · Danilo Bzdok · Bertrand Thirion · Gael Varoquaux
- 2017 Poster: Invariance and Stability of Deep Convolutional Representations
  Alberto Bietti · Julien Mairal
- 2016 Poster: End-to-End Kernel Learning with Supervised Convolutional Kernel Networks
  Julien Mairal
- 2015 Poster: A Universal Catalyst for First-Order Optimization
  Hongzhou Lin · Julien Mairal · Zaid Harchaoui
- 2014 Poster: Convolutional Kernel Networks
  Julien Mairal · Piotr Koniusz · Zaid Harchaoui · Cordelia Schmid
- 2014 Spotlight: Convolutional Kernel Networks
  Julien Mairal · Piotr Koniusz · Zaid Harchaoui · Cordelia Schmid
- 2013 Poster: Stochastic Majorization-Minimization Algorithms for Large-Scale Optimization
  Julien Mairal
- 2010 Poster: Network Flow Algorithms for Structured Sparsity
  Julien Mairal · Rodolphe Jenatton · Guillaume R Obozinski · Francis Bach
- 2008 Poster: SDL: Supervised Dictionary Learning
  Julien Mairal · Francis Bach · Jean A Ponce · Guillermo Sapiro · Andrew Zisserman