We address the problem of zero-order optimization of a strongly convex function. The goal is to find the minimizer of the function by a sequential exploration of its function values, under measurement noise. We study the impact of higher order smoothness properties of the function on the optimization error and on the online regret. To solve this problem we consider a randomized approximation of the projected gradient descent algorithm. The gradient is estimated by a randomized procedure involving two function evaluations and a smoothing kernel. We derive upper bounds for this algorithm both in the constrained and unconstrained settings and prove minimax lower bounds for any sequential search method. Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters. Based on this algorithm, we also propose an estimator of the minimum value of the function achieving almost sharp oracle behavior. We compare our results with the state-of-the-art, highlighting a number of key improvements.
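The randomized gradient estimator described above can be illustrated with a short sketch. This is not the paper's exact algorithm, only a minimal illustration under assumed choices: a random direction drawn uniformly on the unit sphere, a scalar perturbation `r` uniform on `[-1, 1]`, the kernel `K(r) = 3r` (which makes the leading term of the estimate unbiased for the gradient, since `E[3r^2] = 1` and `E[zeta zeta^T] = I/d`), and a `1/t` step size as commonly used for strongly convex objectives. All function and parameter names are hypothetical.

```python
import numpy as np

def two_point_gradient_estimate(f, x, h, rng, kernel=lambda r: 3.0 * r):
    """One zero-order gradient estimate of f at x (illustrative sketch).

    Uses two (possibly noisy) function evaluations along a random
    direction zeta on the unit sphere, scaled by a smoothing kernel
    K(r).  K(r) = 3r is an assumed simple choice; the paper's
    higher-order kernels reduce bias when f is smoother.
    """
    d = x.size
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)       # direction uniform on the unit sphere
    r = rng.uniform(-1.0, 1.0)         # scalar perturbation magnitude
    y_plus = f(x + h * r * zeta)       # function evaluation 1
    y_minus = f(x - h * r * zeta)      # function evaluation 2
    return (d / (2.0 * h)) * (y_plus - y_minus) * kernel(r) * zeta

def projected_zero_order_descent(f, x0, steps, h, eta, proj=lambda x: x, seed=0):
    """Projected gradient descent driven by the randomized estimator."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for t in range(1, steps + 1):
        g = two_point_gradient_estimate(f, x, h, rng)
        x = proj(x - (eta / t) * g)    # 1/t step size (strong convexity)
    return x
```

For example, running `projected_zero_order_descent` on a noiseless quadratic `f(x) = ||x||^2` drives the iterate toward the minimizer at the origin using only function values, never an explicit gradient.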
Author Information
Arya Akhavan (ENSAE - IIT)
Massimiliano Pontil (IIT & UCL)
Alexandre Tsybakov (CREST, ENSAE, Institut Polytechnique de Paris)
More from the Same Authors
- 2023 Poster: Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-Start
  Riccardo Grazzi · Massimiliano Pontil · Saverio Salzo
- 2022 Spotlight: A gradient estimator via L1-randomization for online zero-order optimization with two point feedback
  Arya Akhavan · Evgenii Chzhen · Massimiliano Pontil · Alexandre Tsybakov
- 2022 Poster: A gradient estimator via L1-randomization for online zero-order optimization with two point feedback
  Arya Akhavan · Evgenii Chzhen · Massimiliano Pontil · Alexandre Tsybakov
- 2022 Poster: Group Meritocratic Fairness in Linear Contextual Bandits
  Riccardo Grazzi · Arya Akhavan · John IF Falk · Leonardo Cella · Massimiliano Pontil
- 2021 Poster: Distributed Zero-Order Optimization under Adversarial Noise
  Arya Akhavan · Massimiliano Pontil · Alexandre Tsybakov
- 2020 Poster: The Advantage of Conditional Meta-Learning for Biased Regularization and Fine Tuning
  Giulia Denevi · Massimiliano Pontil · Carlo Ciliberto
- 2020 Poster: Estimating weighted areas under the ROC curve
  Andreas Maurer · Massimiliano Pontil
- 2019 Poster: Online-Within-Online Meta-Learning
  Giulia Denevi · Dimitris Stamos · Carlo Ciliberto · Massimiliano Pontil
- 2019 Poster: Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm
  Giulia Luise · Saverio Salzo · Massimiliano Pontil · Carlo Ciliberto
- 2019 Spotlight: Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm
  Giulia Luise · Saverio Salzo · Massimiliano Pontil · Carlo Ciliberto
- 2018 Poster: Bilevel learning of the Group Lasso structure
  Jordan Frecon · Saverio Salzo · Massimiliano Pontil
- 2018 Poster: Learning To Learn Around A Common Mean
  Giulia Denevi · Carlo Ciliberto · Dimitris Stamos · Massimiliano Pontil
- 2018 Spotlight: Bilevel learning of the Group Lasso structure
  Jordan Frecon · Saverio Salzo · Massimiliano Pontil
- 2017: An Efficient Method to Impose Fairness in Linear Models
  Massimiliano Pontil · John Shawe-Taylor
- 2017 Workshop: Workshop on Prioritising Online Content
  John Shawe-Taylor · Massimiliano Pontil · Nicolò Cesa-Bianchi · Emine Yilmaz · Chris Watkins · Sebastian Riedel · Marko Grobelnik
- 2017 Poster: Consistent Multitask Learning with Nonlinear Output Relations
  Carlo Ciliberto · Alessandro Rudi · Lorenzo Rosasco · Massimiliano Pontil
- 2016 Poster: Mistake Bounds for Binary Matrix Completion
  Mark Herbster · Stephen Pasteris · Massimiliano Pontil
- 2015: The Benefit of Multitask Representation Learning
  Massimiliano Pontil
- 2014 Poster: Spectral k-Support Norm Regularization
  Andrew McDonald · Massimiliano Pontil · Dimitris Stamos
- 2013 Workshop: New Directions in Transfer and Multi-Task: Learning Across Domains and Tasks
  Urun Dogan · Marius Kloft · Tatiana Tommasi · Francesco Orabona · Massimiliano Pontil · Sinno Jialin Pan · Shai Ben-David · Arthur Gretton · Fei Sha · Marco Signoretto · Rajhans Samdani · Yun-Qian Miao · Mohammad Gheshlaghi azar · Ruth Urner · Christoph Lampert · Jonathan How
- 2013 Poster: A New Convex Relaxation for Tensor Completion
  Bernardino Romera-Paredes · Massimiliano Pontil
- 2012 Poster: Optimal kernel choice for large-scale two-sample tests
  Arthur Gretton · Bharath Sriperumbudur · Dino Sejdinovic · Heiko Strathmann · Sivaraman Balakrishnan · Massimiliano Pontil · Kenji Fukumizu
- 2010 Spotlight: A Family of Penalty Functions for Structured Sparsity
  Charles A Micchelli · Jean M Morales · Massimiliano Pontil
- 2010 Poster: A Family of Penalty Functions for Structured Sparsity
  Charles A Micchelli · Jean M Morales · Massimiliano Pontil
- 2008 Poster: Fast Prediction on a Tree
  Mark Herbster · Massimiliano Pontil · Sergio Rojas Galeano
- 2008 Oral: Fast Prediction on a Tree
  Mark Herbster · Massimiliano Pontil · Sergio Rojas Galeano
- 2008 Poster: On-Line Prediction on Large Diameter Graphs
  Mark Herbster · Massimiliano Pontil · Guy Lever
- 2007 Spotlight: A Spectral Regularization Framework for Multi-Task Structure Learning
  Andreas Argyriou · Charles A. Micchelli · Massimiliano Pontil · Yiming Ying
- 2007 Poster: A Spectral Regularization Framework for Multi-Task Structure Learning
  Andreas Argyriou · Charles A. Micchelli · Massimiliano Pontil · Yiming Ying
- 2006 Poster: Prediction on a Graph with a Perceptron
  Mark Herbster · Massimiliano Pontil
- 2006 Spotlight: Prediction on a Graph with a Perceptron
  Mark Herbster · Massimiliano Pontil
- 2006 Poster: Multi-Task Feature Learning
  Andreas Argyriou · Theos Evgeniou · Massimiliano Pontil