We consider the parametric learning problem, where the objective of the learner is determined by a parametric loss function. Under empirical risk minimization, possibly with regularization, the inferred parameter vector is biased toward the training samples. In practice, this bias is measured by cross validation, where the data set is partitioned into a training set used for training and a validation set, held out of training to measure out-of-sample performance. A classical strategy is leave-one-out cross validation (LOOCV): one sample is left out for validation, the learner is trained on the remaining samples, and this process is repeated for every sample. LOOCV is rarely used in practice due to its high computational cost. In this paper, we first develop a computationally efficient approximate LOOCV (ALOOCV) and provide theoretical guarantees for its performance. We then use ALOOCV to derive an optimization algorithm for tuning the regularizer in the empirical risk minimization framework. Our numerical experiments illustrate the accuracy and efficiency of ALOOCV, as well as of our proposed framework for optimizing the regularizer.
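To make the computational gap concrete, the sketch below contrasts brute-force LOOCV with the classical closed-form LOOCV shortcut for ridge regression, a special case where the leave-one-out residual can be computed exactly from a single fit via the hat matrix. This is a minimal illustration of the idea of avoiding n retrainings, not the paper's ALOOCV algorithm (which handles general smooth regularized losses); the function names and the toy data are our own.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Solve the regularized least-squares problem (X'X + lam*I) w = X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def loocv_brute(X, y, lam):
    """Brute-force LOOCV: refit the model n times, once per held-out sample."""
    n = X.shape[0]
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        w = ridge_fit(X[mask], y[mask], lam)
        errs[i] = (y[i] - X[i] @ w) ** 2
    return errs.mean()

def loocv_closed_form(X, y, lam):
    """Exact LOOCV for ridge from a single fit:
    e_loo_i = (y_i - yhat_i) / (1 - H_ii), with H = X (X'X + lam*I)^{-1} X'."""
    n, d = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
    resid = y - H @ y
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

# Toy data (illustrative only): both estimates agree, but the closed form
# costs one fit instead of n fits.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(20)
brute = loocv_brute(X, y, lam=0.5)
closed = loocv_closed_form(X, y, lam=0.5)
```

For ridge, the identity follows from the Sherman–Morrison formula applied to the leave-one-out Gram matrix; ALOOCV can be viewed as extending this kind of single-fit approximation beyond quadratic losses.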
Author Information
Ahmad Beirami (Harvard University & MIT)
Ahmad Beirami received the B.Sc. degree in electrical engineering from Sharif University of Technology, Tehran, Iran, in 2007, and the M.Sc. and Ph.D. degrees in electrical and computer engineering from the Georgia Institute of Technology, Atlanta, GA, USA, in 2011 and 2014, respectively. He is currently a postdoctoral fellow with the School of Engineering and Applied Sciences at Harvard University, and with the Electrical Engineering and Computer Science Department at MIT. Previously, he was a postdoctoral associate at Duke University. His research interests broadly include information theory, statistics, machine learning, and networks. He is the recipient of the 2013-2014 School of ECE Graduate Research Excellence Award and the 2015 Sigma Xi Best Ph.D. Thesis Award from the Georgia Institute of Technology.
Meisam Razaviyayn (University of Southern California)
Shahin Shahrampour (Harvard University)
Vahid Tarokh (Harvard University)
More from the Same Authors
- 2021 : Private Federated Learning Without a Trusted Server: Optimal Algorithms for Convex Losses »
  Andrew Lowy · Meisam Razaviyayn
- 2022 : Policy gradient finds global optimum of nearly linear-quadratic control systems »
  Yinbin Han · Meisam Razaviyayn · Renyuan Xu
- 2022 : Private Stochastic Optimization With Large Worst-Case Lipschitz Parameter: Optimal Rates for (Non-Smooth) Convex Losses & Extension to Non-Convex Losses »
  Andrew Lowy · Meisam Razaviyayn
- 2022 : A Stochastic Optimization Framework for Fair Risk Minimization »
  Andrew Lowy · Sina Baharlouei · Rakesh Pavan · Meisam Razaviyayn · Ahmad Beirami
- 2022 : Improving Adversarial Robustness via Joint Classification and Multiple Explicit Detection Classes »
  Sina Baharlouei · Fatemeh Sheikholeslami · Meisam Razaviyayn · J. Zico Kolter
- 2022 : Stochastic Differentially Private and Fair Learning »
  Andrew Lowy · Devansh Gupta · Meisam Razaviyayn
- 2020 Poster: Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems »
  Songtao Lu · Meisam Razaviyayn · Bo Yang · Kejun Huang · Mingyi Hong
- 2020 Spotlight: Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems »
  Songtao Lu · Meisam Razaviyayn · Bo Yang · Kejun Huang · Mingyi Hong
- 2019 Poster: Solving a Class of Non-Convex Min-Max Games Using Iterative First Order Methods »
  Maher Nouiehed · Maziar Sanjabi · Tianjian Huang · Jason Lee · Meisam Razaviyayn
- 2018 Poster: On the Convergence and Robustness of Training GANs with Regularized Optimal Transport »
  Maziar Sanjabi · Jimmy Ba · Meisam Razaviyayn · Jason Lee
- 2013 Poster: Online Learning of Dynamic Parameters in Social Networks »
  Shahin Shahrampour · Sasha Rakhlin · Ali Jadbabaie