

Poster

Connecting Optimization and Regularization Paths

Arun Suggala · Adarsh Prasad · Pradeep Ravikumar

Room 517 AB #113

Keywords: [ Learning Theory ]


Abstract:

We study the implicit regularization properties of optimization techniques by explicitly connecting their optimization paths to the regularization paths of "corresponding" regularized problems. This surprising connection shows that iterates of optimization techniques such as gradient descent and mirror descent are pointwise close to solutions of appropriately regularized objectives. While such a tight connection between optimization and regularization is of independent intellectual interest, it also has important implications for machine learning: we can port results from regularized estimators to optimization, and vice versa. We investigate one key consequence, which borrows from the well-studied analysis of regularized estimators to obtain tight excess risk bounds for the iterates generated by optimization techniques.
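As a concrete illustration of the kind of correspondence the abstract describes, consider least squares, where a folklore version of this connection says the gradient descent iterate at step t roughly matches the ridge solution at penalty lambda ~ 1/(eta * t). The sketch below checks this numerically; the problem instance, step size, and lambda(t) mapping are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 200, 5
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)

    eta = 1e-2  # gradient descent step size (illustrative choice)

    # Gradient descent on the *unregularized* least-squares objective
    w = np.zeros(d)
    for t in range(1, 5001):
        grad = X.T @ (X @ w - y) / n
        w -= eta * grad
        if t % 1000 == 0:
            # Hypothesized matching point on the ridge regularization path:
            # penalty decays like 1/(eta * t) as optimization proceeds
            lam = 1.0 / (eta * t)
            w_ridge = np.linalg.solve(X.T @ X / n + lam * np.eye(d),
                                      X.T @ y / n)
            print(f"step {t:5d}  ||w_gd - w_ridge|| = "
                  f"{np.linalg.norm(w - w_ridge):.4f}")

Running this, the distance between the gradient descent iterate and the corresponding ridge solution stays small and shrinks as both paths approach the unregularized least-squares solution, which is the pointwise closeness the abstract refers to (here only for the special case of least squares and ridge).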
