

Multiple talks in Workshop: OPT2020: Optimization for Machine Learning

Contributed talks in Session 3 (Zoom)

Mark Schmidt · Zhan Gao · Wenjie Li · Preetum Nakkiran · Denny Wu · Chengrun Yang


Abstract:

Join us to hear some new, exciting work at the intersection of optimization and ML. Come and ask questions and join the discussion.

Speakers:
Zhan Gao, "Incremental Greedy BFGS: An Incremental Quasi-Newton Method with Explicit Superlinear Rate"
Wenjie Li, "Variance Reduction on Adaptive Stochastic Mirror Descent"
Preetum Nakkiran, "Learning Rate Annealing Can Provably Help Generalization, Even for Convex Problems"
Denny Wu, "When Does Preconditioning Help or Hurt Generalization?"
Chengrun Yang, "TenIPS: Inverse Propensity Sampling for Tensor Completion"

A video in which the speakers discuss their papers in detail is available on the NeurIPS website.