The Trade-offs of Incremental Linearization Algorithms for Nonsmooth Composite Problems
Krishna Pillutla · Vincent Roulet · Sham Kakade · Zaid Harchaoui

Gauss-Newton methods and their stochastic versions have been widely used in machine learning. Their nonsmooth counterparts, modified Gauss-Newton or prox-linear algorithms, can lead to contrasting outcomes when compared to gradient descent in large-scale settings. We explore the contrasting performance of these two classes of algorithms theoretically, on a stylized statistical example, and experimentally, on learning problems including structured prediction.
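The following is a minimal, illustrative sketch (not the authors' code) of one step of each algorithm class named in the abstract, assuming a composite objective h(c(x)) with a smooth map c and its Jacobian J. The Gauss-Newton step handles the smooth case h(z) = ½‖z‖², while the prox-linear (modified Gauss-Newton) step handles a nonsmooth choice, here h(z) = ‖z‖₁, by solving its convex subproblem via projected gradient ascent on the dual. The toy model, step sizes, and inner solver are illustrative assumptions.

```python
import numpy as np

def gauss_newton_step(c, J, x):
    """One Gauss-Newton step: minimize 0.5 * ||c(x) + J(x) d||^2 over d."""
    d, *_ = np.linalg.lstsq(J(x), -c(x), rcond=None)
    return x + d

def prox_linear_step(c, J, x, t, inner_iters=200):
    """One prox-linear step for h = ||.||_1:
        min_d ||c(x) + J(x) d||_1 + (1 / (2 t)) * ||d||^2.
    Solved via projected gradient ascent on the dual
        max_{||u||_inf <= 1} u^T c(x) - (t / 2) * ||J(x)^T u||^2,
    recovering d = -t * J(x)^T u at the optimum.
    """
    cx, Jx = c(x), J(x)
    u = np.zeros_like(cx)
    # Dual gradient is Lipschitz with constant t * ||Jx||_2^2.
    step = 1.0 / (t * np.linalg.norm(Jx, 2) ** 2 + 1e-12)
    for _ in range(inner_iters):
        u = np.clip(u + step * (cx - t * Jx @ (Jx.T @ u)), -1.0, 1.0)
    return x - t * (Jx.T @ u)

# Toy usage (illustrative): fit a nonlinear model c(x) = tanh(A x) - b.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
c = lambda x: np.tanh(A @ x) - b
J = lambda x: (1 - np.tanh(A @ x) ** 2)[:, None] * A

x = np.zeros(5)
x_gn = gauss_newton_step(c, J, x)          # smooth least-squares step
x_pl = prox_linear_step(c, J, x, t=0.5)    # nonsmooth l1 composite step
print(np.linalg.norm(c(x_gn)), np.abs(c(x_pl)).sum())
```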

Author Information

Krishna Pillutla (Google Research)
Vincent Roulet (UW)
Sham Kakade (Harvard University & Amazon)
Zaid Harchaoui (University of Washington)
