Poster
Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology
Quynh Nguyen · Marco Mondelli

Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #461

Recent works have shown that gradient descent can find a global minimum for over-parameterized neural networks where the widths of all the hidden layers scale polynomially with N (N being the number of training samples). In this paper, we prove that, for deep networks, a single layer of width N following the input layer suffices to ensure a similar guarantee. In particular, all the remaining layers are allowed to have constant widths, and they form a pyramidal topology. We show an application of our result to the widely used LeCun initialization and obtain an over-parameterization requirement for the single wide layer of order N^2.
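As an illustration of the architecture described in the abstract, the following is a minimal sketch (not taken from the paper): an MLP whose first hidden layer has width N and whose remaining hidden layers have non-increasing, constant widths (a pyramidal topology), with LeCun-style initialization (variance 1/fan_in). The function names, the specific widths, and the choice of tanh activation are illustrative assumptions, not the authors' exact setup.

import numpy as np

def build_pyramidal_net(d_in, n_train, hidden_widths, d_out=1, rng=None):
    """Weights for an MLP with one wide layer of width n_train (the number
    of training samples) followed by non-increasing hidden widths.
    Widths and activation are illustrative, not taken from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    assert all(w1 >= w2 for w1, w2 in zip(hidden_widths, hidden_widths[1:])), \
        "pyramidal topology: hidden widths must be non-increasing"
    widths = [d_in, n_train, *hidden_widths, d_out]
    # LeCun-style initialization: zero mean, variance 1 / fan_in.
    return [rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_out, fan_in))
            for fan_in, fan_out in zip(widths[:-1], widths[1:])]

def forward(weights, x):
    """Forward pass with tanh on all hidden layers and a linear output."""
    h = x
    for W in weights[:-1]:
        h = np.tanh(W @ h)
    return weights[-1] @ h

# Example: N = 100 training samples, input dimension 20, one wide layer of
# width N, then constant-width pyramidal layers (hypothetical sizes).
N, d_in = 100, 20
weights = build_pyramidal_net(d_in, N, hidden_widths=[50, 50, 30])
x = np.random.default_rng(0).normal(size=(d_in,))
print(forward(weights, x))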

Author Information

Quynh Nguyen (MPI-MIS)
Marco Mondelli (IST Austria)
