Spotlight
Splitting Steepest Descent for Growing Neural Architectures
Lemeng Wu · Dilin Wang · Qiang Liu

Thu Dec 12 10:35 AM -- 10:40 AM (PST) @ West Ballroom A + B
We develop a progressive training approach for neural networks that adaptively grows the network structure by splitting existing neurons into multiple offspring. By leveraging a functional steepest descent idea, we derive a simple criterion for deciding the best subset of neurons to split and a "splitting gradient" for optimally updating the offspring. Theoretically, our splitting strategy is a second-order functional steepest descent for escaping saddle points in an $L^\infty$-Wasserstein metric space, on which standard parametric gradient descent is a first-order steepest descent. Our method provides a new, computationally efficient approach for optimizing neural network structures, especially for learning lightweight neural architectures in resource-constrained settings.
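The splitting criterion described above admits a compact implementation. Below is a minimal NumPy sketch for a one-hidden-layer tanh network under squared loss; this specific setting, the function names, and the step size eps are illustrative assumptions, not the authors' reference code. Each neuron's splitting matrix S_i = E_x[∂L/∂σ_i · tanh''(z_i) · x xᵀ] is estimated from a batch, and the neuron whose minimum eigenvalue is most negative is split into two offspring θ_i ± ε v_min, each inheriting half of the output weight.

import numpy as np

def splitting_matrices(Theta, w, X, Y):
    """Per-neuron splitting matrices for f(x) = sum_i w_i * tanh(theta_i . x)
    under squared loss: S_i = E_x[ dL/dsigma_i * tanh''(z_i) * x x^T ]."""
    Z = X @ Theta.T                      # (n, m) pre-activations z_i = theta_i . x
    F = np.tanh(Z) @ w                   # (n,) network outputs
    dL_dF = 2.0 * (F - Y)                # (n,) gradient of squared loss wrt f(x)
    dL_dH = dL_dF[:, None] * w[None, :]  # (n, m) gradient wrt each neuron output
    d2act = -2.0 * np.tanh(Z) * (1.0 - np.tanh(Z) ** 2)  # tanh''(z)
    coef = dL_dH * d2act                 # (n, m) per-sample scalar weights
    # S_i = mean over samples of coef[n, i] * x_n x_n^T
    return np.einsum('ni,nd,ne->ide', coef, X, X) / X.shape[0]  # (m, d, d)

def split_most_negative(Theta, w, X, Y, eps=0.1):
    """Split the neuron whose splitting matrix has the most negative minimum
    eigenvalue into two offspring theta_i +/- eps * v_min (eps is illustrative)."""
    S = splitting_matrices(Theta, w, X, Y)
    eigvals, eigvecs = np.linalg.eigh(S)       # batched symmetric eigendecomposition
    i = int(np.argmin(eigvals[:, 0]))          # neuron with most negative lambda_min
    v = eigvecs[i, :, 0]                       # its minimum eigenvector
    child_a, child_b = Theta[i] + eps * v, Theta[i] - eps * v
    Theta_new = np.vstack([np.delete(Theta, i, axis=0), child_a, child_b])
    w_new = np.append(np.delete(w, i), [w[i] / 2, w[i] / 2])  # halve output weight
    return Theta_new, w_new, eigvals[i, 0]

# Toy usage: grow a 3-neuron network by one split on synthetic regression data.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
Y = np.sin(X @ rng.normal(size=5))
Theta, w = rng.normal(size=(3, 5)), rng.normal(size=3)
Theta, w, lam = split_most_negative(Theta, w, X, Y)
print(Theta.shape, w.shape, lam)  # (4, 5) (4,) <most negative eigenvalue>

Splitting only when the minimum eigenvalue is negative matches the abstract's saddle-point interpretation: a negative eigenvalue signals a descent direction in neuron space that plain parametric gradient descent cannot exploit.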

Author Information

Lemeng Wu (UT Austin)
Dilin Wang (UT Austin)
Qiang Liu (UT Austin)
