
Beyond Lazy Training for Over-parameterized Tensor Decomposition
Xiang Wang · Chenwei Wu · Jason Lee · Tengyu Ma · Rong Ge

Wed Dec 09 09:00 PM -- 11:00 PM (PST) @ Poster Session 4 #1247
Over-parametrization is an important technique in training neural networks. In both theory and practice, training a larger network allows the optimization algorithm to avoid bad local optima. In this paper we study a closely related tensor decomposition problem: given an $l$-th order tensor in $(\mathbb{R}^d)^{\otimes l}$ of rank $r$ (where $r\ll d$), can variants of gradient descent find a rank-$m$ decomposition where $m > r$? We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least $m = \Omega(d^{l-1})$, while a variant of gradient descent can find an approximate tensor when $m = O^*(r^{2.5l}\log d)$. Our results show that gradient descent on an over-parametrized objective can go beyond the lazy training regime and exploit certain low-rank structure in the data.
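To illustrate the problem setup, here is a minimal sketch of over-parameterized tensor decomposition: a rank-$m$ symmetric CP decomposition of a synthetic rank-$r$ third-order tensor ($l = 3$) is fit with $m > r$ by plain gradient descent on the squared reconstruction loss. This is not the paper's algorithm or analysis; the dimensions, initialization scale, step size, and iteration count below are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch (not the paper's exact algorithm): fit a rank-m symmetric CP
# decomposition of a synthetic rank-r third-order tensor by plain gradient
# descent on the squared reconstruction loss. All sizes are illustrative.

rng = np.random.default_rng(0)
d, r, m = 8, 2, 6  # ambient dimension, true rank, over-parameterized rank (m > r)

# Ground-truth tensor T = sum_i a_i (x) a_i (x) a_i, with r components
A_true = rng.standard_normal((r, d))
T = np.einsum('ip,iq,is->pqs', A_true, A_true, A_true)

# Over-parameterized components with small random initialization
U = 0.1 * rng.standard_normal((m, d))

lr = 0.003
for _ in range(8000):
    T_hat = np.einsum('ip,iq,is->pqs', U, U, U)
    R = T_hat - T  # residual tensor (symmetric, like T and T_hat)
    # Gradient of 0.5 * ||T_hat - T||_F^2 w.r.t. U; the factor 3 comes from
    # the three symmetric slots of u_i (x) u_i (x) u_i
    G = 3.0 * np.einsum('pqs,iq,is->ip', R, U, U)
    U -= lr * G

err = np.linalg.norm(np.einsum('ip,iq,is->pqs', U, U, U) - T) / np.linalg.norm(T)
print(f"relative reconstruction error: {err:.3f}")
```

Plain gradient descent as written carries no guarantee of reaching a global optimum; the paper's positive result is for a variant of gradient descent with $m = O^*(r^{2.5l}\log d)$ components, and its lower bound shows lazy training would instead require $m = \Omega(d^{l-1})$.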

Author Information

Xiang Wang (Duke University)
Chenwei Wu (Duke University)
Jason Lee (Princeton University)
Tengyu Ma (Stanford University)
Rong Ge (Duke University)
