Poster
Implicit Bias of Gradient Descent on Linear Convolutional Networks
Suriya Gunasekar · Jason Lee · Daniel Soudry · Nati Srebro
Room 210 #82
Keywords: [ Non-Convex Optimization ] [ Optimization for Deep Networks ] [ CNN Architectures ]
Abstract:
We show that gradient descent on full-width linear convolutional networks of depth $L$ converges to a linear predictor related to the $\ell_{2/L}$ bridge penalty in the frequency domain. This is in contrast to linear fully connected networks, where gradient descent converges to the hard-margin linear SVM solution, regardless of depth.
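The penalty mentioned above can be made concrete with a small numerical sketch: the $\ell_{2/L}$ bridge penalty of a linear predictor $w$, evaluated on the magnitudes of its discrete Fourier transform. The function name, normalization, and example vectors below are our own illustrative choices, not code from the paper; note that at depth $L=1$ the exponent is $2$, recovering (via Parseval's identity, up to the unnormalized-FFT factor $n$) the familiar $\ell_2$ bias, while larger $L$ gives a sparsity-inducing exponent $2/L < 1$ in the frequency domain.

```python
import numpy as np

def bridge_penalty_frequency(w, L):
    """Illustrative l_{2/L} bridge penalty of a linear predictor w,
    computed on the magnitudes of its (unnormalized) DFT.

    p = 2/L: p = 2 at depth L = 1 (an l2-type bias), and p < 1 for
    L > 2, which promotes sparsity in the frequency domain."""
    w_hat = np.fft.fft(w)              # frequency-domain representation of w
    p = 2.0 / L                        # bridge exponent
    return float(np.sum(np.abs(w_hat) ** p))

w = np.array([1.0, 1.0, 0.0, 0.0])

# Depth 1: sum of |w_hat|^2 equals n * ||w||_2^2 by Parseval (4 * 2 = 8).
print(bridge_penalty_frequency(w, L=1))   # 8.0

# Depth 2: p = 1, i.e. the l1 norm of the spectrum (2 + 2*sqrt(2) here).
print(bridge_penalty_frequency(w, L=2))
```

This is only a sketch of the regularizer's form; the paper's result concerns which predictor gradient descent converges to, not an explicit penalty added during training.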