Poster

Most Neural Networks Are Almost Learnable

Amit Daniely · Nati Srebro · Gal Vardi


Abstract: We present a PTAS for learning random constant-depth networks. We show that for any fixed ϵ > 0 and depth i, there is a poly-time algorithm that, for any distribution on √d · S^{d−1}, learns random Xavier networks of depth i up to an additive error of ϵ. The algorithm runs in time and sample complexity (d̄)^{poly(ϵ^{−1})}, where d̄ is the size of the network. For some cases of sigmoid and ReLU-like activations, the bound can be improved to (d̄)^{polylog(ϵ^{−1})}, yielding a quasi-poly-time algorithm for learning constant-depth random networks.
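The learning setup in the abstract can be illustrated with a small sketch: inputs drawn from the sphere of radius √d (i.e., √d · S^{d−1}), fed through a random constant-depth network with Xavier-style initialization. This is an illustrative assumption of the setting, not the paper's algorithm; the function and parameter names (`sample_sphere`, `random_xavier_net`, `width`) are hypothetical, and the weight scale 1/√(fan_in) is one common Xavier variant.

```python
import numpy as np

def sample_sphere(d, n, rng):
    """Sample n points uniformly from sqrt(d) * S^{d-1} (each has norm sqrt(d))."""
    g = rng.standard_normal((n, d))
    return np.sqrt(d) * g / np.linalg.norm(g, axis=1, keepdims=True)

def random_xavier_net(d, depth, width, rng, activation=np.tanh):
    """Draw a random depth-i network with Xavier-style weights
    (each entry ~ N(0, 1/fan_in)), so activations stay O(1) in scale."""
    dims = [d] + [width] * depth + [1]
    layers = [rng.standard_normal((m, k)) / np.sqrt(m)
              for m, k in zip(dims[:-1], dims[1:])]
    def f(x):
        for W in layers[:-1]:
            x = activation(x @ W)
        return x @ layers[-1]          # scalar output per example
    return f

rng = np.random.default_rng(0)
d = 16
x = sample_sphere(d, 5, rng)           # 5 inputs of norm sqrt(d)
net = random_xavier_net(d, depth=3, width=32, rng=rng)
y = net(x)                             # shape (5, 1)
```

The learner's goal, in this notation, is to output a hypothesis whose expected squared error against `net` over the input distribution is at most ϵ, in time (d̄)^{poly(ϵ^{−1})}.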
