
Learning Parities with Neural Networks
Amit Daniely · Eran Malach

Tue Dec 08 06:30 AM -- 06:45 AM (PST) @ Orals & Spotlights: Learning Theory

In recent years, a rapidly growing line of research has shown the learnability of various models via common neural network algorithms. Yet, with very few exceptions, these results concern models that can also be learned using linear methods. Namely, such results show that learning neural networks with gradient descent is competitive with learning a linear classifier on top of a data-independent representation of the examples. This leaves much to be desired, as neural networks are far more successful than linear methods. Furthermore, on a more conceptual level, linear models don't seem to capture the "deepness" of deep networks. In this paper we make a step towards showing learnability of models that are inherently non-linear. We show that under certain distributions, sparse parities are learnable via gradient descent on depth-two networks. On the other hand, under the same distributions, these parities cannot be learned efficiently by linear methods.
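To give a concrete flavor of the setup, here is a minimal, hypothetical sketch (in PyTorch) of training a depth-two ReLU network with plain gradient descent to fit a k-sparse parity. The biased product distribution, the network width, and all hyperparameters below are illustrative assumptions only; they are not the paper's exact construction or the specific distribution under which its guarantees hold.

import torch

# Illustrative toy setting, not the paper's construction:
# n input bits, parity over the first k coordinates.
n, k = 20, 3
hidden = 128            # width of the depth-two network (arbitrary choice)
steps, batch = 5000, 256

model = torch.nn.Sequential(
    torch.nn.Linear(n, hidden),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=0.05)

def sample(m):
    # Biased product distribution over {-1, +1}^n: each bit is +1
    # with probability 0.75 (a stand-in assumption, not the paper's).
    x = (torch.rand(m, n) < 0.75).float() * 2 - 1
    # In the +/-1 encoding, the parity of k bits is their product.
    y = x[:, :k].prod(dim=1, keepdim=True)
    return x, y

for _ in range(steps):
    x, y = sample(batch)
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Measure sign agreement with the target parity on fresh samples.
with torch.no_grad():
    x, y = sample(4096)
    acc = (model(x).sign() == y).float().mean().item()
print(f"test accuracy: {acc:.3f}")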

Author Information

Amit Daniely (Hebrew University and Google Research)
Eran Malach (Hebrew University of Jerusalem, Israel)
