Poster
Intrinsic dimension of data representations in deep neural networks
Alessio Ansuini · Alessandro Laio · Jakob H Macke · Davide Zoccolan

Tue Dec 10 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #169

Deep neural networks progressively transform their inputs across multiple processing layers. What are the geometrical properties of the representations learned by these networks? Here we study the intrinsic dimensionality (ID) of data representations, i.e., the minimal number of parameters needed to describe a representation. We find that, in a trained network, the ID is orders of magnitude smaller than the number of units in each layer. Across layers, the ID first increases and then progressively decreases in the final layers. Remarkably, the ID of the last hidden layer predicts classification accuracy on the test set. These results cannot be reproduced with linear dimensionality estimates (e.g., principal component analysis), nor in representations that have been artificially linearized. They appear neither in untrained networks nor in networks trained on randomized labels. This suggests that neural networks that generalize are those that transform the data into low-dimensional, but not necessarily flat, manifolds.
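To make the notion of ID concrete, the sketch below shows a nearest-neighbor intrinsic-dimension estimator in the spirit of the TwoNN method of Facco et al., the kind of estimator used in this line of work. It is a minimal illustration, not the authors' code: the function name twonn_id and the use of scikit-learn's NearestNeighbors are assumptions made for the example. For each point one takes the distances r1 and r2 to its first and second nearest neighbors; under the TwoNN model the ratio mu = r2/r1 follows a Pareto distribution whose exponent equals the intrinsic dimension, giving the maximum-likelihood estimate d = n / sum(log mu).

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def twonn_id(X):
        # X: (n_samples, n_features) array, e.g. one layer's activations.
        # Column 0 of the neighbor distances is each point's distance to
        # itself, so columns 1 and 2 hold the first and second neighbors.
        dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
        r1, r2 = dists[:, 1], dists[:, 2]
        keep = r1 > 0                      # drop exact duplicates (r1 == 0)
        mu = r2[keep] / r1[keep]
        # Pareto maximum-likelihood estimate of the intrinsic dimension.
        return len(mu) / np.sum(np.log(mu))

    # Sanity check: a 2-D subspace embedded in 100 ambient dimensions
    # should yield an ID close to 2, far below the number of coordinates.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 100))
    print(twonn_id(X))

Tracing the ID profile reported in the paper would amount to collecting activations at each layer for a batch of inputs and applying such an estimator layer by layer.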

Author Information

Alessio Ansuini (International School for Advanced Studies (SISSA))

I am a theoretical physicist with a broad interest in biology, neuroscience, and artificial intelligence. I am currently working on methods to extract high-level information from biological data, e.g., by analyzing representations in deep learning models, with the aim of making sense of these data and constraining theoretical models. I live in Trieste, Italy, and work as a researcher at Area Science Park.

Alessandro Laio (International School for Advanced Studies (SISSA))
Jakob H Macke (Technical University of Munich, Munich, Germany)
Davide Zoccolan (Visual Neuroscience Lab, International School for Advanced Studies (SISSA))
