Tal Yarkoni - What does it mean to 'understand' what a neural network is doing?

In recent years, researchers have drawn strong parallels between the information-processing architectures and learned representations found in the human brain and in deep neural networks (DNNs). There is increasing interest in using insights gained from either neuroscience or deep learning to reciprocally inform work in the other field. A common claim by practitioners in both fields is that we still do not understand very much about the representations learned by neural networks, whether biological or artificial. In this talk, I argue that this "mysterian" view is both surprising and troubling. It is surprising in that it is often expressed by people who demonstrably do understand an enormous amount about the systems they are studying. And it is troubling in that, if the claim is taken to be true, it does not lend itself to optimism about our future ability to understand what exactly neural networks are learning. I argue that the most productive avenues of research in both neuroscience and deep learning may be those that largely sidestep questions about information content and focus instead on architectural and algorithmic considerations.

Author Information

Tal Yarkoni (University of Texas at Austin)
