Primate Inferotemporal Cortex Neurons Generalize Better to Novel Image Distributions Than Analogous Deep Neural Networks Units
Marliawaty I Gusti Bagus · Tiago Marques · Sachi Sanghavi · James J DiCarlo · Martin Schrimpf
Event URL: https://openreview.net/forum?id=iPF7mhoWkOl

Humans can recognize objects across a wide variety of image distributions. Today's artificial neural networks (ANNs), in contrast, struggle to recognize objects in many image domains, especially those that differ from the training distribution. It is currently unclear which parts of ANNs could be improved to close this generalization gap. In this work, we used recordings from primate high-level visual cortex (IT) to isolate whether ANNs lag behind primate generalization capabilities because of their encoder (the transformations up to the penultimate layer) or their decoder (the linear transformation into class labels). Specifically, we fit a linear decoder on images from one domain and evaluate its transfer performance on twelve held-out domains, comparing decoders fit on primate IT representations with decoders fit on ANN penultimate-layer representations. For a fair comparison, we scale the number of each ANN's units so that its in-domain performance matches that of the sampled IT population (71 IT neural sites, 73% binary-choice accuracy). We find that the sampled primate population achieves, on average, 68% accuracy on the held-out domains. Comparably sampled populations of ANN model units generalize less well, maintaining on average 60% accuracy. This result is independent of the number of sampled units: models' out-of-domain accuracies consistently lag behind primate IT. These results suggest that making ANN model units more like primate IT will improve the generalization performance of ANNs.
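As a minimal sketch of the decoding protocol described above: subsample units until in-domain decoding accuracy matches the IT population, fit a linear decoder on the training domain, then measure transfer accuracy on held-out domains. The decoder choice (logistic regression), the random subsampling scheme, and all variable and function names below are assumptions for illustration, not the authors' actual pipeline.

```python
# Hypothetical sketch of the abstract's protocol, NOT the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=0)

def sample_units_to_match(features, labels, target_acc=0.73, step=10):
    """Grow a random sample of units until in-domain (cross-validated)
    binary decoding accuracy reaches the target (here, the abstract's 73%
    binary-choice accuracy of the 71-site IT population)."""
    order = rng.permutation(features.shape[1])
    for n in range(step, features.shape[1] + 1, step):
        idx = order[:n]
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              features[:, idx], labels, cv=5).mean()
        if acc >= target_acc:
            return idx
    return order  # fall back to all units if the target is never reached

def out_of_domain_accuracies(features_by_domain, labels_by_domain,
                             train_domain, unit_idx):
    """Fit the linear decoder on the training domain and report transfer
    accuracy on every held-out domain."""
    decoder = LogisticRegression(max_iter=1000)
    decoder.fit(features_by_domain[train_domain][:, unit_idx],
                labels_by_domain[train_domain])
    return {domain: decoder.score(feats[:, unit_idx], labels_by_domain[domain])
            for domain, feats in features_by_domain.items()
            if domain != train_domain}
```

In the paper's setting, `features_by_domain` would map each of the thirteen domains (one training domain, twelve held-out) to a stimuli-by-units response matrix, drawn either from IT neural sites or from an ANN's penultimate layer.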

Author Information

Marliawaty I Gusti Bagus (MIT)
Tiago Marques (MIT)
Sachi Sanghavi (University of Wisconsin–Madison)
James J DiCarlo (Massachusetts Institute of Technology)

Prof. DiCarlo received his Ph.D. in biomedical engineering and his M.D. from Johns Hopkins in 1998, and did his postdoctoral training in primate visual neurophysiology at Baylor College of Medicine. He joined the MIT faculty in 2002. He is a Sloan Fellow, a Pew Scholar, and a McKnight Scholar. His lab’s research goal is a computational understanding of the brain mechanisms that underlie object recognition. They use large-scale neurophysiology, brain imaging, optogenetic methods, and high-throughput computational simulations to understand how the primate ventral visual stream is able to untangle object identity from other latent image variables such as object position, scale, and pose. They have shown that populations of neurons at the highest cortical visual processing stage (IT) rapidly convey explicit representations of object identity, and that this ability is reshaped by natural visual experience. They have also shown how visual recognition tests can be used to discover new, high-performing bio-inspired algorithms. This understanding may inspire new machine vision systems, new neural prosthetics, and a foundation for understanding how high-level visual representation is altered in conditions such as agnosia, autism and dyslexia.

Martin Schrimpf (MIT)
