Poster
Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception
Joel Dapello · Jenelle Feather · Hang Le · Tiago Marques · David Cox · Josh McDermott · James J DiCarlo · Sueyeon Chung

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Adversarial examples are often cited by neuroscientists and machine learning researchers as an example of how computational models diverge from biological sensory systems. Recent work has proposed adding biologically-inspired components to visual neural networks as a way to improve their adversarial robustness. One surprisingly effective component for reducing adversarial vulnerability is response stochasticity, like that exhibited by biological neurons. Here, using recently developed geometrical techniques from computational neuroscience, we investigate how adversarial perturbations influence the internal representations of standard, adversarially trained, and biologically-inspired stochastic networks. We find distinct geometric signatures for each type of network, revealing different mechanisms for achieving robust representations. Next, we generalize these results to the auditory domain, showing that neural stochasticity also makes auditory models more robust to adversarial perturbations. Geometric analysis of the stochastic networks reveals overlap between representations of clean and adversarially perturbed stimuli, and quantitatively demonstrates that competing geometric effects of stochasticity mediate a tradeoff between adversarial and clean performance. Our results shed light on the strategies of robust perception utilized by adversarially trained and stochastic networks, and help explain how stochasticity may be beneficial to machine and biological computation.
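To make the notion of biologically-inspired response stochasticity concrete, below is a minimal sketch (not the authors' implementation) of a layer whose noise variance scales with the mean activation, a Poisson-like property of biological neurons. The layer shape, weights, and noise model here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_layer(x, W, rng):
    """ReLU layer with Poisson-like response stochasticity:
    noise standard deviation grows as the square root of the mean
    activation, mimicking the variability of biological neurons.
    (Illustrative sketch; not the paper's actual architecture.)"""
    r = np.maximum(W @ x, 0.0)                       # deterministic "firing rate"
    return r + np.sqrt(r) * rng.standard_normal(r.shape)

# Hypothetical toy stimulus and weights
W = rng.standard_normal((8, 4))
x = rng.standard_normal(4)

# Repeated presentations of the same stimulus yield variable responses,
# but their trial average approaches the deterministic response.
trials = np.stack([stochastic_layer(x, W, rng) for _ in range(2000)])
mean_resp = trials.mean(axis=0)
det_resp = np.maximum(W @ x, 0.0)
```

Because an adversarial perturbation must fool the network across draws of this noise, attacks optimized against a single deterministic forward pass lose much of their effect, which is one intuition for why stochasticity can reduce adversarial vulnerability.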

Author Information

Joel Dapello (Harvard University)
Jenelle Feather (MIT)
Hang Le (Massachusetts Institute of Technology)
Tiago Marques (MIT)
David Cox (MIT-IBM Watson AI Lab, IBM Research)
Josh McDermott (Massachusetts Institute of Technology)
James J DiCarlo (Massachusetts Institute of Technology)

Prof. DiCarlo received his Ph.D. in biomedical engineering and his M.D. from Johns Hopkins in 1998, and did his postdoctoral training in primate visual neurophysiology at Baylor College of Medicine. He joined the MIT faculty in 2002. He is a Sloan Fellow, a Pew Scholar, and a McKnight Scholar. His lab’s research goal is a computational understanding of the brain mechanisms that underlie object recognition. They use large-scale neurophysiology, brain imaging, optogenetic methods, and high-throughput computational simulations to understand how the primate ventral visual stream is able to untangle object identity from other latent image variables such as object position, scale, and pose. They have shown that populations of neurons at the highest cortical visual processing stage (IT) rapidly convey explicit representations of object identity, and that this ability is reshaped by natural visual experience. They have also shown how visual recognition tests can be used to discover new, high-performing bio-inspired algorithms. This understanding may inspire new machine vision systems, new neural prosthetics, and a foundation for understanding how high-level visual representation is altered in conditions such as agnosia, autism and dyslexia.

Sueyeon Chung (Columbia University)
