Poster
On 1/n neural representation and robustness
Josue Nassar · Piotr Sokol · SueYeon Chung · Kenneth D Harris · Il Memming Park

Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #389

Understanding the nature of representation in neural networks is a goal shared by neuroscience and machine learning. It is therefore exciting that both fields converge not only on shared questions but also on similar approaches. A pressing question in these areas is understanding how the structure of the representation used by neural networks affects both their generalization and their robustness to perturbations. In this work, we investigate the latter by juxtaposing experimental results on the covariance spectrum of neural representations in mouse V1 (Stringer et al.) with artificial neural networks. We use adversarial robustness to probe Stringer et al.'s theory regarding the causal role of a 1/n covariance spectrum. We empirically investigate the benefits such a neural code confers in neural networks and illuminate its role in multi-layer architectures. Our results show that imposing the experimentally observed structure on artificial neural networks makes them more robust to adversarial attacks. Moreover, our findings complement the existing theory relating wide neural networks to kernel methods by showing the role of intermediate representations.
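As a rough illustration of the kind of measurement the abstract refers to (not the authors' code), the sketch below estimates the covariance eigenspectrum of a layer's activations and fits its power-law exponent; a spectrum decaying as 1/n yields an exponent near 1, as reported by Stringer et al. All function names, parameters, and the synthetic data are illustrative assumptions.

    import numpy as np

    def covariance_spectrum(activations):
        """Eigenvalues (descending) of the covariance of hidden activations.

        activations: array of shape (n_samples, n_units).
        """
        centered = activations - activations.mean(axis=0, keepdims=True)
        cov = centered.T @ centered / (centered.shape[0] - 1)
        eigvals = np.linalg.eigvalsh(cov)[::-1]   # ascending -> descending
        return np.clip(eigvals, 0.0, None)        # guard tiny negative values

    def powerlaw_exponent(eigvals, fit_range=(10, 300)):
        """Least-squares slope of log(eigenvalue) vs. log(rank).

        A spectrum close to 1/n gives an exponent near 1.
        """
        lo, hi = fit_range
        hi = min(hi, eigvals.size)
        ranks = np.arange(1, eigvals.size + 1)[lo:hi]
        lam = eigvals[lo:hi]
        slope, _ = np.polyfit(np.log(ranks), np.log(lam), deg=1)
        return -slope

    # Synthetic "activations" whose true covariance eigenvalues decay as 1/n.
    rng = np.random.default_rng(0)
    n_units, n_samples = 500, 20000
    target = 1.0 / np.arange(1, n_units + 1)
    activations = rng.standard_normal((n_samples, n_units)) * np.sqrt(target)
    alpha = powerlaw_exponent(covariance_spectrum(activations))
    print(f"estimated power-law exponent: {alpha:.2f}")   # expected to be roughly 1

In the same spirit, one way to "impose" such a spectrum on an artificial network would be to regularize hidden-layer covariances toward the 1/n profile during training, though the specific mechanism used in the paper is not spelled out in this abstract.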

Author Information

Josue Nassar (Stony Brook University)
Piotr Sokol (Stony Brook University)
SueYeon Chung (Columbia University)
Kenneth D Harris (UCL)
Il Memming Park (Stony Brook University)