Understanding the nature of representation in neural networks is a goal shared by neuroscience and machine learning, so it is exciting that both fields converge not only on shared questions but also on similar approaches. A pressing question in these areas is how the structure of the representation used by a neural network affects both its generalization and its robustness to perturbations. In this work, we investigate the latter by juxtaposing experimental results on the covariance spectrum of neural representations in mouse primary visual cortex (V1) (Stringer et al.) with artificial neural networks. We use adversarial robustness to probe Stringer et al.'s theory regarding the causal role of a 1/n covariance spectrum. We empirically investigate the benefits such a neural code confers on neural networks and illuminate its role in multi-layer architectures. Our results show that imposing the experimentally observed structure on artificial neural networks makes them more robust to adversarial attacks. Moreover, our findings complement the existing theory relating wide neural networks to kernel methods by showing the role of intermediate representations.
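The abstract's central manipulation, imposing a 1/n covariance eigenspectrum on a layer's representations, can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual procedure: it uses a hypothetical helper `power_law_projection` that rescales the singular values of a centered activation matrix so the n-th covariance eigenvalue decays as n^(-alpha), with alpha = 1 giving the 1/n spectrum observed by Stringer et al.

```python
import numpy as np

def power_law_projection(X, alpha=1.0):
    """Rescale the covariance eigenspectrum of X (samples x units) so the
    n-th eigenvalue decays as n**(-alpha). Hypothetical illustration of
    'imposing the experimentally observed structure'; the paper's exact
    method may differ."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    n = np.arange(1, len(s) + 1)
    target = n ** (-alpha / 2.0)      # singular values are sqrt of eigenvalues
    target *= s[0] / target[0]        # keep the leading mode's scale
    return U @ np.diag(target) @ Vt + mu

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64)) @ rng.standard_normal((64, 64))
Xp = power_law_projection(X, alpha=1.0)

# Covariance eigenvalues of the projected representation follow 1/n:
lams = np.linalg.svd(Xp - Xp.mean(axis=0), compute_uv=False) ** 2
ratio = lams[1] / lams[0]             # ~ 1/2 for a 1/n spectrum
```

In a multi-layer network, the same rescaling (or a regularizer penalizing deviation from the target spectrum) would be applied to intermediate activations, which is where the abstract's claim about intermediate representations comes in.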
Author Information
Josue Nassar (Stony Brook University)
Piotr Sokol (Stony Brook University)
Sueyeon Chung (Columbia University)
Kenneth D Harris (UCL)
Il Memming Park (Stony Brook University)
More from the Same Authors
- 2021 : Neural Latents Benchmark '21: Evaluating latent variable models of neural population activity »
  Felix Pei · Joel Ye · David Zoltowski · Anqi Wu · Raeed Chowdhury · Hansem Sohn · Joseph O'Doherty · Krishna V Shenoy · Matthew Kaufman · Mark Churchland · Mehrdad Jazayeri · Lee Miller · Jonathan Pillow · Il Memming Park · Eva Dyer · Chethan Pandarinath
- 2021 Poster: Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception »
  Joel Dapello · Jenelle Feather · Hang Le · Tiago Marques · David Cox · Josh McDermott · James J DiCarlo · Sueyeon Chung
- 2021 Poster: Credit Assignment Through Broadcasting a Global Error Vector »
  David Clark · L F Abbott · Sueyeon Chung
- 2020 Poster: Rescuing neural spike train models from bad MLE »
  Diego Arribas · Yuan Zhao · Il Memming Park
- 2016 : Il Memming Park : Dynamical Systems Interpretation of Neural Trajectories »
  Il Memming Park
- 2016 Poster: Interpretable Nonlinear Dynamic Modeling of Neural Trajectories »
  Yuan Zhao · Il Memming Park
- 2016 Poster: Fast and accurate spike sorting of high-channel count probes with KiloSort »
  Marius Pachitariu · Nicholas A Steinmetz · Shabnam N Kadir · Matteo Carandini · Kenneth D Harris
- 2015 Poster: Convolutional spike-triggered covariance analysis for neural subunit models »
  Anqi Wu · Il Memming Park · Jonathan Pillow
- 2014 Workshop: Large scale optical physiology: From data-acquisition to models of neural coding »
  Il Memming Park · Jakob H Macke · Ferran Diego Andilla · Eftychios Pnevmatikakis · Jeremy Freeman
- 2013 Poster: Bayesian entropy estimation for binary spike train data using parametric prior knowledge »
  Evan Archer · Il Memming Park · Jonathan W Pillow
- 2013 Poster: Universal models for binary spike patterns using centered Dirichlet processes »
  Il Memming Park · Evan Archer · Kenneth W Latimer · Jonathan W Pillow
- 2013 Spotlight: Bayesian entropy estimation for binary spike train data using parametric prior knowledge »
  Evan Archer · Il Memming Park · Jonathan W Pillow
- 2013 Poster: Spectral methods for neural characterization using generalized quadratic models »
  Il Memming Park · Evan Archer · Nicholas Priebe · Jonathan W Pillow
- 2012 Poster: Bayesian estimation of discrete entropy with mixtures of stick-breaking priors »
  Evan Archer · Jonathan W Pillow · Il Memming Park
- 2011 Poster: Bayesian Spike-Triggered Covariance Analysis »
  Il Memming Park · Jonathan W Pillow
- 2010 Poster: A novel family of non-parametric cumulative based divergences for point processes »
  Sohan Seth · Il Memming Park · Austin J Brockmeier · Mulugeta Semework · John S Choi · Joseph T Francis · Jose C Principe