Abstract
Understanding the operation of biological and artificial networks remains a difficult and important challenge. To identify general principles, researchers are increasingly interested in surveying large collections of networks that are trained on, or biologically adapted to, similar tasks. A standardized set of analysis tools is now needed to identify how network-level covariates, such as architecture, anatomical brain region, and model organism, impact neural representations (hidden layer activations). Here, we provide a rigorous foundation for these analyses by defining a broad family of metric spaces that quantify representational dissimilarity. Using this framework, we modify existing representational similarity measures based on canonical correlation analysis and centered kernel alignment to satisfy the triangle inequality, formulate a novel metric that respects the inductive biases in convolutional layers, and identify approximate Euclidean embeddings that enable network representations to be incorporated into essentially any off-the-shelf machine learning method. We demonstrate these methods on large-scale datasets from biology (Allen Institute Brain Observatory) and deep learning (NAS-Bench-101). In doing so, we identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
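To illustrate the kind of modification the abstract describes: linear CKA can be written as a cosine similarity between vectorized, double-centered Gram matrices, so taking its arccosine yields an angular distance that satisfies the triangle inequality. The following is a minimal NumPy sketch of that idea (function names are illustrative, not taken from the paper's released code):

```python
import numpy as np

def centered_gram(X):
    """Double-centered linear Gram matrix of X (samples x neurons)."""
    K = X @ X.T
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return H @ K @ H

def linear_cka(X, Y):
    """Linear CKA: cosine similarity between vectorized centered Grams."""
    Kx, Ky = centered_gram(X), centered_gram(Y)
    return np.sum(Kx * Ky) / (np.linalg.norm(Kx) * np.linalg.norm(Ky))

def angular_cka_distance(X, Y):
    """Arccosine of a cosine similarity is an angular distance,
    so this dissimilarity obeys the triangle inequality."""
    return np.arccos(np.clip(linear_cka(X, Y), -1.0, 1.0))
```

Because each representation maps to a unit vector (its normalized centered Gram matrix), the arccosine of the inner product is a geodesic distance on the sphere, which is where the metric properties come from.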
Author Information
Alex H Williams (Stanford University)
Erin Kunz (Stanford University)
Simon Kornblith (Google Brain)
Scott Linderman (Stanford University)
More from the Same Authors
- 2021 Spotlight: Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks
  Aran Nayebi · Alexander Attinger · Malcolm Campbell · Kiah Hardcastle · Isabel Low · Caitlin S Mallory · Gabriel Mel · Ben Sorscher · Alex H Williams · Surya Ganguli · Lisa Giocomo · Dan Yamins
- 2021: Revisiting the Structured Variational Autoencoder
  Yixiu Zhao · Scott Linderman
- 2021: Bayesian Inference in Augmented Bow Tie Networks
  Jimmy Smith · Dieterich Lawson · Scott Linderman
- 2022: Human alignment of neural network representations
  Lukas Muttenthaler · Lorenz Linhardt · Jonas Dippel · Robert Vandermeulen · Simon Kornblith
- 2022 Poster: Patching open-vocabulary models by interpolating weights
  Gabriel Ilharco · Mitchell Wortsman · Samir Yitzhak Gadre · Shuran Song · Hannaneh Hajishirzi · Simon Kornblith · Ali Farhadi · Ludwig Schmidt
- 2021 Poster: Why Do Better Loss Functions Lead to Less Transferable Features?
  Simon Kornblith · Ting Chen · Honglak Lee · Mohammad Norouzi
- 2021 Poster: Meta-learning to Improve Pre-training
  Aniruddh Raghu · Jonathan Lorraine · Simon Kornblith · Matthew McDermott · David Duvenaud
- 2021 Poster: Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks
  Aran Nayebi · Alexander Attinger · Malcolm Campbell · Kiah Hardcastle · Isabel Low · Caitlin S Mallory · Gabriel Mel · Ben Sorscher · Alex H Williams · Surya Ganguli · Lisa Giocomo · Dan Yamins
- 2021 Poster: Reverse engineering recurrent neural networks with Jacobian switching linear dynamical systems
  Jimmy Smith · Scott Linderman · David Sussillo
- 2021 Poster: Do Vision Transformers See Like Convolutional Neural Networks?
  Maithra Raghu · Thomas Unterthiner · Simon Kornblith · Chiyuan Zhang · Alexey Dosovitskiy
- 2020 Poster: The Origins and Prevalence of Texture Bias in Convolutional Neural Networks
  Katherine L. Hermann · Ting Chen · Simon Kornblith
- 2020 Oral: The Origins and Prevalence of Texture Bias in Convolutional Neural Networks
  Katherine L. Hermann · Ting Chen · Simon Kornblith
- 2020 Poster: Big Self-Supervised Models are Strong Semi-Supervised Learners
  Ting Chen · Simon Kornblith · Kevin Swersky · Mohammad Norouzi · Geoffrey E Hinton
- 2019: Poster Session
  Ethan Harris · Tom White · Oh Hyeon Choung · Takashi Shinozaki · Dipan Pal · Katherine L. Hermann · Judy Borowski · Camilo Fosco · Chaz Firestone · Vijay Veerabadran · Benjamin Lahner · Chaitanya Ryali · Fenil Doshi · Pulkit Singh · Sharon Zhou · Michel Besserve · Michael Chang · Anelise Newman · Mahesan Niranjan · Jonathon Hare · Daniela Mihai · Marios Savvides · Simon Kornblith · Christina M Funke · Aude Oliva · Virginia de Sa · Dmitry Krotov · Colin Conwell · George Alvarez · Alex Kolchinski · Shengjia Zhao · Mitchell Gordon · Michael Bernstein · Stefano Ermon · Arash Mehrjou · Bernhard Schölkopf · John Co-Reyes · Michael Janner · Jiajun Wu · Josh Tenenbaum · Sergey Levine · Yalda Mohsenzadeh · Zhenglong Zhou
- 2019 Poster: When does label smoothing help?
  Rafael Müller · Simon Kornblith · Geoffrey E Hinton
- 2019 Spotlight: When does label smoothing help?
  Rafael Müller · Simon Kornblith · Geoffrey E Hinton
- 2019 Poster: Saccader: Improving Accuracy of Hard Attention Models for Vision
  Gamaleldin Elsayed · Simon Kornblith · Quoc V Le
- 2018 Poster: Point process latent variable models of larval zebrafish behavior
  Anuj Sharma · Robert Johnson · Florian Engert · Scott Linderman
- 2018 Spotlight: Point process latent variable models of larval zebrafish behavior
  Anuj Sharma · Robert Johnson · Florian Engert · Scott Linderman
- 2017: Poster Session 1
  Magdalena Fuchs · David Lung · Mathias Lechner · Kezhi Li · Andrew Gordus · Vivek Venkatachalam · Shivesh Chaudhary · Jan Hůla · David Rolnick · Scott Linderman · Gonzalo Mena · Liam Paninski · Netta Cohen