Today’s computer vision models achieve human- or near-human-level performance across a wide variety of vision tasks. However, their architectures, data, and learning algorithms differ in numerous ways from those that give rise to human vision. In this paper, we investigate the factors that affect alignment between the representations learned by neural networks and human concept representations. Human representations are inferred from behavioral responses in an odd-one-out triplet task, where humans were presented with three images and had to select the image least similar to the other two. We find that model scale and architecture have essentially no effect on alignment with human behavioral responses, whereas the training dataset and objective function have a much larger impact. Using a sparse Bayesian model of human conceptual representations, we partition triplets by the concept that distinguishes the two similar images from the odd-one-out, finding that some concepts, such as food and animals, are well represented in neural network representations, whereas others, such as royal or sports-related objects, are not. Overall, although models trained on larger, more diverse datasets achieve better alignment with humans than models trained on ImageNet alone, our results indicate that scaling alone is unlikely to be sufficient to train neural networks with conceptual representations that match those used by humans.
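To make the evaluation concrete, the following is a minimal sketch of how odd-one-out behavior can be read off a model's image embeddings and scored against human choices: the pair of images whose embeddings are most similar is treated as the similar pair, and the remaining image is the model's predicted odd-one-out. The use of cosine similarity and the function names are illustrative assumptions, not the exact procedure used in the paper.

```python
import numpy as np

def predict_odd_one_out(embeddings: np.ndarray) -> int:
    """Predict the odd-one-out for a (3, d) array of image embeddings.

    The two images whose embeddings have the highest cosine similarity are
    treated as the similar pair; the remaining image is the predicted
    odd-one-out. Returns its index (0, 1, or 2).
    """
    # Normalize rows so dot products are cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    # Find the most similar pair; the image left out is the prediction.
    i, j = max([(0, 1), (0, 2), (1, 2)], key=lambda p: sims[p])
    return ({0, 1, 2} - {i, j}).pop()

def odd_one_out_accuracy(triplet_embeddings, human_choices) -> float:
    """Fraction of triplets on which the model agrees with the human choice.

    triplet_embeddings: iterable of (3, d) embedding arrays, one per triplet.
    human_choices: iterable of indices in {0, 1, 2} selected by humans.
    """
    hits = [predict_odd_one_out(e) == c
            for e, c in zip(triplet_embeddings, human_choices)]
    return float(np.mean(hits))
```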
Author Information
Lukas Muttenthaler (TU Berlin)
Lorenz Linhardt (TU Berlin)
Jonas Dippel (TU Berlin)
Robert Vandermeulen (TU Berlin)
Simon Kornblith (Google Brain)
More from the Same Authors
- 2022 Poster: Patching open-vocabulary models by interpolating weights »
  Gabriel Ilharco · Mitchell Wortsman · Samir Yitzhak Gadre · Shuran Song · Hannaneh Hajishirzi · Simon Kornblith · Ali Farhadi · Ludwig Schmidt
- 2022 Poster: VICE: Variational Interpretable Concept Embeddings »
  Lukas Muttenthaler · Charles Zheng · Patrick McClure · Robert Vandermeulen · Martin N Hebart · Francisco Pereira
- 2021 Poster: Why Do Better Loss Functions Lead to Less Transferable Features? »
  Simon Kornblith · Ting Chen · Honglak Lee · Mohammad Norouzi
- 2021 Poster: Generalized Shape Metrics on Neural Representations »
  Alex H Williams · Erin Kunz · Simon Kornblith · Scott Linderman
- 2021 Poster: Meta-learning to Improve Pre-training »
  Aniruddh Raghu · Jonathan Lorraine · Simon Kornblith · Matthew McDermott · David Duvenaud
- 2021 Poster: Beyond Smoothness: Incorporating Low-Rank Analysis into Nonparametric Density Estimation »
  Robert Vandermeulen · Antoine Ledent
- 2021 Poster: Do Vision Transformers See Like Convolutional Neural Networks? »
  Maithra Raghu · Thomas Unterthiner · Simon Kornblith · Chiyuan Zhang · Alexey Dosovitskiy
- 2020 Poster: The Origins and Prevalence of Texture Bias in Convolutional Neural Networks »
  Katherine L. Hermann · Ting Chen · Simon Kornblith
- 2020 Oral: The Origins and Prevalence of Texture Bias in Convolutional Neural Networks »
  Katherine L. Hermann · Ting Chen · Simon Kornblith
- 2020 Poster: Big Self-Supervised Models are Strong Semi-Supervised Learners »
  Ting Chen · Simon Kornblith · Kevin Swersky · Mohammad Norouzi · Geoffrey E Hinton
- 2020 Poster: Consistent Estimation of Identifiable Nonparametric Mixture Models from Grouped Observations »
  Alexander Ritchie · Robert Vandermeulen · Clayton Scott
- 2019 Poster Session »
  Ethan Harris · Tom White · Oh Hyeon Choung · Takashi Shinozaki · Dipan Pal · Katherine L. Hermann · Judy Borowski · Camilo Fosco · Chaz Firestone · Vijay Veerabadran · Benjamin Lahner · Chaitanya Ryali · Fenil Doshi · Pulkit Singh · Sharon Zhou · Michel Besserve · Michael Chang · Anelise Newman · Mahesan Niranjan · Jonathon Hare · Daniela Mihai · Marios Savvides · Simon Kornblith · Christina M Funke · Aude Oliva · Virginia de Sa · Dmitry Krotov · Colin Conwell · George Alvarez · Alex Kolchinski · Shengjia Zhao · Mitchell Gordon · Michael Bernstein · Stefano Ermon · Arash Mehrjou · Bernhard Schölkopf · John Co-Reyes · Michael Janner · Jiajun Wu · Josh Tenenbaum · Sergey Levine · Yalda Mohsenzadeh · Zhenglong Zhou
- 2019 Poster: When does label smoothing help? »
  Rafael Müller · Simon Kornblith · Geoffrey E Hinton
- 2019 Spotlight: When does label smoothing help? »
  Rafael Müller · Simon Kornblith · Geoffrey E Hinton
- 2019 Poster: Saccader: Improving Accuracy of Hard Attention Models for Vision »
  Gamaleldin Elsayed · Simon Kornblith · Quoc V Le
- 2014 Poster: Robust Kernel Density Estimation by Scaling and Projection in Hilbert Space »
  Robert A Vandermeulen · Clayton Scott