A key challenge in understanding the sensory transformations of the visual system is to obtain a highly predictive model that maps natural images to neural responses. Deep neural networks (DNNs) are a promising candidate for such a model. However, DNNs require orders of magnitude more training data than neuroscientists can collect, because experimental recording time is severely limited. This motivates us to identify images that train highly predictive DNNs from as little data as possible. We propose high-contrast, binarized versions of natural images, termed gaudy images, to efficiently train DNNs to predict higher-order visual cortical responses. In simulation experiments and analyses of real neural data, we find that training DNNs with gaudy images substantially reduces the number of training images needed to accurately predict responses to natural images. We also find that gaudy images, chosen before training, outperform images chosen during training by active learning algorithms. Thus, gaudy images overemphasize the features of natural images that are most important for efficiently training DNNs. We believe gaudy images will aid in the modeling of visual cortical neurons, potentially opening new scientific questions about visual processing.
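The abstract describes gaudy images only as high-contrast, binarized versions of natural images and does not spell out the exact transformation. Below is a minimal sketch, assuming each color channel is thresholded at its own mean intensity; the function name make_gaudy, the per-channel mean rule, and the 0/255 output values are illustrative assumptions, not the paper's stated procedure.

```python
import numpy as np

def make_gaudy(image: np.ndarray) -> np.ndarray:
    """Binarize each color channel of an image at that channel's mean.

    Hypothetical sketch of a 'gaudy' transformation: pixels below a
    channel's mean intensity are set to 0, pixels at or above it to 255,
    yielding a high-contrast, binarized version of the input image.
    """
    image = image.astype(np.float32)
    # One mean per channel, computed over all spatial positions.
    channel_means = image.mean(axis=(0, 1), keepdims=True)
    gaudy = np.where(image < channel_means, 0.0, 255.0)
    return gaudy.astype(np.uint8)

# Usage: binarize a synthetic stand-in for a natural image.
rng = np.random.default_rng(seed=0)
natural = rng.integers(0, 256, size=(112, 112, 3), dtype=np.uint8)
gaudy = make_gaudy(natural)
assert set(np.unique(gaudy)) <= {0, 255}  # only extreme intensities remain
```

Thresholding at the mean pushes every pixel to an extreme intensity, which is one plausible reading of how gaudy images overemphasize the high-contrast features of natural images.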
Author Information
Benjamin Cowley (Princeton University)
Jonathan Pillow (Princeton University)
More from the Same Authors
- 2021: Neural Latents Benchmark ’21: Evaluating latent variable models of neural population activity
  Felix Pei · Joel Ye · David Zoltowski · Anqi Wu · Raeed Chowdhury · Hansem Sohn · Joseph O'Doherty · Krishna V Shenoy · Matthew Kaufman · Mark Churchland · Mehrdad Jazayeri · Lee Miller · Jonathan Pillow · Il Memming Park · Eva Dyer · Chethan Pandarinath
- 2022: Non-exchangeability in Infinite Switching Linear Dynamical Systems
  Victor Geadah · Jonathan Pillow
- 2022 Poster: Dynamic Inverse Reinforcement Learning for Characterizing Animal Behavior
  Zoe Ashwood · Aditi Jha · Jonathan Pillow
- 2022 Poster: Extracting computational mechanisms from neural data using low-rank RNNs
  Adrian Valente · Jonathan Pillow · Srdjan Ostojic
- 2020 Poster: Identifying signal and noise structure in neural population activity with Gaussian process factor models
  Stephen Keeley · Mikio Aoi · Yiyi Yu · Spencer Smith · Jonathan Pillow
- 2020 Poster: Inferring learning rules from animal decision-making
  Zoe Ashwood · Nicholas Roy · Ji Hyun Bak · Jonathan Pillow
- 2018 Poster: Scaling the Poisson GLM to massive neural datasets through polynomial approximations
  David Zoltowski · Jonathan Pillow
- 2018 Poster: Efficient inference for time-varying behavior during learning
  Nicholas Roy · Ji Hyun Bak · Athena Akrami · Carlos Brody · Jonathan Pillow
- 2018 Poster: Model-based targeted dimensionality reduction for neuronal population data
  Mikio Aoi · Jonathan Pillow
- 2018 Poster: Power-law efficient neural codes provide general link between perceptual bias and discriminability
  Michael J Morais · Jonathan Pillow
- 2018 Poster: Learning a latent manifold of odor representations from neural responses in piriform cortex
  Anqi Wu · Stan Pashkovski · Sandeep Datta · Jonathan Pillow
- 2017 Poster: Gaussian process based nonlinear latent structure discovery in multivariate spike train data
  Anqi Wu · Nicholas Roy · Stephen Keeley · Jonathan Pillow
- 2016 Poster: Bayesian latent structure discovery from multi-neuron recordings
  Scott Linderman · Ryan Adams · Jonathan Pillow
- 2016 Poster: Adaptive optimal training of animal behavior
  Ji Hyun Bak · Jung Choi · Ilana Witten · Athena Akrami · Jonathan Pillow
- 2016 Poster: A Bayesian method for reducing bias in neural representational similarity analysis
  Mingbo Cai · Nicolas W Schuck · Jonathan Pillow · Yael Niv
- 2015 Poster: Convolutional spike-triggered covariance analysis for neural subunit models
  Anqi Wu · Il Memming Park · Jonathan Pillow