Neural network models of memory and error correction famously include the Hopfield network, which can directly store, and error-correct through its dynamics, arbitrary N-bit patterns, but only ~N of them. At the other end of the spectrum, Shannon's coding theory established that N symbols can represent exponentially many (~e^N) states in such a way that an optimal decoder can correct all noise up to a threshold. We prove that it is possible to construct an associative content-addressable network that combines the properties of strong error-correcting codes and Hopfield networks: it simultaneously possesses exponentially many stable states; these states have basins of attraction large enough that they can be correctly recovered despite errors in a finite fraction of all nodes; and the errors are intrinsically corrected by the network's own dynamics. The network is a two-layer Boltzmann machine with simple neural dynamics, low-dynamic-range (binary) pairwise synaptic connections, and sparse expander-graph connectivity. Thus, quasi-random sparse structures, characteristic of important error-correcting codes, may support high-performance computation in artificial neural networks and the brain.
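To ground the first sentence of the abstract, the following is a minimal sketch of the classical Hopfield baseline that the paper improves upon: Hebbian outer-product storage of a few random patterns, and sign-threshold recurrent dynamics that error-correct a corrupted cue. This illustrates the standard model only, not the paper's two-layer expander construction; the sizes N and P and the 10% corruption level are arbitrary demo choices.

```python
import numpy as np

# Illustrative sketch of the classical Hopfield baseline described in the
# abstract, NOT the paper's two-layer expander-graph construction.
# N, P, and the corruption level are arbitrary choices for demonstration.
rng = np.random.default_rng(0)
N, P = 200, 10  # N binary neurons; classical capacity scales only ~N patterns

# Hebbian outer-product storage of P random +/-1 patterns.
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, steps=20):
    """Deterministic Hopfield dynamics: s <- sign(W s), iterated."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

# Corrupt 10% of the bits of one stored pattern, then let the
# network's own dynamics error-correct it.
cue = patterns[0].copy()
flipped = rng.choice(N, size=N // 10, replace=False)
cue[flipped] *= -1
recovered = recall(cue)
print("overlap with stored pattern:", recovered @ patterns[0] / N)  # ~1.0
```

With P far below the classical ~0.14N capacity limit, the corrupted cue relaxes back into the stored pattern's basin of attraction; pushing P toward N makes recovery fail, which is exactly the capacity barrier that the paper's exponential-capacity construction addresses.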
Author Information
Rishidev Chaudhuri (University of California, Davis)
Ila Fiete (Massachusetts Institute of Technology)
More from the Same Authors
- 2022: See and Copy: Generation of complex compositional movements from modular and geometric RNN representations
  Sunny Duan · Mikail Khona · Adrian Bertagnoli · Sarthak Chandra · Ila Fiete
- 2022 Poster: No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit
  Rylan Schaeffer · Mikail Khona · Ila Fiete
- 2020 Poster: Reverse-engineering recurrent neural network solutions to a hierarchical inference task for mice
  Rylan Schaeffer · Mikail Khona · Leenoy Meshulam · International Brain Laboratory · Ila Fiete
- 2020 Poster: Using noise to probe recurrent neural network structure and prune synapses
  Eli Moore · Rishidev Chaudhuri
- 2020 Spotlight: Using noise to probe recurrent neural network structure and prune synapses
  Eli Moore · Rishidev Chaudhuri
- 2019: Panel Session: A new hope for neuroscience
  Yoshua Bengio · Blake Richards · Timothy Lillicrap · Ila Fiete · David Sussillo · Doina Precup · Konrad Kording · Surya Ganguli
- 2019: Invited Talk: Simultaneous rigidity and flexibility through modularity in cognitive maps for navigation
  Ila Fiete
- 2019: Closing Remarks
  Chris Sander · Ila Fiete · Dana Pe'er