The brain performs probabilistic Bayesian inference to interpret the external world. The sampling-based view assumes that the brain represents the stimulus posterior distribution via samples of stochastic neuronal responses. Although the idea of sampling-based inference is appealing, it faces a critical challenge: whether stochastic sampling is fast enough to match the rapid computation of the brain. In this study, we explore how latent stimulus sampling can be accelerated in neural circuits. Specifically, we consider a canonical neural circuit model, the continuous attractor neural network (CANN), and investigate how sampling-based inference of latent continuous variables is accelerated in CANNs. Intriguingly, we find that by including noisy adaptation in the neuronal dynamics, the CANN speeds up the sampling process significantly. We theoretically derive that the CANN with noisy adaptation implements an efficient sampling method, Hamiltonian dynamics with friction, in which noisy adaptation effectively plays the role of momentum. We analyze the sampling performance of the network and derive the condition under which the acceleration is maximal. Simulation results confirm our theoretical analyses. We further extend the model to coupled CANNs and demonstrate that noisy adaptation accelerates sampling of the posterior distribution of multivariate stimuli. We hope that this study enhances our understanding of how Bayesian inference is realized in the brain.
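To make the core idea concrete, the sketch below illustrates Hamiltonian dynamics with friction (underdamped Langevin dynamics) for sampling a simple 1D Gaussian posterior. This is only a minimal illustration of the sampling principle the abstract refers to, not the paper's CANN equations; the function names, target distribution, and parameter values (dt, gamma, n_steps) are hypothetical choices for demonstration. The auxiliary momentum variable v plays the role that noisy adaptation is argued to play in the network.

```python
# Minimal sketch: underdamped Langevin sampling (Hamiltonian dynamics with friction)
# of a 1D Gaussian posterior N(0, 1). Illustrative only; not the paper's network model.
import numpy as np

def grad_neg_log_post(x, mu=0.0, sigma=1.0):
    """Gradient of -log p(x) for a Gaussian posterior N(mu, sigma^2)."""
    return (x - mu) / sigma**2

def underdamped_langevin(n_steps=20000, dt=0.01, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, v = 0.0, 0.0                      # position (latent stimulus) and momentum
    samples = np.empty(n_steps)
    for t in range(n_steps):
        # Momentum update: friction, potential gradient, and injected noise.
        v += (-gamma * v - grad_neg_log_post(x)) * dt \
             + np.sqrt(2.0 * gamma * dt) * rng.standard_normal()
        # Position update driven by momentum.
        x += v * dt
        samples[t] = x
    return samples

samples = underdamped_langevin()
print(samples.mean(), samples.std())     # should be close to 0 and 1, respectively
```

Because the momentum carries the sampler across the state space rather than relying on purely diffusive steps, successive samples decorrelate faster than in plain Langevin sampling, which is the sense in which the dynamics "accelerate" posterior sampling.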
Author Information
Xingsi Dong (Peking University)
Zilong Ji (Institute of Cognitive Neuroscience, University College London)
Tianhao Chu (Peking University)
Tiejun Huang (Peking University)
Wenhao Zhang (UT Southwestern Medical Center)
Si Wu (Peking University)
More from the Same Authors
- 2022 Poster: SNN-RAT: Robustness-enhanced Spiking Neural Network through Regularized Adversarial Training
  Jianhao Ding · Tong Bu · Zhaofei Yu · Tiejun Huang · Jian Liu
- 2022 Poster: Training Spiking Neural Networks with Event-driven Backpropagation
  Yaoyu Zhu · Zhaofei Yu · Wei Fang · Xiaodong Xie · Tiejun Huang · Timothée Masquelier
- 2022 Poster: Temporal Effective Batch Normalization in Spiking Neural Networks
  Chaoteng Duan · Jianhao Ding · Shiyan Chen · Zhaofei Yu · Tiejun Huang
- 2022 Poster: Learning Optical Flow from Continuous Spike Streams
  Rui Zhao · Ruiqin Xiong · Jing Zhao · Zhaofei Yu · Xiaopeng Fan · Tiejun Huang
- 2022: Conformal Isometry of Lie Group Representation in Recurrent Network of Grid Cells
  Dehong Xu · Ruiqi Gao · Wenhao Zhang · Xue-Xin Wei · Ying Nian Wu
- 2022 Spotlight: Training Spiking Neural Networks with Event-driven Backpropagation
  Yaoyu Zhu · Zhaofei Yu · Wei Fang · Xiaodong Xie · Tiejun Huang · Timothée Masquelier
- 2022 Spotlight: Lightning Talks 2A-2
  Harikrishnan N B · Jianhao Ding · Juha Harviainen · Yizhen Wang · Lue Tao · Oren Mangoubi · Tong Bu · Nisheeth Vishnoi · Mohannad Alhanahnah · Mikko Koivisto · Aditi Kathpalia · Lei Feng · Nithin Nagaraj · Hongxin Wei · Xiaozhu Meng · Petteri Kaski · Zhaofei Yu · Tiejun Huang · Ke Wang · Jinfeng Yi · Jian Liu · Sheng-Jun Huang · Mihai Christodorescu · Songcan Chen · Somesh Jha
- 2022 Spotlight: SNN-RAT: Robustness-enhanced Spiking Neural Network through Regularized Adversarial Training
  Jianhao Ding · Tong Bu · Zhaofei Yu · Tiejun Huang · Jian Liu
- 2022 Poster: Oscillatory Tracking of Continuous Attractor Neural Networks Account for Phase Precession and Procession of Hippocampal Place Cells
  Tianhao Chu · Zilong Ji · Junfeng Zuo · Wenhao Zhang · Tiejun Huang · Yuanyuan Mi · Si Wu
- 2022 Poster: Translation-equivariant Representation in Recurrent Networks with a Continuous Manifold of Attractors
  Wenhao Zhang · Ying Nian Wu · Si Wu
- 2021 Poster: Noisy Adaptation Generates Lévy Flights in Attractor Neural Networks
  Xingsi Dong · Tianhao Chu · Tiejun Huang · Zilong Ji · Si Wu
- 2021 Poster: Deep Residual Learning in Spiking Neural Networks
  Wei Fang · Zhaofei Yu · Yanqi Chen · Tiejun Huang · Timothée Masquelier · Yonghong Tian
- 2020 Poster: UnModNet: Learning to Unwrap a Modulo Image for High Dynamic Range Imaging
  Chu Zhou · Hang Zhao · Jin Han · Chang Xu · Chao Xu · Tiejun Huang · Boxin Shi
- 2020 Poster: Learning Individually Inferred Communication for Multi-Agent Cooperation
  gang Ding · Tiejun Huang · Zongqing Lu
- 2020 Oral: Learning Individually Inferred Communication for Multi-Agent Cooperation
  gang Ding · Tiejun Huang · Zongqing Lu
- 2019 Poster: A Normative Theory for Causal Inference and Bayes Factor Computation in Neural Circuits
  Wenhao Zhang · Si Wu · Brent Doiron · Tai Sing Lee
- 2019 Poster: Push-pull Feedback Implements Hierarchical Information Retrieval Efficiently
  Xiao Liu · Xiaolong Zou · Zilong Ji · Gengshuo Tian · Yuanyuan Mi · Tiejun Huang · K. Y. Michael Wong · Si Wu