Spiking neural networks (SNNs) are more biologically plausible and energy efficient than their predecessors. However, deep SNNs still lack an efficient and generalized training method, especially for deployment on analog computing substrates. In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL). The LTL rule follows the teacher-student learning approach by mimicking the intermediate feature representations of a pre-trained ANN. By decoupling the learning of network layers and leveraging highly informative supervisory signals, we demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while maintaining low computational complexity. Our experimental results also show that SNNs trained in this way achieve accuracies comparable to those of their teacher ANNs on the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets. Moreover, the proposed LTL rule is hardware friendly: it can be easily implemented on-chip to perform fast parameter calibration and provides robustness against the notorious device non-ideality issues. It therefore opens up a myriad of opportunities for training and deploying SNNs on ultra-low-power mixed-signal neuromorphic computing chips.
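The layer-wise teacher-student idea behind LTL can be illustrated with a minimal NumPy sketch. This is a simplified, hypothetical rendition, not the paper's actual implementation: a single integrate-and-fire layer is trained with a purely local loss so that its firing rates mimic the ReLU activations of a pre-trained teacher layer, and a crude rate-based surrogate gradient stands in for the paper's learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_rates(x, w, T=20, v_th=1.0):
    """Run an integrate-and-fire layer for T steps and return firing rates."""
    v = np.zeros(w.shape[1])
    spikes = np.zeros(w.shape[1])
    current = x @ w  # constant input current at every time step
    for _ in range(T):
        v += current
        s = (v >= v_th).astype(float)  # emit a spike where threshold is crossed
        spikes += s
        v -= s * v_th  # soft reset of the membrane potential
    return spikes / T

# Hypothetical "teacher": one pre-trained ANN layer with ReLU activations.
w_ann = rng.normal(scale=0.5, size=(8, 4))
x = rng.random(8)
target = np.maximum(x @ w_ann, 0.0)  # the teacher's intermediate features

# "Student" SNN layer, trained with a local loss on this layer's output only,
# so its learning is decoupled from all other layers.
w_snn = rng.normal(scale=0.5, size=(8, 4))
mse0 = np.mean((lif_rates(x, w_snn) - target) ** 2)

lr = 0.05
for _ in range(300):
    err = lif_rates(x, w_snn) - target  # local, layer-wise error signal
    # Crude rate-based surrogate: treat d(rate)/d(current) as 1.
    w_snn -= lr * np.outer(x, err)

mse1 = np.mean((lif_rates(x, w_snn) - target) ** 2)
```

Because the loss is computed per layer against the teacher's features, each layer can in principle be calibrated independently, which is the property that makes the rule amenable to on-chip implementation.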
Author Information
Qu Yang (National University of Singapore)
Jibin Wu (National University of Singapore)
Malu Zhang (National University of Singapore)
Yansong Chua (Huawei Technologies Co., Ltd)
Xinchao Wang
Haizhou Li (The Chinese University of Hong Kong (Shenzhen); National University of Singapore)
More from the Same Authors
- 2022 Poster: Inception Transformer
  Chenyang Si · Weihao Yu · Pan Zhou · Yichen Zhou · Xinchao Wang · Shuicheng Yan
- 2023 Poster: Generator Born from Classifier
  Runpeng Yu · Xinchao Wang
- 2023 Poster: Frequency-Enhanced Data Augmentation for Vision-and-Language Navigation
  Keji He · Chenyang Si · Zhihe Lu · Yan Huang · Liang Wang · Xinchao Wang
- 2023 Poster: Structural Pruning for Diffusion Models
  Gongfan Fang · Xinyin Ma · Xinchao Wang
- 2023 Poster: Mixed Samples as Probes for Unsupervised Model Selection in Domain Adaptation
  Dapeng Hu · Jian Liang · Jun Hao Liew · Chuhui Xue · Song Bai · Xinchao Wang
- 2023 Poster: GraphAdapter: Tuning Vision-Language Models With Dual Knowledge Graph
  Xin Li · Dongze Lian · Zhihe Lu · Jiawang Bai · Zhibo Chen · Xinchao Wang
- 2023 Poster: Disentangling Voice and Content with Self-Supervision for Speaker Recognition
  TIANCHI LIU · Kong Aik Lee · Qiongqiong Wang · Haizhou Li
- 2023 Poster: LLM-Pruner: On the Structural Pruning of Large Language Models
  Xinyin Ma · Gongfan Fang · Xinchao Wang
- 2023 Poster: Backprop-Free Dataset Distillation
  Songhua Liu · Xinchao Wang
- 2022 Panel: Panel 6A-1: Exploring Example Influence… & Training Spiking Neural…
  Qu Yang · Qing Sun
- 2022 Spotlight: Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
  Dongze Lian · Daquan Zhou · Jiashi Feng · Xinchao Wang
- 2022 Spotlight: Lightning Talks 6A-1
  Ziyi Wang · Nian Liu · Yaming Yang · Qilong Wang · Yuanxin Liu · Zongxin Yang · Yizhao Gao · Yanchen Deng · Dongze Lian · Nanyi Fei · Ziyu Guan · Xiao Wang · Shufeng Kong · Xumin Yu · Daquan Zhou · Yi Yang · Fandong Meng · Mingze Gao · Caihua Liu · Yongming Rao · Zheng Lin · Haoyu Lu · Zhe Wang · Jiashi Feng · Zhaolin Zhang · Deyu Bo · Xinchao Wang · Chuan Shi · Jiangnan Li · Jiangtao Xie · Jie Zhou · Zhiwu Lu · Wei Zhao · Bo An · Jiwen Lu · Peihua Li · Jian Pei · Hao Jiang · Cai Xu · Peng Fu · Qinghua Hu · Yijie Li · Weigang Lu · Yanan Cao · Jianbin Huang · Weiping Wang · Zhao Cao · Jie Zhou
- 2022 Spotlight: Dataset Distillation via Factorization
  Songhua Liu · Kai Wang · Xingyi Yang · Jingwen Ye · Xinchao Wang
- 2022 Spotlight: Inception Transformer
  Chenyang Si · Weihao Yu · Pan Zhou · Yichen Zhou · Xinchao Wang · Shuicheng Yan
- 2022 Spotlight: Lightning Talks 2B-1
  Yehui Tang · Jian Wang · Zheng Chen · man zhou · Peng Gao · Chenyang Si · SHANGKUN SUN · Yixing Xu · Weihao Yu · Xinghao Chen · Kai Han · Hu Yu · Yulun Zhang · Chenhui Gou · Teli Ma · Yuanqi Chen · Yunhe Wang · Hongsheng Li · Jinjin Gu · Jianyuan Guo · Qiman Wu · Pan Zhou · Yu Zhu · Jie Huang · Chang Xu · Yichen Zhou · Haocheng Feng · Guodong Guo · yongbing zhang · Ziyi Lin · Feng Zhao · Ge Li · Junyu Han · Jinwei Gu · Jifeng Dai · Chao Xu · Xinchao Wang · Linghe Kong · Shuicheng Yan · Yu Qiao · Chen Change Loy · Xin Yuan · Errui Ding · Yunhe Wang · Deyu Meng · Jingdong Wang · Chongyi Li
- 2022 Poster: Dataset Distillation via Factorization
  Songhua Liu · Kai Wang · Xingyi Yang · Jingwen Ye · Xinchao Wang
- 2022 Poster: Deep Model Reassembly
  Xingyi Yang · Daquan Zhou · Songhua Liu · Jingwen Ye · Xinchao Wang
- 2022 Poster: Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
  Dongze Lian · Daquan Zhou · Jiashi Feng · Xinchao Wang