Poster
On-Device Training Under 256KB Memory
Ji Lin · Ligeng Zhu · Wei-Ming Chen · Wei-Chen Wang · Chuang Gan · Song Han

Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #702

On-device training enables the model to adapt to new data collected from the sensors by fine-tuning a pre-trained model. Users can benefit from customized AI models without having to transfer the data to the cloud, protecting privacy. However, the training memory consumption is prohibitive for IoT devices that have tiny memory resources. We propose an algorithm-system co-design framework to make on-device training possible with only 256KB of memory. On-device training faces two unique challenges: (1) the quantized graphs of neural networks are hard to optimize due to low bit-precision and the lack of normalization; (2) the limited hardware resources (memory and computation) do not allow full backpropagation. To cope with the optimization difficulty, we propose Quantization-Aware Scaling to calibrate the gradient scales and stabilize 8-bit quantized training. To reduce the memory footprint, we propose Sparse Update to skip the gradient computation of less important layers and sub-tensors. The algorithm innovation is implemented by a lightweight training system, Tiny Training Engine, which prunes the backward computation graph to support sparse updates and offloads the runtime auto-differentiation to compile time. Our framework is the first practical solution for on-device transfer learning of visual recognition on tiny IoT devices (e.g., a microcontroller with only 256KB SRAM), using less than 1/1000 of the memory of PyTorch and TensorFlow while matching the accuracy. Our study enables IoT devices not only to perform inference but also to continuously adapt to new data for on-device lifelong learning. A video demo can be found here: https://youtu.be/XaDCO8YtmBw.
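
The abstract describes Quantization-Aware Scaling (QAS) only at a high level. Below is a minimal, illustrative NumPy sketch of the underlying idea under stated assumptions: a single per-tensor scale s, a plain SGD step, and the hypothetical helper name qas_update. It is not the paper's implementation (which lives inside Tiny Training Engine and handles quantization details such as per-channel scales and bias gradients); it only shows why the gradient of a quantized weight needs a 1/s^2 calibration factor.

import numpy as np

def qas_update(w_int8, grad_wq, s, lr=0.01):
    """Update an int8-stored weight so the step matches the fp32-equivalent step.

    Assumptions (illustrative only): the real-valued weight is W ~= s * w_int8,
    so by the chain rule the gradient w.r.t. the quantized weight is
    grad_wq ~= s * grad_W, i.e. its scale is off by a factor of s.
    """
    # The fp32 step would be           dW  = -lr * grad_W.
    # The equivalent quantized step is dWq = dW / s = -lr * grad_wq / s**2,
    # hence the s**-2 calibration factor applied here.
    w_fp = w_int8.astype(np.float32) - lr * grad_wq / (s ** 2)
    # Re-quantize back to int8 storage.
    return np.clip(np.round(w_fp), -128, 127).astype(np.int8)

# Toy usage with random data (purely illustrative).
rng = np.random.default_rng(0)
w = rng.integers(-128, 128, size=8, dtype=np.int8)
g = rng.normal(size=8).astype(np.float32)
print(qas_update(w, g, s=0.05))

Without the 1/s^2 factor, the ratio between weight magnitude and gradient magnitude in the quantized graph deviates from its fp32 counterpart, which is one reason low-precision training without normalization layers is hard to optimize, as the abstract notes.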

Author Information

Ji Lin (MIT)
Ligeng Zhu (MIT)
Wei-Ming Chen (MIT)
Wei-Chen Wang (Massachusetts Institute of Technology)

Wei-Chen Wang received his Ph.D. degree in Computer Science from the Department of Computer Science and Information Engineering at National Taiwan University, Taipei, Taiwan, in June 2021. He previously received his B.S. and M.S. degrees in Computer Science from the same department in 2015 and 2017, respectively. Dr. Wang is currently a Postdoctoral Research Fellow in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, USA. Prior to joining MIT, he served as an Executive Engineer at the Emerging System Laboratory, Macronix International Co., Ltd., Hsinchu, Taiwan. His current research interests include efficient deep learning, TinyML, embedded systems, memory/storage systems, in-memory/in-storage computing, and next-generation memory/storage architecture designs.

Chuang Gan (UMass Amherst/ MIT-IBM Watson AI Lab)
Song Han (MIT)