Poster
Learning Structured Sparsity in Deep Neural Networks
Wei Wen · Chunpeng Wu · Yandan Wang · Yiran Chen · Hai Li

Tue Dec 06 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #172

High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource-constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN’s evaluation. Experimental results show that SSL achieves on average 5.1X and 3.1X speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice the speedups obtained with non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improving the accuracy from 91.25% to 92.60%, which is still higher than that of the original ResNet with 32 layers. For AlexNet, SSL reduces the error by ~1%.
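The abstract describes SSL as group-wise regularization over filters, channels, filter shapes, and layer depth. A common way to impose such structured sparsity is a group Lasso penalty, where each group (e.g., an output filter or an input channel of a convolutional layer) is penalized by its L2 norm so that whole groups are driven to zero. The sketch below, assuming PyTorch, illustrates this idea for filter and channel groups only; the function name, the regularization strengths, and the toy model are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def group_lasso_penalty(conv_weight: torch.Tensor,
                        lam_filter: float,
                        lam_channel: float) -> torch.Tensor:
    """Group Lasso penalty over a conv weight of shape
    (out_channels, in_channels, kH, kW).

    Filter groups: all weights of one output filter.
    Channel groups: all weights connected to one input channel.
    """
    # One L2 norm per output filter (group = one row after flattening).
    filter_norms = conv_weight.flatten(start_dim=1).norm(dim=1)
    # One L2 norm per input channel (group across all filters).
    channel_norms = conv_weight.permute(1, 0, 2, 3).flatten(start_dim=1).norm(dim=1)
    return lam_filter * filter_norms.sum() + lam_channel * channel_norms.sum()

# Toy usage: add the structured penalty of every conv layer to the task loss.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 10, 3))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
logits = model(x).mean(dim=(2, 3))          # global average pooling as a toy head
loss = F.cross_entropy(logits, y)
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        # lam values are placeholder hyperparameters, not from the paper.
        loss = loss + group_lasso_penalty(m.weight, lam_filter=1e-4, lam_channel=1e-4)
loss.backward()                              # gradients now include the penalty
```

After training with such a penalty, filters or channels whose group norms have shrunk to (near) zero can be pruned outright, which is what makes the resulting sparsity hardware-friendly compared with unstructured element-wise sparsity.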

Author Information

Wei Wen (University of Pittsburgh)

Wei Wen is a Ph.D. student at the University of Pittsburgh. His research interests include efficient deep neural networks and neuromorphic computing systems.

Chunpeng Wu (University of Pittsburgh)
Yandan Wang (University of Pittsburgh)
Yiran Chen (University of Pittsburgh)
Hai Li (University of Pittsburgh)
