

Invited talk in Workshop: NIPS 2018 workshop on Compact Deep Neural Networks with industrial applications

Bandwidth efficient deep learning by model compression

Song Han


Abstract:

In the post-ImageNet era, computer vision and machine learning researchers are tackling more complicated AI problems using larger datasets, driving the demand for more computation. However, we are in the post-Moore’s Law world, where the amount of computation per unit cost and power is no longer increasing at its historic rate. This mismatch between the supply of and demand for computation highlights the need to co-design efficient algorithms and hardware. In this talk, I will discuss bandwidth-efficient deep learning by model compression, together with efficient hardware architecture support, saving memory bandwidth, networking bandwidth, and engineer bandwidth.
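The abstract does not spell out a specific compression pipeline, but a minimal sketch can illustrate the general idea of model compression: remove small-magnitude weights and quantize the survivors to a coarser grid, shrinking the memory footprint of a layer. The code below is an illustrative example using generic magnitude pruning and uniform quantization on a toy weight matrix; the function names, the chosen sparsity of 0.9, and the 8-bit setting are assumptions for demonstration, not the method presented in the talk.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so `sparsity` fraction are removed."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return weights * (np.abs(weights) > threshold)

def uniform_quantize(weights: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Snap values onto a uniform grid with 2**num_bits levels over [min, max]."""
    w_min, w_max = weights.min(), weights.max()
    levels = 2 ** num_bits - 1
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    return np.round((weights - w_min) / scale) * scale + w_min

# Toy usage: compress a dense layer's weight matrix to ~10% density, 8-bit values.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.9)
mask = w_sparse != 0                      # remember which weights were pruned
w_compressed = uniform_quantize(w_sparse, num_bits=8) * mask  # keep pruned entries at zero
print("nonzero fraction:", np.count_nonzero(w_compressed) / w_compressed.size)
```

In practice, pipelines of this kind store only the nonzero values plus a small codebook or scale, which is what reduces the bandwidth needed to move weights between memory, accelerators, and machines.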
