Poster in Workshop: The Symbiosis of Deep Learning and Differential Equations

Quantized convolutional neural networks through the lens of partial differential equations

Ido Ben-Yair · Moshe Eliasof · Eran Treister


Abstract:

Quantization of Convolutional Neural Networks (CNNs) is a common approach to easing the computational burden of deploying CNNs. However, fixed-point arithmetic does not naturally match the computations that neural networks perform. In our work, we consider symmetric and stable variants of common CNNs for image classification, and of Graph Convolutional Networks (GCNs) for graph node classification. We demonstrate through several experiments that forward stability preserves the action of a network under different quantization rates, allowing stable quantized networks to behave similarly to their non-quantized counterparts while using fewer parameters. We also find that, at times, stability improves accuracy. These properties are of particular interest for sensitive, resource-constrained, and real-time applications.
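To make the two ingredients in the abstract concrete, below is a minimal sketch (not the authors' code) of a PDE-inspired "symmetric" residual block combined with simple fixed-point weight quantization. The block applies the same kernel K forward and transposed, y ← y − h·Kᵀσ(Ky), which keeps the layer's Jacobian symmetric negative semi-definite and hence the forward propagation stable. The channel count, step size h, and bit width are illustrative assumptions, as is the helper `quantize_uniform`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def quantize_uniform(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Symmetric uniform fixed-point quantization of a weight tensor."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale


class SymmetricResidualBlock(nn.Module):
    """Residual block of the form y <- y - h * K^T sigma(K y).

    Sharing K between the convolution and its transpose is what makes the
    block "symmetric" in the sense discussed in the abstract; the negative
    sign and small step size h give forward stability.
    """

    def __init__(self, channels: int, h: float = 0.1, bits: int = 8):
        super().__init__()
        self.K = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.1)
        self.h = h
        self.bits = bits

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # Quantize the kernel at inference time; training through round()
        # would additionally need a straight-through estimator.
        K = quantize_uniform(self.K, self.bits)
        z = F.relu(F.conv2d(y, K, padding=1))                      # sigma(K y)
        return y - self.h * F.conv_transpose2d(z, K, padding=1)    # - h K^T (.)


if __name__ == "__main__":
    block = SymmetricResidualBlock(channels=16, bits=4)
    x = torch.randn(2, 16, 32, 32)
    print(block(x).shape)  # torch.Size([2, 16, 32, 32])
```

The same construction carries over to graph convolutions by replacing the spatial convolution and its transpose with a graph convolution operator and its adjoint.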
