

Poster

Position-based Scaled Gradient for Model Quantization and Pruning

Jangho Kim · KiYoon Yoo · Nojun Kwak

Poster Session 0 #141

Keywords: Algorithms -> Multitask and Transfer Learning; Deep Learning; Visualization or Exposition Techniques for Deep Networks; Health; Applications


Abstract:

We propose the position-based scaled gradient (PSG), which scales the gradient depending on the position of a weight vector to make it more compression-friendly. First, we theoretically show that applying PSG to standard gradient descent (GD), which we call PSGD, is equivalent to GD in a warped weight space, a space made by warping the original weight space via an appropriately designed invertible function. Second, we empirically show that PSG, acting as a regularizer on the weight vector, is favorable for model compression tasks such as quantization and pruning. PSG reduces the gap between the weight distributions of a full-precision model and its compressed counterpart. This enables the versatile deployment of a model either in uncompressed mode or in compressed mode depending on the availability of resources. Experimental results on the CIFAR-10/100 and ImageNet datasets show the effectiveness of the proposed PSG in both pruning and quantization, even at extremely low bit-widths. The code is released on GitHub.
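The abstract describes scaling each weight's gradient according to the weight's position so that trained weights settle near compression-friendly values. Below is a minimal, hypothetical PyTorch sketch of such position-based scaling; the function name psg_scale, the uniform quantization grid, and the distance-based scale factor with parameters num_bits and alpha are illustrative assumptions, not the paper's exact formulation, which defines the scaling via the invertible warping function.

import torch

def psg_scale(weights, grad, num_bits=4, alpha=1e-3):
    # Illustrative position-based gradient scaling (assumed form, not the
    # paper's exact rule): each weight's gradient is rescaled by a factor
    # that depends on the weight's distance from its nearest point on an
    # assumed uniform quantization grid, encouraging weights to cluster
    # near compression-friendly positions.
    w_max = weights.abs().max()
    step = 2 * w_max / (2 ** num_bits - 1)          # assumed uniform grid step
    nearest = torch.round(weights / step) * step    # nearest grid point per weight
    dist = (weights - nearest).abs()                # position relative to the grid
    scale = 1.0 + alpha * dist / (step + 1e-12)     # assumed distance-based scale
    return grad * scale

# Hypothetical use inside a training step, after loss.backward():
# for p in model.parameters():
#     if p.grad is not None:
#         p.grad = psg_scale(p.data, p.grad)

The sketch only illustrates the idea of modulating gradients by weight position; the paper's theoretical equivalence to gradient descent in a warped space fixes the precise scaling function, which should be taken from the full text and released code.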
