Poster

Causal Context Adjustment Loss for Learned Image Compression

Minghao Han · Shiyin Jiang · Shengxi Li · Xin Deng · Mai Xu · Ce Zhu · Shuhang Gu

East Exhibit Hall A-C #1208
[ Project Page ]
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

In recent years, learned image compression (LIC) techniques have notably surpassed conventional methods in terms of rate-distortion (RD) performance. Most current learned techniques are VAE-based with an autoregressive entropy model, which improves RD performance by exploiting the decoded causal context. However, existing methods rely on a fixed, hand-crafted causal context, and how to obtain a more effective causal context, thereby making the autoregressive entropy model more accurate, is worth exploring. In this paper, we make the first attempt to explicitly adjust the causal context with our proposed Causal Context Adjustment loss (CCA-loss). By imposing the CCA-loss, the neural network learns to spontaneously adjust the causal context, so that the decoded context makes the estimation of the remaining latent representations more accurate. Furthermore, as transformer technology has developed remarkably, its variants have been adopted by many state-of-the-art (SOTA) LIC techniques. However, existing computing devices are not well adapted to the computation pattern of the attention mechanism, which incurs heavy computation and high inference latency. To overcome this, we build a convolutional neural network (CNN) image compression model and adopt an unevenly grouped channel-wise autoregressive strategy for high efficiency. Ultimately, the proposed CNN-based LIC network trained with our Causal Context Adjustment loss attains a favorable trade-off between inference latency and rate-distortion performance.
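For concreteness, below is a minimal PyTorch sketch (not the authors' implementation) of the kind of unevenly grouped channel-wise autoregressive entropy model the abstract describes: the latent channels are split into groups of increasing size, and each group's Gaussian parameters are predicted from a hyperprior feature together with the previously decoded groups, which form the causal context. The group sizes, channel counts, and module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical uneven split of 320 latent channels: small early groups
# carry the causal context that conditions the larger later groups.
GROUP_SIZES = [16, 16, 32, 64, 192]

class ChannelARPredictor(nn.Module):
    """Sketch of an unevenly grouped channel-wise autoregressive
    entropy model (illustrative, not the paper's exact architecture)."""

    def __init__(self, hyper_ch=192):
        super().__init__()
        self.nets = nn.ModuleList()
        decoded = 0
        for g in GROUP_SIZES:
            # Predict per-channel (mean, scale) for the current group
            # from the hyperprior feature plus all previously decoded groups.
            self.nets.append(nn.Sequential(
                nn.Conv2d(hyper_ch + decoded, 224, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(224, 2 * g, kernel_size=1),
            ))
            decoded += g

    def forward(self, y, hyper_feat):
        groups = torch.split(y, GROUP_SIZES, dim=1)
        means, scales, ctx = [], [], [hyper_feat]
        for g, net in zip(groups, self.nets):
            params = net(torch.cat(ctx, dim=1))
            mu, sigma = params.chunk(2, dim=1)
            means.append(mu)
            scales.append(sigma)
            ctx.append(g)  # the decoded group joins the causal context
        return torch.cat(means, dim=1), torch.cat(scales, dim=1)
```

Because decoding proceeds group by group, the earlier groups serve as causal context for the later ones; the CCA-loss proposed in the paper is what lets the network adjust which information flows into that context, rather than leaving the context hand-crafted and fixed.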
