
AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos
Yanze Wu · Xintao Wang · Gen Li · Ying Shan


This paper studies the problem of real-world video super-resolution (VSR) for animation videos, and reveals three key improvements for practical animation VSR. First, recent real-world super-resolution approaches typically rely on degradation simulation with basic operators that have no learning capability, such as blur, noise, and compression. In this work, we propose to learn such basic operators from real low-quality animation videos, and to incorporate the learned operators into the degradation generation pipeline. These neural-network-based basic operators help to better capture the distribution of real degradations. Second, a large-scale high-quality animation video dataset, AVC, is built to facilitate comprehensive training and evaluation for animation VSR. Third, we further investigate an efficient multi-scale network structure that combines the efficiency of unidirectional recurrent networks with the effectiveness of sliding-window-based methods. Thanks to these careful designs, our method, AnimeSR, restores real-world low-quality animation videos effectively and efficiently, outperforming previous state-of-the-art methods.
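To make the degradation-pipeline idea concrete, the following is a minimal NumPy sketch (not the authors' implementation): classical operators such as blur and noise are chained with a small convolutional operator standing in for a learned degradation. The `learned_operator` weights here are a hypothetical placeholder (a simple averaging kernel); in the paper they would be learned from real low-quality animation videos.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_blur(img, sigma=1.0):
    # Classical basic operator: separable Gaussian blur.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

def add_noise(img, std=0.01):
    # Classical basic operator: additive Gaussian noise.
    return np.clip(img + rng.normal(0.0, std, img.shape), 0.0, 1.0)

def learned_operator(img, weights):
    # Stand-in for a learned neural degradation: a single 3x3 convolution.
    # In practice these weights would come from training on real videos.
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += weights[dy, dx] * pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return np.clip(out, 0.0, 1.0)

def degrade(hr, scale=2):
    # Chain classical and learned operators, then downsample,
    # to synthesize a low-resolution training input from a HR frame.
    weights = np.full((3, 3), 1.0 / 9.0)  # hypothetical "learned" kernel
    lr = gaussian_blur(hr, sigma=1.0)
    lr = learned_operator(lr, weights)
    lr = lr[::scale, ::scale]
    return add_noise(lr)

hr = rng.random((32, 32))   # toy grayscale HR frame
lr = degrade(hr)
print(lr.shape)             # (16, 16)
```

Paired (LR, HR) frames produced this way can then supervise the super-resolution network, the premise being that a learned degradation operator matches real animation artifacts better than hand-crafted operators alone.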

Author Information

Yanze Wu (ARC Lab, Tencent PCG)
Xintao Wang (Applied Research Center, Tencent PCG)
Gen Li (Yonsei University)
Ying Shan (Tencent)
