Applying image processing algorithms independently to each frame of a video often leads to temporal inconsistency in the resulting video. To address this issue, we present a novel and general approach to blind video temporal consistency. Our method is trained directly on a pair of original and processed videos rather than on a large dataset. Unlike most previous methods, which enforce temporal consistency with optical flow, we show that temporal consistency can be achieved by training a convolutional network on a single video with a Deep Video Prior. Moreover, we propose a carefully designed iteratively reweighted training strategy to address the challenging multimodal inconsistency problem. We demonstrate the effectiveness of our approach on 7 computer vision tasks on videos. Extensive quantitative and perceptual experiments show that our approach outperforms state-of-the-art methods for blind video temporal consistency.
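As a rough illustration of the training setup described above (a minimal sketch, not the authors' released implementation), the following PyTorch code trains a small convolutional network on a single pair of original and processed videos; the names `ConvNet`, `train_dvp`, `frames_in`, and `frames_proc` are hypothetical, and the weighting term is a simplified stand-in for the paper's iteratively reweighted training strategy.

```python
# Minimal sketch of single-video training with a deep video prior.
# Hypothetical illustration; not the authors' released code.
import torch
import torch.nn as nn

class ConvNet(nn.Module):
    """Small fully convolutional network (hypothetical architecture)."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def train_dvp(frames_in, frames_proc, steps=25, lr=1e-4):
    """frames_in / frames_proc: (T, 3, H, W) tensors holding the original
    video and its per-frame processed (temporally flickering) counterpart."""
    net = ConvNet()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        for x, p in zip(frames_in, frames_proc):
            out = net(x.unsqueeze(0))
            # Per-pixel L1 distance to the processed target frame.
            dist = (out - p.unsqueeze(0)).abs()
            # Crude stand-in for iteratively reweighted training:
            # down-weight pixels far from the target so the network fits
            # one dominant mode instead of averaging inconsistent modes.
            with torch.no_grad():
                w = torch.exp(-dist.mean(dim=1, keepdim=True))
            loss = (w * dist).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    # Stopping early exploits the prior: the network fits the temporally
    # consistent content before it fits the frame-to-frame flicker.
    return net
```

The key design choice this sketch tries to convey is that no optical flow or temporal loss is used: temporal consistency emerges from the network's inductive bias when it is fit to a single video and optimization is stopped early.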
Author Information
Chenyang Lei (HKUST)
Yazhou Xing (HKUST)
Qifeng Chen (HKUST)