
OST: Improving Generalization of DeepFake Detection via One-Shot Test-Time Training
Liang Chen · Yong Zhang · Yibing Song · Jue Wang · Lingqiao Liu

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #133

State-of-the-art deepfake detectors identify forgeries well when evaluated on a test set similar to the training set, but struggle to maintain good performance when the test forgeries exhibit characteristics different from the training images, e.g., forgeries created by unseen deepfake methods. Such weak generalization hinders the applicability of deepfake detectors. In this paper, we introduce a new learning paradigm designed specifically for the generalizable deepfake detection task. Our key idea is to construct a test-sample-specific auxiliary task to update the model before applying it to the sample. Specifically, we synthesize pseudo-training samples from each test image and create a test-time training objective to update the model. Moreover, we propose to leverage meta-learning to ensure that a fast, single-step test-time gradient descent, dubbed one-shot test-time training (OST), is sufficient for good deepfake detection performance. Extensive results across several benchmark datasets demonstrate that our approach performs favorably against existing methods in terms of generalization to unseen data and robustness to different post-processing steps.
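The core procedure can be sketched in a few lines: synthesize a pseudo-fake from the test image, take a single gradient step on that test-sample-specific objective, then classify the image with the updated parameters. The sketch below is a minimal, hypothetical illustration of this one-shot test-time loop, not the paper's implementation: the feature extractor, the blending-based pseudo-fake synthesis, and the logistic-regression detector are all stand-in assumptions, and the meta-learned initialization is omitted.

```python
import numpy as np

def features(img):
    # Hypothetical feature extractor: global mean, std, and a crude
    # high-frequency residual energy (a stand-in for a deep network).
    hf = img - np.roll(img, 1, axis=0)
    return np.array([img.mean(), img.std(), np.abs(hf).mean()])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_pseudo_fake(img, rng):
    # Blend a spatially shifted copy back onto the image to mimic
    # blending artifacts; a simplified stand-in for the paper's
    # pseudo-training-sample synthesis.
    shifted = np.roll(img, int(rng.integers(1, 8)), axis=1)
    alpha = 0.5
    return alpha * shifted + (1.0 - alpha) * img

def ost_predict(img, w, b, lr=0.1, rng=None):
    """One-shot test-time training: one gradient step on pseudo samples
    synthesized from the test image, then classify the image itself.
    Returns (fake_score, updated_w, updated_b)."""
    rng = rng or np.random.default_rng(0)
    pairs = [
        (features(img), 0.0),                        # pseudo-real: label 0
        (features(make_pseudo_fake(img, rng)), 1.0), # pseudo-fake: label 1
    ]
    # Single logistic-regression gradient step per pseudo sample.
    for x, y in pairs:
        p = sigmoid(w @ x + b)
        w = w - lr * (p - y) * x
        b = b - lr * (p - y)
    return float(sigmoid(w @ features(img) + b)), w, b

rng = np.random.default_rng(42)
img = rng.random((32, 32))          # toy "test image"
w0, b0 = np.zeros(3), 0.0           # would be the meta-learned init in OST
score, w1, b1 = ost_predict(img, w0, b0)
print(0.0 <= score <= 1.0)
```

In the paper, meta-learning tunes the initial parameters so that this single update step is already sufficient; the per-sample update is then discarded before the next test image, keeping the adaptation sample-specific.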

Author Information

Liang Chen (University of Adelaide)
Yong Zhang (CASIA)
Yibing Song (Tencent AI Lab)
Jue Wang (Tencent AI Lab)
Lingqiao Liu (The University of Adelaide)
