

Poster

SLIM: Style-Linguistics Mismatch Model for Generalized Audio Deepfake Detection

Yi Zhu · Surya Koppisetti · Trang Tran · Gaurav Bharaj

East Exhibit Hall A-C #3410
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Audio deepfake detection (ADD) is crucial to combat the misuse of speech synthesized by generative AI models. Existing ADD models suffer from generalization issues, with a large performance discrepancy between in-domain and out-of-domain data. Moreover, the black-box nature of existing models limits their use in real-world scenarios, where explanations are required for model decisions. To alleviate these issues, we introduce a new ADD model that explicitly uses the Style-LInguistics Mismatch (SLIM) in fake speech to separate it from real speech. SLIM first employs self-supervised pretraining on only real samples to learn the style-linguistics dependency in the real class. The learned features are then used in combination with standard pretrained acoustic features (e.g., Wav2vec) to train a classifier on the real and fake classes.
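The two-stage idea in the abstract can be illustrated with a minimal toy sketch. This is not the authors' implementation: the embeddings are synthetic, the "dependency model" is a simple least-squares map standing in for SLIM's self-supervised stage, and the classifier is a bare threshold on the mismatch score rather than a learned model over concatenated features.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hypothetical embedding dimension

def make_split(n, real):
    # Toy stand-ins for style and linguistics embeddings: for "real"
    # speech the linguistics embedding is a noisy function of the style
    # embedding; for "fake" speech the two are independent.
    style = rng.normal(size=(n, D))
    if real:
        ling = style + 0.1 * rng.normal(size=(n, D))
    else:
        ling = rng.normal(size=(n, D))
    return style, ling

# Stage 1: learn the style->linguistics dependency from REAL samples only.
style_real, ling_real = make_split(200, real=True)
W, *_ = np.linalg.lstsq(style_real, ling_real, rcond=None)

def mismatch(style, ling):
    # Residual of the learned dependency: small for real, large for fake.
    return np.linalg.norm(ling - style @ W, axis=1)

# Stage 2 (sketch): in SLIM this score would be combined with pretrained
# acoustic features (e.g. Wav2vec) to train a classifier; here we simply
# threshold the mismatch score between the two class means.
s_r, l_r = make_split(100, real=True)
s_f, l_f = make_split(100, real=False)
real_scores = mismatch(s_r, l_r)
fake_scores = mismatch(s_f, l_f)
threshold = (real_scores.mean() + fake_scores.mean()) / 2
acc = ((real_scores < threshold).mean() + (fake_scores >= threshold).mean()) / 2
```

On this toy data the fake samples violate the dependency learned from real samples, so their residuals are much larger and the threshold separates the classes cleanly.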
