Not made for each other- audio-visual dissonance-based deepfake detection and localization

Komal Chugh, Parul Gupta, Abhinav Dhall, Ramanathan Subramanian

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

92 Citations (Scopus)

Abstract

We propose detection of deepfake videos based on the dissimilarity between the audio and visual modalities, termed the Modality Dissonance Score (MDS). We hypothesize that manipulation of either modality leads to disharmony between the two, e.g., loss of lip-sync or unnatural facial and lip movements. MDS is computed as the mean aggregate of dissimilarity scores between audio and visual segments in a video. Discriminative features are learnt for the audio and visual channels in a chunk-wise manner, employing the cross-entropy loss for individual modalities and a contrastive loss that models inter-modality similarity. Extensive experiments on the DFDC and DeepFake-TIMIT datasets show that our approach outperforms the state-of-the-art by up to 7%. We also demonstrate temporal forgery localization, and show how our technique identifies the manipulated video segments.
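
For intuition, the MDS computation and the joint training objective can be sketched roughly as follows. This is a minimal PyTorch sketch; the toy encoders, feature dimensions, margin, and labels are illustrative assumptions, not the paper's implementation.

    # Minimal sketch of the Modality Dissonance Score (MDS) idea (illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ChunkEncoders(nn.Module):
        """Toy audio and visual chunk encoders with per-modality classification heads."""
        def __init__(self, audio_dim=128, visual_dim=512, embed_dim=64, num_classes=2):
            super().__init__()
            self.audio_net = nn.Sequential(nn.Linear(audio_dim, embed_dim), nn.ReLU())
            self.visual_net = nn.Sequential(nn.Linear(visual_dim, embed_dim), nn.ReLU())
            self.audio_cls = nn.Linear(embed_dim, num_classes)   # cross-entropy head
            self.visual_cls = nn.Linear(embed_dim, num_classes)  # cross-entropy head

        def forward(self, audio_chunks, visual_chunks):
            a = self.audio_net(audio_chunks)    # (num_chunks, embed_dim)
            v = self.visual_net(visual_chunks)  # (num_chunks, embed_dim)
            return a, v, self.audio_cls(a), self.visual_cls(v)

    def contrastive_loss(a, v, is_real, margin=0.99):
        """Pull audio/visual embeddings together for real videos, push them apart for fakes."""
        d = F.pairwise_distance(a, v)  # per-chunk audio-visual dissimilarity
        loss = is_real * d.pow(2) + (1.0 - is_real) * F.relu(margin - d).pow(2)
        return loss.mean(), d

    # MDS is the mean of the per-chunk dissimilarity scores; a video whose MDS exceeds a
    # validation-tuned threshold is flagged as fake, and chunks with high individual
    # scores point to the manipulated segments (temporal localization).
    model = ChunkEncoders()
    audio = torch.randn(10, 128)    # features for 10 audio chunks (toy values)
    visual = torch.randn(10, 512)   # features for the 10 corresponding visual chunks
    is_real = 1.0                   # 1 = real video, 0 = fake
    targets = torch.full((10,), int(is_real), dtype=torch.long)

    a, v, a_logits, v_logits = model(audio, visual)
    ce_loss = F.cross_entropy(a_logits, targets) + F.cross_entropy(v_logits, targets)
    c_loss, d = contrastive_loss(a, v, torch.tensor(is_real))
    total_loss = ce_loss + c_loss   # joint training objective
    mds = d.mean()                  # Modality Dissonance Score for the whole video
    print(f"MDS = {mds.item():.3f} (higher values suggest manipulation)")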

Original language: English
Title of host publication: Proceedings of the 28th ACM International Conference on Multimedia
Editors: Pradeep K. Atrey, Zhu Li
Place of Publication: New York NY USA
Publisher: Association for Computing Machinery (ACM)
Pages: 439-447
Number of pages: 9
ISBN (Electronic): 9781450379885
DOIs
Publication status: Published - 2020
Event: ACM International Conference on Multimedia 2020 - Online, United States of America
Duration: 12 Oct 2020 – 16 Oct 2020
Conference number: 28th
https://dl.acm.org/doi/proceedings/10.1145/3394171 (Proceedings)

Conference

Conference: ACM International Conference on Multimedia 2020
Abbreviated title: MM 2020
Country/Territory: United States of America
Period: 12/10/20 – 16/10/20
Internet address

Keywords

  • contrastive loss
  • deepfake detection and localization
  • modality dissonance
  • neural networks
