Making DeepFakes more spurious: Evading deep face forgery detection via trace removal attack

Chi Liu, Huajie Chen, Tianqing Zhu, Jun Zhang, Wanlei Zhou

Research output: Contribution to journal › Article › Research › peer-review

4 Citations (Scopus)

Abstract

DeepFakes are raising significant social concerns. Although various DeepFake detectors have been developed as forensic countermeasures, these detectors remain vulnerable to attack. Recently, a few attacks, principally adversarial attacks, have succeeded in cloaking DeepFake images to evade detection. However, these attacks typically follow detector-specific designs that require prior knowledge of the detector, leading to poor transferability. Moreover, they consider only simple security scenarios; little is known about their effectiveness in more demanding scenarios where the detector's defensive capability or the attacker's knowledge varies. To address these challenges, this paper proposes a novel attack for DeepFake anti-forensics called the trace removal attack. Instead of targeting the detector, the trace removal attack looks into the original DeepFake creation pipeline and attempts to remove all detectable natural DeepFake traces, rendering the fake images more "authentic". This detector-agnostic design makes the attack effective against arbitrary or even unknown detectors. To implement the attack, we first perform an in-depth DeepFake trace discovery, which identifies three discernible traces: spatial anomalies, spectral disparities, and noise fingerprints. We then propose an adversarial learning-based trace removal network (TR-Net) comprising one generator and multiple discriminators. Each discriminator is responsible for one individual trace representation, avoiding cross-trace interference, and the discriminators are arranged in parallel, which prompts the generator to remove all traces simultaneously. To evaluate the attack's efficacy, we crafted heterogeneous security scenarios in which the detectors were equipped with different levels of defense and the attackers' background knowledge of the data varied. The experimental results show that the proposed attack can significantly compromise the detection accuracy of six state-of-the-art DeepFake detectors while causing only a negligible degradation in visual quality.
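The one-generator, multiple-discriminator structure described above can be made concrete with a small sketch. The following is a minimal, illustrative PyTorch example, not the paper's TR-Net: the trace extractors, toy network architectures, loss weights, and the L1 reconstruction term are all assumptions chosen to show the overall structure, in which each discriminator judges only its own trace representation and the generator is trained against all of them in parallel.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative trace representations (assumptions, not the paper's exact
# definitions): each discriminator sees the image through one "trace" lens.

def spatial_trace(x):
    # Spatial anomalies: the raw RGB image itself.
    return x

def spectral_trace(x):
    # Spectral disparities: log-magnitude of the 2D Fourier spectrum.
    return torch.log1p(torch.fft.fft2(x, norm="ortho").abs())

def noise_trace(x):
    # Noise fingerprint: high-frequency residual after a fixed blur
    # (a stand-in for a learned or hand-crafted denoiser).
    return x - F.avg_pool2d(x, 3, stride=1, padding=1)

TRACES = [spatial_trace, spectral_trace, noise_trace]

class TinyG(nn.Module):
    """Toy residual generator standing in for the trace removal generator."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, x):
        return torch.clamp(x + self.body(x), 0.0, 1.0)

class TinyD(nn.Module):
    """Toy PatchGAN-style discriminator for one trace representation."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1))
    def forward(self, t):
        return self.body(t)

G = TinyG()
Ds = nn.ModuleList([TinyD() for _ in TRACES])
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(Ds.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(fake, real):
    # 1) Update each discriminator on its own trace only, so no discriminator
    #    sees another trace (avoiding cross-trace interference).
    cleaned = G(fake).detach()
    loss_d = 0.0
    for D, trace in zip(Ds, TRACES):
        pr, pf = D(trace(real)), D(trace(cleaned))
        loss_d = loss_d + bce(pr, torch.ones_like(pr)) \
                        + bce(pf, torch.zeros_like(pf))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Update the generator to fool all discriminators at once, plus an L1
    #    reconstruction term to preserve visual quality (weight illustrative).
    cleaned = G(fake)
    loss_g = 10.0 * F.l1_loss(cleaned, fake)
    for D, trace in zip(Ds, TRACES):
        out = D(trace(cleaned))
        loss_g = loss_g + bce(out, torch.ones_like(out))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Smoke test with random tensors standing in for fake/real face images.
fake, real = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
print(train_step(fake, real))
```

Keeping one discriminator per trace, rather than a single discriminator over concatenated representations, mirrors the abstract's stated rationale: each discriminator supervises one trace in isolation, and their parallel arrangement pushes the generator to suppress all traces simultaneously.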

Original language: English
Pages (from-to): 5182-5196
Number of pages: 15
Journal: IEEE Transactions on Dependable and Secure Computing
Volume: 20
Issue number: 6
DOIs
Publication status: Published - Nov 2023
Externally published: Yes

Keywords

  • adversarial attack
  • anti-forensics
  • DeepFake detection
  • Deepfakes
  • Detectors
  • Faces
  • Fingerprint recognition
  • Generators
  • Image forgery
  • Image reconstruction
  • Pipelines
