Disrupting and preventing deepfake abuse: exploring criminal law responses to AI-facilitated abuse

Research output: Chapter in Book/Report/Conference proceeding › Chapter (Book) › Research › peer-reviewed


Artificial Intelligence (AI) is transforming the landscape of technology-facilitated abuse. In late 2017, a Reddit user uploaded a series of ‘fake’ pornographic videos transposing female celebrities’ faces onto the bodies of pornography actors. This was the first documented example of amateur deepfakes appearing in the mainstream. Since then, the commercialisation of AI technologies has meant that anyone with a social media or online profile, or indeed anyone who has had an image or video taken of them, is at potential risk of being ‘deepfaked’. AI technologies have essentially eliminated the need for victims and abusers to have any kind of personal relationship or interaction, which substantially expands the pool of potential deepfake abusers and targets. As a result, new demands exist on the types of interventions needed to prevent, disrupt and respond to this form of abuse. In this chapter, drawing from an analysis of Australian criminal law, we consider whether legal responses are keeping pace with these ever-changing tools of abuse. We conclude by providing recommendations for future, multifaceted responses to deepfake abuse and the need for further research in this space.
Original language: English
Title of host publication: The Palgrave Handbook of Gendered Violence and Technology
Editors: Anastasia Powell, Asher Flynn, Lisa Sugiura
Place of publication: Cham, Switzerland
Publisher: Palgrave Macmillan
Number of pages: 21
ISBN (electronic): 9783030837341
ISBN (print): 9783030837334
Publication status: Published - 2021


  • artificial intelligence
  • AI-facilitated abuse
  • technology-facilitated abuse
  • deepfakes
  • image-based abuse
