Diving Deep Into Deepfake Porn (Part 3 of 3)

By Molly Frances Haines

Trigger Warning: Mentions of Sexual Violence, Sexual Assault, Child Sexual Assault, Sexual Exploitation and Pedophilia

In my previous two posts (linked here and here), I discussed how deepfake pornography exploits both adults and children and outlined the chilling consequences of AI-generated exploitative sexual imagery. In this final post, I will examine the challenges that currently prevent deepfakes from being eradicated and evaluate the measures platforms and governments are taking to stop the spread of this content.

One of the most crucial issues surrounding deepfake pornography is that anything posted online is more or less there forever. While some major websites like Pornhub and Reddit have banned deepfake pornography, smaller sites continue to host this content without consequence. It’s a Sisyphean battle: even if one deepfake website were to shut down, a new one could pop up later that same day, churning out the same abusive content. And that is just for content featuring adults. The virtual market for CSAM (child sexual abuse material), synthetic or real, is much more difficult to track given the unlawful nature of the content. Presently, there is no legislative standard or digital code of conduct in place that’s strong enough to keep this kind of content off the internet.

So, what can be done? An article by Karen Hao outlines the current legal landscape and the challenges that hinder legal action. In the United States, most states have laws against revenge porn; however, only California and Virginia have laws that specifically cover fake and deepfaked imagery. The UK bans revenge porn but currently has no provision restricting deepfake content, and at this time no other country restricts fake nonconsensual porn at the national level.

As far as deepfake CSAM is concerned, the laws are not as clear-cut as they are for non-synthetic CSAM. Because CSAM is illegal, it is not sold in the same public markets as adult deepfake pornography; rather, it is sold and traded in private circles online. While some CSAM can be tracked online in order to catch predators, the lack of digital traceability for deepfakes poses a huge problem for law enforcement and justice systems, as much of this content goes undetected. There is also global legal ambiguity on how to treat deepfake CSAM because it is such a new and rapidly emerging problem. However, we are beginning to see laws amended to include deepfakes in their protections, and at this time deepfakes of children are treated as child sexual abuse material if they depict a person under the age of 18 in a sexualized manner.

Rape and sexual assault cases are already difficult to prosecute, and those seeking legal justice for deepfake pornography face even more hurdles. In these cases, the limited existing laws may apply only in certain circumstances. For example, IP law may apply if a victim’s face was taken from a copyrighted photo, and harassment law can be used if the victim can prove that a perpetrator intended to harm them. However, gathering evidence is often impossible: perpetrators can operate anonymously, intent is hard to prove, and a deepfake photo or website may not be traceable online. The lack of national protections across the globe often places the onus on the individual victim to get the content removed. In her continued response to the Atrioc situation (discussed in the first post of this series), QTCinderella stated, “It should not be part of my job to have to pay money to get this stuff taken down. It should not be part of my job to be harassed, to see pictures of me ‘nude’ spread around.”

Again, this problem is even worse when children are the ones being victimized: they are far less likely to discover, or even understand, that they have been sexually violated, and it falls to a parent or legal guardian to seek justice on their behalf. Law enforcement does treat crimes against children somewhat more strictly; however, at this time, not nearly enough is being done to prevent children from being sexually exploited online.

What we need first and foremost is a massive overhaul of government legislation regarding nonconsensual sexual material and CSAM. There need to be legal repercussions for individuals and websites that manufacture and distribute this content. Additionally, we need tech companies and platforms to step up and stop hosting this content. Unfortunately, the ship has sailed with AI development, and we can’t undo the damage that has already been caused through synthetic image and photo manipulation software. But, there is still hope. With the EU passing legislation like the Digital Services Act and the UK developing the Online Safety Bill, it is possible that in due time deepfake porn could be criminalized.

Finally, there needs to be better support for victims of deepfakes. Organizations such as DeepTrust Alliance, the Cyber Civil Rights Initiative, and EndTab all work to protect women who are victims of deepfakes and provide resources for aid, education, and activism surrounding digital exploitation. Team Hope, a part of the National Center for Missing and Exploited Children, and the International Center for Missing and Exploited Children both offer support for victims of child sexual abuse and do work to prevent the spread of CSAM online. Given the current sociopolitical climate surrounding sexual assault, women’s rights, and child safety, now more than ever we need to support victims of deepfake sexual violence. Right now, we must take a stand for safety by fighting against this disgusting technology.

Image: Keyboard by Adi Kuntsman