Diving Deep into Deepfake Porn (Part 2 of 3)
Trigger Warning: Detailed Discussions of Child Sexual Abuse (CSA), Child Pornography, and Pedophilia as well as Sexual Violence and Sexual Exploitation
Last week, I wrote about the insidious nature of deepfake pornography and how it serves to inflict sexual violence on its victims. In that post, I explored the urgency of the consequences and the exploitation that deepfakes create for those affected, the majority of whom are women. However, there is an even more despicable layer to consider when discussing deepfake porn: deepfaked child sexual abuse material (CSAM). NOTE: I am using the term CSAM throughout this article instead of the phrase “child pornography”. For more information on why, click here.
As a quick recap, deepfakes are AI-generated images and videos built from synthetically created false likenesses. Deepfakes of children operate in the same way as any other deepfake content: either by generating artificial children or by superimposing a real child’s likeness onto a fake body. As I stated in my previous blog post, deepfake pornography is a form of sexual violence in which individuals consume sexually explicit content without the knowledge or consent of the victims depicted in the images. When it comes to deepfakes of children, however, the urgency of the issue worsens: it opens the door to real violence against children and provides affordances to the pedophiles who seek out and create this content.
According to the National Center for Missing & Exploited Children’s CyberTipline, in 2022 online platforms submitted nearly 32 million reports of suspected child sexual exploitation, containing 88.3 million images, videos, and other files of CSAM and other child exploitation material. The CyberTipline reports show that these numbers have climbed steadily over the years.
As of right now, there is no reliable data on the number of child deepfakes being created and spread on the internet. However, as AI imagery capabilities and deepfakes continue to evolve, we will surely see a spike in this type of content being circulated and reported online. Perhaps the worst outcome of the rise of deepfake porn is how it expands an already heinous existing market, making CSAM more available and accessible to pedophiles. Despite the lack of concrete data on these markets, there have already been several cases of pedophiles being arrested and sentenced to prison for creating deepfake CSAM. In February of this year, a computer programmer in Spain was arrested after using AI software to create deepfake CSAM based on real child abuse images he already possessed. And back in April, a man from Quebec was sentenced to eight years in prison for the creation of deepfakes and possession of CSAM.
Judge Benoit Gagnon, who presided over the Quebec case, called this use of deepfake technology “chilling” and wrote that “a simple video excerpt of a child available on social media, or a video of children taken in a public place, could turn them into potential victims of child pornography”. Gagnon continued, stating that these new abuse images encourage the market and endanger children by “fuelling fantasies that incite sexual offenses against children”. Studies have shown a link between individuals found guilty of possessing or distributing CSAM and those who commit or attempt to commit physical sexual violence against children. In this way, synthetic sexual abuse images open a pathway toward material harm for children.
As for how these images are created, they are either made by superimposing a real child’s face onto sexually explicit material or generated entirely artificially. In the former case, the problem comes from real children being directly sexually victimized and exploited. The latter form, however, is just as concerning and dangerous because of how AI generates images. Most AI imaging software operates by “training” on existing image sets so that it can produce more realistic and accurate images from sets of descriptors. This means that pedophiles generating this type of content are using existing CSAM to train the AI that produces these deepfakes.
One of my more speculative concerns is how AI technology provides affordances to pedophiles. CSAM deepfakes are more difficult to detect than other digitally disseminated CSAM. In the case against the man from Quebec, it was noted that many known CSAM images and videos carry a “digital fingerprint” that allows law enforcement to identify and track the content. Newly generated deepfakes, however, have no such fingerprint on record, making it harder to stop the creation and spread of the material.
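To make the idea of a “digital fingerprint” more concrete, here is a minimal sketch of hash-based matching, the general approach behind detection tools built on databases of known material. It is illustrative only: real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding, and the fingerprint list and function names below are hypothetical.

```python
# Illustrative sketch only: real detection systems use perceptual hashing
# that tolerates resizing and re-encoding; this example uses a plain
# cryptographic hash to show the general idea of matching files against
# a database of previously identified material.
import hashlib
from pathlib import Path

# Hypothetical set of fingerprints of already-identified abuse material,
# of the kind maintained by clearinghouses and shared with platforms.
KNOWN_FINGERPRINTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(path: Path) -> str:
    """Compute a SHA-256 digest of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_material(path: Path) -> bool:
    """A file can only be flagged if its fingerprint is already on record."""
    return fingerprint(path) in KNOWN_FINGERPRINTS

# A newly generated deepfake has never been catalogued, so its fingerprint
# matches nothing in the database and the file slips past this kind of check.
```

The key point is that matching only works for material that has already been identified and catalogued; a freshly generated deepfake matches nothing, which is exactly the detection gap described above.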
Additionally, I fear that deepfakes will allow pedophiles to claim ignorance when it comes to possession charges. In the Quebec case, the man was given three years in prison for creating deepfake CSAM but received an additional four and a half years for possession of non-synthetic CSAM. As I mentioned previously, deepfakes are only becoming more realistic, and soon it will be impossible to tell the difference between a real image and an AI-generated one. This means that, to avoid a harsher sentence, convicted pedophiles could argue they did not know the CSAM in their possession was real and believed it was merely artificially generated content.
Though deepfake images and videos are artificial, the violence they cause toward children is very, very real. Developments in AI have created the conditions for irreparable sexual harm, in which anyone, even children, can be sexually exploited, and I fear it is too little, too late when it comes to stopping this type of content. In my next post, I will look at the current challenges surrounding the prevention of deepfake pornography, noting the gaps in platform regulation and the legal system that allow this content to propagate. I will also look at the measures being taken to put a stop to deepfake pornography, as well as the support available for victims of this abuse.