
The Dark Side of Generative AI: A Surge in Online Child Sexual Abuse Materials

The proliferation of generative AI technology is exacerbating the problem of online child sexual abuse materials (CSAM), according to a report by the UK’s Internet Watch Foundation (IWF). The report reveals a significant increase in digitally altered or completely synthetic images depicting children in explicit scenarios. One forum alone shared 3,512 images and videos over a 30-day period, the majority featuring young girls. Offenders were also found to be sharing advice with one another, and in some cases sharing AI models trained on real images. This alarming trend underscores the urgent need for proper controls to prevent online predators from exploiting generative AI tools for their perverse fantasies.

The IWF CEO, Susie Hargreaves OBE, emphasized the concerning implications of generative AI technology, stating that the organization is already witnessing an increase in this type of material being shared and sold on commercial child sexual abuse websites. The report also reveals a 17 percent increase in online AI-altered CSAM since the fall of 2023, as well as a rise in materials showing extreme and explicit sex acts. These materials include adult pornography altered to show a child’s face, as well as existing child sexual abuse content digitally edited with another child’s likeness on top.

Furthermore, the report highlights how rapidly generative AI is improving at producing fully synthetic CSAM videos. Although these videos are not yet sophisticated enough to pass as real footage of child sexual abuse, experts predict that advances in AI will soon make them more lifelike, just as still images have become photorealistic. In fact, a review of 12,000 new AI-generated images posted to a dark web forum found that 90 percent were realistic enough to be assessed under the existing laws that cover real CSAM.

In addition to the IWF report, another investigation by the National Society for the Prevention of Cruelty to Children (NSPCC) alleges that Apple is significantly underreporting the amount of child sexual abuse materials shared via its products. The NSPCC compared official numbers published by Apple to numbers gathered through freedom of information requests and found a stark discrepancy. While Apple reported 267 worldwide cases of CSAM to the National Center for Missing and Exploited Children (NCMEC) in 2023, the NSPCC alleges that the company was implicated in 337 offenses of child abuse images in England and Wales alone, just between April 2022 and March 2023.

Apple’s decision not to scan iCloud photo libraries for CSAM has raised concerns about how the company will manage content created with generative AI. The company has prioritized user security and privacy but has faced criticism for its approach to combating child sexual exploitation. In contrast, Google reported over 1.47 million cases of CSAM to the NCMEC in 2023, while Facebook removed 14.4 million pieces of content related to child sexual exploitation between January and March of this year.

The battle against online child exploitation is already challenging, with child predators exploiting social media platforms and their loopholes to engage with minors. Now, with the increasing power of generative AI in the hands of bad actors, the fight is intensifying. Watchdogs remain vigilant but wary, recognizing how quickly the threat is evolving. It is crucial for tech companies and authorities to collaborate and implement effective measures to combat this growing problem and protect vulnerable children online.

If you have had intimate images shared without your consent, call the Cyber Civil Rights Initiative’s 24/7 hotline at 844-878-2274 for free, confidential support. Additionally, the CCRI website provides helpful information and a list of international resources.