Child safety experts are warning that the proliferation of sexually explicit images of children generated by predators using artificial intelligence (AI) is overwhelming law enforcement's capacity to identify and rescue real-life victims. These AI-generated images are becoming so lifelike that investigators struggle to discern whether real children were harmed in their production, making it harder to determine which images depict actual abuse.
The use of AI to create child sexual abuse material (CSAM) is growing rapidly, with predators leveraging AI tools to quickly produce thousands of new images and flood the internet with this harmful content. The National Center for Missing and Exploited Children (NCMEC) has reported cases of offenders using AI in various ways to generate and alter CSAM, posing new challenges for investigators and prosecutors.
The legality of possessing AI-generated CSAM varies across states, and only a few states have laws criminalizing it. Legislation has been introduced at both the state and federal levels to close this gap. Child safety experts fear that the influx of AI-generated content will strain the resources of organizations like NCMEC, hampering the identification and rescue of victims.
AI companies have been criticized for not actively preventing or detecting the production of CSAM created using their technology. As AI allows predators to create large volumes of new CSAM images with ease, the burden on law enforcement and child safety organizations is expected to increase, threatening an already under-resourced and overwhelmed area of investigation. From changes in legislation to advancements in technology, the fight against AI-generated CSAM remains a complex and evolving challenge for child safety advocates and law enforcement agencies.
Photo credit: theguardian.com