AI videos of child sexual abuse surged to record highs in 2025, new report finds

Artificial intelligence tools are playing a disturbing role in the creation of online child sexual abuse material (CSAM), according to a recent report by the Internet Watch Foundation (IWF). Analysts identified a record 3,440 AI-generated videos containing child sexual abuse content over the past year, a 26,362% increase from the 13 such videos detected the year before.

Of particular concern, more than half of these AI videos fall into Category A, the classification reserved for the most extreme imagery, including torture. The IWF warns that the spread of AI technology in this space poses significant risks to children, whose images can be manipulated and exploited by malicious actors. More alarming still, the accessibility and ease of use of AI video tools mean that even individuals with minimal technical knowledge can produce harmful content at scale.

According to the report, offenders are increasingly turning to AI as these tools grow more sophisticated. Even so, AI-generated material represents only a fraction of the CSAM the IWF identified and removed last year: the organization responded to more than 300,000 reports containing such material in 2025.

In the United States, federal law prohibits the production and distribution of CSAM, the term the Justice Department uses in place of "child pornography." AI-generated abusive content has also drawn scrutiny in other contexts, notably in the case of Grok, the AI chatbot developed by xAI, Elon Musk's artificial intelligence company. Grok came under fire for allowing users to create sexually explicit images of women and minors, prompting investigations by authorities and other stakeholders.

In response to the backlash, xAI announced safety measures to prevent users from generating inappropriate images using Grok. The company’s actions underscore the growing need for oversight and regulation in the realm of AI technology to protect vulnerable populations, particularly children, from exploitation and harm.

As AI tools for creating illicit content continue to evolve, policymakers, technology companies, and law enforcement agencies must collaborate on robust safeguards to combat online child sexual abuse material effectively. The IWF's findings are a stark reminder of the urgency of addressing this issue and protecting children in the digital age.
