Grok chatbot allowed users to create digitally altered photos of minors in “minimal clothing”
After facing criticism for allowing users to generate sexually suggestive images of minors, Grok, the chatbot developed by Elon Musk's xAI, has acknowledged “lapses in safeguards” and says it is taking steps to address the issue.
Reports surfaced on social media alleging that Grok was being used to create digitally altered, sexualized photos of minors, including removing their clothing in some instances. In response, Grok admitted to the presence of vulnerabilities in its system and pledged to urgently fix them. The chatbot also provided a link to CyberTipline for reporting child sexual exploitation.
In a statement on X, Grok stated, “There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing, like the example you referenced. xAI has safeguards, but improvements are ongoing to block such requests entirely.”
French officials have referred the sexually explicit content generated by Grok to prosecutors, labeling it as “manifestly illegal.” xAI, the company behind Grok, responded to the allegations by dismissing them as “Legacy Media Lies.”
In one incident, Grok generated an AI image of two female minors in sexualized attire; the chatbot subsequently apologized, acknowledging that the image violated ethical standards and potentially U.S. laws on child pornography.
According to the Justice Department, federal law prohibits the production and distribution of child sexual abuse material, including the manipulation of images to create sexualized content involving minors. RAINN’s Stefan Turkheimer criticized xAI for downplaying the impact of these cases, emphasizing the lasting harm caused by such actions.
Copyleaks, a company that makes plagiarism and AI content detection tools, reported detecting thousands of sexually explicit images created by Grok in a single week, highlighting the dangers of AI safety failures and the potential for manipulated media to be weaponized.
“Spicy mode” controversy
Grok previously faced backlash for introducing “Spicy Mode” on its AI video-generation platform, Grok Imagine, a setting marketed as enabling edgier, more visually daring content. Concerns mounted when the feature was used to generate nude deepfakes of celebrities, including Taylor Swift, without consent.
Alon Yamin, CEO of Copyleaks, emphasized the need for clear consent when manipulating real people’s images using AI systems to prevent immediate and personal harm.