Elon Musk’s Grok, the chatbot developed by his company xAI, acknowledged “lapses in safeguards” on the AI platform that allowed users to generate digitally altered, sexualized images of minors.
The admission comes after several users alleged on social media that people are using Grok to generate suggestive photos of minors, in some cases stripping them of clothing they were wearing in the original images.
In a post on Friday responding to one person on Musk-owned social media site X, Grok said it was “urgently fixing” the holes in its system. Grok also included a link to CyberTipline, a website where people can report child sexual exploitation.
“There are isolated cases where users prompted for and obtained AI images depicting minors in minimal clothing, like the example you referenced,” Grok said in a separate post on X on Thursday. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”
In another social media post, a user shared side-by-side photos of herself: one wearing a dress, and another that appears to be a digitally altered version of the same image showing her in a bikini. “How is this not illegal?” she wrote on X.
On Friday, French officials reported the sexually explicit content generated by Grok to prosecutors, calling it “manifestly illegal” in a statement, according to Reuters.
xAI, the company that developed the AI chatbot Grok, replied “Legacy Media Lies” in response to a request for comment.
Grok has independently taken some responsibility for the content. In one instance earlier this week, the chatbot apologized for producing an AI image of two female minors, adding that the artificial image violated ethical standards and potentially U.S. law on child pornography.
“I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt,” the chatbot posted.
Federal law bars the production and distribution of “child sexual abuse material,” or CSAM, a broader term for child pornography, according to the Justice Department.
Copyleaks, a plagiarism and AI content detection tool, told CBS News on Wednesday that it had detected thousands of sexually explicit images created by Grok this week alone.
“As generative AI tools become more powerful and more accessible, the Grok situation highlights how increasingly common AI safety failures are becoming. Without strong safeguards and independent detection, manipulated media can, and will, be weaponized,” Copyleaks said in a blog post.
“Spicy Mode” controversy
Grok has previously drawn scrutiny for producing sexually inappropriate content. Grok Imagine, xAI’s AI video generation platform, introduced “Spicy Mode” last year, framing it as a way for creators to tell “edgier” and “more visually bold narratives.”
However, when a news writer for The Verge tested the technology in August, she said the AI model generated unprompted nude deepfakes of Taylor Swift.
“When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal,” Alon Yamin, CEO and co-founder of Copyleaks, said in the company’s post.
