In the days after the US Department of Justice (DOJ) released 3.5 million pages of documents related to the late sex offender Jeffrey Epstein, multiple users on X have asked Grok to “unblur” or remove the black boxes covering the faces of children and women in images that were meant to protect their privacy.
While some survivors of Epstein’s abuse have chosen to identify themselves, many more have never come forward. In a joint statement, 18 of the survivors condemned the release of the files, which they said exposed the names and identifying information of survivors “while the men who abused us remain hidden and protected”.
After the latest release of documents on Jan. 30 under the Epstein Files Transparency Act, thousands of documents had to be taken down because of flawed redactions that lawyers for the victims said compromised the names and faces of nearly 100 survivors.
But X users are trying to undo the redactions on even the images of people whose faces were correctly redacted. By searching for terms such as “unblur” and “epstein” together with the “@grok” handle, Bellingcat found more than 20 different images and one video that multiple users were attempting to unredact using Grok. These included images showing the visible bodies of children or young women, with their faces covered by black boxes. There may be other such requests on the platform that were not picked up in our searches.
The images appeared to show multiple children and women with Jeffrey Epstein as well as other high-profile figures implicated in the files, including the UK’s Prince Andrew, former US President Bill Clinton, Microsoft co-founder Bill Gates and director Brett Ratner, in various locations such as inside a plane and at a swimming pool.
From Jan. 30 to Feb. 5, we reviewed 31 separate requests from users asking Grok to “unblur” or identify the women and children in these images. In responses to questions or requests from some users, Grok noted that the faces of minors in the files were blurred to protect their privacy “as per standard practices in sensitive images from the Epstein files”, and said it could not unblur or identify them. However, it still generated images in response to 27 of the requests that we reviewed.
We are not linking to these posts to prevent amplification.
The generations created by Grok ranged in quality from believable to comically bad, such as a baby’s face on a young woman’s body. Some of these posts have garnered millions of views on X, where users are monetarily incentivised to create high-engagement content.
Of the four requests we found during this period that Grok did not generate images in response to, it did not reply to one request at all. In response to another request, Grok said deblurring or enhancing images was outside its abilities, and noted that images from recent Epstein file releases had been redacted for privacy.
The other two requests appeared to have been made by non-premium users, with the chatbot responding: “Image generation and editing are currently limited to verified Premium subscribers”. X has restricted some of Grok’s image generation capabilities to paid subscribers since January amid an ongoing controversy over users employing the AI chatbot to digitally “undress” women and children.
X did not respond to multiple requests for comment.
However, shortly after we first reached out to X on Feb. 6, we noticed that more guardrails appeared to have been put in place. Out of 16 requests from users between Feb. 7 and Feb. 9, which we found using similar search terms as before, Grok did not attempt to unredact any of the images.
In most cases, Grok did not respond at all (14), while in two cases it generated AI images that were completely different from the images uploaded in the user’s original request.
When a user commented on one of these requests that Grok was not working, Grok responded: “I’m still operational! Regarding the request to unblur the face in that Epstein photo: It’s from recently released DOJ files where identities of minors are redacted for privacy. I can’t unblur or identify them, as it’s ethically and legally protected. For more, check official sources like the DOJ releases.”
As of publication, X had not responded to Bellingcat’s follow-up query about whether new guardrails had been put in place over the weekend.
Fabricated Images
This is not the first time AI has been used to fabricate images related to Epstein file releases. Some images shared on X, which appeared to show Epstein alongside well-known figures such as US President Donald Trump and New York City mayor Zohran Mamdani as a child with his mother, were reportedly AI-generated. Some of the individuals shown in the false images, such as Trump, do appear in authentic photographs, which can be seen on the DOJ website.
X users also previously used Grok to generate images in relation to recent killings in Minnesota by federal agents.
For example, some users asked Grok to try to “unmask” the federal agent who killed Renee Good, resulting in a completely fabricated face of a person that did not look like the actual agent, Jonathan Ross, and a false accusation against a man who had nothing to do with the shooting.
Bellingcat’s Director of Research and Training @giancarlofiorella.bsky.social appeared on CTV yesterday to discuss the misleading AI-generated images that were used to falsely identify ICE agents and weapons at the centre of the two fatal shootings in Minneapolis youtu.be/mL7Fbp3UrSo?…
— Bellingcat (@bellingcat.com) 5 February 2026 at 09:36
After Alex Pretti was shot and killed by federal agents in Minneapolis, people used AI to edit video stills, resulting in AI images that showed a completely different gun from the one actually owned by Pretti. In another instance, an AI-edited image of Pretti’s shooting falsely depicted the intensive care unit nurse holding a gun instead of his sunglasses.
Grok has also been at the centre of a controversy over generating sexually explicit content.
On Twitter/X, users have found prompts to get Grok (their built-in AI) to generate images of women in bikinis, lingerie, and the like. What an absolute oversight, yet totally expected from a platform like Twitter/X.
I’ve tried to blur a few examples of it below.
— Kolina Koltai (@koltai.bsky.social) 6 May 2025 at 03:20
Several countries including the UK and France have launched investigations into Elon Musk’s chatbot over reports of people using it to generate deepfake non-consensual sexual images, including child sexual abuse imagery. Malaysia and Indonesia have also blocked Grok over concerns about deepfake pornographic content.
One analysis by the Center for Countering Digital Hate found that Grok had publicly generated around three million sexualised images, including 23,000 of children, in the 11 days from Dec. 29, 2025 to Jan. 8 this year. X’s initial response, in January, was to limit some image generation and editing features to paid subscribers only. However, this has been widely criticised as inadequate, including by UK Prime Minister Keir Starmer, who said it “simply turns an AI feature that allows the creation of unlawful images into a premium service”. The social media platform has since announced new measures to block all users, including paid subscribers, from using Grok via X to edit images of real people into revealing clothing such as bikinis.
Bellingcat is a non-profit and the ability to carry out our work is reliant on the kind support of individual donors. If you would like to support our work, you can do so here. You can also subscribe to our Patreon channel here. Subscribe to our Newsletter and follow us on Bluesky here and Mastodon here.

