In the hours after a masked federal agent shot and killed Renee Nicole Good, a 37-year-old woman in Minneapolis, social media users have been sharing AI-altered photos they falsely claim “unmask” the officer, revealing his real identity. The agent was later identified by Department of Homeland Security spokesperson Tricia McLaughlin as an Immigration and Customs Enforcement officer.
The shooting occurred on Wednesday morning, and social media footage of the scene shows two masked federal agents approaching an SUV parked in the middle of the road in a suburb south of downtown Minneapolis. One of the officers appears to ask the driver to get out of the vehicle before grabbing the door handle. At this point, the driver appears to reverse, before driving forward and turning. A third masked federal officer, standing near the front of the vehicle, pulls out a gun and fires at the vehicle, killing Good.
The videos of the incident shared on social media in the moments after the shooting did not include any footage of the masked ICE agents with their masks off. However, several photos showing an unmasked agent began circulating online within hours of the shooting.
The photos appear to be screenshots taken from the actual video footage, but altered with artificial intelligence tools to create the officer’s face.
WIRED reviewed several AI-altered photos of the unmasked agent shared on every mainstream social media platform, including X, Facebook, Threads, Instagram, Bluesky, and TikTok. “We need his name,” Claude Taylor, the founder of the anti-Trump Mad Dog PAC, wrote in a post on X featuring an AI-altered image of the agent. The post has been viewed over 1.2 million times. Taylor did not respond to a request for comment.
On Threads, an account called “Influencer_Queeen” posted an AI-altered image of the agent and wrote: “Let’s get his address. But only focus on HIM. Not his kids.” The post has been liked nearly 3,500 times.
“AI-powered enhancement tends to hallucinate facial details, resulting in an enhanced image that may be visually clear but that may also be devoid of reality with respect to biometric identification,” Hany Farid, a UC Berkeley professor who has previously studied AI’s ability to enhance facial images, tells WIRED. “In this scenario, where half of the face is obscured, AI, or any other technique, is not, in my opinion, able to accurately reconstruct the facial identity.”
Some of the people posting the images also claimed, without evidence, to have identified the agent, sharing the names of real people and, in a number of cases, providing links to those people’s social media accounts.
WIRED has confirmed that two of the names circulating do not appear to be directly connected to anyone associated with ICE. While many of the posts sharing these AI images have limited engagement, some have gained significant traction.
One of the names shared online without evidence is Steve Grove, the CEO and publisher of the Minnesota Star Tribune, who previously worked in Minnesota Governor Tim Walz’s administration. “We are currently tracking a coordinated online disinformation campaign incorrectly identifying the ICE agent involved in yesterday’s shooting,” Chris Iles, vice president of communications at the Star Tribune, tells WIRED. “To be clear, the ICE agent has no known affiliation with the Minnesota Star Tribune and is certainly not our publisher and CEO Steve Grove.”
This is not the first time AI has caused problems in the wake of a shooting. A similar situation emerged in September when Charlie Kirk was killed and an AI-altered image of the shooter, based on grainy video footage released by law enforcement, was shared widely online. The AI image looked nothing like the man who was ultimately captured and charged with Kirk’s murder.