- OpenAI’s robotics chief, Caitlin Kalinowski, resigned after the company signed a defense contract with the Pentagon
- She said the agreement raised concerns about the surveillance of Americans and autonomous weapons
- The resignation highlights growing tensions within the tech industry over AI’s role in military systems
OpenAI’s head of robotics has stepped down after raising concerns about the company’s agreement with the U.S. Department of Defense. Caitlin Kalinowski announced that she had resigned following a deal between OpenAI and the Pentagon to deploy the company’s AI models in certain government systems. At OpenAI, Kalinowski led efforts focused on robotics and physical systems, an area that many researchers believe will define the next major stage of AI.
But she felt compelled to leave despite that potential, driven by concerns about how quickly the agreement was reached and its possible implications for surveillance and autonomous weapons systems. She emphasized that her disagreement centered on governance rather than personal conflict.
“Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,” she wrote.
I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are… — March 7, 2026
Her resignation highlights the increasingly complicated relationship between cutting-edge AI companies and the national security establishment. The timing of OpenAI’s deal made it particularly noteworthy, as it was announced only hours after rival AI company Anthropic reportedly refused to authorize broad military uses of its own models.
Anthropic’s decision triggered a strong response from government officials, who subsequently designated the company as a supply chain risk after it declined to provide unrestricted access to its technology.
Deals involving national security infrastructure typically involve lengthy negotiations and careful oversight. The rapid turnaround raised questions about OpenAI’s own diligence. Kalinowski’s public comments echoed those concerns. In a follow-up message explaining her resignation, she said the problem was not the concept of a defense partnership itself, but the pace at which the decision to move forward was made.
Kalinowski’s role in robotics made the Pentagon deal especially significant, as autonomous systems and robots all have potential military applications. Her departure nevertheless underscored the tension that sometimes arises when advanced technology meets national security priorities.
To be clear, my issue is that the announcement was rushed without the guardrails outlined. This is a governance concern first and foremost. These are too important for deals or announcements to be rushed. — March 7, 2026
Rushed AI deals
That said, OpenAI CEO Sam Altman has tried to calm the waters. He stated that the contract would be adjusted to ensure the company’s models couldn’t be used for domestic surveillance of U.S. citizens. While OpenAI has gestured towards opposing the development of fully autonomous lethal systems, having to say so doesn’t exactly inspire confidence.
Some government agencies increasingly view AI as a strategic capability that could shape the future balance of power. But some tech firms are uneasy about how closely they should collaborate with the military. Or if they aren’t, they are at least uneasy about how their regular customers would react to them working with the military.
Kalinowski’s resignation is unlikely to derail OpenAI’s defense partnership, but it might at least prompt further questions and perhaps slow things down a little. Some companies will conclude that collaboration with government agencies is necessary to ensure democratic oversight and the responsible use of advanced technology. But it may also remind the industry that decisions about how AI is used in national security deserve careful scrutiny.

