Here is why Anthropic CEO Dario Amodei and the employees of some competing companies are united in saying no to the US Department of War
In the United States, a deadline looms for Anthropic that carries considerable weight in today's divided world.

Anthropic is refusing to remove safeguards that prevent its technology from being used to target weapons autonomously and to conduct surveillance within the US. It has until 5:01 pm on Friday, February 26 (February 27, Philippine time), to accede or face the wrath of the US government.

The US Department of War is threatening to invoke a law, the Defense Production Act, that could force Anthropic to tailor its model to suit the military's needs, or else label Anthropic a "supply chain risk," a designation that could hurt the company financially, as it would be treated like a US adversary.
The reasoning of Anthropic's Dario Amodei
Anthropic CEO Dario Amodei released a statement on February 26 outlining what the company has already done and why it intends to say "No" to the Department of War. While he said Anthropic has "worked proactively to deploy our models to the Department of War and the intelligence community," the company still had scruples regarding autonomous weaponry and US surveillance concerns, and he admitted that AI can "undermine, rather than protect, democratic values."

Amodei said that while Anthropic supports the use of its AI for foreign intelligence and counterintelligence operations, the use of those AI systems for mass domestic surveillance would be "incompatible with democratic values."

Amodei added that applicable laws have not caught up with the current and emerging capabilities of artificial intelligence. Existing laws allow the government to surveil, to some extent, people's activities, web browsing, and associations from public sources without obtaining a warrant, a practice that has already drawn bipartisan opposition in the US Congress, he said.

"Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of a person's life, automatically and at massive scale," Amodei warned.

On autonomous weaponry, Amodei said such systems would need human oversight, as fully autonomous weapons cannot be relied upon to exercise the critical judgment of a human soldier. He explained, "They need to be deployed with proper guardrails, which don't exist today."
Finding common ground
In a petition released Friday, Google and OpenAI employees joined hands to say they do not support what the Department of War wants, even as Elon Musk's xAI signed an agreement allowing the military to use its model, Grok, in classified systems.

According to the 261 signatories of the petition: "They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War."

The signatories urged the leaders of their respective companies to "put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight."
Stand united
Pentagon spokesperson Sean Parnell said on X that the department has no interest in using AI to conduct mass surveillance of Americans, nor does it want to use AI to develop autonomous weapons that operate without human involvement. He added that all the Pentagon wants is to "use Anthropic's model for all lawful purposes."

The problem is that Trump's government has little regard for the formalities of law, and has shown it is willing to use lawfare to make people, and companies, bend to its will.

While AI can be useful in crunching numbers and data at scale to help people make better decisions, leaving everything, from collating the information of US citizens en masse to gunning people down, to the autonomy of an AI seems bound to hurt democracy and pave the way for harsher means of imposing order.

I may not support AI in its entirety, but given AI's limitations, I support those more knowledgeable about AI than I am who want to impose safeguards and protections against AI abuse, especially as it pertains to something as volatile as war. – Rappler.com

![[Tech Thoughts] Uniting against removing AI safeguards for military purposes](https://i1.wp.com/www.rappler.com/tachyon/2026/02/tech-thoughts-ANTHROPIC-ai-us-dept-war-dispute.jpg?w=1024&resize=1024,1024&ssl=1)