On Feb. 27, mere hours before the U.S. and Israel began bombing Iran with the help of tools made by leading AI company Anthropic, the company's relationship with the U.S. government went up in flames. Amid a contentious and very public contract dispute over how Anthropic's models could be used by the U.S. military, Defense Secretary Pete Hegseth declared Anthropic a supply chain risk in a statement so broad that it can only be seen as a power play aimed at destroying the company.
Shortly thereafter, OpenAI, one of Anthropic's principal rivals, announced it had reached its own deal with the Pentagon, claiming it had secured all of the safety terms that Anthropic sought, plus additional guardrails. Yet buried in an OpenAI FAQ released the next day was a seemingly banal but telling acknowledgement. In response to a question asking what would happen if the government violated the terms of the contract, the company wrote, "As with any contract, we could terminate it if the counterparty violates the terms. We don't anticipate that happening."
To which I can only reply: "Wait… Huh?"

