Trump gives the Claude shoulder
• 3 min read
TL;DR: Anthropic refused to loosen guardrails on how its AI model Claude could be used by the military, so the Trump administration ordered federal agencies to stop using its technology. Hours later, OpenAI stepped in with its own Pentagon deal. In the short term, Anthropic may walk away a winner with consumers. But in the long term, some wonder whether Washington’s campaign against the company will result in a successful “corporate murder.”
What happened: Ahead of its 5:01pm deadline on Friday, the White House ordered all federal agencies to begin a six-month phaseout of Anthropic’s technology. At the same time, the Pentagon designated the company a national security “supply chain risk,” a move that effectively blocks it from future military contracts and restricts contractors from using its systems. (The dispute centered on Anthropic’s refusal to remove two safeguards on Claude.) Anthropic said it would challenge the supply chain risk designation in court. While some say the decision could cripple a major part of Anthropic’s business, legal experts note the Pentagon’s declaration is likely limited to military work.
OpenAI lurking in the wings: Just hours after President Donald Trump Truth Social-ed the Anthropic ban, OpenAI said it had reached a deal with the Pentagon to deploy its AI models in classified defense systems. CEO Sam Altman said his deal included similar guardrails (i.e., no mass surveillance), writing on X, “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems […] and we put them into our agreement.” Why Defense Secretary Pete Hegseth accepted OpenAI’s terms but rejected similar ones from Anthropic is unclear. (See OpenAI’s blog post outlining the details here.) And some people are questioning whether the two companies’ safeguards are actually equivalent (hint: a source says they’re not).
Tech news that makes sense of your fast-moving world.
Tech Brew breaks down the biggest tech news, emerging innovations, workplace tools, and cultural trends so you can understand what's new and why it matters.
Yes, but: Sources say the Pentagon still considers Anthropic to have superior technology, and Claude is already deeply embedded in defense workflows. In fact, the administration reportedly used Claude (just hours after announcing the ban) during the US and Israel’s joint attack on Iran. Claude was also used in another high-profile mission, the capture of former Venezuelan President Nicolás Maduro, earlier this year. Disentangling will be… messy, to say the least.
Taking sides: The political fight appears to be boosting Anthropic’s brand. Claude became the No. 1 downloaded app in the US over the weekend, overtaking ChatGPT, as campaigns urging users to “quit GPT” gained traction. Anthropic is capitalizing on this surge, improving features to help users switch chatbots. The fight is also rippling through Silicon Valley. Employees at several large tech companies have urged their employers to back Anthropic’s stance on military guardrails.
Bottom line: Anthropic may have lost the Pentagon contract in the short term. But the clash is forcing a bigger question across Washington and Silicon Valley: Who ultimately decides how powerful AI systems get used? —AC