AI’s biggest rivals unite against China
OpenAI, Anthropic, and Google join forces against Chinese AI distillation attacks.
TL;DR: OpenAI, Anthropic, and Google have agreed to team up against Chinese labs they accuse of copying their AI models, according to a new report from Bloomberg. The unlikely collaboration comes as the unauthorized technique, called distillation, reportedly costs the companies billions of dollars while cheaper Chinese models built on their outputs spread. But the legal and political path to stopping it remains murky.
What happened: In a surprising twist, three AI companies you’d typically see poaching each other’s engineers are now working together. The goal: stop Chinese competitors from siphoning knowledge from US frontier models through “distillation attacks,” in which a rival feeds prompts to a powerful AI model, collects the outputs, and uses them to train a cheaper knockoff. (It’s a more nefarious version of what Apple plans to do with Gemini to power a smarter, revamped Siri, though the iPhone maker is reportedly paying Google about $1 billion a year for that privilege.) OpenAI, Anthropic, and Google plan to share information about the attacks through the Frontier Model Forum, a nonprofit they founded with Microsoft in 2023.
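For the curious, the teacher-and-student loop behind distillation can be boiled down to a few lines. This is a deliberately toy sketch: the "teacher" here is a stand-in function rather than a real commercial API, and the "student" simply memorizes transcripts, whereas an actual attack queries a frontier model at scale and fine-tunes a neural network on the collected outputs.

```python
# Toy illustration of distillation: query a "teacher" model,
# record its answers, and train a cheaper "student" on them.
# (Hypothetical stand-ins; no real model or API is involved.)

def teacher(prompt: str) -> str:
    # Stand-in for a powerful model behind a paid API.
    return prompt.upper()

def collect_transcripts(prompts):
    # Step 1: feed prompts to the teacher and keep the outputs.
    return [(p, teacher(p)) for p in prompts]

class Student:
    """Step 2: a knockoff trained only on the teacher's outputs."""

    def __init__(self):
        self.memory = {}

    def train(self, pairs):
        for prompt, output in pairs:
            self.memory[prompt] = output

    def answer(self, prompt):
        # Mimics the teacher without ever seeing its weights.
        return self.memory.get(prompt, "")

pairs = collect_transcripts(["hello", "world"])
student = Student()
student.train(pairs)
print(student.answer("hello"))  # → HELLO
```

The point of the sketch is the asymmetry: the student never needs the teacher's internals, only enough of its question-and-answer behavior to imitate it, which is why labs can only detect the technique through unusual API traffic rather than stolen code.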
The AI Avengers have assembled to combat distillation over national security concerns, but also because it’s hitting their businesses hard. US officials estimate that unauthorized distillation costs Silicon Valley labs billions of dollars in annual profit, per Bloomberg.
The receipts: The first major red flag around Chinese distillation attacks came in January 2025, when Microsoft claimed that Chinese AI startup DeepSeek appeared to be extracting large amounts of data through OpenAI’s API. This past February, OpenAI told Congress that DeepSeek was trying to “free-ride on the capabilities developed by OpenAI and other US frontier labs.” Days later, Anthropic accused three Chinese AI companies of using over 24,000 fake accounts to generate 16 million exchanges with Claude, and said it had traced some of those accounts to senior staff at those labs.
Why it matters: US firms say that distilled models could strip out safety guardrails that prevent anyone, including foreign adversaries, from making a deadly pathogen or launching disinformation campaigns. But the financial threat to them likely looms larger: Most Chinese models are open weight—free for consumers to download and run on their own devices. If DeepSeek is a distilled Claude, why pay a premium subscription for the original? When DeepSeek released a major new reasoning model in January 2025, rivaling US frontier models’ performance on key benchmarks, the shock wiped nearly $1 trillion off US and European tech stocks in a single day.
Bottom line: AI outputs can’t be copyrighted under US law, so AI companies have framed distillation attacks as a terms-of-service violation. But the strongest path for recourse is political: the Trump administration’s AI Action Plan already called for an industry information-sharing center to fight distillation. According to Bloomberg, though, the labs want to be sure that sharing intelligence won’t trigger antitrust action. When some of the biggest AI firms start trading notes, it can look like collusion, even if the point is to stop someone else from copying their homework. —WK
About the author
Whizy Kim
Whizy is a writer for Tech Brew, covering all the ways tech intersects with our lives.