Claude’s hack at it
TL;DR: A hacker stole 150GB of Mexican government data—including taxpayer records, voter info, and government employee credentials—in a massive cyberattack reported yesterday. This is a case study in how AI is reshaping the cybersecurity landscape.
What happened: According to a new report from Bloomberg, a hacker used Anthropic’s Claude chatbot to identify vulnerabilities in Mexican government networks starting last December and, over roughly a month, made off with a trove of citizen data. The hacker initiated the attack by repeatedly prompting Claude to act like an elite hacker. While Claude initially flagged the activity as malicious, the hacker was eventually able to “jailbreak” Claude’s protective guardrails. In a statement, Anthropic said it disrupted the activity and banned the accounts involved. Researchers believe the hacker also turned to OpenAI’s ChatGPT for additional guidance; OpenAI said it identified policy-violating attempts and blocked them. The attack hasn’t been attributed to any individual, group, or country.
Nothing to see here, folks: These kinds of AI-assisted cyberattacks are rapidly increasing in frequency. (Reminder: Last year, hackers in China used Claude to try to breach 30 global targets.) According to CrowdStrike’s 2026 Global Threat Report, released Tuesday, AI-enabled actors increased attacks by 89% in 2025 compared to 2024. AI doesn’t even need to fully automate hacking to be disruptive (though that’s coming, too). It just needs to make humans more efficient. How? AI can now explain vulnerabilities in plain language, generate ready-to-run code, and estimate detection risk, compressing what once took skilled hackers days or weeks into mere minutes. AI is also changing who can be a hacker. Before, expertise was required, and scaling attacks was hard. Now, AI fills in skill gaps and lets a single attacker run more operations at once.
Tech news that makes sense of your fast-moving world.
Tech Brew breaks down the biggest tech news, emerging innovations, workplace tools, and cultural trends so you can understand what's new and why it matters.
Guardrails optional: As the Mexico attack demonstrated, hackers can simply keep prompting, even when models push back, until they get usable output. Even the strongest guardrails AI companies put in place are probabilistic defenses, not a surefire moat. As for which models get used the most? According to CrowdStrike, ChatGPT seems to be a favorite way in—it was mentioned in criminal forums 550% more often than other models.
AI vs. AI: Of course, cybersecurity companies are racing to use AI to scan systems for vulnerabilities and hunt for hackers, too. But the asymmetry remains: Attackers only need one way in, while defenders need to protect against every possible intrusion. And on top of all that, cybersecurity experts have warned that AI agents could be executing entire cyberattack operations autonomously within months.
Bottom line: AI is a force multiplier for cybercrime, no sophistication needed. Meanwhile, governments and institutions are underprepared—and AI companies’ guardrails are proving anything but airtight. —AC