Computer, attack!

Whizy is a writer for Tech Brew, covering all the ways tech intersects with our lives.

TL;DR: Soon, the ultimate hacker may not look like the one in the movies: a black-clad figure typing furiously at a keyboard. Instead, it will probably be a computer, or, to be precise, an AI. That's already starting to happen, with AI models beating human hackers on tests. And the government wants tech companies to answer for it.

What happened: Experts from Anthropic and Google testified before two US House Homeland Security subcommittees today on how AI is quickly enabling hackers to “conduct an unprecedented scale of cyberattacks.” In written testimony, Google said hackers are not only using AI to move more quickly, but also experimenting with “novel AI-enabled malware.” Last week, OpenAI warned that its latest AI model’s score on a hacking test jumped from 27% to 76% in roughly three months. And a Stanford-led study earlier this year pitted AI agents against 10 cybersecurity professionals in a test of hacking capabilities; overall, the AI scored higher than nine of the humans.

Why it matters: The pace of improvement is worrying. AI can probe thousands of targets faster, more cheaply, and more easily than humans can, and the person using it doesn't need to be a skilled hacker. What’s more, AI has already been deployed in real-world cyberattacks: last month, Anthropic said it had disrupted the first “AI-orchestrated” cyberespionage campaign, which used Claude Code to automate large parts of the operation. Cyberattack activity is rising globally, and ransomware attackers are wielding AI not only to write slick phishing scripts but also to create malware and even sell it as a service, according to a Wired report.

The counter: If AI can power attacks, it can also stop them. That belief is fueling a boom in cybersecurity startups. At the House hearing, Google’s head of security said Gemini already patches vulnerabilities automatically. But AI companies could do more by adding stronger guardrails, watching for misuse, and cutting off attackers.

Government has a role too: require faster breach reporting, raise security standards for major software vendors, and modernize outdated federal systems. Some in the AI industry argue this is exactly why the US must protect its AI lead, including by limiting adversaries’ access to advanced chips.

If you want to nerd out: Comparing AI Agents to Cybersecurity Professionals in Real-World Penetration Testing.

Tech news that makes sense of your fast-moving world.

Tech Brew breaks down the biggest tech news, emerging innovations, workplace tools, and cultural trends so you can understand what's new and why it matters.