The AI agent craze is molting into a security nightmare
• 4 min read
Whizy is a writer for Tech Brew, covering all the ways tech intersects with our lives.
TL;DR: Moltbot, an open-source DIY AI agent, is the talk of the town right now. But its always-on, autonomous access to your emails, files, and logins can create serious security risks—and the fact it remembers everything can be a problem too. Guardrails, as well as our awareness of the danger, just aren't there yet—even though tech companies like Apple and Motorola are rolling out on-device AI agents soon.
What happened: Unfortunately, we might have to turn off the internet until we figure out what’s going on with everyone. First, there was the news that the head of CISA, the US cybersecurity agency, uploaded sensitive files to ChatGPT. (Cue eye roll here.) Then there’s the Moltbot (fka Clawdbot) fracas: The open-source personal AI assistant is one of the hottest projects on GitHub right now, with people rushing to install it and give it sweeping access to their lives.
Case in point: A cybersecurity expert created a fake Moltbot “skill”—an add-on that people can download to extend its capabilities— and thousands of people downloaded it by Wednesday morning, giving it access to files, programs, and login credentials.
But there are many more reasons why AI agents, not just dubious add-ons, can be so dangerous as the industry makes a push for putting them on popular consumer devices:
- It’s always on and can see all your files. The point of an AI agent like Moltbot is to automate your life—so you give it access to all of your files. Because it’s always running, it creates a much larger window for attackers than a one-off chat session ever could.
- It doesn’t just look at your data—it can act as you. ChatGPT can remember context, but it can’t autonomously log into your bank and make a payment. AI agents can, if you give them access. And if you’re not careful about the way you prompt things, it could go rogue and cancel all your auto-pay bills.
- Others can “talk” to your AI agent, too. One common way agents get tricked is through prompt injection. Malicious commands can be embedded in links, emails, documents, or messages—when you click on them, it can do everything from reset passwords, approve logins, move money, delete files, and maybe even start drama in all 50 of your group chats.
- All your digital keys are stored in one place. To function, agents store passwords, API keys, tokens, and permissions. One breach could unlock everything, turning a single mistake into a catastrophe.
- Persistent memory can make problems stick. Agents are built to remember you over time. If malware gets in, any problems it causes might persist even if you remove it.
Tech news that makes sense of your fast-moving world.
Tech Brew breaks down the biggest tech news, emerging innovations, workplace tools, and cultural trends so you can understand what's new and why it matters.
It’s tempting to shrug and say this isn’t new. We already give Instagram our photos, Google our searches, and ChatGPT our thoughts. But until now, most of that data has been used to observe and profile us, largely to target ads or optimize feeds (though it can, of course, be used for more nefarious ends). Agentic AI crosses a privacy Rubicon because it’s designed to constantly store and act on that knowledge. And the uncomfortable truth is that most of us probably still underestimate just how much data tech companies already have on us. As an MIT Technology Review piece put it, AI’s ability to remember everything will be the new privacy frontier.
Is there a fix?: Kind of, but you probably won’t like the answer. Most “fixes” limit how useful an agent is. Take Claude Cowork: Each task runs as a separate session, with no shared memory, and it can only access folders you explicitly grant for that task. In practice, better security means limiting autonomy and adding friction: sandboxed agents, limited memory, fewer or no third-party integrations, and more “are you sure?” prompts.
Let’s pause: At some point it’s worth asking the obvious question: Do you actually need an AI to run your life, or do you just want to play with the thing everyone’s talking about? Even Moltbot’s developer said that most “non-techies” shouldn’t install the AI.
There’s already a genre of jokes about people spending weekends wiring up Moltbot only to realize their life is too boring—or too bizarre—to automate. To put it in 2026 internet speak, it’s just optimization-slop. If Moltbot is such an amazing AI assistant for improving your life, the last thing it does might be to uninstall itself. —WK
Tech news that makes sense of your fast-moving world.
Tech Brew breaks down the biggest tech news, emerging innovations, workplace tools, and cultural trends so you can understand what's new and why it matters.