
The race to create the new, safer OpenClaw

3 min read

TL;DR: OpenClaw proved AI agents can use computers like humans—sending emails, moving files, executing tasks across entire systems (something Anthropic jumped on yesterday). Now every major AI company is racing to build a safer, enterprise-ready version. The problem: these early agents keep messing up in ways that make “enterprise-ready” sound aspirational at best.

What happened: At Nvidia’s GTC event last week, Jensen Huang declared that “every company needs an OpenClaw strategy”—talking about the open-source AI agent that gives LLMs full control of a computer, no default guardrails included.

Huang was there in part to unveil NemoClaw, a bundle of software designed to make AI agents more reliable, transparent, and secure. But Nvidia is far from alone in pursuing an “OpenClaw strategy.” In the past month or so, basically every major company has revealed a focus on agentic AI:

  • OpenAI hired OpenClaw’s creator, Peter Steinberger, to lead its next generation of personal agents—with the goal of building one simple enough that, as he put it, even his mother could use it.
  • Meta acquired Moltbook, a social network where AI agents can communicate with each other. It also hired the team behind an agent startup founded by former Google and Stripe execs.
  • Google reorganized its Project Mariner team to pivot away from browser automation toward coding agents.
  • Perplexity, the AI-powered search engine, just announced new agentic AI tools pitched as more secure than OpenClaw.
  • Anthropic has been layering on agent capabilities since launching Claude Code last year—adding the more general-purpose Cowork in January, then Dispatch for mobile remote control last week. Yesterday, it went further: Claude can now directly control a Mac’s mouse, keyboard, and screen, using your computer exactly as you would (outside of Cowork’s existing interface).
Tech news that makes sense of your fast-moving world.

Tech Brew breaks down the biggest tech news, emerging innovations, workplace tools, and cultural trends so you can understand what's new and why it matters.

By subscribing, you accept our Terms & Privacy Policy.

The industry wants enterprise-grade AI agents. Actually building them is another matter.

Easier said: Agentic AI offers a seductive pitch promising massive productivity gains and cost savings. But the early safety track record has been rough. At Meta, a rogue agent recently bypassed four identity checks to access and expose sensitive data. At AWS, a misconfigured agent reportedly deleted and attempted to recreate a whole chunk of code, triggering a 13-hour outage. Many companies seem to be handing agents broad access without adequate controls—or even knowing what, exactly, their AI has access to. When an agent messes up, it might be holding the keys to everything.

Consumer agentic AI, meanwhile, has largely been a dud so far. Perplexity’s browser agent Comet peaked at 2.8 million weekly active users—a rounding error next to ChatGPT’s 900 million—while ChatGPT’s own browser agent reportedly fell below 1 million in recent months. Walmart quietly pulled back from its agentic checkout experiment with OpenAI after conversion rates inside ChatGPT came in at roughly a third of those on its own site. (Google’s rollout of Gemini automated tasks built into smartphones might prove to be an exception.)

Bottom line: The core tension of the agent race is that autonomy and security pull in opposite directions—the more you empower an agent to act, the more that can go wrong. Given the scale at which they operate, some industry voices argue human oversight alone won’t cut it and that the answer may be using AI to monitor AI. —WK

About the author

Whizy Kim

Whizy is a writer for Tech Brew, covering all the ways tech intersects with our lives.
