The AI enterprise power grab
Plus, a rare win for renewable energy.

Anthropic just released a study on how people use Claude—and found a potentially concerning pattern. When Claude creates "artifacts"—things like code, documents, and interactive tools—users appear significantly less likely to fact-check the output, question its reasoning, or identify missing context than in regular text conversations. In other words: The better it looks, the less people scrutinize it—even though these tasks are often the most complex and error-prone.

The study used a limited data set (it's based on nearly 10,000 conversations over a weeklong period) and found that in almost 86% of conversations, users iterated on and refined their requests (which correlated with better outcomes), but they told Claude upfront how they wanted it to interact with them only 30% of the time. Anthropic's advice? Treat polished outputs (such as artifacts) as a red flag to ask more questions, not fewer. You can check out Anthropic's three AI fluency tips here.

Also in today's newsletter:

  • Enterprise companies’ AI dilemma.
  • A Meta exec’s AI almost deleted her emails.
  • A new app can notify you if someone in your area is using smart glasses.

—Whizy Kim, Saira Mueller, and Alex Carr

THE DOWNLOAD


TL;DR: Within 36 hours of one another, Anthropic and OpenAI both made major enterprise pushes. Just days before that, a viral memo imagined how those kinds of moves would hollow out white-collar work (and even spell the end of enterprise software). Now, investors and executives are caught between embracing AI and wondering if they're accelerating their own obsolescence.

What happened: Yesterday, OpenAI unveiled partnerships with four major consulting firms, including McKinsey and Accenture, to roll out its nascent Frontier enterprise system. The goal: help large organizations deploy AI agents inside existing operations. Anthropic made a parallel move, announcing that it’s expanding plugins for Claude Cowork and adding custom skills—for everything from financial analysis to HR onboarding—to the tools its customers’ employees already use. But perhaps the most telling detail: Anthropic revealed that Claude can now modernize COBOL, the decades-old programming language that still handles 95% of US ATM transactions and underpins IBM's consulting empire.

The enterprise race heats up: Companies spent about $37 billion on generative AI last year, which is roughly triple consumer spending, and that gap is widening. OpenAI is trying to claw back its enterprise revenue share, which has dropped since 2023. Meanwhile, Anthropic already generates most of its revenue from corporate customers and can’t afford to cede that advantage. Google is gaining fast too, selling 8 million paid Gemini Enterprise seats in about four months.

The frenemies’ dilemma: According to The Information, OpenAI leaders told investors last week that they expect future products to replace software from SaaS juggernauts like Salesforce, Workday, Adobe, Slack, and Atlassian—the exact tools Frontier is designed to integrate with. Anthropic hasn’t been as blunt, but its launches have contributed to the hit on software stocks.

So why do these software companies keep inking deals that look like their own demise? Because the alternative—watching a competitor integrate AI first—might be scarier than partnering with the company building their replacement. For now.

Gasoline, meet fire: Over the weekend, financial research firm Citrini Research published a speculative scenario titled "The 2028 Global Intelligence Crisis," imagining widespread white-collar displacement and AI agents bypassing platforms altogether. The report took aim at Uber, DoorDash (whose co-founder responded on X), credit card companies like Amex and Visa, and SaaS platforms like ServiceNow. Their stocks all took a hit.

The Citrini memo went viral not because it was rigorous—it was a fictional thought experiment, not a forecast—but because it made legible a fear that this week’s enterprise announcements sharpened: that the AI being sold to businesses will also replace the businesses themselves.

Bottom line: The enterprise push will continue. The jumpiness probably will too. But, as Derek Thompson observed, not even the executives building these tools know how this plays out. —WK


LIFE HACK

My AI knows less about me on purpose

When OpenAI rolled out ChatGPT’s long-term memory at the end of 2024, the pitch was simple: It would remember “things you discuss across all chats,” so you wouldn’t have to repeat yourself. Conversations would be smoother. Responses more helpful. Context would carry forward.

And despite all that, I have “memory” permanently switched off.

Partly, it’s about privacy: The more data memory stores about me, the more privacy concerns I have.

But, for me, the more noticeable issue is context bleed. I’ve built multiple GPTs for different purposes, including one that acts as a blunt, motivational “career coach.” That tone (loosely based on a she-who-shall-not-be-named celebrity) works for that limited context. It does not work in a project summary for company leadership. And yet, with memory on, that voice crept into a professional draft, complete with an unnecessary and cringe motivational flourish that would have made me never want to go back into the office. Turning memory off helps to eliminate that cross-contamination.
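
To make that mechanism concrete, here’s a minimal sketch of how a single shared memory store can produce this kind of bleed. It’s a toy illustration under the assumption that saved memories get prepended to every conversation regardless of persona; the names and structure are hypothetical, not OpenAI’s actual implementation.

```python
# Toy model of "context bleed": one shared memory store feeding every persona.
# Hypothetical structure for illustration, not OpenAI's actual memory system.

shared_memory: list[str] = []  # one store, written to by every conversation

def chat(persona: str, user_msg: str) -> str:
    # Every persona gets the SAME memories prepended, so notes saved during
    # a "career coach" session leak into a professional-drafting session.
    memories = "\n".join(f"Remembered about user: {m}" for m in shared_memory)
    system = f"You are a {persona}.\n{memories}"
    return f"[system]\n{system}\n[user]\n{user_msg}"

# The coach session saves a tone preference...
shared_memory.append("responds well to blunt, motivational pep talks")

# ...which then colors an unrelated work request.
print(chat("project-summary writer", "Draft a summary for leadership."))
```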

Memory also fossilizes things that were temporary. I once asked about a mild food reaction I thought I was having to dairy products. Months later, ChatGPT was quietly steering meal plans as if I had a permanent, severe allergy, and my husband asked why we hadn’t had dairy in a year. One-off questions shouldn’t become long-term constraints, and injecting memories that aren’t actually relevant can distract the chatbot from reaching a better outcome. (Never mind the fact that humans are messy, and we’re bound to contradict ourselves over the course of our ChatGPT usage.)

Finally, there’s also what researchers call “context poisoning”: when an inaccurate or low-quality detail gets embedded and influences future outputs. That error (for argument’s sake, let’s say I’m a huge Nickelback fan) then works its way into instructions and is referenced ad nauseam, so every exchange ends with: “Never made it as a wise man.”
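
The nasty part is that a poisoned memory tends to be sticky. Here’s a toy loop showing why, under the assumption that replies get summarized back into memory (hypothetical code, not any vendor’s actual pipeline).

```python
# Toy model of "context poisoning": a bad saved fact steers every reply,
# and replies summarized back into memory reinforce the error.
# Hypothetical structure for illustration only.

memory = ["user is a huge Nickelback fan"]  # saved in error

def answer(question: str) -> str:
    context = " | ".join(memory)
    # Stand-in for a model call: the reply leans on whatever memory says.
    return f"(recalling: {context}) Answering: {question}"

for q in ["Suggest road-trip music.", "Write my professional bio."]:
    reply = answer(q)
    print(reply)
    memory.append(f"assistant noted: {reply[:40]}...")  # the error compounds
```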

Since turning memory off, I’ve found it easier to approach problems with fresh thinking and challenge ideas without old context nudging the answer. ChatGPT has also been able to more clearly separate my professional and personal use.

With memory off, you do have to prompt better, since you can’t rely on stored preferences. The context you give it matters more, which forces clarity, at least for me.

If you want to turn memory off, go to Settings → Personalization → Memory, and toggle it off. You can also review and delete individual saved memories there. —AC

If you have a tech tip or life hack you just can’t live without, fill out this form and you may see it featured in a future edition.


THE ZEITBYTE


You can almost hear the Curb Your Enthusiasm theme song kicking in: On Sunday, a Meta safety director admitted on X that she’d told OpenClaw—the viral open-source AI agent that’s also a security dumpster fire—to comb through her personal email and suggest deletions. The bot promptly tried to delete her inbox. “I had to RUN to my Mac mini like I was defusing a bomb,” wrote Summer Yue, director of alignment at Meta Superintelligence Labs. (Luckily, she stopped OpenClaw before it zapped everything.)

Yue’s reason for giving free rein to a highly vulnerable Franken-agent built by strangers on GitHub? She’d run OpenClaw in a low-stakes test inbox for weeks without issue. The sheer number of emails in her real inbox, though, ballooned the conversation and filled up the AI’s context window, so OpenClaw condensed the conversation and forgot key instructions—like “ask before nuking my entire personal life.”
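
That failure mode is easy to reproduce in miniature. Below is a generic sketch of naive context compaction (keep the newest messages, collapse the oldest into a lossy summary), which is one common way an early instruction gets lost. It’s an assumption-laden illustration, not OpenClaw’s actual code.

```python
# Generic sketch of naive context compaction: keep recent messages, collapse
# the oldest into a lossy summary. Not OpenClaw's actual internals.

MAX_MESSAGES = 5  # stand-in for a token budget

def compact(history: list[str]) -> list[str]:
    if len(history) <= MAX_MESSAGES:
        return history
    dropped = len(history) - (MAX_MESSAGES - 1)
    summary = f"[summary of {dropped} earlier messages]"
    return [summary] + history[-(MAX_MESSAGES - 1):]

history = ["INSTRUCTION: always ask before deleting anything"]
history += [f"email {i}: ..." for i in range(1, 10)]  # the inbox balloons

# After compaction, the instruction survives only inside a lossy summary,
# if at all. That's how "ask first" quietly disappears.
print(compact(history))
```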

While OpenClaw's proactive autonomy is its selling point, security audits have found that its code is riddled with vulnerabilities. Big Tech's agents aren't immune, either—Amazon's coding tool reportedly crashed AWS last December after it decided human-written code was suboptimal and rewrote it (though Amazon blames the outage on human error), and Microsoft recently confirmed that a Copilot bug let its AI summarize confidential emails without permission.

Meanwhile, Yue’s own employer warned staff last week that they could be fired for installing OpenClaw on work laptops. Her “rookie mistake,” as she called it, on her personal computer is either an alarming sign that even the people paid to worry about AI safety can get owned by it, or proof that a safety leader really committed to the bit, leading by example of what not to do. —WK

Chaos Brewing Meter: /5

OPEN TABS


Readers’ most-clicked story was about “vibe coding.” If you’ve wanted to try it (or you’re not sure what it even means), start here.
