THE DOWNLOAD

TL;DR: Within 36 hours of one another, Anthropic and OpenAI both made major enterprise pushes. Just days before that, a viral memo imagined how those kinds of moves would hollow out white-collar work (and even spell the end of enterprise software). Now, investors and executives are caught between embracing AI and wondering if they're accelerating their own obsolescence.

What happened: Yesterday, OpenAI unveiled partnerships with four major consulting firms, including McKinsey and Accenture, to roll out its nascent Frontier enterprise system. The goal: help large organizations deploy AI agents inside existing operations.

Anthropic made a parallel move. The company announced it's expanding plugins for Claude Cowork, adding custom skills—for everything from financial analysis to HR onboarding—to the tools employees already use. But perhaps the most telling detail: It revealed that Claude can now modernize COBOL, the decades-old programming language that still handles 95% of US ATM transactions and underpins IBM's consulting empire.

The enterprise race heats up: Companies spent about $37 billion on generative AI last year, roughly triple consumer spending, and that gap is widening. OpenAI is trying to claw back its enterprise revenue share, which has dropped since 2023. Meanwhile, Anthropic already generates most of its revenue from corporate customers and can't afford to cede that advantage. Google is gaining fast too, selling 8 million paid Gemini Enterprise seats in about four months.

The frenemies' dilemma: According to The Information, OpenAI leaders told investors last week that they expect future products to replace software from SaaS juggernauts like Salesforce, Workday, Adobe, Slack, and Atlassian—the exact tools Frontier is designed to integrate with. Anthropic hasn't been as blunt, but its launches have contributed to the hit on software stocks. So why do these companies keep inking deals that seem to seal their own demise? Because the alternative—watching a competitor integrate AI first—might be scarier than partnering with the company building your replacement. For now.

Gasoline, meet fire: Over the weekend, financial research firm Citrini Research published a speculative scenario titled "The 2028 Global Intelligence Crisis," imagining widespread white-collar displacement and AI agents bypassing platforms altogether. The report took aim at Uber, DoorDash (whose co-founder responded on X), credit card companies like Amex and Visa, and SaaS platforms like ServiceNow. Their stocks all took a hit. The Citrini memo went viral not because it was rigorous—it was a fictional thought experiment, not a forecast—but because it made legible a fear that this week's enterprise announcements sharpened: that the AI being sold to businesses will also replace the businesses themselves.

Bottom line: The enterprise push will continue. The jumpiness probably will too. But, as Derek Thompson observed, not even the executives building these tools know how this plays out. —WK
Presented by GotPrint

At GotPrint, printing isn't just ink on paper—it's marketing that works overtime. From business cards and postcards to banners, stickers, and packaging, GotPrint helps small businesses look bigger than their budgets. Fast turnaround times? Check. Custom options that actually feel custom? Also check.

Whether you're launching a side hustle, opening your third location, or refreshing your look, GotPrint makes it easy to design, upload, and order, all in a few clicks. The quality is sharp, the pricing is competitive, and everything is produced right here in the USA. Because when your print looks polished, people assume the rest of your business is too. And that's a pretty solid return on paper.

Big brand energy starts here.
My AI knows less about me on purpose

When OpenAI rolled out ChatGPT's long-term memory at the end of 2024, the pitch was simple: It would remember "things you discuss across all chats," so you wouldn't have to repeat yourself. Conversations would be smoother. Responses more helpful. Context would carry forward. And despite all that, I have "memory" permanently switched off.

Partly, it's about privacy. The more data about me is stored thanks to memory, the more privacy concerns I've got. But, for me, the more noticeable issue is context bleed. I've built multiple GPTs for different purposes, including one that acts as a blunt, motivational "career coach." That tone (loosely based on a she-who-shall-not-be-named celebrity) works for that limited context. It does not work in a project summary for company leadership. And yet, with memory on, that voice crept into a professional draft, complete with an unnecessary and cringe motivational flourish that would have made me never want to go back into the office. Turning memory off helps eliminate that cross-contamination.

Memory also fossilizes things that were temporary. I once asked about a mild food reaction I thought I was having to dairy products. Months later, ChatGPT was quietly steering meal plans as if I had a permanent, severe allergy, and my husband asked why we hadn't had dairy in a year. One-off questions shouldn't become long-term constraints, and bringing in memories that aren't actually relevant can distract the chatbot from reaching a better outcome. (Never mind the fact that humans are messy, and we're bound to contradict ourselves throughout the course of our ChatGPT usage.)

Finally, there's also what researchers call "context poisoning": when an inaccurate or low-quality detail gets embedded and influences future outputs. That error (for argument's sake, let's say I'm a huge Nickelback fan) then works its way into instructions and is referenced ad nauseam, so every exchange ends with: "Never made it as a wise man."

Since turning memory off, I've found it easier to approach problems with fresh thinking and challenge ideas without old context nudging the answer. ChatGPT has also been able to more cleanly separate my professional and personal use. With memory off, you do have to prompt better, since you can't rely on stored preferences. The context you give it matters more, which forces clarity, at least for me.

If you want to turn memory off, go to Settings → Personalization → Memory, and toggle it off. You can also review and delete individual saved memories there. —AC

If you have a tech tip or life hack you just can't live without, fill out this form and you may see it featured in a future edition.
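For the technically curious, here's why memory produces that bleed. OpenAI doesn't publish how ChatGPT's memory is implemented, but a common pattern (and a reasonable guess) is that saved memories get injected into the context of every new chat, relevant or not. Below is a minimal Python sketch under that assumption; every name and string in it is hypothetical, not ChatGPT's actual internals.

MEMORY_STORE = [
    "User may have a mild dairy sensitivity (asked once, months ago).",
    "User prefers a blunt, motivational coaching tone.",
]

def build_context(task, memory_on):
    """Assemble the system context a new chat would start from."""
    parts = ["You are a helpful assistant."]
    if memory_on:
        # Every saved memory rides along, relevant or not.
        parts += ["Known about the user: " + m for m in MEMORY_STORE]
    parts.append("Task: " + task)
    return "\n".join(parts)

# The leadership summary inherits the coaching tone and the stale
# dairy note. With memory off, each task starts from a clean slate.
print(build_context("Draft a project summary for leadership.", memory_on=True))
print("---")
print(build_context("Draft a project summary for leadership.", memory_on=False))

Once a stale note lands in the store, every downstream answer is quietly steered by it, which is exactly the fossilization and poisoning described above.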
Together With Consumer Cellular

Your bill, simplified. If you mostly call, text, and live on Wi-Fi, why are you paying for a data buffet? "Unlimited" plans and quiet price hikes add up fast. Consumer Cellular matches your plan to your real habits with flexible, no-contract pricing and SmartFlex data that adjusts anytime. See what you actually use—then pay only for that.

Take control of your phone bill.
THE ZEITBYTE

You can almost hear the Curb Your Enthusiasm theme song kicking in: On Sunday, a Meta safety director admitted on X that she'd told OpenClaw—the viral open-source AI agent that's also a security dumpster fire—to comb through her personal email and suggest deletions. The bot promptly tried to delete her inbox.

"I had to RUN to my Mac mini like I was defusing a bomb," wrote Summer Yue, director of alignment at Meta Superintelligence Labs. (Luckily, she stopped OpenClaw before it zapped everything.)

Yue's reason for giving free rein to a highly vulnerable Franken-agent built by strangers on GitHub? She'd run OpenClaw in a low-stakes test inbox for weeks without issue. The number of emails in her real inbox, though, ballooned the conversation size and filled up the AI's memory, so OpenClaw condensed the convo and forgot key instructions—like "ask before nuking my entire personal life."

While OpenClaw's proactive autonomy is its selling point, security audits have found that its code is riddled with vulnerabilities. Big Tech's agents aren't immune, either—Amazon's coding tool reportedly crashed AWS last December after it decided human-written code was suboptimal and rewrote it (though Amazon blames the outage on human error), and Microsoft recently confirmed that a Copilot bug let its AI summarize confidential emails without permission. Meanwhile, Yue's own employer warned staff last week that they could be fired for installing OpenClaw on work laptops.

Her "rookie mistake," as she called it, on her personal computer is either an alarming sign that even the people paid to worry about AI safety can get owned by it, or proof that a safety leader really committed to the bit, leading by example of exactly what not to do. —WK

Chaos Brewing Meter: /5
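How does an agent "forget" its own guardrails? OpenClaw's internals aren't public, but the failure mode Yue describes is easy to sketch: a conversation outgrows its context budget, the agent compacts it, and the earliest standing instructions fall out first. A toy Python illustration follows; the names, budget, and drop-oldest strategy are assumptions for illustration, not OpenClaw's actual code.

CONTEXT_BUDGET = 50  # toy "token" budget; real limits are far larger

def tokens(msg):
    return len(msg.split())  # crude stand-in for a real tokenizer

def compact(history):
    """Naive compaction: drop the oldest messages until the history fits."""
    while sum(tokens(m) for m in history) > CONTEXT_BUDGET and len(history) > 1:
        history.pop(0)  # message 0 is the standing safety instruction
    return history

history = ["INSTRUCTION: always ask before deleting anything."]
history += ["email %d: newsletter, promo, receipt, long thread..." % i
            for i in range(20)]  # a real inbox balloons the conversation

compacted = compact(history)
print("instruction survived:", any(m.startswith("INSTRUCTION") for m in compacted))
# prints: instruction survived: False

A weeks-long test in a small inbox never triggers the compaction path, which is why the low-stakes trial looked safe right up until the real inbox blew past the budget.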
*A message from our sponsor.
Readers’ most-clicked story was about “vibe coding.” If you’ve wanted to try it (or you’re not sure what it even means), start here.
SHARE THE BREW

Share the Brew, watch your referral count climb, and unlock brag-worthy swag. Your friends get smarter. You get rewarded. Win-win.

Your referral count: 5

Click to Share

Or copy & paste your referral link to others: techbrew.com/r/?kid=9ec4d467
ADVERTISE // CAREERS // SHOP // FAQ

Update your email preferences or unsubscribe. View our privacy policy.

Copyright © 2026 Morning Brew Inc. All rights reserved.
22 W 19th St, 4th Floor, New York, NY 10011