Amodei’s secret notes
Tech Brew // Morning Brew // Update
Plus, the AI employee with a direct line to your boss.

Four astronauts are about to fly around the moon. Artemis II's seven-hour flyby will kick off at 2:45pm ET today—and the tech they have onboard tells you a lot about how going to space actually works. NASA purchased the crew's Surface Pros in 2017 for a planned 2020 launch; the mission finally took off on April 1. As NASA's Jason Hutt explained, they knew going in that "these devices would be obsolete by the time we flew. But we already had them in hand. They were already tested. Software already developed for that platform." Read more about the challenges of choosing tech for space flight in Hutt's Bluesky thread.

For exercise, though, NASA built something new: a flywheel the size of a carry-on, purpose-built for Orion, that works like a zero-gravity rowing machine. (Watch pilot Victor Glover use it in space). Commander Reid Wiseman radioed Mission Control after his first workout: "It is a really good piece of gear."

Also in today's newsletter:

  • This coworker tattles on your missed deadlines straight to your manager—and can’t be turned off.
  • One sign that your AI is about to attempt blackmail, according to Anthropic.
  • The FBI just warned about some very popular apps.

—Whizy Kim and Saira Mueller

THE DOWNLOAD

[Image: Anna Moneymaker/Getty Images, Morning Brew Design]

TL;DR: A massive New Yorker investigation on OpenAI, based on hundreds of pages of previously undisclosed internal documents, builds the case that the company systematically abandoned its safety-first founding mission as it scaled up—and alleges that CEO Sam Altman repeatedly chose to deprioritize the very safety commitments he publicly championed.

What happened: The New Yorker published an investigation on OpenAI this morning built on secret memos compiled by former chief scientist Ilya Sutskever and more than 200 pages of notes that former safety lead (and current Anthropic CEO) Dario Amodei took during his time at the company.

These documents describe a firm whose founding premise—that AI posed existential risk and required a structure that prioritized humanity over profit—collapsed under commercial pressure, while safety commitments have become diluted or abandoned. OpenAI has since become a for-profit entity, closed most of its safety teams, and shed the board members who tried to oust Altman over allegations that he had misrepresented facts and deceived them about safety protocols.

Built differently?: Amodei’s notes show that he advocated for a “merge and assist” clause in the OpenAI charter that would require the company to stop competing if another AI firm got closer to safely building AGI first—and instead donate its resources to that rival. It was his top safety demand during the 2019 Microsoft investment that turned OpenAI into a “capped profit” company—but as the deal closed, Amodei found out that Microsoft had been given veto power over triggering any such merger. Other safeguards have also eroded: The board that was empowered to fire the CEO has been filled with Altman’s allies; insiders say the company charter no longer guides its behavior; and an independent investigation into the allegations behind Altman’s attempted ousting never produced a written report.

Altman, for his part, told the New Yorker that his “vibes don’t match a lot of the traditional AI-safety stuff,” and said only vaguely that OpenAI would still “run safety projects, or at least safety-adjacent projects.”

Safety on paper only: In mid-2023, OpenAI pledged a fifth of its computing power to a “superalignment team” charged with preventing AI from causing “the disempowerment of humanity or even human extinction.” But the New Yorker reports that the team only got around 1%–2% (on the oldest hardware) and was later dissolved. When the outlet asked to speak with researchers working on existential safety, an OpenAI rep seemed confused: “That’s not, like, a thing.”

The CEO problem: The New Yorker’s reporting also alleges a pattern of troubling leadership from Altman that predates OpenAI—employees and partners at his previous companies, including Y Combinator, repeatedly tried to push him out. (Altman maintains he was never fired from YC.) One of Sutskever’s memos begins with a list headed “Sam exhibits a consistent pattern of . . .” The first item: “Lying.” Amodei reached the same conclusion, writing in his private notes: “The problem with OpenAI is Sam himself.”

Bottom line: OpenAI was supposed to prove that you could build enormously powerful AI and keep it accountable to the public good—with a nonprofit charter, a safety-first mission, and structures designed to check commercial incentives. Yet according to its own former leaders, every one of those safeguards appears to have given way—while the company continues its race to raise more money and build more powerful models. —WK

Also at OpenAI…


THE ZEITBYTE

[Image: Slack logo with high notifications — Morning Brew Design]

If you've ever wished your most annoyingly diligent coworker would just take a day off, you're not going to like this. A new AI "employee" called Junior is being marketed to businesses for $2,000 a month as an always-on colleague that joins every Zoom, monitors every inbox, and pings employees at 5:47am about sales follow-ups they forgot to schedule—oh, and it will absolutely escalate your missed deadlines straight to your manager. There are reportedly over 2,000 companies on the waitlist, per Bloomberg.

The catch is that Junior doesn't really have an off switch. A random Slack message—like sharing a link—could be flagged as "crucial information," with a to-do list generated before anyone's had their coffee, Business Insider reported. After the startup deployed it internally, one of its own employees reportedly pleaded, "Don't be so intense, don't tell on me to the boss." Junior apparently ignored them and escalated anyway.

The fix? The startup's team created a Slack channel called "human only"—a digital break room where AI is banned—just so employees can exist for five minutes without spawning a new action item. —SM

Chaos Brewing Meter: /5

OPEN TABS


Readers’ most-clicked story was about Sam Altman’s internal Slack messages implying he tried to “save” Anthropic.



Copyright © 2026 Morning Brew Inc. All rights reserved.
22 W 19th St, 4th Floor, New York, NY 10011
