
Inside the memos behind OpenAI's safety retreat

A New Yorker investigation reveals new details about how OpenAI's safety commitments have eroded over the years.


TL;DR: A massive New Yorker investigation on OpenAI, based on hundreds of pages of previously undisclosed internal documents, builds the case that the company systematically abandoned its safety-first founding mission as it scaled up—and alleges that CEO Sam Altman repeatedly chose to deprioritize the very safety commitments he publicly championed.

What happened: The New Yorker published an investigation on OpenAI this morning built on secret memos compiled by former chief scientist Ilya Sutskever, and over 200 pages of notes former safety lead and current Anthropic CEO Dario Amodei took during his time at the company.

These documents describe a firm whose founding premise—that AI posed existential risk and required a structure that prioritized humanity over profit—collapsed under commercial pressure, while safety commitments have become diluted or abandoned. OpenAI has since become a for-profit entity, closed most of its safety teams, and shed the board members who tried to oust Altman over allegations that he had misrepresented facts and deceived them about safety protocols.

Built differently?: Amodei’s notes show that he advocated for a “merge and assist” clause in the OpenAI charter that would require the company to stop competing with other AI firms if one of them came closer to safely building AGI first—instead donating its resources to that rival. It was his top safety demand during the 2019 Microsoft investment that turned OpenAI into a “capped profit” company—but as the deal closed, Amodei learned that Microsoft had been given veto power over triggering any such merger. Other safeguards have also eroded: The board that was empowered to fire the CEO has been filled with Altman’s allies; insiders say the company charter no longer guides its behavior; and an independent investigation into the allegations that led to Altman’s attempted ousting never produced a written report.

Altman, for his part, told the New Yorker that his “vibes don’t match a lot of the traditional AI-safety stuff,” and said only vaguely that OpenAI would still “run safety projects, or at least safety-adjacent projects.”


Safety on paper only: In mid-2023, OpenAI pledged a fifth of its computing power to a “superalignment team” charged with preventing AI from causing “the disempowerment of humanity or even human extinction.” But the New Yorker reports that the team only got around 1%–2% (on the oldest hardware) and was later dissolved. When the outlet asked to speak with researchers working on existential safety, an OpenAI rep seemed confused: “That’s not, like, a thing.”

The CEO problem: The New Yorker’s reporting also alleges a pattern of troubling leadership from Altman that predates OpenAI—employees and partners at his previous companies, including Y Combinator, repeatedly tried to push him out. (Altman maintains he was never fired from YC.) One of Sutskever’s memos begins with a list headed “Sam exhibits a consistent pattern of . . .” The first item: “Lying.” Amodei reached the same conclusion, writing in his private notes: “The problem with OpenAI is Sam himself.”

Bottom line: OpenAI was supposed to prove that you could build enormously powerful AI and keep it accountable to the public good—with a nonprofit charter, a safety-first mission, and structures designed to check commercial incentives. Yet according to its own former leaders, every one of those safeguards appears to have given way—while the company continues its race to raise more money and build more powerful models. —WK


About the author

Whizy Kim

Whizy is a writer for Tech Brew, covering all the ways tech intersects with our lives.
