
One exit after another


Whizy is a writer for Tech Brew, covering all the ways tech intersects with our lives.

TL;DR: Senior employees at OpenAI, xAI, and Anthropic all headed for the exits this week—some with dramatic social media farewells, one with a New York Times op-ed, and one safety lead with a lofty two-page letter warning that "the world is in peril." Their exact reasons vary, but the throughline is this: Key people tasked with keeping AI safe are departing over ethical concerns, as their former employers speedrun product improvements and updates.

What happened: In the past week, at least four top figures at OpenAI, Anthropic, and xAI have been “loud quitting.”

On Monday, Anthropic Senior Safety Researcher Mrinank Sharma resigned with a two-page letter—complete with footnotes—posted on X, citing worries about “interconnected crises” beyond just AI and saying he constantly felt “pressures to set aside what matters most.” (Einstein expressed remorse about the part he played in the creation of the atomic bomb, saying that if he’d known, he would “have become a watchmaker.”) Sharma, for his part, plans to become “invisible” and study poetry.

Then, OpenAI Research Scientist Zoë Hitzig published a NYT op-ed on Wednesday announcing her resignation. The reason? ChatGPT’s new ad rollout. Hitzig compared OpenAI’s trajectory to Facebook’s, arguing that users have entrusted the chatbot with an unprecedented “archive of human candor.” And while OpenAI says ads will be labeled and won’t influence responses, Hitzig worries these pledges will soon fall by the wayside because “the company is building an economic engine that creates strong incentives to override its own rules.”

On Monday and Tuesday, two xAI co-founders also left the company, meaning that half of xAI’s founding members have now exited. Tony Wu and Jimmy Ba posted amicable-sounding farewells on X, with Ba saying 2026 would be "the busiest (and most consequential) year for the future of our species." Musk, however, made it sound like it was totally his choice to push them out.

Turnover at big tech firms isn’t unusual, but so many high-profile exits in such a short window point to a bigger ethical problem: Employees at AI companies are grappling daily with how to weigh user safety against rapid technological advances and their employers’ need for more revenue.

Gradually, then suddenly: The departures, and the foreboding missives published with them, come as public concern mounts that, at AI companies, safety is taking a back seat to new product features and marketing stunts. OpenAI alone has shipped over 20 model updates in the past year, plus shopping, an app marketplace, and now ads. Anthropic released five major models in the same span, as well as an agentic coding tool and an AI agent for daily tasks.


Meanwhile, Platformer reported yesterday that OpenAI has dissolved its seven-person mission alignment team, which was created in 2024 to ensure that development of AGI—a level of intelligence that surpasses human ability—stays true to OpenAI's founding mission of benefiting humanity. Team leader Joshua Achiam told Platformer that its research was "wrapping up," and that he'd take on the new title of "chief futurist," whatever that is.

The Wall Street Journal also reported Tuesday that OpenAI fired safety exec Ryan Beiermeister in early January over an allegation of sex discrimination—a claim she denies. The dismissal just happened to come right after she opposed a planned adult conversation mode for ChatGPT and raised concerns about child exploitation safeguards.

Anthropic, the self-appointed conscience of AI, arguably spends more time studying how models could go wrong than anyone—yet it still pushes just as aggressively toward the next, potentially more dangerous, level of intelligence. Its own safety report for its latest model, published yesterday, found “elevated susceptibility to harmful misuse,” with the model assisting with chemical weapon development and other “heinous crimes.” That’s a stark contrast to the Claude constitution the company updated just a few weeks ago, which included a hard ban on helping make “biological, chemical, nuclear, or radiological weapons.” In a New Yorker profile from this week, one Anthropic researcher admitted he often wonders if “maybe we should just stop.”

Will the adults please stand up? If it’s true that AI is barreling toward wielding world-shattering power, it’s worth asking who’s going to hit the brakes. Probably not the CEOs announcing new heights of spending on AI infrastructure practically every day—especially not the one who once said AI would likely “lead to the end of the world.” And it probably also, unfortunately, won’t be the employees who gesture vaguely at existential concerns on X before riding off into the sunset. In the meantime, we’ll all be over here wondering if we’re in a doomsday scenario. —WK
