AI labs score poor marks in safety group’s latest report card
The Future of Life Institute said no company had a plan for safe superintelligence.
A recent report card from an AI safety watchdog isn’t one that tech companies will want to stick on the fridge.
The Future of Life Institute’s latest AI safety index found that major AI labs fell short on most measures of AI responsibility, with few letter grades rising above a C. The org graded eight companies across categories like safety frameworks, risk assessment, and current harms.
Perhaps most glaring was the “existential safety” line, where companies scored Ds and Fs across the board. While many of these companies are explicitly chasing superintelligence, they lack a plan for safely managing it, according to Max Tegmark, MIT professor and president of the Future of Life Institute.
“Reviewers found this kind of jarring,” Tegmark told us.
The reviewers in question were a panel of AI academics and governance experts who examined publicly available material as well as survey responses submitted by five of the eight companies.
Anthropic, OpenAI, and Google DeepMind took the top three spots with overall grades of C+ or C. Then came, in order, Elon Musk’s xAI, Z.ai, Meta, DeepSeek, and Alibaba, all of which received grades of D or D-.
Tegmark blames a lack of regulation, which has allowed the cutthroat competition of the AI race to trump safety precautions. California recently passed the first law requiring frontier AI companies to disclose safety information about catastrophic risks, and New York is within spitting distance of doing the same. Hopes for federal legislation are dim, however.
“Companies have an incentive, even if they have the best intentions, to always rush out new products before the competitor does, as opposed to necessarily putting in a lot of time to make it safe,” Tegmark said.
In the absence of government-mandated standards, Tegmark said the industry has begun to take the group’s regularly released safety indexes more seriously; four of the five American companies now respond to its survey (Meta is the only holdout). And companies have made some improvements over time, Tegmark said, pointing to Google’s transparency around its whistleblower policy as an example.
But reported real-world harms, including teen suicides that chatbots allegedly encouraged, inappropriate chatbot interactions with minors, and major cyberattacks, have also raised the stakes of the discussion, he said.
“[They] have really made a lot of people realize that this isn’t the future we’re talking about—it’s now,” Tegmark said.
The Future of Life Institute recently enlisted public figures as diverse as Prince Harry and Meghan Markle, former Trump aide Steve Bannon, Apple co-founder Steve Wozniak, and rapper will.i.am to sign a statement opposing work that could lead to superintelligence.
Tegmark said he would like to see something like “an FDA for AI where companies first have to convince experts that their models are safe before they can sell them.
“The AI industry is quite unique in that it’s the only industry in the US making powerful technology that’s less regulated than sandwiches—basically not regulated at all,” Tegmark said. “If someone says, ‘I want to open a new sandwich shop near Times Square,’ before you can sell the first sandwich, you need a health inspector to check your kitchen and make sure it’s not full of rats…If you instead say, ‘Oh no, I’m not going to sell any sandwiches. I’m just going to release superintelligence.’ OK! No need for any inspectors, no need to get any approvals for anything.”
“So the solution to this is very obvious,” Tegmark added. “You just stop this corporate welfare of giving AI companies exemptions that no other companies get.”