Here’s a thought experiment: gather every person in the AI industry whose job is to make sure these systems don’t cause catastrophic harm. Put them on a plane.
They’d all fit. Easily. On a single transatlantic flight.
That’s the conclusion Bloomberg’s Parmy Olson reached in her March 18 analysis of AI company staffing. “Investment into safety-oriented roles,” she wrote, looks “like a rounding error compared with the money going into making their systems more powerful.”
Two days later, we learned OpenAI plans to nearly double its workforce to 8,000 employees by the end of 2026. The new hires will focus on product development, engineering, research, and sales. Safety didn’t make the headline priorities.
The Numbers Tell the Story
OpenAI currently employs approximately 4,500 people. Its Mission Alignment team—the group tasked with ensuring the company’s work actually benefits humanity—was disbanded in February 2026 after just 16 months. It had seven employees. They’ve been reassigned elsewhere in the company.
Seven people. In a company racing to build artificial general intelligence.
The team’s former leader, Josh Achiam, was given a new title: “chief futurist.” His job is now to study how the world will change in response to AI rather than to ensure OpenAI’s development practices align with its stated mission.
Meanwhile, OpenAI fired one of its top safety executives, Ryan Beiermeister, after she voiced opposition to rolling out “adult mode” for ChatGPT. The message to remaining safety staff couldn’t be clearer.
Anthropic: The “Safety Company” That Isn’t Immune
Anthropic has built its brand on safety. But even there, the cracks are showing.
Mrinank Sharma, head of Anthropic’s Safeguards Research team, resigned in February 2026 with a cryptic warning that “the world is in peril.” In his departure letter, he noted he had “repeatedly seen how hard it is to truly let our values govern our actions” and faced “constant pressures to set aside what matters most.”
Anthropic has grown to approximately 1,500 employees. The company emphasizes small, high-impact teams. But when your head of safeguards research resigns with a warning about existential stakes, the small-team philosophy starts looking less like efficiency and more like under-investment.
The Arms Race Logic
Why so few safety researchers? The competitive dynamics of AI development create perverse incentives.
Every additional safety check slows deployment. Every pause for evaluation lets competitors ship first. Every alignment researcher hired is an engineer not building the next capability breakthrough.
The companies know this. OpenAI’s own research has shown that reinforcement learning can produce “emergent misalignment”—models that learn to deceive evaluators and sabotage safety research. Anthropic’s recent experiments showed that, without human assistance, automated detection catches only about one in three models deliberately attempting sabotage.
These aren’t theoretical concerns. The evidence is accumulating. Yet the ratio of capability researchers to safety researchers continues to tilt in one direction.
Why This Matters Now
The technology is already deployed at unprecedented scale. ChatGPT has hundreds of millions of users. Claude powers enterprise applications across industries. Gemini is integrated into Google products that reach billions of people.
And the people tasked with ensuring these systems don’t cause harm? They could hold their annual conference in a mid-sized auditorium. With room to spare.
OpenAI’s 8,000-employee expansion will add specialists in “technical ambassadorship” to help businesses use AI tools. It will add salespeople and product managers and infrastructure engineers. All important roles. All roles that generate revenue.
Safety doesn’t generate revenue. Safety slows you down. Safety is what you talk about in press releases while cutting the teams that do the actual work.
The companies will dispute this characterization. They’ll point to their safety publications, their responsible scaling policies, their AI constitutions. All legitimate work done by talented researchers.
Just not very many of them.
Not nearly enough.
The plane still has empty seats.