Sam Altman Hands Off AI Safety to Build Data Centers - OpenAI Bets Everything on 'Spud'

OpenAI's CEO delegates safety oversight to focus on infrastructure. The next model is codenamed Spud, and the product team is now called AGI Deployment.


Sam Altman just told OpenAI staff he’s stepping back from directly overseeing the company’s safety and security teams. His new focus: raising capital, managing supply chains, and “building data centers at unprecedented scale.”

The timing is noteworthy. OpenAI has completed pre-training on its next major model, codenamed “Spud,” and renamed its product deployment team to “AGI Deployment.” The company is explicitly positioning itself for what it believes is the final push toward artificial general intelligence.

The Organizational Shift

According to The Information, safety now reports to Mark Chen, OpenAI’s Chief Research Officer, while security reports to Greg Brockman, the company president.

This restructuring places safety subordinate to research rather than in an independent oversight position. Critics have noted that a reporting line like this tends to manage downside exposure rather than actively prevent problems, a subtle but meaningful distinction when the stakes are existential risk.

The move comes just weeks after OpenAI dissolved its Mission Alignment team in February 2026. That six-person unit had been tasked with translating safety principles into daily practice. It lasted 16 months.

A Pattern of Safety Departures

This isn’t new territory for OpenAI. The company has watched a steady exodus of safety-focused personnel:

Jan Leike, who co-led the Superalignment team, resigned with a pointed critique: “Over the past years, safety culture and processes have taken a backseat to shiny products.” He warned that OpenAI wasn’t on a trajectory to get safety right.

Ilya Sutskever, the other Superalignment co-lead, also left. The team was disbanded entirely.

Miles Brundage, Senior Advisor for AGI Readiness, departed after six years. His team was dissolved.

Fortune reported that more than half the employees focused on AGI safety had left the company within just a few months.

Now the CEO himself has delegated safety oversight to focus on infrastructure.

Enter Spud

OpenAI has finished pre-training its next frontier model, internally codenamed “Spud”—reportedly a joke signaling the company’s attempt to break out of the current capability plateau.

Altman told staff the model could “really accelerate the economy” and might be ready within weeks. Whether this becomes GPT-6 or something else remains unclear, but the framing is unambiguous: this is OpenAI’s AGI push.

The product team’s renaming to “AGI Deployment” reinforces the message. This isn’t cautious incrementalism—it’s a declaration of intent.

Sora Shutdown and Disney Collapse

To free resources for Spud, OpenAI shuttered its Sora video generation app on March 24. The economics were brutal: estimated inference costs hit $15 million per day while total lifetime revenue reached just $2.1 million.
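The gap in those figures is stark enough to sanity-check. A quick back-of-envelope calculation, using only the estimates reported above (not audited numbers), shows that a single day of inference cost exceeded the app's entire lifetime revenue roughly sevenfold:

```python
# Back-of-envelope check on the reported Sora economics.
# Both figures are the article's estimates, not audited financials.
DAILY_INFERENCE_COST = 15_000_000  # estimated inference cost, $/day
LIFETIME_REVENUE = 2_100_000       # total revenue over the app's lifetime, $

# How many lifetimes of revenue one day of inference consumed
ratio = DAILY_INFERENCE_COST / LIFETIME_REVENUE
print(f"One day of inference cost was about {ratio:.1f}x lifetime revenue")
# prints "One day of inference cost was about 7.1x lifetime revenue"
```

At that burn rate, no plausible growth in usage fees closes the gap, which is consistent with the article's framing of the shutdown as a resource-allocation decision rather than a product pivot.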

The shutdown killed a planned $1 billion investment from Disney, which had agreed to license characters for use in Sora. According to Deadline, no money actually changed hands before the deal collapsed.

Disney’s statement was diplomatically terse: “We respect OpenAI’s decision to exit the video generation business and to shift its priorities elsewhere.”

OpenAI says the Sora research team will refocus on “world simulation research to advance robotics.” The real-world video datasets aren’t being discarded—they’re being redirected toward physical AI applications.

What This Means

There are two ways to read these changes.

The charitable read: OpenAI is making hard resource allocation decisions. Video generation wasn’t working commercially. Data center capacity is genuinely necessary for frontier model training. Reorganizing reporting lines is routine corporate shuffling.

The critical read: OpenAI has decided the race matters more than the guardrails. Every safety team has been disbanded, defanged, or subordinated. The CEO has explicitly deprioritized oversight to focus on scale. And the company is calling its product team “AGI Deployment” while warning the model might “accelerate the economy.”

When Jan Leike left, he wrote that building systems smarter than humans is “inherently dangerous” and that OpenAI is “shouldering an enormous responsibility on behalf of all of humanity.”

The company’s response has been to thin the ranks of people responsible for managing that danger while accelerating toward the finish line.

The Infrastructure Bet

Altman’s focus on data centers makes strategic sense regardless of safety concerns. Training frontier models requires immense compute. Securing that compute means controlling the physical infrastructure—power contracts, chip supply chains, facility construction.

But there’s a tension between “we need unprecedented scale to build AGI” and “we need robust safety processes before deploying AGI.” The organizational changes suggest OpenAI has decided which priority wins when they conflict.

The next few weeks will reveal whether Spud lives up to internal hype. What we already know is that the safety apparatus Leike and others built has been systematically dismantled, and the CEO now considers infrastructure more important than direct safety oversight.

Whether that’s prudent resource allocation or reckless acceleration depends on how close OpenAI actually is to systems that can “accelerate the economy”—and whether anyone inside the company is positioned to pump the brakes if something goes wrong.