Arm spent 35 years licensing chip designs to other companies. Last week, it started selling chips directly.
The AGI CPU, announced March 24, is a 136-core data center processor built on TSMC’s 3nm process. Meta co-developed the chip and will be its first major customer, deploying it alongside the company’s custom MTIA accelerators.
Arm CEO Rene Haas said at the San Francisco launch event that AGI CPU sales alone should bring in $15 billion by 2031. The market agreed—Arm’s stock jumped 16% the day after the announcement.
What’s Under the Hood
The AGI CPU packs serious specifications into a 300-watt envelope:
- 136 Neoverse V3 cores across two dies
- 3.2 GHz all-core, 3.7 GHz boost
- 12 channels of DDR5 at 8800 MT/s, delivering 800+ GB/s aggregate memory bandwidth
- 96 lanes of PCIe 6.0 with CXL 3.0 support
- 2 MB of L2 cache per core
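The quoted memory bandwidth checks out against the channel count. A quick sanity check, assuming the standard 64-bit (8-byte) DDR5 channel width:

```python
# Sanity-check the aggregate memory bandwidth from the per-channel specs.
channels = 12                # DDR5 memory channels
transfer_rate_mts = 8800     # mega-transfers per second per channel
bytes_per_transfer = 8       # 64-bit DDR5 channel width (standard assumption)

bandwidth_gbs = channels * transfer_rate_mts * 1e6 * bytes_per_transfer / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")  # ~844.8 GB/s, consistent with "800+ GB/s"
```

Note that some vendors count DDR5's two 32-bit subchannels separately; the arithmetic above treats each of the 12 channels as a full 64-bit interface, which is how the 800+ GB/s figure lines up.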
The target isn’t general computing. Arm designed this specifically for what it calls “agentic AI infrastructure”—the CPU-side orchestration that coordinates accelerators and manages data movement in large-scale AI deployments. When you’re running thousands of GPUs in parallel, something has to manage the traffic.
The Performance Claim
Arm claims the AGI CPU delivers more than 2x performance per rack versus x86 CPUs in AI inference workloads. The company projects this translates to up to $10 billion in CAPEX savings per gigawatt of AI data center capacity.
That’s a bold claim, especially against AMD’s newest EPYC and Intel’s Xeon processors. Independent benchmarks aren’t available yet—commercial systems just started shipping from ASRock Rack, Lenovo, and Supermicro.
Why This Matters
Arm has historically avoided competing with its own licensees. Companies like Qualcomm, Apple, and AWS (Graviton) pay Arm for the right to use its instruction set architecture and core designs. Selling finished chips risks alienating that ecosystem.
The AGI CPU represents a calculated bet that the AI infrastructure market is large enough—and specialized enough—that Arm can capture a slice without cannibalizing its licensing business.
According to Tom’s Hardware, Arm’s partners list reads like an AI infrastructure wishlist: Meta, OpenAI, Cloudflare, Cerebras, Positron, Rebellions, SAP, and SK Telecom have all committed to deployments. Broader availability is expected in the second half of 2026.
Meta’s Role
Meta didn’t just agree to buy the chip—it helped design it. The companies co-developed the AGI CPU to work specifically with Meta’s MTIA (Meta Training and Inference Accelerator) chips.
This follows Meta’s broader strategy of reducing dependence on Nvidia. By building its own accelerators and partnering on CPU development, Meta is trying to control more of its AI infrastructure stack.
The partnership also explains why Arm felt confident enough to enter the silicon business. Having Meta as an anchor customer reduces the risk substantially—even if other hyperscalers don’t adopt the chip, Meta’s scale ensures meaningful volume.
The Bigger Picture
The timing isn’t accidental. AI data center buildout is expected to consume hundreds of billions of dollars over the next few years. CPU efficiency in these environments matters because accelerators (GPUs, TPUs, custom ASICs) do the heavy lifting on training and inference, but CPUs handle orchestration, data preprocessing, and network management.
If Arm’s efficiency claims hold up—and that’s a significant if until independent testing—hyperscalers could save meaningful infrastructure costs at scale. The $10 billion CAPEX savings per gigawatt figure assumes widespread adoption, but even a fraction of that represents real money.
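The "even a fraction" point is easy to make concrete, since the claimed savings scale linearly with deployed capacity. The buildout size and realization fraction below are hypothetical placeholders, not figures from Arm:

```python
# Back-of-the-envelope: CAPEX savings scale linearly with deployed capacity.
CLAIMED_SAVINGS_PER_GW = 10.0  # $B per gigawatt, per Arm's projection

def projected_savings(capacity_gw: float, fraction_realized: float) -> float:
    """Savings in $B if only a fraction of the claimed rate materializes."""
    return CLAIMED_SAVINGS_PER_GW * capacity_gw * fraction_realized

# Hypothetical: a 5 GW buildout realizing a quarter of the claimed rate.
print(projected_savings(5.0, 0.25))  # 12.5 ($B)
```

Even at a quarter of the claimed rate, a multi-gigawatt buildout would put the savings in the billions, which is the article's point about hyperscaler incentives.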
x86’s dominance in data centers isn’t guaranteed anymore. AWS Graviton already proved Arm can compete on price-performance for cloud workloads. The AGI CPU is Arm’s play for the specific subset of workloads driving the current infrastructure buildout.
What to Watch
The real test comes when independent benchmarks arrive. Arm’s internal figures are encouraging but unverified. Watch for:
- How the AGI CPU performs against EPYC 9005 and Xeon 6 in mixed AI workloads
- Whether AWS, Google, or Microsoft adopt the chip or continue developing their own Arm-based designs
- How Arm’s licensees react to competition from their IP provider
Arm betting on silicon—after 35 years of avoiding it—suggests the company sees AI infrastructure as too important to sit out. Whether the bet pays off depends on whether the performance claims survive contact with real-world deployments.