Yann LeCun thinks you’ve been looking at AI wrong. Not just you—the entire industry. The hundreds of billions poured into large language models. The trillion-dollar valuations. The endless scaling of parameters. All of it chasing a fundamental dead end.
On March 10, AMI Labs announced a $1.03 billion seed round at a $3.5 billion pre-money valuation—the largest seed round in European startup history. The investors include Bezos Expeditions, NVIDIA, Samsung, and Temasek. The mission: build “world models” that actually understand how reality works.
This isn’t a pivot from a failed startup founder. LeCun won the Turing Award in 2018 for his work on convolutional neural networks. He spent 12 years building Meta’s AI research division. When he says LLMs won’t lead to artificial general intelligence, he’s making a technical argument, not a business one.
Why LeCun Left Meta
LeCun’s departure from Meta coincided with a strategic shift. After he’d spent years building FAIR into one of the world’s top AI research labs, Meta brought in Scale AI founder Alexandr Wang as chief AI officer. The company pivoted toward bigger LLMs and product integration. LeCun’s research-first, world-models vision no longer fit.
The split appears amicable. Meta isn’t blocking LeCun from pursuing his thesis, and he’s not burning bridges. But the philosophical disagreement is real: Meta is scaling LLMs. LeCun is betting they’ve hit a ceiling.
The Case Against Language Models
LeCun’s critique of LLMs centers on three structural limitations:
1. They don’t understand physics. LLMs excel at pattern-matching in text. Ask them how the world works—how objects fall, how liquids flow, how time passes—and they’re guessing based on descriptions, not modeling actual dynamics. They can describe a ball bouncing, but they don’t “know” what bouncing means.
2. They hallucinate because they lack ground truth. An LLM has no way to verify whether its outputs match reality. It produces statistically likely text sequences, which often happen to be accurate; when they're not, the model can't tell the difference. There's no internal consistency check.
3. They can’t plan. LLMs predict one token at a time. They don’t maintain persistent goals or break complex tasks into steps the way even simple animals do. Chain-of-thought prompting helps on narrow problem types but doesn’t fix the underlying architecture.
These aren’t bugs that scale away. LeCun argues they’re architectural: no amount of additional training data will give an LLM a genuine model of how the physical world operates.
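The one-token-at-a-time mechanism is easy to see in miniature. The sketch below is a toy stand-in, not a real language model: a hand-written bigram lookup table plays the role of a learned next-token distribution, and greedy decoding picks the locally most likely token at each step with no goal state and no way to revise earlier choices.

```python
# Toy sketch of autoregressive generation (illustrative only, not a real LLM).
# BIGRAMS is a hypothetical stand-in for a learned next-token distribution.
BIGRAMS = {
    "the": {"ball": 0.6, "floor": 0.4},
    "ball": {"bounces": 0.7, "falls": 0.3},
    "bounces": {"twice": 0.5, "away": 0.5},
}

def generate(start: str, max_tokens: int = 3) -> list[str]:
    """Greedy one-token-at-a-time decoding: each step commits to the
    locally most likely continuation. There is no persistent plan and
    no mechanism to look ahead or backtrack."""
    tokens = [start]
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1])
        if not dist:
            break  # no known continuation for this token
        tokens.append(max(dist, key=dist.get))
    return tokens

print(generate("the"))  # → ['the', 'ball', 'bounces', 'twice']
```

Chain-of-thought prompting wraps more steps around this loop, but the loop itself is unchanged: every token is chosen from local statistics, which is the architectural point LeCun is making.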
What World Models Actually Do
AMI Labs is building on LeCun’s Joint Embedding Predictive Architecture (JEPA), developed during his Meta years. The key insight is deceptively simple: instead of predicting exact outputs (the next word, the next pixel), JEPA predicts abstract representations of future states.
Think about how you catch a ball. You don’t simulate every photon of light or calculate exact trajectories. You have an abstract sense of where the ball will be, how fast it’s moving, when you need to move your hand. You model the world at a level of abstraction that’s useful for action.
JEPA works similarly. It learns to predict high-level representations of future states, ignoring irrelevant surface details. A JEPA model watching a video doesn’t try to predict the exact color of each future pixel—it predicts abstract features that capture the meaningful dynamics of the scene.
This approach has concrete advantages:
- Efficiency: Predicting abstract representations requires far less compute than predicting raw outputs
- Robustness: Models generalize better because they’re not overfitting to surface details
- Physical grounding: The architecture forces the model to learn causal relationships, not just correlations
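The efficiency and robustness points can be made concrete with a deliberately simplified sketch. Everything below is an illustrative assumption, not the AMI Labs or Meta implementation: the "encoder" is hand-built and already knows which two dimensions of the observation matter, whereas a real JEPA model has to learn that abstraction from data, with deep networks and a separate target encoder.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_frame(pos: float, vel: float) -> np.ndarray:
    """A 'frame' = 2 meaningful dims (a ball's position and velocity)
    plus 98 dims of unpredictable surface detail (texture, lighting)."""
    return np.concatenate([[pos, vel], rng.normal(size=98)])

def encode(frame: np.ndarray) -> np.ndarray:
    """Hand-built encoder that keeps only the task-relevant dims.
    A trained JEPA encoder would have to learn this abstraction."""
    return frame[:2]

def predict_latent(z: np.ndarray, dt: float = 1.0) -> np.ndarray:
    """Latent-space predictor: simple ballistic dynamics, pos += vel * dt."""
    pos, vel = z
    return np.array([pos + vel * dt, vel])

frame_t  = make_frame(pos=0.0, vel=1.0)
frame_t1 = make_frame(pos=1.0, vel=1.0)  # ball moved; surface detail re-sampled

# Generative target: predict all 100 raw values. The 98 nuisance dims
# are pure noise, so this error can never be driven down.
pixel_mse = np.mean((frame_t - frame_t1) ** 2)

# JEPA-style target: predict only the 2-dim abstract state. The dynamics
# of the dims that matter are captured exactly.
latent_mse = np.mean((predict_latent(encode(frame_t)) - encode(frame_t1)) ** 2)

print(f"pixel-space MSE:  {pixel_mse:.3f}")   # large, dominated by noise
print(f"latent-space MSE: {latent_mse:.3f}")  # 0: the meaningful dynamics
```

The contrast is the whole point: the raw-output objective spends capacity chasing irreducible noise, while the latent objective has 2 quantities to predict instead of 100 and ignores the detail that doesn't affect action.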
The Team Building It
LeCun isn’t running day-to-day operations. The leadership team includes:
- Laurent Solly (COO): Meta’s former VP for Europe, handling operations and business strategy
- Saining Xie (Chief Science Officer): Former Meta researcher who led work on vision transformers
- Pascale Fung (Chief Research and Innovation Officer): Hong Kong University of Science and Technology professor, expert in multilingual NLP
- Michael Rabbat (VP of World Models): Former Meta researcher specializing in distributed learning
The team is Paris-based, positioning AMI Labs in the European AI ecosystem. The investor roster—Bezos, NVIDIA, Samsung—signals that this isn’t a research project. It’s a commercial venture targeting applications where LLM limitations are most painful.
Where This Matters
AMI Labs is targeting three initial domains where world models would provide clear advantages:
Robotics: Robots operating in physical environments need to understand physics, not just describe it. A warehouse robot that can genuinely model how objects behave will outperform one running on LLM-based planning.
Healthcare: Medical applications require reasoning about biological processes and causal relationships. Hallucination isn’t an acceptable failure mode when lives are at stake.
Industrial automation: Manufacturing, logistics, and infrastructure all involve physical systems that follow predictable dynamics. World models could dramatically improve reliability.
These aren’t theoretical applications. The $1 billion raise suggests investors see a path to commercial deployment, not just research papers.
The Skeptic’s View
Not everyone agrees with LeCun’s thesis. The counterarguments are straightforward:
Scale works. GPT-5.4 and Claude Opus 4.6 are dramatically more capable than their predecessors. Whatever ceiling exists, we haven’t hit it yet. Maybe more parameters and better training data will solve the problems LeCun identifies.
Multimodal helps. Models that process images, video, and audio alongside text develop some understanding of physical dynamics. Vision-language models can reason about spatial relationships in ways pure text models can’t.
Embodied AI is happening anyway. Robotics companies aren’t waiting for world models. They’re deploying LLM-based systems that work well enough in constrained environments. Perfect is the enemy of good.
The timeline matters. Even if world models are ultimately superior, LLMs are generating revenue now. AMI Labs needs to ship something that works better than existing approaches before funding runs out.
What This Means for You
If you’re building with AI, the immediate impact is zero. LLMs work. Use them. The tools available today—Claude, GPT, open-weight models—solve real problems effectively.
The longer-term implications are more interesting:
If LeCun is right, we’re in a local maximum. Current AI capabilities will plateau, and systems built entirely on LLMs will hit walls in applications requiring genuine physical reasoning. Companies that diversified their AI architectures early will have an advantage.
If LeCun is wrong, AMI Labs burns through a billion dollars proving that LLMs were the right approach all along. The investors lose money; the rest of us continue benefiting from improved language models.
Either way, the bet is now placed. The largest seed round in European history backs a direct challenge to the industry’s dominant paradigm. In a few years, we’ll know if Yann LeCun saw something the rest of the industry missed—or if the Turing Award winner made a very expensive mistake.