HumAI

Yann LeCun leaves Meta, founds AMI Labs calling LLMs a dead end

What Happened

Turing Award winner Yann LeCun departed Meta in January 2026 to found AMI Labs in Paris, focused on 'world model' architectures as an alternative to text-based LLMs. LeCun has argued publicly that next-token prediction approaches are architecturally limited and cannot scale to general intelligence. He suggested Chinese AI companies exploring divergent architectures may have a structural advantage over Silicon Valley's current LLM-first approach.

Our Take

LeCun has been saying on Twitter for two years that LLMs are a dead end. The difference now? He's putting his name, and presumably a funded runway, behind a competing bet.

World models: the idea that real intelligence requires a persistent internal model of how the world works, not just next-token prediction. It's not a fringe position; it's arguably closer to how humans build intuition. Whether AMI Labs can execute on it from Paris, without Meta's roughly $40B/year compute budget, is a completely different question.

Honestly, I'm not dismissing this. LeCun built convolutional nets when nobody cared, and he's been right about ideas that seemed fringe before. The "Silicon Valley superiority complex" jab reads as bitter, but the underlying point, that scaling laws are plateauing, matches the diminishing returns we're actually seeing between GPT-4o iterations.

For a small team? Nothing changes this quarter; LLMs are still the tool. But if AMI Labs ships something real by 2027, every inference pipeline and RAG setup we're building today could become legacy faster than we expect.

What To Do

Bookmark AMI Labs' research output and set a Q1 2027 calendar reminder — if they publish a working world model demo, your current LLM architecture choices are worth a hard second look.
