Humans& thinks coordination is the next frontier for AI, and it's building a model to prove it
What Happened
Humans&, a new startup founded by alumni of Anthropic, Meta, OpenAI, xAI, and Google DeepMind, is building the next generation of foundation models for collaboration, not chat.
Our Take
Look, every founder from Anthropic now has a "we're solving the next problem after chat" pitch. Coordination is real: multi-agent systems are messy. But here's the thing: you can coordinate with existing models if you build the right abstractions. The question isn't "does a coordination-specific foundation model exist?" It's "is it $100M+ different from fine-tuning Claude or Grok?" We don't know yet.
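To make the "right abstractions" point concrete, here's a minimal sketch of coordinating multiple off-the-shelf models with nothing but plumbing: a shared blackboard plus round-robin turns. The `complete` function is a hypothetical stub standing in for any provider's chat API; the agent roles and round count are illustrative, not anyone's actual architecture.

```python
# Sketch: coordinating existing chat models via an abstraction layer,
# with no coordination-specific foundation model involved.
from dataclasses import dataclass, field

def complete(system: str, prompt: str) -> str:
    # Hypothetical stand-in for a real model call (OpenAI, Anthropic, etc.).
    return f"[{system}] response to: {prompt}"

@dataclass
class Agent:
    name: str
    role: str  # system prompt describing the agent's specialty

@dataclass
class Blackboard:
    notes: list = field(default_factory=list)

    def post(self, author: str, text: str) -> None:
        self.notes.append((author, text))

def coordinate(task: str, agents: list, board: Blackboard, rounds: int = 2):
    """Round-robin coordination: each agent sees the shared board and replies."""
    board.post("user", task)
    for _ in range(rounds):
        for agent in agents:
            context = "\n".join(f"{a}: {t}" for a, t in board.notes)
            board.post(agent.name, complete(agent.role, context))
    return board.notes

agents = [Agent("planner", "break tasks into steps"),
          Agent("critic", "find flaws in the plan")]
board = Blackboard()
notes = coordinate("ship a multi-agent demo", agents, board, rounds=1)
```

This is the baseline a coordination-specific model has to beat: if its wins can be replicated by orchestration code like this wrapped around existing APIs, the $100M+ question answers itself.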
The team's credible, with pulls from Anthropic, Meta, and OpenAI. That matters. But credible founders ship boring products too. Without seeing their actual thesis (distributed training? novel architecture? better emergent behavior?), this reads like "foundation model, but for teams" positioning. Which might be exactly what they're building, or might be smoke.
For us as builders: wait for the technical deep-dive. If it's just better multi-turn prompting, we can do that already. If it's fundamentally different reasoning for coordination problems, that changes things.
What To Do
Skip the pre-release hype and wait for their technical paper before deciding if you need their model for multi-agent products.