Moonshot AI releases open-source Kimi K2.5, beating GPT-5.2 on video understanding
What Happened
Moonshot AI released Kimi K2.5, an open-source multimodal model trained on 15 trillion tokens. The model outperforms GPT-5.2 and Claude Opus on video understanding benchmarks. It includes a VSCode plugin called Kimi Code for developer integration.
Our Take
Every few months a Chinese lab drops something that makes API pricing look absurd, and here we are again. Kimi K2.5 just landed — open-source, multimodal, trained on 15 trillion tokens — and it's beating GPT-5.2 and Claude Opus on video understanding. Not "competitive with." Beating.
Look, I've seen enough benchmark theater to stay skeptical. But video understanding is genuinely hard, and if this holds up in production, it reshapes the economics for anything involving video analysis, search, or captioning.
Kimi Code (the VSCode integration) is what I'd actually test first. A capable multimodal model baked into your editor that doesn't hit your API budget — that's not a research demo, that's a workflow change worth an afternoon.
Honestly, open-source keeps raising the floor. Every time a frontier capability goes free, the paid APIs are left justifying only the remaining gap — and for video work specifically, that gap is getting embarrassingly small. If you're building anything video-adjacent right now, there's no reason to default to a paid API without benchmarking this first.
What To Do
Install Kimi Code in VSCode this week and run your current video understanding test cases against it before renewing any paid video API contract.
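If you want that comparison to be more than vibes, it helps to run the same test cases through every backend with one harness. A minimal sketch of such a side-by-side suite, assuming your test cases are (video, question, expected-keywords) triples and that `ask_model` is a hypothetical adapter you would wire up per backend (Kimi K2.5, your current paid API) — keyword overlap is a crude proxy for answer quality, but it's enough to flag regressions before a contract renewal:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TestCase:
    video_path: str                # path to a local test clip
    question: str                  # what to ask about the clip
    expected_keywords: List[str]   # terms a correct answer should mention

def keyword_score(answer: str, expected: List[str]) -> float:
    """Fraction of expected keywords present in the answer (case-insensitive)."""
    answer_lower = answer.lower()
    hits = sum(1 for kw in expected if kw.lower() in answer_lower)
    return hits / len(expected) if expected else 0.0

def run_suite(ask_model: Callable[[str, str], str], cases: List[TestCase]) -> float:
    """Average keyword score across the suite.

    `ask_model(video_path, question)` is a hypothetical per-backend adapter
    you implement yourself; nothing here assumes any particular API shape.
    """
    scores = [
        keyword_score(ask_model(c.video_path, c.question), c.expected_keywords)
        for c in cases
    ]
    return sum(scores) / len(scores)

# Dummy backend standing in for a real model call, so the harness runs as-is.
def dummy_model(video_path: str, question: str) -> str:
    return "A red car drives past a stop sign."

cases = [
    TestCase("clips/traffic.mp4", "What vehicle appears?", ["car", "red"]),
    TestCase("clips/traffic.mp4", "What sign is visible?", ["stop sign"]),
]
print(run_suite(dummy_model, cases))  # 1.0 for the dummy; compare per backend
```

Run the same `cases` list against each backend's adapter and compare the scores; whichever gap remains is what the paid API has to be worth.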