HumAI

Chinese firms accused of distilling US AI model capabilities

What Happened

OpenAI, Google, and Anthropic have identified Chinese AI firms including DeepSeek, Moonshot, and MiniMax as using model distillation to extract capabilities from American frontier models. Anthropic has moved to block Chinese-controlled companies from accessing Claude. All three labs have formally characterized this practice as intellectual property theft, marking a significant escalation in the geopolitical AI divide.

Our Take

Look, anyone who's surprised by this hasn't been paying attention. Distillation as a knowledge transfer technique has been public research since at least 2015 — you train a smaller model to mimic a bigger one's outputs. The only difference now is that it's geopolitically inconvenient.
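For readers who haven't seen it, the 2015-era technique is simple enough to sketch in a few lines. This is a minimal illustration of the classic distillation objective (a student trained to match a teacher's temperature-softened output distribution), not anyone's production pipeline; all numbers here are made up.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T softens the distribution,
    exposing more of the teacher's 'dark knowledge' about near-misses."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions --
    the core training signal: push the student toward the teacher's outputs."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Training nudges the student until its outputs mimic the teacher's:
teacher = [3.0, 1.0, 0.2]
close_student = [2.9, 1.1, 0.3]   # already mimics the teacher well
far_student = [0.1, 2.5, 1.0]     # disagrees with the teacher
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

With API-only frontier models you don't even get logits; you fine-tune on sampled text outputs instead, which is a cruder version of the same idea.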

Here's what actually happened: DeepSeek, Moonshot, MiniMax — they fed frontier model outputs into their training pipelines. Anthropic noticed the fingerprints and blocked Chinese-controlled API access. That's the tell. When your model's failure modes look suspiciously like another model's failure modes, someone did some copying.

The part that matters for us builders: Anthropic's ToS always prohibited this, but enforcement is new. Which means if you're building on Claude or GPT-4 and your product's outputs are being used to train something downstream, you're technically exposed too. Most people don't think about this.

Honestly, this accelerates the split. US frontier models on one side, Chinese alternatives on the other. Pick your stack accordingly, because the API access you have today might not be available tomorrow depending on where your users are or who's funding your company.

What To Do

Audit your ToS compliance now — specifically whether any downstream system in your pipeline is logging Claude/GPT outputs for training purposes, because enforcement is clearly active.
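As a starting point for that audit, something as simple as walking your pipeline's stage configs and flagging any that both persist model outputs and earmark them for training can surface the obvious exposure. The config shape and field names below are hypothetical; adapt them to whatever your pipeline actually records.

```python
# Hypothetical stage configs -- field names are illustrative, not a real schema.
stages = [
    {"name": "claude-summarizer", "logs_outputs": True, "retention": "train"},
    {"name": "response-cache", "logs_outputs": True, "retention": "debug-only"},
    {"name": "frontend", "logs_outputs": False, "retention": None},
]

def flag_tos_risks(stages):
    """Return stages that both log model outputs and retain them for training,
    the combination most likely to violate frontier-lab terms of service."""
    return [s["name"] for s in stages
            if s["logs_outputs"] and s["retention"] == "train"]

assert flag_tos_risks(stages) == ["claude-summarizer"]
```

A scan like this won't catch a downstream partner quietly training on your outputs, so pair it with contractual language, but it covers the systems you control.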
