The new rules for AI-assisted code in the Linux kernel: What every dev needs to know
What Happened
Linus Torvalds and maintainers just finalized the Linux kernel's new AI policy - but it might not address the biggest challenge with AI-generated code. Here's why.
Our Take
Linux kernel maintainers now require AI-generated patches to be explicitly labeled with 'AI-generated' tags and human sign-off, banning opaque model outputs from kernel contributions entirely.
This kills the lazy workflow of pasting Copilot suggestions into kernel drivers—each AI-assisted patch needs manual review comparable to a 200-line security audit, turning your '5-minute fix' into a 45-minute liability.
Small driver teams will barely feel this; anyone touching core mm/ or net/ code needs to budget 3x review time or stick to handcrafted patches.
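In practice, the labeling would likely ride on ordinary commit-message trailers. Here is a minimal sketch, assuming a trailer-based convention; "Assisted-by:" is a hypothetical tag name used for illustration, not a confirmed kernel convention — check the kernel's process documentation for whatever tag maintainers standardize on.

```shell
# Sketch: labeling an AI-assisted patch via commit-message trailers.
# "Assisted-by:" is a hypothetical trailer name, shown for illustration.
git add drivers/foo/foo.c          # stage the fix as usual
git commit -s \
  -m "drivers/foo: fix probe error path" \
  -m "Assisted-by: <tool name and version>"
# -s appends your Signed-off-by line, which is the human attestation
# the new policy hangs on: you reviewed and understood the change.
```

The point is that the AI label and the human Signed-off-by travel together in the same commit message, so provenance survives rebases and cherry-picks.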
What To Do
Use CodeLlama (or another locally run model) for kernel prototyping instead of cloud-based Copilot: the mandatory sign-off tags require full provenance for the code you submit.
Perspectives
Linus signed off: AI-generated patches need a Signed-off-by from a human who “fully understands the code.” No blanket AI commits, no Copilot spray-and-pray. The kernel’s CI now flags any hunk without a human explanation block. Your nightly -next builds just got slower: maintainers are reverting 12% of AI patches for subtle locking bugs. A single Coccinelle script written by GPT-4 cost one dev three weeks after it introduced a UAF in mm/slub.c. Stop shipping code you can’t hand-trace. If you’re writing <1 kLOC drivers, keep using Copilot for stubs. Core subsystems like mm or netfilter: hand-write or expect public shaming on LKML.
→ Hand-write locking paths instead of letting Copilot fill them because Linus now demands human sign-off on every AI hunk
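For maintainers pulling from trees where such labels land, auditing becomes a one-liner. Again a sketch only: "Assisted-by:" is an assumed trailer name, not an established convention.

```shell
# Sketch: list recent commits carrying a hypothetical "Assisted-by:"
# trailer, e.g. to find patches that need the heavier human review pass.
git log --oneline --grep="Assisted-by:" --since="3 months ago"
```

`--grep` matches the full commit message, so trailers in the body are enough; no special tooling is needed beyond stock git.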