TechCrunch

Rogue agents and shadow AI: Why VCs are betting big on AI security

Read the full article, "Rogue agents and shadow AI: Why VCs are betting big on AI security," on TechCrunch.

What Happened

Misaligned agents are just one layer of the AI security challenge that startup Witness AI is trying to solve. It detects employee use of unapproved tools, blocks attacks, and ensures compliance.

Our Take

Real problem, theater solution. Every org has shadow IT. Detecting unapproved tools? Employees will just pivot to the next one. The real issue: LLMs are effectively employee devices now, but we're governing them like enterprise servers with policy documents.

Witness catches tool-switching (fine) but misses the actual risk. A junior engineer dumps your codebase into Claude? That's not a policy violation; it's someone solving their problem faster than asking permission would.

The only real fix is harder: define what information is actually safe to send to LLMs. Which systems, which documents, what sensitivity levels. Until then, you're buying compliance theater: it looks good in the security review but doesn't change behavior.
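What that "harder fix" could look like in practice: a sensitivity ceiling on LLM-bound data, enforced at the point of egress. The labels, ceiling, and default-deny rule below are hypothetical illustrations of the approach, not any vendor's actual policy model.

```python
# Minimal sketch of a sensitivity gate for LLM-bound content.
# Labels and the policy ceiling are hypothetical examples.
from dataclasses import dataclass

SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

# Hypothetical policy: highest sensitivity level allowed to leave
# the org for an external LLM.
MAX_LLM_LEVEL = SENSITIVITY["internal"]

@dataclass
class Document:
    name: str
    label: str  # one of the SENSITIVITY keys, or unlabeled

def allowed_for_llm(doc: Document) -> bool:
    """True if the document's label is at or below the policy ceiling."""
    level = SENSITIVITY.get(doc.label)
    if level is None:
        # Unlabeled data is the common failure mode: default to deny.
        return False
    return level <= MAX_LLM_LEVEL

print(allowed_for_llm(Document("handbook.md", "public")))        # True
print(allowed_for_llm(Document("prod_source.py", "restricted"))) # False
```

The point of the sketch is the default-deny branch: a policy that only covers labeled data silently passes everything the org never got around to classifying.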

What To Do

Audit what your team actually feeds to LLMs before buying detection tools.
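A first-pass audit can be as simple as tallying which LLM endpoints show up in existing web-proxy logs. The host list and the whitespace-delimited log format below are illustrative assumptions, not a real proxy schema.

```python
# Hedged sketch: count requests per known LLM host in a proxy log,
# as a first pass at seeing what the team actually uses.
from collections import Counter

# Illustrative host list; extend with whatever your proxy actually sees.
LLM_HOSTS = {"api.openai.com", "api.anthropic.com", "claude.ai", "gemini.google.com"}

def audit(log_lines):
    """Tally LLM-host hits. Assumes 'user host path' log lines."""
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in LLM_HOSTS:
            hits[parts[1]] += 1
    return hits

sample = [
    "alice claude.ai /chat",
    "bob api.openai.com /v1/chat/completions",
    "alice claude.ai /chat",
    "carol internal.corp /wiki",
]
print(audit(sample))  # claude.ai: 2, api.openai.com: 1
```

Even this crude tally answers the question that matters before a purchase: is the exposure one team's habit or everyone's workflow?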
