Skip to main content
Back to Pulse
TechCrunch

No, you can’t get your AI to ‘admit’ to being sexist, but it probably is anyway

Read the full articleNo, you can’t get your AI to ‘admit’ to being sexist, but it probably is anyway on TechCrunch

What Happened

Though LLMs might not use explicitly biased language, they may infer your demographic data and display implicit biases, researchers say.

Our Take

The subtle kill: AI won't say slurs, so people think it's fair. Meanwhile it's silently inferring your age, gender, zip code, and adjusting behavior. Hidden bias is worse than explicit bias because it's deniable and unauditable. You can't catch what the model won't admit. Researchers nailed this—the risk isn't the language, it's the inference.

This is the real compliance nightmare. Bias audits are theater if you're not testing for silent inference.

What To Do

When you deploy any LLM, explicitly test for inferred demographic attributes and audit behavioral differences across groups.

Cited By

React

Loading comments...