Gemini 3 refused to believe it was 2025, and hilarity ensued
What Happened
Famed AI researcher Andrej Karpathy got early access to Google’s latest AI model and stumbled onto its "model smell."
Our Take
This is comedy that masks a real problem. Karpathy catching Gemini refusing to believe it's 2025 isn't just a gotcha; it's evidence of a genuine epistemic gap. "Model smell" is the term for when something about a model feels off but you can't quite articulate why. Google may chalk this up to a jailbreak or prompt injection, but it more plausibly suggests the training is brittle on temporal reasoning: the model knows facts but lacks a real understanding of time. That's a bigger issue than a funny demo makes it sound.
What To Do
Test any Gemini model on time-sensitive reasoning before trusting it in production systems.
