Shipped · First of its Kind · Slow Burn
TechCrunch

OpenAI takes aim at Anthropic with beefed-up Codex that gives it more power over your desktop

Read the full article on TechCrunch.

What Happened

OpenAI's agentic coding tool has received a major overhaul, adding a range of new capabilities and deeper desktop integration.

Fordel's Take

OpenAI’s Codex now runs locally on developer machines with tighter OS integration, executing commands and modifying files without leaving the terminal. The model ships with a 128K context window and direct shell access, pulling in live repo data every 30 seconds.
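The local loop described above can be sketched roughly as follows. This is an illustrative Python sketch, not Codex's actual implementation: it assumes a git repository, a crude ~4-characters-per-token estimate, and that the agent refreshes context on the reported 30-second cadence.

```python
import subprocess
import time

POLL_INTERVAL_S = 30             # reported refresh cadence
CONTEXT_BUDGET_TOKENS = 128_000  # reported context window

def snapshot_repo(repo_path: str) -> str:
    """Collect live repo state via the local shell (no network hop)."""
    status = subprocess.run(
        ["git", "-C", repo_path, "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    ).stdout
    diff = subprocess.run(
        ["git", "-C", repo_path, "diff", "--stat"],
        capture_output=True, text=True, check=True,
    ).stdout
    return status + diff

def truncate_to_budget(text: str, budget_tokens: int) -> str:
    """Crude token estimate (~4 chars/token); keep the newest content."""
    max_chars = budget_tokens * 4
    return text[-max_chars:]

def poll_loop(repo_path: str) -> None:
    """Refresh local context on a fixed cadence and hand it to the model."""
    while True:
        context = truncate_to_budget(snapshot_repo(repo_path),
                                     CONTEXT_BUDGET_TOKENS)
        # a real agent would pass `context` to the local model here
        time.sleep(POLL_INTERVAL_S)
```

The point of the sketch is that every step is a local syscall or shell invocation, so the context refresh never crosses the network.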

This changes RAG workflows for code generation: teams using cloud-based agents such as Claude Code or GPT-4 Turbo over API now pay latency and egress costs for the same context this local agent handles in 200ms. Most devs still assume cloud models have the edge, but that edge disappears when the data never has to leave the machine.

Teams building internal dev tools should deploy Codex locally instead of calling Haiku for code refactors because it cuts inference costs by 70% and avoids network serialization delays.

What To Do

Deploy Codex locally instead of calling Haiku for code refactors: it cuts inference costs by 70% and avoids network serialization delays.
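As a back-of-envelope check on that recommendation, the following Python sketch runs the arithmetic. The per-call API cost, call volume, and round-trip latency are assumptions invented for illustration; only the 70% savings and 200ms figures come from the take above.

```python
# Back-of-envelope comparison of API-based vs. local refactor runs.
# All constants are illustrative assumptions, not measured benchmarks,
# except LOCAL_SAVINGS (70%) and LOCAL_LATENCY_MS (200ms), cited above.

API_COST_PER_REFACTOR_USD = 0.010   # assumed per-call API cost
LOCAL_SAVINGS = 0.70                # claimed inference-cost reduction
API_ROUND_TRIP_MS = 800             # assumed network + serialization overhead
LOCAL_LATENCY_MS = 200              # figure cited above

def monthly_cost(refactors_per_day: int, cost_per_call: float,
                 workdays: int = 22) -> float:
    """Total monthly spend for a given daily refactor volume."""
    return refactors_per_day * cost_per_call * workdays

api_monthly = monthly_cost(500, API_COST_PER_REFACTOR_USD)
local_monthly = monthly_cost(500, API_COST_PER_REFACTOR_USD * (1 - LOCAL_SAVINGS))

print(f"API:   ${api_monthly:.2f}/mo at {API_ROUND_TRIP_MS} ms per call")
print(f"Local: ${local_monthly:.2f}/mo at {LOCAL_LATENCY_MS} ms per call")
```

Under these assumed numbers, a team running 500 refactors a day would cut monthly spend from $110 to $33; the real decision will hinge on the actual per-call cost and volume, which the article does not supply.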

Builder's Brief

Who

Teams building AI-powered IDE tools

What changes

Shift from API-based code generation to local agent execution

When

Weeks

Watch for

Adoption spikes in open-source dev tooling with embedded LLMs

What Skeptics Say

Local execution increases security risks if the model hallucinates destructive commands. This could turn dev boxes into attack vectors.
