
Every AI Company Wants to Own Your Entire Developer Toolchain

OpenAI is buying linter companies. Google is building full-stack vibe coding. Cursor is hiding which models power your editor. The race to own your entire developer workflow is on, and the composable toolchain is the collateral damage.

Abhishek Sharma · Head of Engineering @ Fordel Studios

Something happened this week that most developers will scroll past. OpenAI announced it is acquiring Astral — the company behind Ruff, the Python linter, and uv, the Python package manager. On the same day, Google unveiled a full-stack vibe coding experience in AI Studio. JetBrains launched a platform for managing AI coding agents. And Cursor is dealing with backlash for shipping a model from Moonshot AI without proper attribution.

Each of these stories, individually, is just another day in AI news. But zoom out and the pattern is impossible to miss: every major AI company is racing to vertically integrate the entire developer experience. From editor to linter to package manager to deployment. They do not just want to help you write code. They want to own the pipeline that code flows through.

I have been building software for fourteen years. I have seen platform wars before. But this one is different, because the lock-in is not at the infrastructure layer where you can see it. It is at the toolchain layer, where it feels like convenience.

···

The Astral Acquisition: OpenAI Buys the Plumbing

Let me be clear about what Astral represents. Ruff is not some niche tool. It is the fastest Python linter in existence — written in Rust, replacing Flake8, isort, and a dozen other tools in a single binary. uv is doing the same thing for Python package management, replacing pip, pip-tools, and virtualenv. These tools are foundational infrastructure used by millions of developers.
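To make the consolidation concrete, here is roughly what the before-and-after looks like at the command line. This is a sketch, not a migration guide — exact flags and rule configuration depend on your project:

```shell
# Before: one independent tool per job
flake8 .                            # lint
isort .                             # sort imports
python -m venv .venv                # create a virtual environment
pip install -r requirements.txt     # install dependencies

# After: two Rust-based tools from a single vendor
ruff check .                        # lint (its "I" rules also cover import sorting)
uv venv                             # create a virtual environment
uv pip install -r requirements.txt  # install dependencies
```

That compression from many independent tools down to one vendor's binaries is exactly the convenience — and exactly the consolidation — at issue.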

OpenAI buying Astral is not about improving Codex. It is about controlling the feedback loop. When your linter, your package manager, and your code generation tool are all owned by the same company, the AI does not just write your code — it validates it, manages its dependencies, and ships it. That is end-to-end ownership of your development workflow, presented to you as a seamless experience.

And to be fair, it probably will feel seamless. That is exactly what makes it dangerous.

The pitch will be something like: Codex understands your linting rules natively because it owns Ruff. It manages packages better because it owns uv. Everything just works. And developers, who are perpetually exhausted by tooling friction, will accept this trade because the alternative is stitching together fifteen tools that do not talk to each other.

···

Google Goes Full Stack in AI Studio

Google is taking a different path to the same destination. The new AI Studio vibe coding experience is not just another code generation toy. It is a full-stack environment where you can go from prompt to deployed application without leaving Google’s ecosystem. Frontend, backend, database, hosting — all within their platform.

I have used these kinds of environments. They are impressive demos. You describe what you want, the AI scaffolds everything, you click deploy, and a working application appears. For prototypes and internal tools, this is genuinely useful. We have used similar workflows at Fordel for rapid client prototyping.

But here is what nobody talks about: the code these platforms generate is optimized for that platform. The database layer assumes their database. The hosting assumes their infrastructure. The auth assumes their identity provider. You are not building a portable application. You are building a Google application that happens to use your business logic.

This is the same playbook as Firebase, just with a generative AI frontend. And we all remember what happened when teams tried to migrate off Firebase once they hit scale. Except now, the lock-in runs deeper because the AI has made architectural decisions you never explicitly approved.

The most effective lock-in is the kind the developer never consciously chose. When AI makes your infrastructure decisions, the switching cost is invisible until you try to switch.

···

Cursor and the Attribution Problem

The Cursor situation is telling in a different way. Reports surfaced that Cursor shipped Moonshot AI’s Kimi K2.5 model without clearly attributing it. The community backlash was immediate. Developers felt misled about what was actually powering their editor.

This matters more than people realize. When you use an AI coding tool, you are trusting it with your proprietary code. You are sending source files, architecture patterns, business logic — sometimes entire codebases — to whatever model is running behind the scenes. Knowing which model that is, who operates it, where the data goes, and what the retention policies are is not a nice-to-have. It is a basic requirement for any professional engineering team.

Cursor is not the only tool playing fast and loose with model transparency. The AI coding tool market is full of products that abstract away the model layer because they do not want you thinking about it. They want you thinking about how productive you feel. The model is supposed to be an implementation detail.

But models are not implementation details. They have different training data cutoffs, different capability profiles, different safety characteristics, and critically, different data handling policies. A tool that quietly swaps your model is a tool that quietly changes the security posture of your development environment.
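One lightweight defense against silent model swaps is to pin the model identifier you vetted in your own config, and verify it against whatever the tool or API reports on each call. A minimal sketch — the model name and the shape of the check are illustrative assumptions, not any specific vendor's API:

```python
# Sketch: fail loudly if the model behind your tooling changes.
# The pinned identifier below is hypothetical.
PINNED_MODEL = "vendor/model-x-2025-01"

def assert_model_unchanged(reported_model: str) -> None:
    """Raise if the tool reports a different model than the one we reviewed.

    Most model APIs echo the serving model in their responses; check that
    field on every call rather than trusting the product UI.
    """
    if reported_model != PINNED_MODEL:
        raise RuntimeError(
            f"Model swap detected: pinned {PINNED_MODEL!r}, "
            f"tool reported {reported_model!r}. "
            "Re-review data handling policies before continuing."
        )

assert_model_unchanged("vendor/model-x-2025-01")  # passes silently
```

The point is not the five lines of code — it is that the check lives in *your* pipeline, where a vendor cannot quietly remove it.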

···

JetBrains Enters the Ring

JetBrains launching a platform for managing AI coding agents is the move I have been expecting. If anyone understands developer toolchain lock-in, it is the company that built IntelliJ, PyCharm, WebStorm, and a dozen other IDEs that developers cannot imagine working without.

JetBrains’ play is smarter than the others. They are not trying to own the models. They are trying to own the orchestration layer — the platform where you configure, deploy, and manage the AI agents that work with your code. This is the control plane play, and it is arguably more defensible than owning any individual tool.

Think about what this means: you write code in JetBrains’ IDE, the AI agents run on JetBrains’ platform, the results feed back into JetBrains’ editor. The models can be from anyone — OpenAI, Anthropic, Google — but JetBrains controls the workflow. They become the middleware of AI-assisted development.

This is the Kubernetes play for AI coding. Kubernetes won by being the orchestration layer that abstracted away infrastructure. JetBrains wants to be the orchestration layer that abstracts away AI models. Whether they pull it off depends on whether developers trust a single vendor to manage that layer.

···

The Real Risk: Death of the Composable Toolchain

Here is what actually worries me. For the last twenty years, the developer toolchain has been composable. You pick your editor. You pick your linter. You pick your package manager. You pick your CI system. You wire them together with config files and shell scripts, and the result is a workflow that is uniquely yours.

This composability is not just a convenience. It is a defense mechanism. When no single vendor owns your entire pipeline, no single vendor can hold you hostage. You can swap out any piece without rebuilding everything. Competition at each layer keeps quality high and prices low. The Unix philosophy — small tools that do one thing well and compose together — has served us incredibly well.

What we are watching right now is the dismantling of that philosophy in real time. Not through force, but through convenience. The vertically integrated AI coding experience is genuinely better in the short term. Everything talks to everything else. There are no configuration gaps. The AI understands your full context because it owns your full context.

We are trading twenty years of hard-won composability for the convenience of not writing config files. Future us is going to have opinions about this.

I see this playing out at Fordel with client projects. Teams that adopted fully integrated AI coding environments shipped faster initially. But six months in, when they needed to change something fundamental — swap a model, adjust the deployment target, add a compliance requirement — they hit walls that did not exist with composable tools. The very integration that made them fast made them rigid.

···

What I Am Actually Doing About This

I am not arguing that you should reject AI coding tools. That would be absurd. The productivity gains are real and substantial. What I am arguing is that you should be deliberate about which layers you allow a single vendor to own.

My Rules for AI Toolchain Decisions
  • Never let the same vendor own code generation AND code validation. Your linter should be independent of your AI.
  • Know your model. If a tool cannot tell you which model is processing your code, do not use it for client work.
  • Keep infrastructure decisions explicit. If an AI makes an architectural choice, you should be able to see it, understand it, and reverse it.
  • Test portability early. Before you go deep with any integrated platform, try exporting and running the code elsewhere. If that is painful, you have your answer.
  • Separate the orchestration layer. Use tools like Claude Code or Codex in your existing editor and toolchain rather than adopting their full-stack environments.

The best setup I have found is what I call a loosely coupled AI workflow. Use AI for code generation, but run it through your own linting and testing pipeline. Use AI for architecture suggestions, but make the actual infrastructure decisions yourself. Use AI for code review, but have a human review the AI’s review. Keep each layer independent so you can swap any piece.
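That loose coupling can be as simple as a gate function: whatever generates the code, the same vendor-independent checks run before anything ships. A minimal sketch using only the standard library — here a syntax check via `py_compile` stands in for your real validation pipeline, where you would invoke independent tools like a linter and test runner instead:

```python
import py_compile
import tempfile
from pathlib import Path

def passes_independent_checks(generated_code: str) -> bool:
    """Gate AI-generated code through validation the AI vendor does not own.

    A compile check is a stand-in here; in practice, shell out to your
    own lint and test tooling at this step.
    """
    with tempfile.TemporaryDirectory() as tmp:
        candidate = Path(tmp) / "candidate.py"
        candidate.write_text(generated_code)
        try:
            # Swap in your independent lint/test commands here.
            py_compile.compile(str(candidate), doraise=True)
            return True
        except py_compile.PyCompileError:
            return False

# The gate accepts or rejects regardless of which model produced the code:
print(passes_independent_checks("def add(a, b):\n    return a + b\n"))  # True
print(passes_independent_checks("def broken(:\n"))                      # False
```

Because the gate is yours, swapping the model on the other side of it changes nothing about what is allowed to merge.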

···

The Consolidation Is Coming Whether You Like It or Not

Let me be realistic. The consolidation of developer tooling under AI companies is inevitable. The economics are too compelling. A company that owns the entire pipeline has more data, more context, and more surface area to monetize. Developers will gravitate toward the path of least resistance. Most of these acquisitions and product launches will succeed.

What is not inevitable is the specific shape of this consolidation. If developers demand transparency about models, insist on data portability, and maintain independent validation layers, we can get the productivity benefits without the worst lock-in outcomes.

But that requires developers to care about this before it becomes a problem, not after. By the time your entire workflow depends on a single vendor’s AI platform, the negotiating leverage is gone. The time to set boundaries is now, while you still have choices.

The companies making these moves this week — OpenAI, Google, JetBrains, and yes, even the tools caught being less than transparent — are not doing anything evil. They are doing what companies do: building competitive advantages through vertical integration. The question is whether we, as an industry, will accept full vertical integration in our most critical workflow, or whether we will insist on keeping some layers independent.

I know which side I am on. But watching this week’s news, I am not confident the industry agrees with me.
