Engineering & AI · 17 min read

The Death of Implementation as a Moat: Why Your PM Just Outshipped Your 10x Developer

For thirty years, writing code was the hard part of software. Agentic tools killed that. Google reports over 30% of new code is AI-generated. Microsoft says 30–40% and climbing. GitHub Copilot generates 46% of code for its users. The bottleneck has shifted from implementation to decomposition — breaking products into versioned, atomic, shippable increments. The people best equipped for this are not your senior engineers. They are PM-engineers who think in major/minor/patch, not in functions and classes. The game changed. The skill mix that wins changed with it.

Author: Abhishek Sharma · Fordel Studios

A senior developer with twelve years of experience just got outshipped by someone who has not opened an IDE in four years. Not on a side project. Not on a hackathon toy. On a production SaaS feature with authentication, Stripe integration, a database migration, and a deployment pipeline. The product-manager-turned-builder did it in a weekend using Claude Code and Cursor. The engineering team had estimated the same scope at three sprints.

The developer is not bad at their job. They are exceptionally good at their job. They can hold an entire system in their head, reason about edge cases nobody else would catch, and write code that survives production for years without a bug. None of that mattered. The PM did not need to hold the system in their head. The PM needed to decompose the product requirement into twelve atomic, versioned increments and feed them to an agentic tool one at a time. The agent handled the rest.

This is not an anomaly. This is the new default.

  • 30%+ of new code at Google is AI-generated. Source: Sundar Pichai, Google Q1 2025 earnings call; up from 25% in October 2024.
  • 46% of code generated by GitHub Copilot for its users. Source: GitHub telemetry data; Java developers reach 61%. 20 million cumulative users by July 2025.
  • 84% of developers using or planning to use AI coding tools. Source: Stack Overflow Developer Survey 2025; 51% use AI tools daily, up from 76% in 2024.
···

Three Eras of What Mattered

Software engineering has always had a bottleneck. The bottleneck determined who won. It determined who got hired, who got promoted, who got funded, and who shipped products that survived contact with reality. The bottleneck just moved, and most of the industry has not noticed yet.

The Architect Era: 1970s–2000s

In the waterfall era, the person who could design the system upfront owned the outcome. Implementation was labour. Interchangeable, outsourceable, commoditised labour. The architect drew the boxes and arrows. Teams of programmers filled them in. If the architecture was sound, the product shipped. If it was not, no amount of clever code could save it. The scarce resource was design. The abundant resource was typing.

This era rewarded a specific archetype: the systems thinker who could hold an entire product in their head and produce a specification document thick enough to stop a door. The specification was the product. The code was just its physical manifestation.

The Fast Coder Era: 2000s–2023

Agile inverted the waterfall. Ship small. Ship fast. Learn. Iterate. The specification became a liability — too slow, too rigid, too disconnected from what users actually needed. The person who won was the person who could hold a system in their head and type fast enough to iterate before the market moved. The 10x developer myth was born in this era. Not because some developers were literally ten times faster at typing, but because iteration speed compounded. Ship on Monday, learn on Tuesday, ship again on Wednesday. A developer who could do that was worth ten who could not.

The scarce resource was implementation speed. The abundant resource was ideas. Every product manager had a backlog three years deep. What mattered was how fast the engineering team could burn through it.

The Decomposer Era: 2024–Present

Agentic tools collapsed implementation. Not partially. Not for simple tasks. For entire feature scopes. Satya Nadella says AI contributions at Microsoft are "30–40 percent and going up monotonically." Claude Code hit a billion dollars in annualised run rate within six months of launch. Cursor is used by over half the Fortune 500. The denominator changed. Implementation is no longer the bottleneck. The new bottleneck is decomposition — the ability to break a product into versioned, atomic, shippable increments that an agent can execute.

The person who wins in this era is not the fastest coder. It is the person who can take a vague product requirement and decompose it into twelve precise, independently shippable units of work, each with clear acceptance criteria, each versioned against what already exists, each scoped tightly enough that an agentic tool can execute it without hallucinating its way into a mess.

Era | Scarce Resource | Who Won | Bottleneck
Architect (1970s–2000s) | System design | The architect who could specify the entire system upfront | Getting the design right before writing a line of code
Fast Coder (2000s–2023) | Implementation speed | The 10x developer who could iterate faster than competitors | Burning through the backlog before the market moved
Decomposer (2024–) | Decomposition discipline | The PM-engineer who versions work into atomic increments | Breaking ambiguous requirements into agent-executable units

Why Decomposition Is the New 10x

The Atomic Task Problem

Agentic tools are spectacular at well-scoped tasks and catastrophic at ambiguous ones. "Build me a dashboard" produces something that looks like a dashboard and works like a dumpster fire. "Create a read-only API endpoint that returns paginated invoices filtered by date range, version 1.0.0, using the existing Invoice model and Prisma client" produces production-grade code on the first attempt. The difference is not prompt engineering. It is decomposition.

The person who can write the second prompt is not a prompt engineer. They are someone who understands software versioning, API contract design, data model constraints, and incremental delivery. They know that this endpoint is a v1.0.0 because it is a new capability with no prior contract to break. They know that adding a sort parameter later is a v1.1.0 because it extends the contract without breaking consumers. They know that changing the response shape is a v2.0.0 because every downstream client will need to update. This is not abstract knowledge. It is the operating system for agentic development.

Major, Minor, Patch: The PM Hygiene That Agents Need

Version discipline is not agile ceremony. It is not a process artifact that exists to satisfy a Jira board. It is the fundamental grammar of incremental software delivery. Major version: breaking change, new contract. Minor version: new capability, backward compatible. Patch: fix, no behaviour change. Every piece of software that has ever shipped successfully was implicitly versioned this way, whether the team used semver or not.
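That grammar is mechanical enough to write down. A minimal sketch, assuming a three-way classification of changes; the category names and the `bump` helper are illustrative, not a standard library API:

```python
# Sketch of the major/minor/patch grammar. The change categories
# ('breaking', 'feature', 'fix') are assumptions for illustration.

def bump(version: str, change: str) -> str:
    """Apply a semver bump for a classified change."""
    major, minor, patch = (int(p) for p in version.split("."))
    if change == "breaking":
        return f"{major + 1}.0.0"              # major: breaking change, new contract
    if change == "feature":
        return f"{major}.{minor + 1}.0"        # minor: new capability, compatible
    if change == "fix":
        return f"{major}.{minor}.{patch + 1}"  # patch: fix, no behaviour change
    raise ValueError(f"unknown change type: {change}")
```

So `bump("1.4.2", "feature")` yields `"1.5.0"`: the new capability resets patch but leaves the major contract untouched.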

Product managers who have lived through the full software development lifecycle — who have managed releases, triaged regressions, negotiated breaking changes with customers, and watched a minor version bump cascade into a production incident — understand this grammar natively. It is how they think. When they decompose a product requirement, they do not think "what functions do I need to write." They think "what is the smallest shippable increment that moves the product forward without breaking what exists." That mental model is exactly what agentic tools need to receive.

Agile called this "user stories" and "story points" and "sprint commitments." Underneath the ceremony, the core skill was always the same: can you break a large, ambiguous objective into small, versioned, independently valuable increments? The teams that could do this well shipped. The teams that could not had two-week sprints that consistently delivered nothing.

The best prompt for an agentic tool is not a prompt at all. It is a well-scoped version increment with clear acceptance criteria and explicit constraints on what not to touch.
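One way to make that concrete is to treat each increment as a small structured record rather than free-form prose. A hedged sketch, where the `Increment` fields and the `is_agent_ready` check are assumptions for illustration, not a schema any particular agentic tool requires:

```python
from dataclasses import dataclass, field

# Illustrative "prompt as version increment". Field names are assumptions.

@dataclass
class Increment:
    version: str                                    # e.g. "1.1.0"
    goal: str                                       # one sentence, one capability
    acceptance: list[str] = field(default_factory=list)
    do_not_touch: list[str] = field(default_factory=list)

    def is_agent_ready(self) -> bool:
        # Delegable only when success criteria and boundaries are explicit.
        return bool(self.goal) and bool(self.acceptance) and bool(self.do_not_touch)

task = Increment(
    version="1.1.0",
    goal="Add optional date-range filtering to GET /api/invoices",
    acceptance=["startDate/endDate validated as ISO 8601",
                "defaults to last 30 days",
                "integration test covers both params"],
    do_not_touch=["Invoice model", "auth middleware"],
)
```

The point of the check is the discipline it encodes: an increment with no acceptance criteria or no stated boundaries is not a task, it is a wish.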

Specification Precision Follows Decomposition

Once you can break work into atomic versioned units, writing specs that agents can execute becomes trivial. The spec is the decomposition. It already contains the scope, the constraints, the acceptance criteria, and the relationship to what came before. Pure product managers write wishlists — "users should be able to filter invoices by date." Pure developers write implementation notes — "add a where clause to the Prisma query with gte and lte date params." The PM-engineer writes a buildable increment — "v1.1.0: add optional startDate and endDate query parameters to GET /api/invoices, validate ISO 8601 format, default to last 30 days, update OpenAPI spec, add integration test."

The AlphaCodium study demonstrated this empirically. Wrapping GPT-4 in structured decomposition steps — iterative reflection, test generation, incremental refinement — improved benchmark accuracy from 19% to 44%. The model did not get smarter. The decomposition got better. The constraint was never the AI. It was the quality of the task breakdown.

Release Judgment: The Other Half

Agents Ship Fast. Someone Needs to Say Stop.

Faros AI studied 1,255 engineering teams and over 10,000 developers. Teams with high AI adoption completed 21% more tasks and merged 98% more pull requests. But PR review time increased 91%. The bottleneck did not disappear. It moved downstream. From writing to reviewing. From implementation to judgment.

Agentic tools can produce twenty pull requests in an afternoon. Most of them will be technically correct. Some of them will be architecturally wrong. A few will introduce subtle regressions that no test catches because the test was also AI-generated and shares the same blind spots as the implementation. The scarce resource is no longer "who can write this code." It is "who can look at this code and decide whether it should ship."

That decision requires two things that rarely coexist in a single person: the technical depth to evaluate whether the implementation is sound, and the product sense to evaluate whether the implementation is valuable. A senior developer can tell you if the code will break. A product manager can tell you if it should exist. The PM-engineer can tell you both.

The Version Gate

Every output from an agentic tool is a candidate release. The PM-engineer evaluates it against three questions. Does this break existing contracts? That is a major version — proceed with caution, coordinate with consumers, plan migration. Does this add capability without breaking existing behaviour? That is a minor version — ship it, document it, monitor it. Does this fix something without changing behaviour? That is a patch — ship it now.
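The three questions reduce to a small decision table. A sketch, with the flags and the action strings named for illustration only:

```python
# Illustrative version gate: classify a candidate release from two answers.
# Flag names and actions are assumptions, not a formal process artifact.

def gate(breaks_contract: bool, adds_capability: bool) -> tuple[str, str]:
    """Return (bump, action) for a candidate release."""
    if breaks_contract:
        return "major", "coordinate with consumers; plan migration"
    if adds_capability:
        return "minor", "ship it, document it, monitor it"
    return "patch", "ship it now"
```

The hard part is not the table; it is answering the first question honestly, which requires knowing the full set of consumers of the current contract.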

Pure developers often lack this product-level framing. They evaluate code on technical merit: is it clean, is it performant, does it follow the pattern. They do not instinctively ask "will this confuse the customer who relies on the current behaviour" or "does this feature belong in this release or the next one." Pure product managers cannot evaluate the technical implications: they see a feature working in a demo and assume it is ready to ship, missing the database migration that will lock the table for six minutes during deployment.

The PM-engineer sees both. They see the working demo and the locked table. They see the new feature and the breaking change it implies for the mobile client that has not been updated yet. They make the version call not from process, but from understanding the full surface area of a release.

The 10x developer used to be the person who could type the fastest. The 10x developer is now the person who can say "do not ship that" the fastest — and be right.

The Profile That Wins

The agentic era does not reward specialists. It rewards a specific hybrid: someone with enough engineering depth to evaluate generated code, enough decomposition discipline to feed agents well-scoped work, enough product sense to know what to build and what to defer, and enough release judgment to know when something is shippable versus when it is technical debt wearing a feature costume.

The Skill Stack That Makes Someone Dangerous

01
Engineering depth: can read and evaluate generated code

Not write it from scratch — evaluate it. Spot the N+1 query the agent introduced. Recognise when the auth middleware is bypassed. Understand why the agent chose a recursive solution and whether it will blow the stack at scale. This is not the same skill as writing production code for eight years. It is the skill that remains after you stop writing production code and start reviewing it. PM-engineers who spent years building software and then moved to product retain this pattern recognition even when their implementation muscles have atrophied.
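The N+1 query is a good example of the pattern recognition this skill covers. A sketch of the shape a reviewer would flag, next to its batched fix; `fetch_invoices` and the other callables are hypothetical data-access functions, not a real ORM API:

```python
# Left: one query for the invoices, then one more per invoice — the N+1
# an agent happily generates. Right: two queries total, whatever N is.

def total_by_customer_n_plus_1(fetch_invoices, fetch_customer):
    totals = {}
    for inv in fetch_invoices():                        # 1 query...
        cust = fetch_customer(inv["customer_id"])       # ...plus N more
        totals[cust["name"]] = totals.get(cust["name"], 0) + inv["amount"]
    return totals

def total_by_customer_batched(fetch_invoices, fetch_customers_by_ids):
    invoices = list(fetch_invoices())                   # 1 query
    ids = {inv["customer_id"] for inv in invoices}
    names = {c["id"]: c["name"]
             for c in fetch_customers_by_ids(ids)}      # 1 query, batched
    totals = {}
    for inv in invoices:
        name = names[inv["customer_id"]]
        totals[name] = totals.get(name, 0) + inv["amount"]
    return totals
```

Both versions return identical results on identical data, which is exactly why no test catches the difference; only a reviewer who counts queries does.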

02
Decomposition discipline: versions work into atomic increments

Major, minor, patch. Not as labels in a ticketing system — as a way of thinking about software change. Every feature is a set of increments. Every increment has a version number, even if only in your head. Every version has a blast radius. The PM-engineer who has managed releases, handled rollbacks, and explained breaking changes to angry customers does this instinctively. It is muscle memory from years of watching decomposition failures cascade into production incidents.

03
Product sense: knows what to build and what to defer

Agentic tools do not have opinions about what to build. A16z noted that AI models generating product ideas produce work that is "bland, derivative, and generally lacks the spark you see from really good new product thinking." The tool can execute. It cannot prioritise. It cannot look at a roadmap and say "this feature will delight three customers and confuse three thousand." That judgment is product management. And it is the one skill that agents cannot automate.

04
Release judgment: knows when something is shippable versus debt

The 91% increase in PR review time that Faros AI documented is the cost of not having this skill. Teams without release judgment merge everything the agent produces. Teams with release judgment gate every output against: does this move the product forward, and is the technical cost acceptable? The PM-engineer has been making this call at sprint reviews for years. The only difference is that the volume is higher and the judgment needs to be faster.

The Uncomfortable Evidence

The data tells a story that most engineering organisations are not ready to hear.

The Productivity Paradox

Goldman Sachs found that AI-driven productivity gains average 30% — but only in software coding and customer service. Economy-wide, there is no meaningful productivity impact yet. McKinsey surveyed 300 publicly traded companies and found that top-performing AI-driven software organisations saw 16–30% improvements in team productivity and time-to-market, with 31–45% improvements in software quality. JPMorgan Chase reported a 10–20% productivity boost for tens of thousands of engineers using coding assistants.

But here is the part nobody highlights: these gains are unevenly distributed. The Stack Overflow 2025 survey found that only 29% of developers trust AI-generated code. 46% actively distrust it. The top frustration for 66% of developers is "AI solutions that are almost right, but not quite." Positive sentiment about AI tools dropped from over 70% to 60% in a single year. The tools are getting better. The developers are getting more sceptical. Why?

Because implementation-first developers are using agentic tools as faster typewriters. They sit in the IDE, write partial code, and let the agent autocomplete. That is using a delegation engine as a text predictor. The productivity gains Goldman Sachs and McKinsey measured are coming from a different cohort — people who are decomposing work, delegating entire units to agents, and spending their time on review and specification instead of syntax.

The Communications of the ACM Already Said It

The Communications of the ACM — the most established academic publication in computer science — published a piece titled "The Vibe Coding Imperative for Product Managers." Not for developers. For product managers. The academic establishment has already recognised that the natural users of agentic coding tools are not the people who spent their careers writing code. They are the people who spent their careers specifying what should be built and in what order.

Andrej Karpathy coined the term "vibe coding" in February 2025, describing full delegation of implementation to LLMs. He has since updated his framing to "agentic engineering" — orchestrating agents rather than writing code. The terminology is converging on the thesis: the skill is orchestration, not implementation. And orchestration is decomposition by another name.

Skill | Implementation-First Developer | PM-Engineer
Using agentic tools | Autocomplete on steroids. Writes code, accepts suggestions, debugs inline. | Delegation engine. Decomposes into atomic tasks, delegates entire units, reviews output.
Scoping work | Estimates in hours of coding time. Thinks in technical complexity. | Estimates in version increments. Thinks in product impact and contract changes.
Evaluating output | Is the code clean? Does it follow the pattern? Will it pass the linter? | Does this ship? Does it break existing behaviour? Is it the right thing to build right now?
Handling ambiguity | Asks for more technical requirements. Waits for a spec. | Decomposes the ambiguity into testable hypotheses. Ships the smallest increment that answers the question.
When the agent fails | Debugs the generated code. Fixes the implementation. | Questions the decomposition. Rescopes the task. Tries a different breakdown.

What This Means for Organisations

Product manager hiring is up 11% since the start of 2025, the highest in over two years according to Lenny Rachitsky’s analysis of 6,000+ open roles. But the roles are different now. The hiring focus has shifted to PMs with deep technical fluency and AI awareness. Companies are not hiring PMs who write PRDs and throw them over the wall to engineering. They are hiring PMs who can build.

What Changes When Decomposition Is the Bottleneck
  • Engineering interviews should test decomposition and specification, not just algorithm implementation — can the candidate break an ambiguous requirement into versioned increments?
  • PM interviews should test technical evaluation, not just prioritisation — can the candidate look at generated code and identify what should not ship?
  • Team composition shifts from many implementers and few specifiers to many specifiers and fewer senior reviewers
  • Code review becomes the highest-leverage activity in the organisation — the 91% increase in review time is a feature, not a bug, if the reviewers have the right skill mix
  • Version discipline becomes a team-level competency, not a DevOps concern — every person on the team needs to think in major/minor/patch
  • The PM-engineer hybrid is no longer an unusual career path — it is the optimal one for agentic product teams

This does not mean firing your senior engineers. Your senior engineers are the people who can review AI-generated code and catch the subtle failures that agents introduce — the race condition, the security bypass, the data model that works today and collapses at ten times the scale. That skill is irreplaceable. What changes is what they spend their time on. Less writing code. More reviewing it. Less implementing features. More evaluating whether the decomposition was correct and the output is shippable.

The uncomfortable corollary: a mid-level developer who cannot evaluate AI-generated code and cannot decompose requirements is in a worse position than a PM who can do both. The mid-level developer’s primary value — implementation speed — has been commoditised. The PM-engineer’s primary value — decomposition, specification, release judgment — has become the bottleneck.

Your 10x developer just got outshipped by someone who has not written a for-loop in four years. Not because the PM is smarter. Because the game changed and the PM’s skill set is now the rate-limiting factor, not the developer’s.
···

Where Fordel Builds

We have seen this play out across client engagements in SaaS, fintech, healthcare, and enterprise. The teams that decompose well ship three to five times faster with agentic tools than the teams that hand ambiguous requirements to developers and hope the AI figures it out. We help organisations restructure their development workflow around decomposition discipline — training teams to think in versioned increments, establishing specification standards that agents can execute, and building review processes that catch what agents miss.

If your team is using agentic tools and not seeing the productivity gains the industry is promising, the problem is probably not the tools. It is the decomposition. That is the gap we close. Reach out.
