Everybody Is a 10x Developer Now

Draft · 3 minute read

We all know the “10x developer”: a person who walks into the office after lunch and codes until they get kicked out by the night guard. In the morning their colleagues wake up to a ton of pull requests that require “immediate attention”.

They write verbose code of poor quality. They consider rules to be roadblocks: coding conventions, DRY, separation of concerns… However, they deliver features, fast.

Leadership praises them because they do deliver. They are branded a “10x developer” because they seemingly deliver more value to the company than 10 of their colleagues.

But do they?

These individuals will pump out features at breakneck pace until everything grinds to a halt because of the countless refactorings the project has to go through. Naturally they do not refactor the code themselves, or they do it with disregard for other people’s work.

Eventually the 10x developer leaves the project, frustrated by the slowing pace. The team takes the blame for the failure, and life goes on.

In short, being a 10x developer means borrowing time from the future by eschewing good practices.

And now we have AI1

10x developers have been rare. After all, they needed to be proficient coders and have a strong work morale (if not ethic) to remain focused. If not for their ego and impatience, most would be excellent developers.

But with agentic AI anybody can be a 10x developer. That is: anybody can spew out hundreds of lines of code that “do stuff” and follow no rules.

I know this is a hot topic but LLMs in agentic mode absolutely can produce complex functional code all the way up to fully functional programs2.

Agents make it stupidly easy to create interactive prototypes from nothing.

Herein lies the problem: the quality of the code they produce follows the quality of the prompt. The LLM does not know about your good practices; unless your documentation is in context and extremely explicit, it will fall back on the general good practices of React developers even if you are writing a fuel pump controller in Ada.

You will end up with a horrible mess that nevertheless works (and is of course full of bugs). But debugging code is hard, and somebody using an LLM to just zero-shot things will have neither the skill nor the patience to make it clean. They will send it to you for review.

If you actually take the time to comment on all the problems, they will most probably just feed your comments back to the machine, inevitably losing context. Worst of all, they will not learn anything, so you will remain in the role of slop reviewer forever.


  1. I don’t particularly care about the differences between machine learning, transformers, LLMs and so on; I use the term AI for anything that one would think of as such. ↩︎

  2. I am not talking about copy and pasting code from ChatGPT.

    A modern agentic IDE is capable of:

    • Planning step-by-step feature development with a test plan for each step.
      • With multiple rounds of plan review.
    • Gathering additional information from the web, documentation and any given resources.
    • Running programs, opening webpages, interacting with both, recording the interactions in screenshots and video and using these artifacts as additional input.

    All this to say that you can give your IDE a screenshot of a bug in your web application with the instruction “fix it”, and it will do it. ↩︎