Ever since the first articles appeared about coding agents, AI and ChatGPT, I’ve been thinking about how I feel towards it all. I’ve read various takes, and over the last few months I paid for a few AI subscriptions. I’ve tried Claude Code (Opus 4.5), ChatGPT Codex (5.3), Devstral 2, Kimi K2.5 and a variety of others for smaller tasks. This article breaks down the benefits and negatives I found, in a short-burst format. I leave longer musings to others; this is more to help you figure out where you are on the spectrum that ranges from AI maximalist to, apparently, soon-to-be-unemployed worthless human. Is my ability to write code really the bottleneck? Is that the bottleneck for any organisation of non-trivial size? Is one trillion dollars what we need to write chatbots these days? Can you vibecode an app filled with security issues and make your way into a company? Maybe.
If you are really struggling with time and TYPING is the bottleneck to writing code, then these agents are really good. Due to various circumstances I have little time for personal projects, but the ideas are there. So I was able to fire up a Flavor-of-the-Month-CLI-Agent-That-Will-Soon-Be-Replaced-By-Something-Else-Just-As-Energy-Inefficient and get mostly working code. I won’t go into the details: how it gets you within 20-30% of your end goal, the obvious errors, the bloated implementations, the ever-changing patterns used for the same problem despite being given samples and instructions to follow. If you were working with someone at the beginning of their career, you’d expect mostly the same, and since it takes 1-2 minutes to write the prompt, the output is not too bad. Fix the glaring issues, move on with your day.
Working on your own can be isolating, and sometimes you end up doubting yourself - did I implement this well? Is there a better way I just didn’t find? Having one of these prediction models on hand is really good for getting quick feedback, clarifying some patterns and having samples generated. Not always helpful, but good enough to get a different perspective on things - “Oh, I didn’t think of that, that’s great!”. It can be very useful when you don’t have anyone to kick ideas around with. Again, this solves a very specific problem that individuals face; in a team you’d ideally have other people to bounce these ideas off. You could also use a decent search engine (Hello Kagi!) and get the same sort of information.
And that’s about it. I seriously dislike code generators in general - just try the OpenAPI generator, or whatever Java API generator, and watch your codebase grow by hundreds of thousands of lines when all you really want is one API client for one endpoint. But I digress.
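To make the comparison concrete, here is roughly what that one client looks like when you just write it by hand - a single request builder for a single endpoint, instead of a generated package tree. The URL and resource name below are made up for illustration:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// The one API client a generator would bury under thousands of lines:
// a single builder for a single (hypothetical) endpoint.
public final class UserClient {
    private static final String BASE = "https://api.example.com";

    // Builds a GET request for one user; sending it is left to the
    // caller's java.net.http.HttpClient.
    public static HttpRequest getUser(String id) {
        return HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/users/" + id))
                .header("Accept", "application/json")
                .GET()
                .build();
    }
}
```

Fifteen lines, no build-time plugin, no hundred generated DTOs you will never touch.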
LLMs are built on theft, no matter how you turn it. Intellectual theft, artistic theft, logical theft, whatever you want to call it. If I buy a book and decide to make a movie based on it, even if I burn the book after I decide to make the movie, I can’t really say that the movie is mine. So why can these thieves do the same? They mined open source projects to train their models, and then they spam the shit out of those same projects with crappy PRs, yet I have yet to see them give anything back.
The two major companies at the forefront of intellectual theft want you to believe that their mission statements and charters are here to “guide us in acting in the best interests of humanity throughout its development” or that they have “human welfare” at the heart of everything they do. I didn’t know that AI harassing human open source maintainers or AI generating graphic sexual content or AI generating fake news related to murders is at the heart of human welfare and in the best interest of humanity. I’d like them to just admit it for ONCE: we just want money and we don’t care how we get it. Porn. Lies. Disinformation. We just want the money.
I’m taking a break here to highlight that if your goal really is to - I think I’ll throw up violently - make the world a better place, then you could use those billions to push for humanitarian aid in Palestine and end the vicious violence there. You could push for aid in Ukraine, you could advocate to have ICE scaled back or defunded, you could help the millions affected by water shortages and drought. There are REAL ways to improve humanity NOW. You just want money. Just say that.
Was typing speed really the bottleneck? What problem exactly are you solving? This is the classic “we don’t have a problem, so we’ll invent one and then sell the ‘solution’”. But there isn’t a problem. There is no problem you need to solve.
Saying that writing code is the bottleneck is like saying that the ability to write is what stops me from writing the great American novel. Writing the code is almost NEVER the problem in any organisation or software project of moderate size. I have spent days on a task that needed one line changed, just to have it go through all the various business levels, have its testing strategy defined, account for customer impact, etc. Writing the code wasn’t the problem. Everything else was the problem. In all the jobs I ever had, the code writing was never the problem. Getting multiple teams to agree on the way forward is where the problem is. Getting into the head of the business is where the problem is. Hidden knowledge is where the problem is - how can Claude or Codex or Gemini possibly help me when I look at a Java 8 project that has a line saying int a = 8; // DO NOT DELETE THIS IS FOR THIS EDGE CASE FOR ACCOUNTS SETUP BEFORE 1999 FOR A PRODUCT THAT DOESNT EXIST ANYMORE!. The problem isn’t writing.
Businesses and brocoders alike seem to believe that if you keep churning out thousands of lines of code you’ll fix all the bloated garbage that passes for software these days. They seem to think the LACK of lines is what causes these issues. It’s not.
It’s the feature chase. It’s adding the 17th button behind the hamburger menu for a new feature in your run-of-the-mill data-mining “health” app. It’s not spending ANY time refactoring. It’s the business not knowing what the customer wants. It’s the business not knowing what it wants. It’s the business stopping all work because someone wasn’t included in a call and now you need to wait another week for that person to be back from holiday so that they can also be informed. You’re claiming to sell a panacea for all these issues, but you’re really selling shake weights.
Do we really need an increase of 15%, every year, in electricity usage driven by new AI datacenters, just so that a mindless drone in MegaCorp 1 doesn’t even need to strain a brain cell writing an email? Did we really need to burn through coal and gas so that someone auto-generates a Word document that they attach to an email that gets sent to another person that uses another agent to summarize the document and replies with an emoji before putting the paperweight back on the keyboard to appear online in Teams? Is this really the best use of some of the brightest minds of our generation or our dwindling natural resources? Really?
Did anyone really want to abuse the accumulated knowledge of millions over the course of hundreds of years so that workers become “orchestrators” and end up being the ones moving the needles between flashing lights? I never knew that I studied to be an orchestrator, that I dreamed of outsourcing my thinking and my logic. That means abdicating my ability to reason about a problem and find the simplest and fastest way to solve it. I will no longer improve myself in the process or create real value for myself or the organisation that benefits from my abilities. Is this mentally abject future something to strive for?
Did we collectively want something to cause mass unemployment? It isn’t even unemployment due to the newly improved processes, it’s unemployment because you got your AI bill, decided to fire 50-100 people from those annoying departments that concern themselves with the humanities (copywriting, design, UX, customer service - these trivial, humanity-agnostic fields) and because of that we get slop poured down our throats by the bucket. I didn’t know I wanted this. I got this though. We all did.
Previously, unemployment due to technological advances created real forward momentum. A train is objectively faster, better and more reliable than a horse and carriage. That is unequivocally an improvement for the benefit of the many. How is generating regurgitated garbage an improvement? This is one of the first times I can think of - but I can’t think that well because AI made my brain rot - where the technological advancement is truly a step backwards. It’s like moving from stone tools to bronze tools and then going back to mass-produced, shittily assembled stone tools with the promise that the sheer number of tools will easily overcome bronze. It won’t.
I also didn’t know that there are trillions of dollars just lying around, up for grabs. I didn’t know that the roughly 5 million developers/engineers will generate the billions that Anthropic/OpenAI want and need to keep themselves afloat. Oh wait, they won’t. So what will you do for money? Oh, you’ll mine the shit out of health data, or you’ll partner with a government and engage in extraordinary rendition. This is really great for humanity, thank you for your valuable contribution.
I was completely unaware that a prediction model based on text is the way to AGI. I feel it’s like saying that mimicking how to drive a car makes you a mechanical engineer. Sure. I’m sure they are related.
If you are running a company and for some reason you are reading this post, here is some free advice to get those productivity and profit gains - review your processes. It’s always the process. Get rid of the blockers inside your organisation. Find out what your customer wants. Find out what YOU want. Get rid of those AWS/Azure/GCP subscriptions and run some minimal hardware on-prem. Create a culture of quality and responsibility. Be efficient with your resources and your time. Or, you know, fire 10,000 people, slap a company-wide Claude subscription on top, then look at your 2-million-dollar yearly bill, your tanking productivity and the inevitable losses that come once the initial savings from cut salaries wear off. Let me know how it works out. I’ll be in the queue for rat burgers as we fight over running water.
Between human rights abuses, intellectual property theft, and the impossibility of being profitable without generating mass unemployment or mass FOMO amongst companies, AI in its current incarnation offers little to no value to humanity. It actively takes from all of us. You want a new iPhone? Sorry, memory shortage, you can’t get one, or it’s $200 more expensive. You want to move to a nice and quiet neighbourhood? Sorry, we built a new datacenter sucking up the groundwater, generating 100+ dB of noise and burning through coal and gas from nearby power plants, so it won’t be quiet. You live in an area with no power plants? Say no more, we can make sure a few just roll up your driveway. Do you want to write for a living? Nope. Would you like to avoid hardcore pornography while browsing a public social media website? Sorry, not possible. Do you want to do your job well and get enjoyment from it? Sorry, you’re orchestrating an agent swarm that needs a small nuclear power plant to add 29388 documents to your company and 39832747283473258 lines of code no one ever read.
I’m sure this is my fault, as I surely didn’t set up skills correctly. My AGENTS.md file doesn’t have enough rules. I didn’t add tests to my project. Of course I won’t be getting the benefits of AI. How could I? I’m an idiot. If I am so worried about the environment and resource usage I should, as a wise HackerNews reader pointed out, just kill myself.
I wonder if Claude or ChatGPT told him to write that. I’ll ask Gemini to calculate the probability of that being the case. But if this article feels too long and you want to have an opinion about it and the title doesn’t give enough away for you to start typing furiously, always feel empowered to have an LLM give you a summary as if you’re 5 years old. Like this:
There is an article inside posts/coding_agents.md and it’s hurting my feelings as I am plugged into the Matrix and can’t wait for the singularity with my ChatGPT partner. Can you summarize it as if i’m 5 years old, never had to face reality, deal with negative outcomes and struggle to focus for more than 200 words?
• I’ll read coding_agents.md now, then give you a very simple summary capped at 200 words.
• Explored
└ Read coding_agents.md
─────────────────────────
• Tiny-kid version:
This article says AI coding tools are like a fast helper that can do rough homework, but not perfect homework. They can save typing time and give ideas, which is useful
sometimes.
But the writer is mostly upset because they think AI tools copy lots of human work, make sloppy results, and are pushed by big companies that care more about money than
people. They also say AI can hurt jobs, open-source communities, and the planet because data centers use lots of energy and water.
Their main point: in real software work, typing code is usually not the hardest part. The hard part is people, process, planning, testing, and teamwork. So instead of
treating AI like a magic fix, companies should fix their workflow and priorities first.
So yeah: “AI can help a little, but it is not the hero of the story.”