

The difference, when the tool is used correctly, is so massive that only someone deeply uninformed or naive would dispute it.
I got about 4 entire days' worth of work completed in about 5 hours yesterday at my job; that's just objective fact.
Tasks that used to take weeks now take days, and tasks that used to take days now take hours. There's no "feeling" about this; I've been a professional software developer for approaching 17 years now. I know how long it takes to produce an entire gamut of integration tests for a given feature. I spend almost all of my time now reviewing mountains of code (which is fairly good quality; the machines produce accurate results), and then a small amount of time refining it.
People deeply misunderstand how dramatically the results have changed over the past 2 years, and their biases are based on how things were 2 years ago.
Sure, 2 years ago the quality was way worse, the security was bad, the enforcement was almost nonexistent, and people's overall skill with the tools was just beginning to grow. You can't exactly be good at using a tool that only just came out.
But it's been two years of very rapid improvement. It's good now. Anyone who has been using these tools and actually monitoring their progression can speak to this.
Things shifted heavily about 5 months ago when competition started to really fire up between different providers. I won't say it's even close to great yet, but it's definitely good: it works, it's fast, and it's pretty damn good at what I need it to do.



Did you have MCP tooling set up so it can get LSP feedback? That helps a lot with code quality, since the agent will see warnings/hints/suggestions from the LSP.
Unit tests. Unit tests. Unit tests. Unit tests.
I cannot stress enough how much less stupid LLMs get when they have proper, solid unit tests to run themselves and compare expected vs. actual outcomes.
Instead of reasoning out "it should do this," they can just run the damn test and find out.
They'll iterate on it until it actually works, and then you can look at it and confirm whether it's good or not.
I use Sonnet 4.5 / 4.6 extensively and, yes, it's prone to getting the answer almost right but wrong in the end.
But the unit tests catch this, and it corrects.
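To make the expected-vs.-actual idea concrete, here's a minimal sketch of the kind of test an agent can run and iterate against. The `compose` transform helper and the test are hypothetical stand-ins written in Python for brevity; they're not the actual Atomic.Net code.

```python
import math

def compose(parent, child):
    """Compose two 2D transforms, each given as (x, y, rotation_radians).

    The child's offset is rotated into the parent's frame, then translated;
    rotations simply add.
    """
    px, py, pr = parent
    cx, cy, cr = child
    wx = px + cx * math.cos(pr) - cy * math.sin(pr)
    wy = py + cx * math.sin(pr) + cy * math.cos(pr)
    return (wx, wy, pr + cr)

def test_compose_quarter_turn():
    # Parent at the origin rotated 90 degrees; a child offset of (1, 0)
    # should land at world position (0, 1). Expected vs. actual, no guessing.
    actual = compose((0.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0))
    assert math.isclose(actual[0], 0.0, abs_tol=1e-9)
    assert math.isclose(actual[1], 1.0, abs_tol=1e-9)
    assert math.isclose(actual[2], math.pi / 2)

test_compose_quarter_turn()
```

When the agent gets the sign of a sine term flipped, a test like this fails immediately with a concrete wrong number, and the agent can fix and rerun instead of reasoning in circles.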
Example: I am working on my own game engine with MonoGame, and it's about 95% vibe coded.
This transform math is almost 100% vibe coded: https://github.com/SteffenBlake/Atomic.Net/blob/main/MonoGame/Atomic.Net.MonoGame/Transform/TransformRegistry.cs
The reason it's solid is because of this: https://github.com/SteffenBlake/Atomic.Net/blob/main/MonoGame/Atomic.Net.MonoGame.Tests/Transform/Integrations/TransformRegistryIntegrationTests.cs
Also vibe coded, then sanity-checked by me by hand to confirm the math in the tests checks out.
And yes, it caught multiple bugs, but the agent could automatically respond to that, fix the bug, rerun the tests, and iterate until everything was solid.
Test Driven Development is huge for making agents self-police their own code.