

> write something that already exists so it doesn’t need to think
If something already exists, it shouldn’t need to be rewritten.
Doing otherwise is a sign that something has gone wrong.
That was the case before LLMs and it is still the case today.


The first month is input as a 1 and comes back out as a 0.
The first day is input as a 1 and comes back out as a 1.


Also: January is not always the 1st month, sometimes it is the 0th.
1/1/2026 can be both Jan 1st and Feb 1st, depending on how you hand it to Date.
For the downvoters, try it in your browser’s console:

```javascript
let test = new Date("1-1-2026");
console.log(`Year: ${test.getFullYear()}, Month: ${test.getMonth()}, Day: ${test.getDate()}`);
// Prints -> Year: 2026, Month: 0, Day: 1
// --- --- --- --- --- --- --- ---
// test.getFullYear() returns the year, but test.getYear() only returns the number of years since 1900
// test.getMonth() returns the month, but months are 0-indexed
// test.getDate() returns the day of the month (1-indexed), but test.getDay() returns the day of the week (0-6, starting at Sunday), not the day of the month
```
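To see how the same digits land on two different dates, here’s a minimal sketch (the constructor-arguments form is my own addition, not from the snippet above):

```javascript
// "1/1/2026" as a date string: the first 1 is read as a 1-indexed month, January.
const fromString = new Date("1/1/2026");
console.log(fromString.getMonth()); // 0 (January)

// The same numbers as constructor arguments: the month is 0-indexed,
// so 1 means February.
const fromArgs = new Date(2026, 1, 1);
console.log(fromArgs.getMonth()); // 1 (February)
```

Same "1, 1, 2026", two different months, purely because of which API path you took.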
Deno’s packaging can be confusing (is this dependency installed in node_modules, the global cache or somehow both across module boundaries?)…
… But damn, its permission system is fucking amazing. Just run your code like normal and watch what permissions it asks you for.
What’s that?
Oh, you’d like permission to read the env file and network access? Begone namesquatted malware!
What’s that?
Oh, you want write permission to /.., fuck off slopped out pull request!
I tried porting a project back to node (v24) to see what it was like (since I heard node had a permission system now) and because some devs wanted to stick with what was familiar to them.
First thing I noticed, my watch/rebuild/serve script went from 0.2 seconds to 3-5 seconds with no code changes and using the same dependencies that were originally written for node.
Second thing, Node’s permission system doesn’t work because it’s broken by design. Security can’t be opt-in. Everything needs to be locked down by default, and you explicitly grant access to the few things the program actually needs.
In node’s design, a junior dev could “opt in” to the network permission to disable network requests, but they wouldn’t think to block subprocesses (which could call cURL/wget and make network requests on the main process’s behalf). It’s utterly broken, and it shifts the blame onto the developer for not knowing better.
I instantly switched the project back to deno.


> it’s good at answering questions like that
Are you sure about that? https://www.youtube.com/watch?v=JtBI2BvPKBQ
Remember: if you don’t know enough about the topic in question to grade its answers, then you’re not judging its ability to answer accurately, you’re judging its ability to convince you.


It’s for the best.
Learning how code works is better than getting an LLM to produce convincing looking code without anyone having an understanding of how it works.
LLMs just teach students to paint themselves into a corner without them even realising why bad things keep happening to them.


> More like “why the fuck would I walk all the way across the city now that I own a car”
That’s a bad analogy.
Using an LLM for coding gives you an initial speed up with the trade off being skill atrophy, and a build up of cognitive debt in the project.
A better analogy would be the Greek government before their national debt crisis. It would have been better to invest in themselves, not lie about their own finances, and not kick the can down the road. But they kept lying and kicking the can down the road because it was easier in the short term. Of course, we all know how that turned out in the end.


You’re way quicker manually reviewing code compared to setting up everything just so that an LLM agent could do that.
Not only that, you’re better off reviewing the code manually so you understand how everything works.
If you understand how things work, you can plan things out.
If you don’t, you’ll end up painting yourself into a corner.
> If you are scared of that, commercial products are your choice.
Commercial products are not a panacea for bad software quality.
Code openness and code quality are independent, orthogonal axes.


Probably a good time to consider helix or ki over vim


> If the code is reviewed and tested, I don’t care if it was written by a human or machine.
That’s a pretty big assumption there, buddy.
If they didn’t care enough to write the code, what makes you think they cared enough to review or test the code?


No!
PopOS switched to Cosmic, which only just came out.
Recommend PopOS to experienced users who know how to fix/avoid problems.
But hold off from recommending PopOS to beginners until around the 27.10 release, when most of the papercuts are sanded over.


> although I’m a little bit skeptical about having to integrate additional extensions and workflows
Just to allay your fears, it’s not a mishmash of random extensions and brittle workflows.
11ty was originally built in a more all-in-one-box style, but it was kind of annoying to have 10+ templating languages to choose from (and all the dependencies that came along with them) when you only wanted one.
Every update, the author does two things: moves optional features out into first-party plugins, and trims down the third-party dependencies that remain.
You can see that here (data taken from https://www.11ty.dev/blog/dependency-watch/#full-history):
| Version | Deps (3rd-party) | Change | node_modules Size | Change |
|---|---|---|---|---|
| v0.2.0 (2018 January) First npm release! | ×401 (400) | n/a | 51 MiB | n/a |
| v0.12.1 (2021 March) | ×362 (360) | -9.70% | 68 MiB | +33.30% |
| v1.0.2 (2022 August) | ×360 (357) | -0.50% | 71 MiB | +4.40% |
| v2.0.1 (2023 March) | ×213 (208) | -40.80% | 35 MiB | -50.70% |
| v3.0.0 (2024 October) | ×187 (174) | -12.20% | 27 MiB | -22.80% |
| v3.1.2 (2025 June) | ×134 (123) | -28.30% | 21 MiB | -22.20% |
| v4.0.0-alpha.1 (2025 July) | ×130 (116) | -2.90% | 16 MiB | -23.80% |
| v4.0.0-alpha.6 (2025 December) | ×105 (89) | -19.20% | 14 MiB | -12.50% |
The first-party plugins are all compatible with each other and all use the same 11ty config with the same sensible defaults, and 11ty is built with all of the first-party plugins in mind.
You can add them all in if you still want the all-in-one-box approach, but this way lets your environments be smaller.
It’s basically pre-computed tree shaking.
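For example, opting in to a single first-party plugin looks roughly like this. This is a sketch of an eleventy.config.js using the real `addPlugin` API; the exact import name and options vary by plugin and version:

```javascript
// eleventy.config.js (sketch): opt in to just the plugins you want.
// The RSS feed plugin is one of 11ty's first-party plugins; it only
// lands in node_modules if you actually install and register it.
import { feedPlugin } from "@11ty/eleventy-plugin-rss";

export default function (eleventyConfig) {
  eleventyConfig.addPlugin(feedPlugin);
  return { dir: { input: "src", output: "dist" } };
}
```

Skip the import and the `addPlugin` call, and that whole dependency subtree never enters your project.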
There’s also a security argument for it. By splitting everything apart, you isolate security issues. If one of the random 10+ templating languages got a security issue (e.g. supply-chain attack, redos, misglobbing, etc…), it will only affect the projects that decided to use that templating language.


I haven’t used Hugo.
I went with 11ty a few years ago because I wanted to stay as close to the actual web standards as possible (so, HTML/CSS/JS).
The main reason is that every additional abstraction layer and every invocation of “magic” is just extra hidden complexity, which makes things harder to debug, extend, and maintain.
Having an SSG in go/python/rust would have been an extra layer.
The “maintain” point above is something most others don’t think about until it comes back to bite them. Nothing is more frustrating than reopening a project that worked fine a few years ago, and even though you haven’t changed anything, nothing works, and when trying to update it you end up with Frankenstein’s monster.
11ty went out of its way to remain as simple as possible. Here’s your input directory, and here’s your output directory. That makes maintenance and backwards compatibility really easy.
Then you can add the minimum required complexity/abstraction layers only when you need them.
In my case, I use:
I’ll say the one thing I don’t like about 11ty is that it’s written in js, not ts. The author is all about simplicity and reducing layers of complexity. But typescript now has a typescript-lite mode via the erasableSyntaxOnly flag, which basically allows it to run directly on node (deno and bun already ran typescript), so the next version (or the one after) may be migrated to typescript.


The --self-extracting flag for deno compile looks awesome.
I could compile my 11ty stack into a single executable that I could send anywhere, but it would still be able to read external markdowns so semi-technical coworkers can make content changes without the maintenance overhead of a CMS and without the coworkers needing to learn modern web dev.
It would also be super awesome for all the internal tools I’ve built that need to run offline.


CSS has been considered turing complete for a long time.
So this isn’t a shocking revelation, but it is cool.


It’s a garbage article.
It’s not checking other’s claims, it’s not saying anything new, it’s not even listening to the majority of the parties.
For each of the parties (node, deno, bun): they are incentivised to only release benchmarks that make themselves look good, and are disincentivised to release benchmarks that make themselves look bad.
This article only uses benchmarks from one party then declares it the winner.
So, it’s garbage.


> I don’t see any products or services being promoted in this article.
That’s kind of the point. You’re not supposed to.
They’re farming links so search engines see their domain as more important. That way, their entry will appear higher up on search results against competitors for their paid products.


Bro, how are generators going to be faster?
This is an AI article.
My results:
Firefox:
Loop: 44ms - timer ended
Generator: 4580ms - timer ended
Node (uses same engine as deno, chrome, edge):
Loop: 30.577ms
Generator: 1.533s
Safari (uses same engine as bun):
Loop: 605.222ms
Generator: 2804.669ms
Bun (same engine as Safari but without needing to apologise for Safari):
[17.52ms] Loop
[297.17ms] Generator
Generators are going to be slow because every next() has to suspend and resume the generator’s stack frame.
Until generators don’t rely on stack switching, they’re always going to be super slow.


This benchmark is pretty useless.
There’s no benchmark code shown to see if they’re doing something wrong or cherrypicking.
Also, they only tested on arm64? Why didn’t they test on x86-64?
And what the hell is this test?
> Common data transformation pattern, map then aggregate with reduce.
People who care about performance are using loops, not map(). Why are you even testing the slow path?


If the rewrite is based on something which has a license that your company can’t use, then the rewrite likely can’t be used either.