

I sometimes use different delimiters [you know, like “(” {or also “[” or “(” (as they are used in programming – since the em dash is essentially one of these, just with the added benefit of giving you a breath for thinking --)}].


Related: Robotics pioneer Rodney Brooks saw this coming: https://arstechnica.com/ai/2025/10/why-irobots-founder-wont-go-within-10-feet-of-todays-walking-robots/
I think this is a well-written and important article.
One more aspect: the article lays out that today's control algorithms for robots are not inherently stable and can't guarantee safety.
I have seen some of the code that runs in such humanoid robots and would like to add the following warning: the control code for robots is typically written by researchers, not safety experts. While there might be some brilliant programmers among them, such code will, in most cases, be a hot mess that cannot guarantee any safety. It will certainly not meet the requirements commonly mandated for things like complex medical devices, automobiles, or other dangerous work equipment - yet given the much larger complexity and the dangerous mechanical forces in such robots, the requirements should be higher than for automobiles.


Well, what the world really needs are laptops with built-in HVAC support!


Would you go near an uncontrollable maniac swinging a ten-pound sledgehammer, or stand two meters below a larger-than-life bronze sculpture of Neptune with a trident, weighing 150 kilograms, which is not fixed, is unstable, and could fall on you at any moment?
No? Then you should not go near such a robot.


“It’s an absolute no in my house,” said Pawloski, referring to how she doesn’t let her teenage daughter use tools like ChatGPT. And with the people she meets socially, she encourages them to ask AI about something they are very knowledgeable in so they can spot its errors and understand for themselves how fallible the tech is.
Very smart, especially if she did not know about the Gell-Mann amnesia effect before.


AI is so dumb, nothing it tells me is more than a regurgitation of common sense.
Thinking LLMs are intelligent because they sometimes reproduce correct statements is like believing that books themselves can think and are smart because some books contain thoughts of smart people.


I think kids are not resistant to addictive design, sadly.


Spot on. In German we have a cringe word that is used as a derogatory term by the far right. It is “Gutmensch”, literally translated “good human”, essentially meaning a person who applies ethics to her actions. How dare they not be controlled by greed and hate!


Better call them meat Luddites ;-) ?


I use Zim Wiki + git. Or the gollum wiki, which also uses git. Both work very well, and the notes can not only be synced but automatically merged.


Especially that diabolic number “zero”. It is a pure Arab invention. The Romans did fine without it. Then the Arabs introduced zero, and we know what happened to the Roman Empire. Now zeros can be found everywhere - they can even hide in computers. America should really exzerminate these zeros before it is too late!!


The funny thing is that it is not C or Rust, the languages “close to the hardware”, that have the more specific bitwise operations - Common Lisp has:
https://www.lispworks.com/documentation/HyperSpec/Body/f_logand.htm
https://www.lispworks.com/documentation/HyperSpec/Body/f_logcou.htm
https://www.lispworks.com/documentation/HyperSpec/Body/f_boole.htm#boole
(Though Rust at least has count_ones(), which typically compiles down to a popcnt instruction and is immensely useful e.g. when processing small sets.)


And that’s why GenAI has a chance to leave a kind of double blast crater in tech: deceptive advertising and completely unsustainable financing, followed by equally unsustainable technical decisions and development practices.


A full-stack developer based in India, who identified himself to The Register but asked not to be named, explained that the financial software company where he’s worked for the past few months has made a concerted effort to force developers to use AI coding tools while downsizing development staff.
[…]
He also said the AI-generated code is often full of bugs. He cited one issue that occurred before his arrival that meant there was no session handling in his employer’s application, so anybody could see the data of any organization using his company’s software.
This kind of thing is exactly what I see with a mid-level dev who enthusiastically tries to use GenAI in embedded development: he produces code that seems to work, but misses essential correctness features, like correct locking in multi-threaded code. The effect is that his code is full of subtle race conditions and unexpected crashes - things that cannot work, but that would take months to debug because the errors are non-deterministic. He has not fully understood why locks are necessary or what undefined behaviour in C++ really means. For example, he sees no problem with a function with a declared return type that does not return a value (inconceivably, gcc accepts such code by default, but using the value is undefined behaviour). And he resists eliminating compiler warnings or building his code with -Wall -Werror.
Unfortunately, I am not in a position to fire him. He was the top developer for two years. Also, the company was quite successful in the past and has, over those successful years, developed an unhealthily high tolerance for technical debt.
Even more unfortunately, the company’s balance sheet is already underwater, because of extreme short-term thinking in upper management and large shifts in its markets, and it is unlikely to survive the resulting mess.
Think of Nvidia GPUs as generic infrastructure like roads: You can use a road to transport all sorts of things using all sorts of vehicles.
Not if it turns out that it is not economical to build and maintain that kind of road. And this is exactly the assessment, and why it is called a bubble.
And of course, just as you can use “classical AI” to solve the traveling salesman problem, play chess, find optimized subway connections, or recognize speech and handwriting, there /might/ be some useful applications for the newer algorithms and GPUs. Though the main application so far is producing textual slop, which has little value.
For example, Linus Torvalds thinks AI might possibly help in the future to find some bugs in human-written software. That would make its value similar to AddressSanitizer or valgrind. No, those two are not billion-dollar companies.


It is essentially an attitude and values problem. See Torvalds’ email titled “WE DON’T BREAK USER SPACE” and Rich Hickey’s talk “Spec-ulation” on YouTube.
Consequently, the fix is to move to another vendor.


I do not really see that.
The article is short, and I myself like to write longer, more detailed texts. But few people nowadays have the patience to read ten, five, or even three pages of text.
Also, I am becoming wary of the trolling/disinformation tactic of labelling anything you do not like as AI. If a piece of text is wrong, it will have logical flaws that you can point to and address.
That said, burn-out is a real problem, I can confirm that. Not only in FOSS but in other fields of software development, too - and the article also cites real factors that make it worse for open source development. It is a threat not only to the mental health of individuals, but also to the community.
And the aspect of entitlement of some users is true, too.


I swear by muLinux with Emacs as the textual WM.


I know gimp, git, borg, and mutt, but not the others :)


“If only individuals would use our climate-damaging cars and planes wisely!!”