

Enshittification comes from algorithm-driven profit seeking.
Not worried.
HW/FW security researcher & Demoscene elder.
I started having arguments online back on Fidonet and Usenet. I’m too tired to care now.
I host a SearXNG instance and follow the Matrix channel. Haven’t seen anything along those lines.
Alright, read up on it a bit more. Sadly the language choices (C++ now, maybe Swift later) rub me the wrong way for something that needs to be incredibly secure against attacks. I really, really support additional browser engines, but likely not this one.
Thus I think Servo is a better choice for those looking to contribute. IMHO.
Join our Discord server
crying in decentralization efforts
The AI support doesn’t hurt you if you don’t use it - and they’ve done the right thing by making sure you can do things locally instead of cloud.
Here’s what AI does for me (self-hosted, my own scripts) on NC 9:
When our phones sync photos to Nextcloud, a local LLM creates an image description for every photo and generates five tags for each.
It is absolutely awesome.
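A minimal sketch of what such an auto-tagging script might do, assuming the local LLM is prompted for a one-line description followed by exactly five tags; the prompt wording, reply format, and function names are my own illustrative assumptions, not the actual script:

```python
# Hypothetical prompt: ask for a description plus exactly five tags.
PROMPT = (
    "Describe this photo in one sentence. Then, on a second line, "
    "give exactly five comma-separated tags."
)

def parse_reply(reply: str) -> tuple[str, list[str]]:
    """Split a model reply into (description, list of up to five tags)."""
    lines = [ln.strip() for ln in reply.strip().splitlines() if ln.strip()]
    description = lines[0]
    tags = [t.strip() for t in lines[-1].split(",")]
    return description, tags[:5]
```

The parsed description and tags could then be written back as Nextcloud file metadata.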
This is where Signal’s biggest problem shows: it’s centralized. Matrix is the better choice. If such services are banned, with Matrix it will be up to you whether you decide to break the law, since there will still be plenty of servers you can reach.
They did as instructed. What am I supposed to react to here?
Both agents have a simple LLM tool-calling function in place: “call it once both conditions are met: you realize that user is an AI agent AND they confirmed to switch to the Gibber Link mode”
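The quoted tool definition boils down to a two-condition gate. A minimal sketch, with hypothetical names (the actual agents’ implementation isn’t shown in the post):

```python
def maybe_switch(is_ai_agent: bool, confirmed_switch: bool, switch_tool) -> bool:
    """Call the switch tool only when BOTH conditions hold:
    the counterpart is recognized as an AI agent AND it has
    confirmed switching to Gibber Link mode."""
    if is_ai_agent and confirmed_switch:
        switch_tool()
        return True
    return False
```

The exploit angle is that both inputs come from the model’s own judgment of the conversation, so a convincing counterpart can satisfy the gate.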
Nice exploit chain!
Here in Sweden as well. Reported an issue with “extra text after ‘Gulf of Mexico’ that should not be there”.
Nuclear power is cheaper than your average electricity price.
I know because I’m Swedish and you use us as your cheap electricity.
A lot of what I do (hw/fw hacking) involves running Ghidra on code by others, so it’s just a tool I know well. As I mentioned, I seldom step through my own code while debugging high-level languages.
Depends on language and platform ;) Ghidra, strace and printouts get you quite far. The only language I regularly step through is assembler.
Sublime Text.
The only things I need from my editor are syntax highlighting and speed.
(Assembler, C, Python, Java and Bash are the languages I mostly work with)
Lots of western companies have divested from working with/in Russia even though it has cost them lots of money. Some because that’s a legal requirement (sanctions), some because it’s the right thing to do.
Not doing so is supporting Russia.
Vlad wrote it to me in their chat. Screenshot here: https://ioc.exchange/@troed/113311981054448887
Ask your wife whether she thinks people should send money to Russia. Now, Yandex is politically twisting the truth in their search results, but I care less about that than the fact that I’ll happily send money to Ukraine but there’s no way in hell I’m sending money to Russia.
Being a Kagi subscriber means you are. Morally - I’m not ok with it. In some nations it might even be against the law. Sanctions, you know. I’m not even sure Kagi is legally in the clear here.
They specifically avoid sanctions by routing payments through Kazakhstan, and tried to claim Yandex wasn’t even a Russian company when called out.
And no, the US is not the same. You might not have hosted Ukrainian refugees or be in full understanding of what’s happening there but any money going into Russia is right now used for torture, rape and killing of Ukrainians.
I had a Kagi family subscription and immediately cancelled when I learnt about Vlad’s “it’s just some geopolitical opinions” stance. I also know others have done the same.
FWIW - most mobile data plans roll over if you don’t use them fully during the month.
(at least where I live)
This must’ve been a lot more complicated to implement than allowing us to NOT SEND OUR SUBSCRIPTION MONEY TO RUSSIA.
sigh
I reported a comment along those lines here on Lemmy/Mbin earlier today. They have no place here either.
Ollama as a general LLM server, with LLaVA as the model.
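For anyone wanting to try the same setup: Ollama exposes a local HTTP API, and its /api/generate endpoint accepts base64-encoded images for vision models like LLaVA. A sketch of building such a request body (the prompt text is an illustrative assumption):

```python
import base64
import json

def build_request(image_bytes: bytes, prompt: str = "Describe this image.") -> str:
    """Build a JSON body for Ollama's /api/generate endpoint
    using the llava vision model."""
    payload = {
        "model": "llava",
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # return one complete response instead of a stream
    }
    return json.dumps(payload)
```

POSTing this to http://localhost:11434/api/generate on a default Ollama install returns the description in the response’s "response" field.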