

At least on android I was able to just add a link to the home screen in Firefox.
Let's be honest, most of the people who get stiff when looking at guns support this. They were always going to bend over for authoritarians, because if authoritarians took over the government, they were going to be right wing.
Republicans did the same, both “legally dubious” and blatantly illegal.
There are a lot of things they can do. But if they are breaking the law and blatantly ignoring the constitution, I would argue basically anything is on the table to stop this, even if it is “extralegal” or whatever.
Because if they don’t the entire system of laws and rules and everything is fucked beyond repair.
The Nazis were also extremely incompetent when it came to stuff like this. Hitler had generals scrambling behind his back to produce their best weapons, but he kept finding out and making them stop, along with tons of other micromanaging.
Trump is an idiot and so is Musk. Most business people are short-sighted, and fascists even more so. They also may be high on their own farts and think the system won’t collapse with them “in charge”, like a toddler left in a room full of candles and napalm.
Not saying that foreign powers aren’t loving this, but they don’t have to have control over it.
They can try, but it’s hard to ban stuff that’s decentralized like that. All they could realistically do is prevent companies and organizations based in the US from federating, or from federating with anything outside the US.
As a queer person I don’t really care at this point if China or Russia is tracking me. They aren’t the ones who are currently stripping me and others of rights and so many other things.
I don’t trust any governments on this front, but the government I live under is way more of a concern.
The problem is for organizations it’s harder to leave because that is where the people you want to reach are. That’s the only reason any org or company is on social media in the first place. If they leave too soon they risk too many people not seeing the things they send out to the community.
It’s more an individual thing because so many people just have social inertia and haven’t left since everyone they know is already there. The first to leave have to decide if they want to juggle using another platform to keep connections or cut off connections by abandoning the established platform.
I shouldn’t have anything to hide, but I’m part of a group the current fascist leadership in government wants to eradicate, so hide I shall.
That said, I also feel like people acting like the remote server they are connected to is tracking what you do on it as some kind of surprise is so stupid. “Facebook is keeping track of the pictures I uploaded to it!!!” There’s a lot of stuff to complain about Facebook, google, or whoever, but them tracking stuff you send to them willingly isn’t one of them.
It doesn’t. They run using stuff like Ollama or other LLM tools, and all of the hobbyist ones are open source. The model itself is just the inputs, node weights and connections, and outputs.
LLMs, or neural nets at large, are kind of a “black box,” but there’s no actual code that gets executed from the model when you run it; it’s just processed by the host software based on the rules for how these work. The “black box” part is mostly because these are so complex we don’t actually know exactly what a model is doing or how it outputs answers, only that it works to a degree. It’s a digital representation of analog brains.
People have also been doing a ton of hacking at it, retraining, and other modifications that would have surfaced anything like that if it existed.
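To make the point concrete, here’s a toy sketch (all names and numbers are made up for illustration): a “model file” is just stored numbers, and the host software is the only thing that actually executes, multiplying your inputs through those weights.

```python
import json
import math

# A toy "model file": pure data, no executable code in it.
# (Hypothetical 2-input, 1-output network, weights chosen arbitrarily.)
model_file = json.dumps({"w": [0.5, -0.25], "b": 0.1})

def run_model(model_json, inputs):
    """The host software interprets the weights; the file itself never 'runs'."""
    m = json.loads(model_json)
    # Weighted sum of inputs plus bias, then a sigmoid activation.
    z = sum(w * x for w, x in zip(m["w"], inputs)) + m["b"]
    return 1 / (1 + math.exp(-z))

out = run_model(model_file, [1.0, 2.0])
print(round(out, 4))  # prints 0.525
```

Real runners like llama.cpp or Ollama do the same thing at vastly larger scale: load tensors from a file, push your tokens through them. Swapping in a different model file changes the numbers, not the code that runs.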
Not in the way you think. They aren’t constantly training when interacting, that would be way more inefficient than what US AI companies have been doing.
It might be added to the training data, but a lot of training data now is apparently synthetic and generated by other models. While you might get garbage, it gives more control over the type and shape of the data, which makes it more efficient to train for specific domains.
Exactly. I’m queer. I’m not scared of China, even if they were doing the same thing the US currently is, because only one of those actually affects the rights I have and what I do in my day-to-day.
I do not understand how the average person does not realize that.
As a queer woman in the US, I currently care infinitely more what the US gov and companies track about me than what China does.
I swear people do not understand how the internet works.
Anything you use on a remote server is going to be seen to some degree. They may or may not keep track of you, but you can’t be surprised if they are. If you run the model locally, there is no indication it is sending anything anywhere. It runs using the same open source LLM tools that run all the other models you can run locally.
This is very much like someone doing surprised pikachu when they find out that facebook saves all the photos they upload to facebook or that gmail can read your email.
If you are blindly asking it questions without grounding resources, you’re going to get nonsense eventually unless they’re really simple questions.
They aren’t infinite knowledge repositories. The training method is lossy when it comes to memory, just like our own memory.
Give it documentation or some other context and ask it questions; it can summarize pretty well and even link things across documents or other sources.
The problem is that people are misusing the technology, not that the tech has no use or merit, even if it’s just from an academic perspective.
There’s something to be said for the idea that bitcoin and other crypto like it have no intrinsic value but can represent value we assign and be used as a decentralized form of currency not controlled by one entity. That’s not how it’s actually used, but there’s an argument for it.
NFTs were a shitty cash grab because a token showing you “own” a thing, regardless of what it is, only matters if there is some kind of enforcement. It had nothing to do with actual property rights, and anyone could copy your crappy generated image as many times as they wanted. You can’t do that with bitcoin.
Ah, good old “both sides” argument. Half the reason we are in this mess.
I’ve no love lost for cops, but I also have no sympathy left in me while I wonder if I’m going to be able to keep my job, access healthcare, and generally exist as myself in society without someone deciding to attack me. Fuck him, fuck the cop. There are no innocents in this story.
So I’m going to take some catharsis in that one of the people who would likely have murdered me or anyone like me in the future is gone.
After a week of bad news after bad news, where I am fearful for both my job and my very right to exist in this country, quite frankly, I couldn’t care less about your moral grandstanding. This guy tried to overthrow democracy for a man who is currently doing a speed-run of fascism.
He should still be rotting in prison, instead a criminal was released into the streets by the party of “law and order” for political reasons. And while I have no love lost for cops, this guy getting shot is anything but a tragedy.
Fuck him, fuck Trump, fuck republicans, and fuck anyone who has sympathy for these monsters. These people want me dead for trying to be comfortable in my own skin. I will wish the worst on every last one of them because I know what they want to do to me and I’m tired of people using kid gloves when talking about these people.
Been playing around with local LLMs lately, and even with its issues, Deepseek certainly seems to just generally work better than other models I’ve tried. It’s similarly hit or miss when not given any context beyond the prompt, but with context it certainly seems to both outperform larger models and organize information better. And watching the r1 model work is impressive.
Honestly, regardless of what someone might think of China and various issues there, I think this is showing how much the approach to AI in the west has been hamstrung by people looking for a quick buck.
In the US, it’s a bunch of assholes basically only wanting to replace workers with AI they don’t have to pay, regardless of the work needed. They are shoehorning LLMs into everything, even where it doesn’t make sense. It’s all done strictly as a for-profit enterprise by exploiting user data, and they bootstrapped it by training on creative works they had no rights to.
I can only imagine how much of a demoralizing effect that can have on the actual researchers and other people who are capable of developing this technology. It’s not being created to make anyone’s lives better, it’s being created specifically to line the pockets of obscenely wealthy people. Because of this, people passionate about the tech might decide not to go into the field and limit the ability to innovate.
And then there’s the “want results now” mindset, where rather than take the time to find better ways to build and train these models, they just throw processing power at it. “Needs more CUDA” has been the attitude, and in the western AI community you are basically laughed at if you can’t or don’t want to use Nvidia for anything neural-net related.
Then you have Deepseek, which seems to be developed by a group of passionate researchers who actually want to discover what is possible and find more efficient ways to do things, compounded by sanctions preventing them from using CUDA. Restrictions in resources have always been a major driver of technical innovation. There may be a bit of “own the west” in there, sure, but that isn’t at odds with the research.
LLMs are just another tool for people to use, and I don’t fault a hammer that is used incorrectly or to harm someone else. This tech isn’t going away, but there is certainly a bubble in the west as companies put blind trust in LLMs with no real oversight. There needs to be regulation on how these things are used for profit and what they are trained on from a privacy and ownership perspective.
Living in an area with very hard water, yes.