

Apple could pitch in just for the sake of sticking it to Google.
For most people, use Open Web UI (along with its many extensions) and the LLM API of your choice. There are hundreds to choose from.
You can run an endpoint for it locally if you have a big GPU, but TBH it’s not great unless you have at least like 10GB of VRAM, ideally 20GB.
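If you go the API route, here’s roughly what pointing a client at an OpenAI-compatible endpoint looks like (the base_url, key, and model name are placeholders; swap in whatever hosted provider or local server you actually run):

```python
# Minimal sketch: the same OpenAI-style client works against hosted APIs
# or a local server (llama.cpp, vLLM, etc.) that Open Web UI can also sit on top of.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # placeholder: your local or hosted endpoint
    api_key="not-needed-for-local",       # hosted APIs need a real key
)

resp = client.chat.completions.create(
    model="local-model",  # placeholder model id
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```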
Why doesn’t Mozilla just fork Chromium? Anything bad sneaks in, they rip it out. New feature? Develop just that, without paying to maintain a whole browser engine. From the user’s perspective, very little changes, but the cost savings would be massive.
It would also make for a good, high-profile running tab of “bad things Chrome/Chromium is doing”
EDIT: It would also justify regulating Chromium like a monopoly, though I think that government ship has sailed.
Trump/Musk (especially Musk) could totally come out against this if it gains traction.
I guarantee, your family’s tune would change
Very much intentional, as with any song advocating for “violence”
Honestly there are one or two communities I was helping keep alive so they wouldn’t go to Twitter, but it seems that’s happened to one and the other is… not worth it anymore.
Look at it this way:
Many (most?) defective GPUs ship that way, and fail early. If you buy used, it’s already lasted that long.
Then don’t buy on launch day? It pretty much always sucks, better just to go back a generation.
Again, not trying to be disrespectful, but launch GPUs just seem super unappealing to me.
I mean this in the most polite way possible, but why did you need a new GPU so quickly? Did the old one conk out?
Maybe I’m lucky, but no used GPU I bought has ever died. Ironically, only a new 6850 I had ages ago was kinda funky.
The Silverstone Sugo SG09 and SG10 are awesome if you can find them. A bit bigger, but they support full-size PSUs, with the option for cheaper micro-ATX mobos. Cooling is good too.
Also, the Node 202’s width restriction is a bit conservative; it fits my much larger/thicker FTW3: https://www.evga.com/products/specs/gpu.aspx?pn=E2763314-163F-4391-8935-EA2C5DFFD06B
But like you said, no full-size PSU.
Well if you are getting a new case, and are into modding, I adore my fractal design node 202. It’s tiny, fits a 420W 3090, and you can duct the cpu/gpu to its vents with like $5 of foam strips. I have no case fans, yet the cpu/gpu idle with their fans off and barely spin up because they only suck in outside air.
The newer fractal ridge is a bit bigger, but should work much better with no modding since it’s more “open”
It’s just corporations being shitty and cultish, no different than usual.
The sad thing is LLMs can be quite cool with the right implementation (like structured output to force correct syntax, careful grounding, and so on), but the sea of garbage drowns out any possibility of that.
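For a concrete idea of what I mean by structured output, here’s a rough sketch against an OpenAI-compatible endpoint (the endpoint, model name, and fields are made up for illustration); llama.cpp grammars or libraries like Outlines do the same thing more strictly:

```python
# Rough sketch of structured output: ask for JSON-only output and validate it
# before using it, so the model can't hand you free-form garbage.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-for-local")

resp = client.chat.completions.create(
    model="local-model",  # placeholder
    messages=[
        {"role": "system",
         "content": 'Reply only with JSON shaped like {"title": "...", "tags": ["..."]}.'},
        {"role": "user", "content": "Summarize this post about GPU pricing."},
    ],
    response_format={"type": "json_object"},  # JSON mode: forces syntactically valid JSON
)

data = json.loads(resp.choices[0].message.content)  # raises if the syntax is somehow broken
print(data["title"], data["tags"])
```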
The 3060 TI is 8GB IIRC; the base 3060 is 12GB but somewhat slower, so probably not a worthy upgrade over a 1080 TI.
You can always buy used. Even a 2080 TI would be a big step since you can use DLSS with it, and prices for them are better now.
I consider myself an “AI evangelist,” but I hate Altman with a burning passion, probably more than you do, and hate data centers burning the planet.
I think running models locally, as hackable tools you understand, trained on very modest hardware (as Chinese companies are doing by necessity with the import restrictions), is a distinct thing. Doubly so if bitnet takes off and running stuff on-device becomes super cheap.
I feel like there’s an even smaller group of programmers that used blockchain for utilitarian purposes, and not pyramid schemes, but TBH it seems vanishingly small.
What I’m saying is… the problem is not AI, it’s billionaires.
It’s always billionaires.
7900s aren’t as outrageous:
https://www.ebay.com/sch/i.html?_nkw=amd+7900&_odkw=rtx+4080&_udhi=1000
You are not wrong though, those 4080 prices are messed up.
Oh yeah, you could swap a kingpin for a 4080 if you wanted, lol. Everyone wants 24GB for local AI.
But honestly they will still probably be expensive if/when you sell them, so no rush I guess :P
Also, I must say AMD makes great hardware, and then tries as hard as it can to price and sell it uncompetitively.
The MI300X? Better than the H100, yet no one bought them because AMD price gouged them, and price gouged every sane lesser GPU you’d want for a dev machine. People would have worked around the funky software stack, but no.
Pro cards? Total joke. Gaming? See the article.
Even Strix Halo, the Framework APU, is price gouged to hell. A cheap 32/40CU config with one cut-down CCD would be killer, but no: it’s mega expensive or a completely neutered integrated GPU, your choice.
I really don’t get it. The only explanation I can think of is that the Nvidia/AMD CEOs are colluding because they are second cousins (and they actually are second cousins).
The golden ticket for me is buying previous gen. Used.
I picked up a 7950 during the crypto crash. A 980 TI when everyone was doing exactly this and jumping on new stuff. My used 3090 is like $200 more than when I bought it, last I checked.
And the risk of it being faulty? Heck, I could’ve bought two of them and still come out ahead of these stupid new card prices.
This gen is still pretty screwed, though. 7900 prices may drop some with this, but still…
But they do have an interest in displacing Google’s monopoly, kinda like how they contribute to OpenStreetMap via Apple Maps, or how Facebook funds Llama.