This isn’t true anymore: Intel dropped AVX-512 from its consumer chips when it moved to the hybrid big/little core design, while AMD actually implemented it with Zen 4.
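On Linux you can check whether your CPU exposes it at all; a quick sketch that just reads /proc/cpuinfo (avx512f is the foundation subset, the other AVX-512 extensions show up as separate flags):

```python
# Check for AVX-512 support on Linux by reading /proc/cpuinfo.
# "avx512f" is the AVX-512 Foundation subset; other subsets
# (avx512bw, avx512vl, ...) appear as separate flags.
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break
print("AVX-512F supported" if "avx512f" in flags else "No AVX-512F")
```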
If you didn’t already know, you can run some small models locally on an entry-level GPU.
For example, I can run Llama 3 8B or Mistral 7B on a GTX 1060 3GB with Ollama (a quick sketch below). It’s about as bad as GPT-3.5 Turbo, so overall mildly useful.
Although there is quite a bit of controversy over what counts as an “open source” model: most are only “open weight”.
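If you want to poke at one programmatically, here is a minimal sketch that talks to a locally running Ollama server over its HTTP API, assuming the default port 11434 and that the model was already pulled (e.g. with `ollama pull llama3:8b`):

```python
# Minimal sketch: query a local Ollama server over its HTTP API.
# Assumes Ollama is running on the default port (11434) and the
# model has already been pulled.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3:8b",          # or "mistral:7b"
    "prompt": "Explain AVX-512 in one sentence.",
    "stream": False,               # return one JSON response instead of a stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```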
So far it seems to be an uneventful upgrade.
Defaulting to Wayland for KDE 6 on an NVIDIA GPU doesn’t seem to have broken anything.
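If you want to double-check which session type you actually landed on, a trivial sketch (XDG_SESSION_TYPE is set by the display session itself, so this just reads the environment):

```python
# Print the current session type ("wayland" or "x11").
# The variable may be unset in some contexts (e.g. over SSH),
# hence the fallback default.
import os
print(os.environ.get("XDG_SESSION_TYPE", "unknown"))
```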
Except PET from plastic bottles, which is the only common plastic that is fully depolymerizable/repolymerizable, instead of simply being remeltable.
I also recommend staying away from NTFS3. I had some files that I couldn’t empty from the recycle bin; they just kept reappearing.
After a while NTFS3 gave up entirely: it couldn’t mount the partition anymore due to NTFS errors. At that point NTFS-3G still worked, and I moved everything to an ext4 partition.
I personally think Tesla’s plan is to force a change to the IRA bill that mandates the CCS connector on all federally funded charging stations.
As the bill stands, Tesla could lose their charging-network advantage in the medium term or, even worse, be burdened by an obsolete “non-standard” connector in the long term.
Except Tesla, where regen power is always at the maximum level.
I can’t find the source anymore, but I saw a lifecycle analysis of sodium-ion batteries. Overall they come out slightly worse than lithium-ion due to the higher energy input required during fabrication, despite better mineral availability.
The most common Na-ion batteries use Prussian blue analogues as the cathode material.
“Saw six” is pronounced like “saucisse”, which means “sausage” in French.
It is incredible how bloated their apps are. I have no idea why every app needs to integrate a social media feed and be able to book a taxi, order takeout, or whatever.
They seriously need to take a look at the KISS principle.
Those are not DeepSeek R1. They are unrelated models, like Llama 3 from Meta or Qwen from Alibaba, “distilled” by DeepSeek.
Distillation is a common method of making a smaller model smarter by training it on a larger one’s outputs (see the sketch below).
Ollama should never have labelled them deepseek:8B/32B. Way too many people misunderstood that.
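For the curious, here is a minimal sketch of classic logit-based knowledge distillation in PyTorch. To be clear, this is the textbook Hinton-style version for illustration, not DeepSeek’s exact recipe (they fine-tuned the smaller models on R1-generated outputs), but the idea is the same: a small student learns to match a larger teacher.

```python
# Minimal sketch of Hinton-style knowledge distillation, NOT
# DeepSeek's exact recipe: train a small "student" to match the
# output distribution of a large frozen "teacher".
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between softened teacher and student distributions.

    T > 1 softens both distributions so the student also learns the
    teacher's relative preferences among wrong answers.
    """
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * T * T

# Toy usage with random logits standing in for two language models:
teacher_logits = torch.randn(4, 32000)                        # frozen large model
student_logits = torch.randn(4, 32000, requires_grad=True)    # trainable small model
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(loss.item())
```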