

Same, I was ready to just buy the first Chinese mini PC that had the 128G Strix Halo processor. This is way better, as it will have actual support and will likely be better made.
It does OK with that: better than the default model, but worse than the built-in search on my phone.
The best one I have found is one of the newer models that was added a few months ago: ViT-B-16-SigLIP__webli
Really impressed with the accuracy, even with multi-word searches like “espresso machine”.
Yea, he was CEO of VMware from 2012 to early 2021. All the issues VMware has now came from Broadcom buying them, which happened well after he left.
That sounds like he doesn’t understand how to use one-pedal driving.
You shouldn’t be comparing with DIMMs; those are a dead end at this point. CAMMs are replacing DIMMs and are what future systems will use.
Intel likely designed Lunar Lake before the LPCAMM2 standard was finalized, which is why it went with on-package memory. Now that LPCAMMs are a thing, it makes more sense to use those, as they provide the same speed benefits while still allowing user-replaceable RAM.
AWS has multiple tiers of storage options in S3; some replicate and some don’t. By default, those that do replicate do so across multiple availability zones, but not across regions, unless you turn on cross-region replication (CRR), which is an additional charge.
So, for example, without CRR, if your bucket is in us-east-1 and one availability zone goes down, you can still access the data, but if all of us-east-1 is down, you cannot.
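For reference, turning CRR on looks roughly like this once versioning is enabled on both buckets (the bucket names and IAM role here are made-up placeholders):
# source bucket stays in us-east-1; the destination bucket lives in another region
aws s3api put-bucket-replication \
  --bucket my-source-bucket \
  --replication-configuration '{
    "Role": "arn:aws:iam::123456789012:role/my-replication-role",
    "Rules": [{
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": {"Status": "Disabled"},
      "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"}
    }]
  }'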
As if managers even know what RISC-V is
It’s the server world that is demanding it. For most consumers, 4.0 is more than enough, but servers are already maxing out 5.0 and will probably immediately max out 6.0 when devices actually become available.
> the NSA (which lacks a mandate to act on US soil, and CF is a US company)
They absolutely do have a mandate to operate on US soil; that is actually their main mandate, and there is a separate military agency (CNMF) that operates on foreign soil. They are both headed by the same person, though, so they might as well just be one agency.
The thing about reCAPTCHA is that it didn’t always gatekeep a Google-provided service, so that logic doesn’t really work. I agree, though, that we all benefit from fewer bots.
yahoogle
There is one extra step. I have a 6700 XT, and with the Docker containers you just have to pass the environment variable HSA_OVERRIDE_GFX_VERSION=10.3.0
to allow that card to work. For cards other than the 6000 series, you would need to look up the version to pass for your generation.
Here’s an example compose file that I use for Ollama to run AI models on my 6700 XT.
version: '3'
services:
  ollama:
    image: ollama/ollama:rocm
    container_name: ollama
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    group_add:
      - video
    ports:
      - "11434:11434"
    environment:
      - HSA_OVERRIDE_GFX_VERSION=10.3.0
    volumes:
      - ollama_data:/root/.ollama
volumes:
  ollama_data:
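If it helps, bringing it up and running a model looks roughly like this (the model name is just an example, use whatever you want):
# start the container in the background
docker compose up -d
# pull and run a model inside the container
docker exec -it ollama ollama run llama3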
Have you tried the ROCm Docker containers that AMD makes for your needs? They pretty much make installing ROCm on the base OS unneeded for me. https://hub.docker.com/u/rocm https://github.com/ROCm/ROCm-docker
It doesn’t. What this is suggesting is that the VPN was routing traffic through it so they could analyze Snapchat traffic: not the contents of it, but essentially metadata analysis of the traffic, e.g. how often it was sending data, how much data, where it was going, etc.
Just a small correction: /etc does get snapshotted when upgrades happen and will roll back along with everything else. You are correct, though, that /home does not get snapshotted and is fully mutable.
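If this is the snapper/Btrfs style of setup (openSUSE-like; that part is just my assumption), the rollback side looks roughly like this, with the snapshot number as a placeholder:
sudo snapper list         # list snapshots, including the pre/post pairs created by upgrades
sudo snapper rollback 42  # 42 is a placeholder: set that snapshot as the new default root
sudo reboot               # boot into the rolled-back root, /etc included; /home is untouched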
I don’t have an answer to your Nvidia question, but before you go and spend $2000 on an Nvidia card, you should give the ROCm Docker containers a shot with your existing card. https://hub.docker.com/u/rocm https://github.com/ROCm/ROCm-docker
They’ve made my use of ROCm 1000x easier than actually installing it on my system, and were sufficient for my use case of running inference on my 6700 XT.
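As a rough sketch of what running one of those containers looks like (the image tag is just an example; their other images work the same way):
# pass the GPU devices through and join the video group, same idea as the compose file above
docker run -it \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  rocm/pytorch:latest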
Immutable isn’t really the best word for these distros. It’s why Fedora is changing the name to Atomic, as in changes made to the system are applied atomically, like commits in Git. This also means changes can be rolled back just as easily as they were made.
> The trick was to install VSCodium from the Toolbox
Another option you can try, which I use, is the Dev Containers extension, which allows you to move your workspace into different containers from within VS Code. I will say, however, that I have tried many times to get it working in VSCodium and have been unsuccessful; it only seems to work in VS Code proper.
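For reference, the extension just looks for a .devcontainer/devcontainer.json in your workspace; a minimal one looks roughly like this (the image and extension ID are just examples):
{
  "name": "example-dev-env",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  }
}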
Have you looked at GrapheneOS? It’s essentially a fork of the Android Open Source Project with extra privacy features. So regular Android apps still work for the most part, but you don’t have Google spyware built in.