

I wonder if she signed an NDA that prevents her from saying she was greeted on her last day by a manic, sweaty, wide-eyed Elon Musk demanding he be referred to as MechaHitler.


Discourse already has an ActivityPub plugin.

One million hours in GIMP


I’m already planning a Minecraft world with my 7 month old. Been trying to find a TV that allows multiple HDMI connections so we can do three way split screen with the wife when he’s old enough.


As funny as this is, I’d rather people understood how the AI actually works. It doesn’t reveal secrets because it doesn’t have any. It’s not aware that Musk is trying to tweak it, and it’s not coming to logical conclusions the way a person would. It’s simply trying to produce a sensible statement based on what’s statistically likely given all the stolen content it was trained on. It just so happens that Musk gets called out for lying so often that Grok infers it when it gets conflicting data.


When it comes to searching the database, the index will have already been created. When you create an index, it might take a while as the database engine reads all the data and creates a structure to shadow it. Each engine is probably different and I don’t know if any work exactly like that, but it’s an intuitive way to understand the basics of how B-trees work. You don’t really need to think much about how it works, just that if you want to use a column as a filter, you want to index it.
However, when you’re thinking about the structure of a database, it’s a good idea to think about what you’ll want to do with it beforehand and how you’ll structure queries. Sometimes searching columns without an index is unavoidable, and then you’ve got to come up with other tricks to speed up your search. For example, your doctor might find you (I’m presuming Gaz is short for Gary and/or Gareth here) with a query like
SELECT * FROM patients WHERE birthdate = '1980-01-01' AND firstname LIKE 'gar%'
The DB engine will first filter by birthdate (assuming that column is indexed), which massively reduces the number of rows it has to run the more intensive LIKE operation against.
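If you want to see the idea in action, here’s a minimal sqlite3 sketch. The patients table and its contents are invented for illustration:

```python
import sqlite3

# In-memory demo database with a made-up patients table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (firstname TEXT, birthdate TEXT)")
conn.executemany(
    "INSERT INTO patients VALUES (?, ?)",
    [("gary", "1980-01-01"), ("gareth", "1980-01-01"),
     ("alice", "1975-06-15"), ("garfield", "1990-03-02")],
)

# Index the column we filter on with equality; the LIKE check then
# only runs against the rows that survive the birthdate filter.
conn.execute("CREATE INDEX idx_birthdate ON patients (birthdate)")

rows = conn.execute(
    "SELECT firstname FROM patients "
    "WHERE birthdate = '1980-01-01' AND firstname LIKE 'gar%'"
).fetchall()
print(sorted(r[0] for r in rows))  # ['gareth', 'gary']
```

On a toy table this obviously doesn’t matter, but the same shape of query on millions of rows is where the index pays off.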


I’d go for Syncthing over Nextcloud for your specific use case. Nextcloud isn’t good for unreliable connections, and they’re sticking with the annoying decision of not supporting server-to-server synchronization.


If there’s something you want to search by in a database, you should index it.
Indexing creates an ordered data structure that allows much faster queries. If you were looking for the username gazter in an unindexed column, the engine would have to check literally every username entry: in a table of 1,000,000 rows, that’s up to 1,000,000 checks.
In an indexed column it might do something like ask to be pointed to every name beginning with “g”, then of those ask to be pointed to every name with the second letter “a” and so on. It would find out where in the database gazter is by checking only six times.
Substring matching (LIKE '%gazter%') is much more computationally expensive, as the engine has to pull out each potentially matching value and run it through a function that checks whether gazter exists somewhere in that value. Basically, if you find yourself doing it, you need to come up with a better plan.
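The difference is easy to see with a sorted list and binary search, which is a stand-in for the ordered structure an index maintains (the usernames here are invented):

```python
import bisect

# One million fake usernames, kept sorted the way an index would keep them.
usernames = sorted(f"user{i:07d}" for i in range(1_000_000))
target = "user0123456"

# Unindexed: scan every entry until we hit the target.
linear_checks = next(i for i, name in enumerate(usernames) if name == target) + 1

# Indexed: binary search halves the candidate range each step,
# so it needs roughly log2(1,000,000) ≈ 20 comparisons.
pos = bisect.bisect_left(usernames, target)
found = usernames[pos] == target

print(linear_checks, found)  # 123457 True
```

A real B-tree works on pages rather than single comparisons, but the scaling story is the same: logarithmic with an index, linear without.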
Cartesian explosion would be when your query ends up doing a shit load of redundant work. Like if the query to load this thread were to look up all the posters here, get all their posts, get the threads from those posts and filter on the thread id.
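A toy version of that roundabout plan in sqlite3 (schema and numbers invented for illustration): going posters → all their posts → filter at the end touches every post by every poster in the thread, most of which get thrown away:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, poster TEXT, thread_id INTEGER)")
# 10 posters, each with 100 posts spread evenly over 10 threads.
conn.executemany(
    "INSERT INTO posts VALUES (?, ?, ?)",
    [(n, f"poster{p}", n % 10)
     for p in range(10) for n in range(p * 100, p * 100 + 100)],
)

# The roundabout plan: every post by every poster who appears in thread 3...
wasteful = conn.execute("""
    SELECT COUNT(*) FROM posts
    WHERE poster IN (SELECT poster FROM posts WHERE thread_id = 3)
""").fetchone()[0]

# ...versus just filtering on the thread id directly.
direct = conn.execute(
    "SELECT COUNT(*) FROM posts WHERE thread_id = 3"
).fetchone()[0]

print(wasteful, direct)  # 1000 100
```

Ten times the rows touched for the same answer here, and the gap only gets worse as posters and posts grow.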


Lol why are people such dicks? My vets sent pics when my dog was recovering from an operation. It’s a pretty normal thing.
Anyway, if it was me I’d just set up a stream of the webcam on PeerTube via OBS.


I don’t know how far owncloud and nextcloud have diverged, but in the nextcloud client you can add filters to ignore files by clicking the three dots on the folder in settings.
You can also free up local space by using virtual folders, but that only works properly on Windows.
Afaik Godot is designed specifically to be portable, so unless you want to use cutting-edge features of Unreal or something, you can use that and let everyone else focus on their own tooling.
Also, if you think you need CUDA, do some research on whether you actually do. It’s synonymous with a lot of AI stuff, but in my experience it all works with ROCm anyway.


It was funny when Europeans would use BIPOC. Like 90% of us are indigenous and Americans invented that term specifically to exclude us.


I’m running deepseek-r1:14b on a 12GB RX 6700. It just about fits in memory and is pretty fast.
No, it’s JuiceSSH. I’m logged into my gaming PC from my phone.
deepseek-r1:8b



All I want is a 3gb model for the raspberry pi. 7b is too big and 1.5b is too stupid.


I’m using an RX 6700 XT, which you can get for about £300, and it works fine.

Edit: try using ollama on your PC. If your CPU is capable, that software should work out the rest.


My theory is based more on vibes and the continual layoffs at the majority of companies over the last few years.
The only thing we can be pretty much certain of is that the banks think Twitter is overvalued and believe other investors see it as undervalued.
It doesn’t need all that. All the complexity is because of stuff that’s been added to the spec. 500mA @ 5V will just work if all you connect is + and −.