

Yeah, I was thinking diesel powered trains
This article is comparing apples to oranges here. The DeepSeek R1 model is a 600-billion-parameter mixture-of-experts reasoning model, while the Meta model is a dense 70-billion-parameter model without reasoning, which performs much worse.
They should be comparing DeepSeek to reasoning models such as OpenAI’s o1. Their results are comparable, but o1 costs significantly more to run. It’s impossible to know how much energy it uses because it’s a closed-source model and OpenAI doesn’t publish that information, but they charge a lot for it on their API.
TL;DR: It’s a bad-faith comparison. Like comparing a train to a car and complaining about how much more diesel the train used on a 3-mile trip between stations.
I think at this point we should maybe adopt a #human hashtag to indicate that a given work of art is human generated.
This is not a service I personally use, but I’ve thought about it: services like mysudo let you select and create new phone numbers. https://anonyome.com/individuals/mysudo-plans/
In your situation I might research and select a service like this. Then create a few disposable numbers. Give one to your trusted friends and family, another to employers and banks, etc, and the third to anyone else you need to contact.
Once you’ve transitioned everything important to the new numbers, get yourself a new phone number, and don’t give it to anyone. Maybe just your parents, for emergencies.
This has 2 downsides and 2 big advantages I can see.
Cons:
1, it costs you monthly. I think 3 numbers from MySudo is like $5 a month.
2, it’s a pain to transition folks to your new number.
Pros:
1, if your stalker finds one of your new numbers, it’s easy to swap that one out without disrupting the rest.
2, you can narrow down who it might be. Like, if you have a number dedicated to work contacts and the stalker starts texting it, you know they’re either a coworker or got it from a coworker.
I think Google voice can also give you some free numbers, so look into that. Good luck!
I got ratted out by the thumbnail 😢
https://openai.com/index/how-openai-is-approaching-2024-worldwide-elections/
Here is a direct quote from openai:
“In addition to our efforts to direct people to reliable sources of information, we also worked to ensure ChatGPT did not express political preferences or recommend candidates even when asked explicitly.”
It’s not a conspiracy. It was explicitly their policy not to have the AI discuss these subjects in meaningful detail leading up to the election, even when the facts were not up for debate. Anyone using GPT during that period was unlikely to receive meaningful information on anything Trump-related, such as the legitimacy of Biden’s election. I know because I tried.
This is ostensibly there to protect voters from fake news. I’m sure it does in some cases, but I’m sure China would say the same thing.
I’m not pro China, I’m suggesting that every country engages in these shenanigans.
Edit: it is obvious that a $100 billion company like OpenAI, with its multitude of partnerships with news companies, could have made GPT communicate accurate and genuinely critical news regarding Trump, but that would be bad for business.
Perhaps now it is, but leading up to the election, I found GPT would outright refuse to discuss Trump in voice mode. Meta AI too. It was very frustrating. It would start and then respond with something like, “I’m not able to talk about that, yet.”
https://www.wired.com/story/google-and-microsofts-chatbots-refuse-election-questions/
There are plenty of examples of AI either refusing to discuss subjects related to the elections (I remember Meta AI basically just saying, “I’m learning how to respond to these questions”) or, as in the above case, just hand-waving away clear issues of wrongdoing.
ChatGPT’s advanced voice mode would constantly activate its guardrails when asked about Trump or “politically charged” topics.
Incidentally, no Western AI would make a statement on Donald Trump’s crimes leading up to the election. AI propaganda is a serious issue. In China the government enforces it; in America, billionaires do.
I frequently forget that chrome is installed on my phone. The only time I’m forced to use it is about once a year when I order Papa John’s Pizza takeout. Their checkout page doesn’t seem to work in any other browser.
Something which clarified Zuck’s behavior in my mind was an interview where he said something along the lines of, “I could sell meta for x amount of dollars, but then I’d just start another company anyways, so I might as well not.”
The guy isn’t doing what makes financial sense. He’s uber-rich and working on whatever projects he thinks are cool. I wish Zuck would stop sucking in all his other ways, but he just doesn’t care whether his ideas are going to succeed or not.
All is well, shutters worked great
We have some snorkels!
I actually don’t think this is shocking or something that needs to be “investigated.” Other than the sketchy website that doesn’t secure users’ data, that is.
Actual child abuse and grooming happen on social media, chat services, and in local churches, not in a one-on-one between a user and an LLM.
East Tampa here. Luckily not in an evacuation zone. I hope you and your family are well and making preparations. Meet you again here on Thursday!
The storm will have passed, and the survivors will log into Lemmy.
I’m right in the track of the eye. Metal shutters in place! Hope I see y’all Thursday!
Yes, sorry, where I live it’s pretty normal for cars to be diesel powered. What I meant by my comparison was that a train, when measured uncritically, uses more energy to run than a car due to its size and behavior, but that when compared fairly, the train has obvious gains and tradeoffs.
DeepSeek as a 600B model is more efficient than the 400B Llama model (a fairer size comparison) because it’s a mixture-of-experts model with fewer active parameters, and when run in the R1 reasoning configuration, it is probably still more efficient than a dense model of comparable intelligence.
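To make the “fewer active parameters” point concrete, here’s a back-of-envelope sketch. Per-token inference compute scales with the parameters actually activated for that token, not the total count. The 37B active figure below is an illustrative assumption (the comment above only gives the ~600B total), not a published measurement:

```python
# Rough per-token compute comparison between a mixture-of-experts (MoE)
# model and a dense model. All figures are illustrative assumptions.

moe_total_params = 600e9    # assumed total parameters of the MoE model
moe_active_params = 37e9    # assumed parameters activated per token (illustrative)
dense_params = 400e9        # dense model: every parameter is active per token

# Inference FLOPs per token scale roughly as 2 * (active parameters).
moe_flops_per_token = 2 * moe_active_params
dense_flops_per_token = 2 * dense_params

ratio = dense_flops_per_token / moe_flops_per_token
print(f"Dense 400B model does ~{ratio:.1f}x more compute per token "
      f"than the MoE model with {moe_active_params / 1e9:.0f}B active params")
```

Under these assumed numbers, the dense model burns roughly an order of magnitude more compute per generated token, which is the core of the efficiency argument, even before reasoning-mode token counts enter the picture.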