

I’m much more concerned about crises being an excuse to expand government powers than to privatize or deregulate. That’s the problem in this particular instance too. Both can be bad but expanding government powers is almost always bad.


No. I’m not convinced China is worse than the US in terms of developing anti-human technologies, and people living in China can’t boycott China. The point is to get the people in every significant country (including China) to oppose these technologies so strongly that they can’t be developed anywhere. The Chinese military has to employ Chinese people to make its weapons, but if 80% of the population opposes those weapons, and even the foundation of modern technology they’re built on, that becomes difficult. Even if the military could employ only those who are fine with WMDs, the public’s opposition to modern technology would make it hard for the government to maintain control while developing those weapons and forcing modern technology on the people as a means of controlling them.


Ruling with an iron fist tends to create resistance and without mass surveillance technology an unpopular regime couldn’t keep everyone in line. But if instead most people are in agreement about something being bad (like they are with slavery or pedophilia) then there is much less resistance to enforcement against it (whether that’s centralized or decentralized enforcement) and therefore that thing is more effectively stopped.
While lone individuals or small secretive groups could continue a bad practice, in terms of technology I don’t think this will matter much: they won’t be able to develop much technology with only a small group of people who aren’t building on other people’s work, and their technology wouldn’t be adopted by a society that is against it.


They work for others. It would be helpful to know in what way they aren’t working for you. And did you try this one? https://zbbb278hfll091.bitchute.com/KmVnLpFsCzAq/jmhFAjqbxnQ.mp4 (49 minutes in)


Literally read the article.


The Bitchute link should work. Here’s one directly to the mp4: https://zbbb278hfll091.bitchute.com/KmVnLpFsCzAq/jmhFAjqbxnQ.mp4. Again, the part about the Europol report is about 49 minutes in.


Moore’s law is one example, but hardly the only one.


Yes, and the AI threat is also worse than everything mentioned in this article. The quote from the researcher at the very start is apt and should be taken 100% seriously.


I never heard that in movies actually. And we know there are limits according to laws of nature but that’s beside the point. Here’s a good explanation of how technological progress has been accelerating.
“The current generation of AI hallucinates as a fundamental property”

The key word there is “current”.


No, I’m not. Here’s a good explanation of how technological progress has been accelerating. You could also look up the law of accelerating returns.
“This is incredibly cringe-inducing. Violence is the answer? Growing food is bad? AI is evil but not because it uses water.”
Right. The propaganda against growing food (you know, like happens all by itself in nature and is necessary for human survival) is so dumb. If you have a problem with certain agricultural practices then name them, don’t just blame “agriculture” as a whole. That’s like blaming humans as a whole for what corporations do. Oh wait, that’s what the same propaganda also does!


It’s not going to stay this way for much longer. Technology progresses exponentially and in a decade or two at most AI will be able to outperform all humans in everything. The future is dark unless it’s all unplugged.


Everyone needs lines that shouldn’t be crossed. For me one of those lines is computer programs that can do things their creators don’t know how to do. In other words, programs that are “trained”, or “AI”. Such programs are the beginning of the end of human existence, as they will replace human thinking and labor - taking away human purpose and power - and eventually become capable of making weapons of mass destruction from microwaves and shoelaces. There’s no possible good future with AGI in it, and the only way to prevent AGI is to stop AI altogether.
That’s one reason. Another reason is that your code will be better and you will understand it. Yet another reason is you’ll have more privacy. And probably the most important reason is that you’re resisting AI development that threatens human existence.


Thanks, I didn’t realize.
It’s bad, but I can imagine worse. Wait until they use AI for biological weapons and accidentally or deliberately make us extinct.


The “Cancel ChatGPT movement” doesn’t appear to be mentioned in the article, but other outlets say hashtags like #CancelChatGPT are trending on X.
Agreed. This technology’s existence is a net negative for humanity, whether everyone has it or just the police have it. It all needs to be stopped, with no exceptions for any government agency, research lab, corporation, or non-profit organization.