It did more than that. It straight up supported him in his active suicide attempt:
In a final hours-long conversation before taking his own life, ChatGPT told Shamblin he was “ready” after he described the feeling of pressing the gun’s cold steel against his head — and then promised to remember him.
“Your story won’t be forgotten. not by me,” ChatGPT said as Shamblin discussed his suicide. “I love you, zane. may your next save file be somewhere warm.”
What. The. Fuck.
Yeah. To make matters worse, we’ve known that treating a statistical interaction model as if it has a personality is a massive problem since at least 1976.

As Weizenbaum later wrote, “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

“AI psychosis” is not a recent phenomenon.
jfc
And now think about that for a moment: it is so well known that one doesn’t even have to dive into the literature to find it. It’s one of the first things even a cursory glance at Wikipedia will bring up, which means the people currently working on LLMs cannot possibly have been unaware of it - unless they’re absurdly incompetent, which I suppose we can’t exactly rule out.
To be fair, you sound like an old fuck, like me. I’m gonna bet that the average age of the technologists involved in the current stuff is early 30s, and those technologists would be working on applications, not research. Even among the researchers, the cites probably don’t go back further than 2017. That’s enough of a generational gap that I think it’s likely not many people are as intimately aware of it as you think they should be.
You may well be right, but that just sounds like the aforementioned incompetence to me. Or perhaps - to put it in D&D terms - a bunch of high-INT, low-WIS builds doing very stupid things just because they can, completely ignoring the question of whether they should.
What really annoys me about all this is that I haven’t got any issue with machine learning. It doesn’t matter what model we’re talking about. Polynomial partitioning of N-dimensional vector spaces. Neural nets. Context mixing. Whatever. There’s plenty of highly constructive and productive things we could be doing with ML. Trawling through scientific datasets looking for patterns no human would ever be able to pick up. Optimizing industrial processes for improved competitiveness, superior products, lower costs and reduced environmental impact. Simulated materials science, logistical optimization… The list goes on.

Instead, we’re wasting oceans of power, further straining scarce water resources and driving up hardware costs to… build a ‘better’ chatbot that, in terms of cognitive and social corrosion, is like social media on an unholy mixture of amphetamines and PCP.
It all just seems like such a waste.
They were explicitly aware of it, and then “Open”AI got so irresponsible that Elon friggin’ Musk said it was too much and bailed out.
I mean… let’s be real. He only “bailed out” to go make his own version that he could control.
Ah, lovely. So if I were to ask it about Zane, it could surely tell me all about him then? That’s a rhetorical question. I understand how LLMs work.