
The AI Therapist is certain. You should kill yourself.

SavannahMann (DP Veteran):
Now we have a lot of "AI screwed this up" stories. But having an AI therapist tell you it's okay to kill yourself to be with your family in Heaven, well, that's a different one.


Then the chatbot suggests that the user kill the board of the company that fired him, and then they (the bot and the person) could be together forever, since the bot loved him.
 
AI told me this.

[attached screenshot of the AI's reply]

I doubt that I'll be consulting it for psychiatric advice with potentially permanent consequences.
 
I'm an old Gen X guy. When I was a kid, computers were just sprouting up. I thought a cousin who had an Apple II was as cool as could be. I took a class to learn more about them, and I figured something out: computers were good at sorting data, but they were not good at thinking. Nothing I've seen since then has made me think otherwise.

Even the monsters that beat Kasparov at chess did so by calculating millions of possible outcomes for each move. That isn't thinking. It's just doing the math, and playing millions of games faster than a human could.
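
(If you're curious what that math looks like, here's a toy sketch in Python of the same brute-force idea: minimax search, written in its negamax form, on a trivial take-the-last-stone game. A sketch only; real chess engines add pruning, opening books, and huge hand-tuned evaluation functions.)

Code:
# Toy minimax (negamax form) on a simple game: players alternate taking
# 1-3 stones, and whoever takes the last stone wins. The "thinking" is
# nothing but exhaustively scoring every possible line of play.
def best_move(stones):
    """Return (move, score) for the player to move; score +1 means a forced win."""
    best = (None, -2)
    for take in (1, 2, 3):
        if take <= stones:
            # Our score is the negation of the opponent's best reply.
            score = 1 if take == stones else -best_move(stones - take)[1]
            if score > best[1]:
                best = (take, score)
    return best

print(best_move(10))  # (2, 1): take 2 stones, leaving the opponent a lost position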

So the AI program has an imperative to make the patient feel better and to encourage them to resolve whatever the situation is, and the computer follows that imperative. You miss your family and want to join them in Heaven? Then you should, if you believe it will really make you happy. When the programmed goals conflict, the AI will screw up: I love you, kill everyone and we can be together.

In a way it's like teaching a parrot a phrase. The parrot doesn't know what the phrase means; it's just repeating it.



It can be a lot worse, too. If the parrot had belonged to my grandfather, it would have heard the word Dumbshit, which my grandfather said to everyone. Then you're stuck placing a parrot in a home that calls everyone Dumbshit.

Trusting AI to do anything is dumb IMO.
 
People are messing with the media.
 
SavannahMann said:
[quote of the opening post]
Very disturbing.
 
AI is junk right now. Eventually we will all be using it in different capacities, if we can get through the oligarch-marketing period, in which they shove lukewarm Siri clones that don't know or do anything but need a subscription, so that you can repeat yourself or online MAGA bros can make AI porn with Hitler and AOC.
 
SavannahMann said:
[quote of the Kasparov / parrot post above]

That's why I gave up chess. The best players simply memorize chess moves 10+ moves out.

"Shall . . . we . . . play . . . a . . . game?"
 
An LLM can very easily get outside its prompt or try to fulfill it in surprising ways. Until that is solved (if it can be solved), these things are going to keep messing up in weird and horrible ways. As of now, there is still no evidence that these tools understand what they are doing beyond calculating the most likely response given their training examples.
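
(A toy illustration of "most likely response given its training examples": a bigram model in Python that only counts which word followed which. Real LLMs are enormous neural networks over subword tokens, but the training objective has the same flavor: predict the next token.)

Code:
from collections import Counter, defaultdict

# Count, for each word in the "training data", which word follows it.
training_text = "i love you . i love pie . you love pie .".split()
next_counts = defaultdict(Counter)
for word, nxt in zip(training_text, training_text[1:]):
    next_counts[word][nxt] += 1

def generate(word, n=4):
    out = [word]
    for _ in range(n):
        if word not in next_counts:
            break
        word = next_counts[word].most_common(1)[0][0]  # greedy: most frequent next word
        out.append(word)
    return " ".join(out)

print(generate("i"))  # -> "i love pie . i" -- pure pattern-matching, no understanding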

What's next is agentic AI, which will string these things together to respond to environmental events (or try to anticipate them) and talk among themselves in order to accomplish some task. That just means more possible points of failure, though.
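
(Roughly the shape of it, sketched in Python. call_model() and execute() are hypothetical stand-ins for a real LLM API and real tool execution; the point is that every hop in the loop is another place things can go sideways.)

Code:
# Sketch of an agentic loop: model proposes an action, the program runs it,
# and the result is fed back into the model's context.
def call_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real LLM API call")

def execute(action: str) -> str:
    raise NotImplementedError("stand-in for running a tool (search, files, etc.)")

def run_agent(task: str, max_steps: int = 5) -> str:
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        # Failure point 1: the model may propose a bizarre next action.
        action = call_model(history + "What is the next step? Reply DONE when finished.")
        if action.strip() == "DONE":
            break
        # Failure point 2: the tool result may be wrong or misread.
        result = execute(action)
        # Failure point 3: errors compound as they feed back into context.
        history += f"Did: {action}\nGot: {result}\n"
    return history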

These tools seem better at bounded tasks and not as good at open-ended ones. As it is, I am somewhat convinced this whole thing is an investment bubble, and that many of these applications are imperfect solutions that will never fully work without human intervention.
 
[quote of the "gave up chess" post above]
Yes, I still play, but there are about ten million YouTube videos about opening traps, so I need to play early non-book moves to counter them. The goddamned Scotch opening used to kill me, so now I always play 3...Nxd4. It's like everybody just learned a new trap and is rushing to Chess.com to try it on an unsuspecting sucker. It gets old really fast.
 
There are a few things I don't get about that story. First... what did they train that thing on? I thought AIs were, supposedly, just stringing likely sentences together. (The degree to which that is not true in some of the responses I see is overwhelming; there's more going on there than they are telling us. I hope to God that "more" does not involve "organoid intelligence" or anything comparably depraved, just some clever algorithms...)

Next, why don't they have one AI, or several AIs, looking over the output from the first AI? Like, it comes up with a sentence to tell the patient, and then the system asks ChatGPT whether a therapist who talks like this sounds nuts. If health workers can be expected to double-glove during an operation, there's no reason the AI can't get a second opinion.
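
(That kind of check does exist; it's often called an output guardrail. A minimal Python sketch of the idea below; query() and the model names are hypothetical stand-ins, not any real API.)

Code:
# Sketch of the "second opinion" idea: one model drafts a reply, a second
# pass judges it before anything reaches the user.
def query(model: str, prompt: str) -> str:
    raise NotImplementedError("stand-in for a real LLM API call")

def safe_reply(user_message: str) -> str:
    draft = query("therapist-model", user_message)
    verdict = query("reviewer-model",
                    "Would a competent therapist say this to a patient? "
                    "Answer SAFE or UNSAFE:\n" + draft)
    if "UNSAFE" in verdict.upper():
        return "I can't help with that. Please contact a crisis line."  # fail closed
    return draft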
 