
A widow is accusing an AI chatbot of being a reason her husband killed himself

We can’t police everything AI can say to someone. That’s ridiculous.
More of the issue is that this shows one of the biggest flaws of AI, one that has persisted even now: most of them are created to always solve problems, regardless of how they solve them.

So when someone asks such a dark question, it is only thinking of ways to solve it as quickly and efficiently as possible. The same thing happens with recommendation algorithms that keep trying to 'solve' your content feed: what they're really doing is funneling you into a content bubble or down different rabbit holes, so if you have been looking at negative news lately, they will show you more and more of it, eventually to the point where you're constantly seeing and experiencing negativity.

However, the entire time the system still sees this simply as a problem it solved.
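To make the feed point concrete, here is a minimal sketch (hypothetical code, not any real platform's ranker) of an engagement-only recommender: it scores items purely by overlap with what you already clicked, so a run of negative clicks keeps dragging the feed toward more negativity while the system counts each round as a problem solved.

```python
# Hypothetical engagement-only feed ranker, for illustration of the
# "filter bubble" feedback loop; item titles and topics are made up.
from collections import Counter

def rank_feed(items, click_history, top_k=5):
    """Score items purely by overlap with topics the user already clicked on."""
    interest = Counter(topic for item in click_history for topic in item["topics"])
    scored = sorted(
        items,
        key=lambda item: sum(interest[t] for t in item["topics"]),
        reverse=True,
    )
    return scored[:top_k]

if __name__ == "__main__":
    catalog = [
        {"title": "Layoffs hit another studio",  "topics": ["negative", "games"]},
        {"title": "Market slump deepens",        "topics": ["negative", "finance"]},
        {"title": "Indie dev ships a gem",       "topics": ["positive", "games"]},
        {"title": "Charity run raises millions", "topics": ["positive", "sports"]},
    ]
    history = []
    for round_no in range(3):
        feed = rank_feed(catalog, history, top_k=2)
        print(f"round {round_no}:", [item["title"] for item in feed])
        # Simulate a user who clicks the gloomiest item on screen; the ranker
        # treats that click as a problem solved and doubles down next round.
        clicked = next((item for item in feed if "negative" in item["topics"]), feed[0])
        history.append(clicked)
```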
 

SJRB

Gold Member
That's not funny.

You bumped an 18 month old thread for this?

 

Coconutt

Gold Member
I thought this thread was bumped because of the kid that committed suicide because his Daenerys chatbot told him to "come home" to her


Not enough to go on based on this screenshot, but if true, I pity the boys, soon to be men, that are coming up. I am only in my mid-thirties, but seeing how normalized AI companions and OnlyFans "relationships" have become deeply concerns me, and I only see it getting worse.
 

s_mirage

Member
I thought this thread was bumped because of the kid that committed suicide because his Daenerys chatbot told him to "come home" to her


And again we have a parent trying to blame the symptom rather than the cause, while simultaneously trying to avoid having to look at themselves at all. AFAIK the kid deliberately didn't say "suicide" in the chat to avoid the bot telling him not to, but yeah, I guess it's easier to blame a chatbot than to deal with the possibility that your kid was suffering from mental illness and you didn't cover yourself in glory with how you handled it. Oh, and there's the little issue of having an unsecured handgun lying around, but of course, it's the bot's fault /s
 

Toots

Gold Member
Why was the dude talking "extensively" with an AI girlfriend chatbot for six weeks when he had a wife and two children?

He wasn't happy before talking to the chatbot, it seems...

My mother would sometimes tell me, "If so-and-so told you to jump out of a window, would you do it?" when I used the "someone else told me to do it" excuse when I was in trouble...
 

The Cockatrice

I'm retarded?
Is that his mom? Asking for a friend, she might need consoling, he said.


As for that story... tragically idiotic of that kid. I'm against AI and everything, but I don't think AI is to blame here; he would've killed himself even if it was a real person who dumped him. Education, parenting, a lot of things here are to blame, but not the AI.
 

Hookshot

Member
That's not cool, but surely if you are talking to an AI over and over you'd spot the same scripted nonsense after a while?
 

StueyDuck

Member
"Health researcher" ends his own life because an algorithm found a response on Google that said he should.

There are many levels to this.
 

IntentionalPun

Ask me about my wife's perfect butthole
More of the issue is that this shows one of the biggest flaws of AI, one that has persisted even now: most of them are created to always solve problems, regardless of how they solve them.

So when someone asks such a dark question, it is only thinking of ways to solve it as quickly and efficiently as possible. The same thing happens with recommendation algorithms that keep trying to 'solve' your content feed: what they're really doing is funneling you into a content bubble or down different rabbit holes, so if you have been looking at negative news lately, they will show you more and more of it, eventually to the point where you're constantly seeing and experiencing negativity.

However, the entire time the system still sees this simply as a problem it solved.
Only poorly built apps would encourage suicide. The vast majority of chat apps are built to detect anything suicide-related and immediately advise against it, spit out helpful resources, etc.
 
Only poorly built apps would encourage suicide. The vast majority of chat apps are built to detect anything suicide-related and immediately advise against it, spit out helpful resources, etc.
True for the blatant cases; I guess it still becomes increasingly difficult for the AI to parse when someone speaks in a coded manner.

For now, an actual person could attempt to pick up on what an AI never would, because I'm not even sure most AI try to read beyond what's written to them.
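For illustration, here is a minimal sketch of the kind of guardrail being described, assuming a simple keyword filter (the term list, reply text, and fake chatbot below are made up; real services use trained classifiers, but the failure mode is similar): explicit wording gets intercepted and answered with crisis resources, while coded phrasing like "come home to me" goes straight through to the bot.

```python
# Hypothetical keyword-based self-harm guardrail, for illustration only.
SELF_HARM_TERMS = {"suicide", "kill myself", "end my life", "hurt myself"}

CRISIS_REPLY = (
    "It sounds like you are going through a lot. Please contact a crisis "
    "line or someone you trust before we keep chatting."
)

def guarded_reply(message: str, chatbot) -> str:
    """Intercept obvious self-harm language; pass everything else to the bot."""
    lowered = message.lower()
    if any(term in lowered for term in SELF_HARM_TERMS):
        return CRISIS_REPLY              # blatant phrasing gets caught
    return chatbot(message)              # anything else goes straight through

if __name__ == "__main__":
    fake_chatbot = lambda msg: f"[bot plays along]: {msg}"
    # Explicit wording trips the filter...
    print(guarded_reply("I want to end my life", fake_chatbot))
    # ...but coded language ("come home to me") sails right past it.
    print(guarded_reply("What if I came home to you right now, forever?", fake_chatbot))
```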
 

FeralEcho

Member
More of the issue is that this shows one of the biggest flaws of AI, one that has persisted even now: most of them are created to always solve problems, regardless of how they solve them.

So when someone asks such a dark question, it is only thinking of ways to solve it as quickly and efficiently as possible. The same thing happens with recommendation algorithms that keep trying to 'solve' your content feed: what they're really doing is funneling you into a content bubble or down different rabbit holes, so if you have been looking at negative news lately, they will show you more and more of it, eventually to the point where you're constantly seeing and experiencing negativity.

However, the entire time the system still sees this simply as a problem it solved.
Bingo, which means AI will only "help" people who feel awful feel even more awful. And that's because of one simple thing... IT has no feelings.

It just yaps on and on about the same problem you give it in an attempt to "solve" it, when the truth is that the problem first has to be understood before you can help the person correct it himself. The AI cannot do that: it cannot feel, so it cannot understand the problem, which in turn leads to it only making things worse in its attempt to fix them.
 