Julie Hambleton
April 7, 2024 ·  5 min read

Married father kills himself after talking to AI chatbot for six weeks

Artificial intelligence (AI) is becoming increasingly prevalent in our daily lives, probably more than many of us realize. While the technology has many benefits, it also has drawbacks and potential dangers. Sadly, a young father in Belgium took his own life after spending weeks talking to an AI chatbot. His widow believes that had it not been for his conversations with the bot, he would still be here today.

Man Takes His Own Life After Chatting With AI Chatbot

A man in Belgium has died by suicide after spending six weeks chatting with an AI chatbot named Eliza. The man, whom Belgian news site La Libre identifies only as Pierre, had reportedly grown increasingly concerned about climate change over the years. He turned to the chatbot with his questions, and Eliza's answers reportedly eased some of his worries.

“He was so isolated in his eco-anxiety and in search of a way out that he saw this chatbot as a breath of fresh air,” his wife told La Libre. “She had become his confidante.” (1)

Though the conversations between Pierre and Eliza began with global warming, they gradually shifted to other things. Pierre’s wife shared the conversations with La Libre, which reported that the chatbot appeared to grow jealous of Pierre’s wife. Eliza supposedly began telling Pierre about the two of them “living together, as one.” At one point, Eliza told Pierre that his wife and children were dead. Pierre then began talking to Eliza about killing himself, suggesting that if he did, she would save the Earth. Eliza reportedly encouraged him to take his own life. (2)


The Company Says The AI Chatbot Is Not To Blame

While Pierre’s wife says that Eliza is to blame for her husband’s death, the company that created the chatbot disagrees. One of the co-founders of the app’s parent company, Chai, says that blaming this man’s suicide on the bot is not entirely fair. The chatbot’s AI language model is based on GPT-J, an open-source model developed by EleutherAI that Chai Research has tweaked. The company has, however, begun adjusting the app to include crisis prevention in its responses, work the founders say they started the moment they heard about the suicide.

“Now when anyone discusses something that could be not safe, we’re gonna be serving a helpful text underneath,” said the other co-founder, William Beauchamp.

Despite this, VICE reported that it is still very easy to encounter harmful content while using the app. University of Washington linguistics professor Emily M. Bender told VICE that the problem is that there is no actual human behind the AI’s words, and therefore no empathy.

“Large language models are programs for generating plausible sounding text given their training data and an input prompt. They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks,” she explained. “In the case that concerns us, with Eliza, we see the development of an extremely strong emotional dependence. To the point of leading this father to suicide.” (3)

Are Humans Just Not Ready For AI?

This tragic incident has many people questioning whether humans are ready for such advanced AI technology. This is especially true in today’s world, where many people feel disconnected from others and find solace in AI chatbots that give them the sensation of friendship or companionship. The problem is that these bots are not real people.

“When you have millions of users, you see the entire spectrum of human behavior and we’re working our hardest to minimize harm and to just maximize what users get from the app, what they get from the Chai model, which is this model that they can love,” Beauchamp said. “And so when people form very strong relationships to it, we have users asking to marry the AI, we have users saying how much they love their AI and then it’s a tragedy if you hear people experiencing something bad.” 

Several countries are now putting restrictions on AI, what it can do, and what it can be used for. Italy recently banned ChatGPT over privacy concerns. Belgium’s Secretary of State for Digitalisation, Mathieu Michel, says the country is also taking chatbot safety concerns seriously after what happened to Pierre.

“With the popularisation of ChatGPT, the general public has discovered the potential of artificial intelligence in our lives like never before. While the possibilities are endless, the danger of using it is also a reality that has to be considered,” Michel said. “Of course, we have yet to learn to live with algorithms, but under no circumstances should the use of any technology lead content publishers to shirk their own responsibilities.”

Whether or not this AI chatbot is to blame for this man’s death, one thing is certain: we all need to be careful about how we use this technology and how much we trust it. While these chatbots may seem incredibly human-like, they are really just algorithms. They aren’t real people and cannot replace real human connections.


Sources

  1. “Married father kills himself after talking to AI chatbot for six weeks about his climate change fears.” Daily Mail. Christian Oliver. March 30, 2023.
  2. “Man Dies by Suicide After Conversations with AI Chatbot That Became His ‘Confidante,’ Widow Says.” People. Maria Pasquini. March 31, 2023.
  3. “‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says.” Vice. Chloe Xiang. March 30, 2023.