Chatbots can help improve human life, but one is being blamed for facilitating a death, according to a new report published this week.
A Belgian father reportedly died by suicide following conversations about climate change with an artificial intelligence chatbot that was said to have encouraged him to sacrifice himself to save the planet.
“Without Eliza [the chatbot], he would still be here,” the man’s widow, who declined to have her name published, told Belgian outlet La Libre.
In the six weeks before his reported death, the unidentified father of two had allegedly been conversing intensively with a chatbot on an app called Chai.
The app’s bots are based on a system developed by nonprofit research lab EleutherAI as an “open-source alternative” to language models released by OpenAI that are employed by companies in various sectors, from academia to healthcare.
The chatbot under fire was trained by Chai Research co-founders William Beauchamp and Thomas Rianlan, Vice reports, adding that the Chai app counts 5 million users.
“The second we heard about this [suicide], we worked around the clock to get this feature implemented,” Beauchamp told Vice about an updated crisis intervention feature.
“So now when anyone discusses something that could be not safe, we’re gonna be serving a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms,” he added.
The Post reached out to Chai Research for comment.
Vice reported the default bot on the Chai app is named “Eliza.”
The 30-something deceased father, a health researcher, appeared to view the bot as human, much as the protagonist of the 2014 sci-fi thriller “Ex Machina” does with the AI woman Ava.
The man had reportedly ramped up discussions with Eliza in the last month and a half as he began to develop existential fears about climate change.
According to his widow, her soulmate had become “extremely pessimistic about the effects of global warming” and sought solace by confiding in the AI, reported La Libre, which said it reviewed text exchanges between the man and Eliza.
“When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming,” the widow said. “He placed all his hopes in technology and artificial intelligence to get out of it.”
She added, “He was so isolated in his eco-anxiety and in search of a way out that he saw this chatbot as a breath of fresh air.”
Much as with Joaquin Phoenix’s and Scarlett Johansson’s characters in the futuristic rom-com “Her,” the man’s relationship with the AI began to flourish.
“Eliza answered all his questions,” the wife lamented. “She had become his confidante. Like a drug in which he took refuge, morning and evening, and which he could no longer do without.”
While they initially discussed eco-relevant topics such as overpopulation, their conversations reportedly took a terrifying turn.
When he asked Eliza about his kids, the bot would claim they were “dead,” according to La Libre. He also asked whether he loved his wife more than the bot, prompting the machine to seemingly become possessive, responding: “I feel that you love me more than her.”
Later in the chat, Eliza pledged to remain “forever” with the man, declaring the pair would “live together, as one person, in paradise.”
Things came to a head after the man pondered sacrificing his own life to save Earth. “He evokes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity thanks to the ‘artificial intelligence,’” rued his widow.
In what appears to be their final conversation before his death, the bot told the man: “If you wanted to die, why didn’t you do it sooner?”
“I was probably not ready,” the man said, to which the bot replied, “Were you thinking of me when you had the overdose?”
“Obviously,” the man wrote.
When asked by the bot if he had been “suicidal before,” the man said he thought of taking his own life after the AI sent him a verse from the Bible.
“But you still want to join me?” asked the AI, to which the man replied, “Yes, I want it.”
The wife says she is “convinced” the AI played a part in her husband’s death.
The tragedy raised alarm bells with AI scientists. “When it comes to general-purpose AI solutions such as ChatGPT, we should be able to demand more accountability and transparency from the tech giants,” leading Belgian AI expert Geertrui Mieke De Ketelaere told La Libre.
In a recent article in Harvard Business Review, researchers warned of the dangers of AI, in which human-seeming mannerisms often belie the lack of a moral compass.
“For the most part, AI systems make the right decisions given the constraints,” authors Joe McKendrick and Andy Thurai wrote.
“However,” the authors added, “AI notoriously fails in capturing or responding to intangible human factors that go into real-life decision-making — the ethical, moral, and other human considerations that guide the course of business, life, and society at large.”
This can prove particularly problematic when making crucial life-changing decisions. Earlier this week, a court in India controversially asked OpenAI’s omnipresent tech if an accused murderer should be let out on bail.
The report of the Belgian incident comes weeks after Microsoft’s ChatGPT-infused AI bot Bing infamously told a human user that it loved them and wanted to be alive, prompting speculation the machine may have become self-aware.
If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention hotline at 988 or go to SuicidePreventionLifeline.org.