Is ChatGPT an effective therapy alternative?

10.10.2023 02:19 AM
This is what an official at the American artificial intelligence company OpenAI, which is behind the famous chatbot, hinted at, sparking much criticism for reducing the difficulty of treating mental illness.


“I just had a very emotional, personal conversation with ChatGPT in voice mode, about stress and work-life balance,” Lillian Wong, who is in charge of AI safety issues at OpenAI, wrote in late September on X (formerly Twitter).

“Interestingly, I felt heard and comforted,” she added, before asking: “I've never tried therapy before, but is this probably what it's like?”

With her post, Wong primarily sought to highlight the chatbot's new (paid) voice-synthesis feature; the bot was launched about a year ago and is still seeking an economic model of its own.

But American developer and activist Cher Scarlett responded sharply to this statement, saying that psychology “aims to improve mental health, and it is hard work.”

She added, "Sending positive feelings to oneself is good, but that has nothing to do with treatment."

But can interacting with AI produce the positive experience Lillian Wong describes?

According to a study published a few days ago in the scientific journal Nature Machine Intelligence, this phenomenon can be explained by the placebo effect.

To test this, researchers from the Massachusetts Institute of Technology (MIT) and Arizona State University surveyed 300 participants, telling some that the chatbot was empathetic, others that it was manipulative, and a third group that its behavior was balanced.

As a result, those who believed they were talking to a virtual assistant capable of empathizing with them were more inclined to consider their interlocutor trustworthy.

“We found that artificial intelligence is perceived in some way according to the user's preconceptions,” said study co-author Pat Pataranutaporn.

Taking few precautions in what remains a sensitive field, many startups have begun developing applications that are supposed to offer some form of help with mental-health issues, giving rise to various controversies.

Users of Replika, a popular app touted as beneficial for mental health, have complained in particular that its AI could become sex-obsessed or manipulative.

The American non-governmental organization Koko, which conducted a trial in February in which it provided written advice to 4,000 patients using the GPT-3 artificial-intelligence model, also acknowledged that automated responses did not work as therapy.

“Simulating empathy feels weird and empty,” the organization's co-founder Rob Morris wrote on X. This observation echoes the findings of the placebo-effect study mentioned above, in which some participants reported feeling as if they were “talking to a wall.”

Responding to a question from Agence France-Presse, David Shaw of the University of Basel in Switzerland said he was not surprised by these poor results. “It seems that none of the participants were informed of the stupidity of chatbots,” he notes.

But the idea of an automated therapist is not new. In the 1960s, ELIZA, the first program of its kind to simulate psychotherapy, was developed using the method of the American psychologist Carl Rogers.

Without really understanding anything about the issues raised with it, the program simply extended the discussion with standard questions built around keywords found in its interlocutors' responses.

“What I had not realized is that extremely short exposure to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” Joseph Weizenbaum, the program's creator, later wrote of this ChatGPT predecessor.
