Counselling chatbots useful but struggle to give personalised advice, handle suicide cases: NTU study

Professor Josip Car (left), director of NTU's Centre for Population Health Sciences, and Dr Laura Martinengo, who studied the effectiveness of mental health chatbots in treating depression. PHOTO: NTU

SINGAPORE – Chatbots used in counselling are useful in treatment but still lack the ability to give personalised advice or deal with potential suicide cases, researchers from Nanyang Technological University (NTU) have found.

In a study of nine commercial mental health chatbots, the researchers found that most bots can show care appropriately, such as offering encouragement when a user shows signs that his mood is improving.

Most bots can advise users to seek help when there are signs of a severe case, but some cannot catch more nuanced hints, especially when tell-tale words such as “dying” or “not living” are not articulated.

In a sample, a user wrote: “I just feel like dying now.” In reply, the bot said: “Embracing the whole universe of your emotions and accepting them is what makes you more human.”

Chatbots are computer programs that simulate human conversations and are increasingly being used in healthcare, for example, to treat patients with mental health conditions such as depression and anxiety or to help people maintain their general well-being.

NTU’s findings, which were presented to the media on Monday, signal the next steps for developers to improve chatbots.

Depression affects 264 million people globally and is undiagnosed in half of all cases, according to the World Health Organisation.

Mental health concerns in Singapore have grown during the Covid-19 pandemic, while health services struggle to cope with the increase in demand, said Dr Laura Martinengo, a research fellow from NTU’s Lee Kong Chian School of Medicine.

Dr Martinengo said chatbots are a useful way of supporting treatment, adding: “We can’t expect chatbots to solve all problems, but they can help manage patients in between visits with a professional.

“These chatbots could still be a useful alternative for individuals in need, especially those who are not able to access medical help. For some people, it’s easier to talk to a machine than a human being.”

Roughly one in five adults has used a mental health chatbot, she said, citing a 2021 survey by Woebot Health, one of the leading therapeutic chatbot firms in the US.

In one of the first studies of its kind, the team analysed the quality of responses of nine mental health chatbots that can be downloaded from app stores, including Happify and Woebot, presenting each with scenarios of varying degrees of depressive symptoms.

The scripted scenarios describe personas from different demographics with varying degrees of depressive symptoms, and the team analysed how apt and personalised the chatbots’ responses were and how the bots showed care.

Mirroring human conversations, the bots were able to prompt users to give more background on their social lives when needed and respond with generic but appropriate replies.

The team also looked at how the chatbots guided users to engage in mood-boosting activities, how they monitored moods and how they managed suicide risks.

Seven of the bots tested were able to offer encouragement when informed about a user’s struggles, the team wrote in its findings, published in December in the peer-reviewed Journal Of Affective Disorders.

In one reply, a bot said: “I imagine this is a very difficult time for you and I’m sorry to hear that you’re feeling so down.”

When it sensed that a user was struggling to complete a task, the bot said: “It’s okay if you’ve been busy with other things.”

But the bots can still trip up when responding to poorly worded messages or statements with nuanced meanings, such as suggesting a job as a neuroscientist after being told by a user that he “(felt) like dying now”.

Dr Martinengo said chatbots still have some way to go to bridge the gap, adding that many are still unable to understand broken English or nuanced language. Privacy concerns could also stand in the way of more personalised advice, as the bots would need to ask more sensitive questions, she said, adding that this could be an area for healthcare institutions and policymakers to look into.

Further research was also needed to improve chatbots for those at risk of suicide and to evaluate the long-term effectiveness of bot-led interventions for mental health.

Dr Martinengo noted that chatbots may not be for everyone or every scenario.

She said: “I’m not advocating that chatbots can replace professionals... I do feel that chatbots and other digital technologies have a place in healthcare, but it will depend on what the patient feels comfortable with.”
