Chatbots Supporting Mental Well-Being: Are We Playing A Dangerous Game?

In recent years, we have seen the rise of chatbots. The term “ChatterBot” was originally coined by Michael Mauldin to describe these conversational programs, which are supposed to emulate human conversation and thereby pass the Turing Test.

More recently, chatbots have been designed and marketed to help people with their mental health and well-being. The market is now crowded, with several such chatbots popping up and cashing in on the mental-wellness drive sweeping the world.

Can chatbots really claim to be ‘wellness coaches’ and ‘mental health gurus’?


Artificial intelligence has, for many years, been trying to become more cognizant of and attuned to the nuances of human language. As an academic, I have been working with technology for over a decade, looking at whether even the most intelligent technology can replace human emotions and claim to be truly “intelligent”.

Mental health is a complex, multi-layered issue. Having suffered from anxiety and depression myself, I know how difficult it is to articulate my feelings even to a trained human being, who can see my facial expressions, hear the nuanced inflections in my voice, and read my body language. My slumped shoulders and the slight frown as I respond “I am ok” to someone asking how I am are hints that all is not well, hints which a chatbot is unlikely to pick up. When a chatbot asks me “Are you stressed?”, I feel annoyed already, as that is not a question I am likely to respond well to.
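To make the limitation concrete, here is a deliberately simplified, hypothetical sketch of the kind of keyword-based triage a text-only chatbot might perform. The keyword list and function names are my own illustration, not taken from any real product; the point is that a literal “I am ok” sails straight past such a filter, whatever the body language behind it says.

```python
# Toy keyword-based distress triage (illustrative only, not a real product).
DISTRESS_KEYWORDS = {"stressed", "anxious", "depressed", "hopeless"}

def triage(message: str) -> str:
    """Flag a message only if it contains an explicit distress keyword."""
    words = set(message.lower().replace("?", "").replace(".", "").split())
    if words & DISTRESS_KEYWORDS:
        return "flagged: possible distress"
    return "no action"

# "I am ok" carries no distress keyword, so the slumped shoulders and
# flat tone behind it are invisible to the bot.
print(triage("I am ok"))            # no action
print(triage("I feel so anxious"))  # flagged: possible distress
```

A human listener weighs tone, expression, and posture against the words; a filter like this sees only the words.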

Let us also talk about the underlying prejudice and unconscious bias in these AI tools. A chatbot is trained with underlying neural networks and learning algorithms, and it will inherit the prejudices of its makers and of its training data. Yet there is a perception that technology is entirely neutral and unbiased, which makes people more likely to trust a chatbot than a human being. Bias in AI is not being given adequate attention, especially when such tools are deployed in a domain as sensitive as mental health, or advertised as a “coach”.

In 2016, Microsoft released its AI chatbot Tay onto Twitter. Tay was programmed to learn by interacting with other Twitter users, but it had to be removed within 24 hours because its tweets included pro-Nazi, racist and anti-feminist messages. There are currently no stringent evaluation frameworks within which such chatbots can be tested for bias, and developers are not legally bound to talk openly and transparently about how their AI algorithms are trained.
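How a learned model inherits bias from its data can be shown with a toy example. The “training set” and naive word-count classifier below are entirely my own illustration, not any real chatbot’s pipeline: the labels are skewed so that one community’s greeting co-occurs with the “distress” label, and the model dutifully reproduces that skew on harmless input.

```python
# Toy illustration of training-data bias (not any real product's pipeline).
from collections import Counter

# Skewed labels: "wagwan" appears only under "distress", by labeling
# bias rather than any real signal in the text.
training_data = [
    ("good morning how are you", "ok"),
    ("hello there how are you", "ok"),
    ("wagwan how you doing", "distress"),
    ("wagwan everything good", "distress"),
]

# Count how often each word co-occurs with each label.
counts = {"ok": Counter(), "distress": Counter()}
for text, label in training_data:
    counts[label].update(text.split())

def classify(text: str) -> str:
    """Score each label by raw word co-occurrence counts (no smoothing)."""
    scores = {
        label: sum(counter[w] for w in text.split())
        for label, counter in counts.items()
    }
    return max(scores, key=scores.get)

# A perfectly benign greeting is flagged, purely because of how the
# training data was labeled.
print(classify("wagwan friend"))  # distress
```

Nothing in the algorithm is malicious; the prejudice arrives with the data, which is exactly why training processes need the open scrutiny the paragraph above calls for.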