After sex, football and even art, could artificial intelligence soon replace our doctors? The idea is not new. It appears in many science fiction films, but it is increasingly becoming a reality. This time, it is our psyche that robots are set to tackle. The first conversational chatbot dedicated to psychiatry was invented in 1966, and the field has come a long way since the 2000s; the idea of an AI capable of looking after our mental health is no longer as dystopian a prospect as it once seemed.
Robot shrinks soon?
According to several recent studies, patients are no longer as reluctant as they once were to confide their feelings to a robot. Mental health is an increasingly popular topic, and AI could offer several significant benefits. First, the absence of a real therapist could give patients greater freedom of speech, without fear of human judgment. Furthermore, according to some studies, AI would be the only party able to make truly objective decisions, while remaining available at any time of day or night.
Note, however, that while the future of psychiatry may point to robot therapists, for the moment it is mostly a matter of chatbots, and more rarely interactive videos. The absence of a physical interface with a human appearance could therefore be a real obstacle to building a bond between the patient and their “practitioner”. In addition, several health professionals are already worried about an AI’s inability to detect certain warning behaviors, particularly in the case of a suicide attempt.
For the moment, and despite all the progress AI has made in recent years (to the point that a former Google employee considered a chatbot a real colleague), robots are still far from equaling humans, in terms of both empathy and internal models. For a healthcare professional in particular, these unconscious mental patterns integrate all past experience and acquired knowledge, making it possible to arrive at a human diagnosis.
Especially since the objectivity touted by AI’s defenders has yet to be demonstrated. Robots are still programmed by fallible humans, and therefore do not escape certain biases, whether sexist, racist or social. Recall that Tay, the artificial intelligence Microsoft launched in 2016, took only a few hours to start spouting Nazi insults after spending some time on Twitter.