Tag: AI and psychological therapists

  • ‘AI To Take Over Management of Everyday Stresses and Mild Depression and Anxiety’

    That is the prediction of Allen Frances in a recently published British Journal of Psychiatry paper. He is a leading US psychiatrist and a major figure in the development of the DSM criteria. Yet, having just returned from the European Conference on CBT at the University of Glasgow, attended by over 2,000 CBT aficionados from more than 60 countries, I came across no mention of AI. There is clearly a major issue to address for clients and psychological therapists alike. Bluntly, jobs could be on the line.

    Frances notes that psychotherapy and guidance on managing everyday difficulties is the most common use of AI. He believes that AI is just as good as traditional treatments for mild anxiety and depression, and suggests that an advantage of AI is that it makes engagement of clients easier: it is available 24/7, and there is no shame in disclosing highly personal information to a machine. Frances notes the massive investment in developing AI for mental health problems, with an associated likelihood of massive marketing. In this context, empirical investigation of outcomes is likely to be given scant attention. Governments and service providers are likely to be carried away by the anticipated huge savings in staffing costs, ignoring misgivings about effectiveness. Frances argues that most clients will gradually opt for the accessibility, convenience and reduced cost of artificial intelligence. AI chatbots place a premium on pleasing the user, as this is most likely to be financially lucrative, rather than being governed by what is clinically important. The developers of AI therapy have little accountability.

    But the evidence base for the effectiveness of AI in treating mental health problems is weak. Frances’s contention that AI is as good as traditional treatments for mild anxiety and depression is unproven. Nevertheless, my own research on NHS Talking Therapies clients, Scott (2018), suggests only a tip-of-the-iceberg recovery rate. It would be no surprise to find that AI therapy has comparable efficacy, or more accurately a comparable lack of efficacy. But AI may be more engaging: for every two people having two or more treatment sessions in NHS Talking Therapies, one has just a single assessment/treatment session, Scott (2024). To my knowledge AI has no demonstrated efficacy for severe mental illness, though it could be argued that in this respect it is no different to traditional treatments.

    In essence, AI is like a photocopy of ‘good practice’. But the presumption is that the ‘good practice’ has first been rigorously evaluated and, importantly, that the supplied data was ‘falsifiable’. If data from routine psychological therapy were used as the database, with service providers stipulating what counts as ‘good practice’, then no steps would have been taken to ensure that it was possible to show that the service was ineffective. In such instances any observed improvement could be due to time, attention and a credible rationale, making the AI a worthless photocopy. In computer terminology it is a matter of GIGO: ‘garbage in, garbage out’. Sooner or later routine psychological treatment will be in the ‘dock’, and there are no signs of it being able to mount a ‘robust’ defence. My guess is that the ‘judge’ [political masters, the Department of Health] will be swayed by the plausibility of the case presented and the new-found ease of providing services. The acoustics will prevent the voices of clients being heard. There needs to be a ‘wake-up’ call to examine these issues.

    Dr Mike Scott