Advances in AI for mental health are paralleled by growing concern
People and nonprofits are using ChatGPT to identify and treat mental health conditions, and ethicists are concerned.
As the use of AI in mental health treatment grows, practitioners and researchers are increasingly worried that privacy gaps, faulty algorithms, and other risks could overshadow the technology's benefits.
According to a new Pew Research Center study, there is widespread uncertainty about whether using AI to identify and treat mental health conditions will worsen an already deepening mental health crisis. Meanwhile, mental health apps are proliferating so quickly that regulators cannot keep up.
According to the American Psychiatric Association, there are more than 10,000 mental health apps available in app stores, nearly all of them unapproved by regulators.
AI-enabled applications such as Wysa, which has received FDA Breakthrough Device designation, have helped address shortages of mental health and substance use counselors.
The technology analyzes speech and text from patient-clinician interactions to generate suggestions. Such apps can also flag mental health conditions like depression and estimate the likelihood of opioid addiction.
In the future, AI could assist in developing medications to treat opioid use disorder.
Where AI could go wrong
The concern now is whether the technology is crossing the line from raising awareness to making clinical decisions, and whether the Food and Drug Administration is doing enough to protect patient safety.
Recently, Koko, a mental health nonprofit, drew criticism from ethicists after it deployed ChatGPT as a mental health counselor for roughly 4,000 people who were unaware that the responses were produced by AI.
Despite ChatGPT's disclaimer that it is not designed for therapy, more people are turning to it as a personal therapist.