Artificial Intelligence Is Recording Your Online Therapy Sessions


Artificial Intelligence is now recording your online therapy sessions. Is AI mental health treatment worth the risk? Can you do anything about it?

Can you imagine having your entire online therapy session recorded? Can you imagine a computer program analyzing your session and giving your therapist advice on how to treat you, advice they could be sued or held liable for not following? Can you imagine your soon-to-be ex-husband getting ahold of your session recordings and using them against you in a custody battle? Can you imagine your insurance company offering you access only to a computer program for therapy?

I hate to break it to you, but all of this is on the horizon, has the potential to happen, or is already happening. Artificial Intelligence (AI) is infiltrating the therapy space. The makers say it fills a critical gap in services. There is a shortage of providers and affordable care, but is AI really the answer?

AI is dictating how therapists work with their patients in sessions, and in some cases AI is being used as the sole form of treatment, delivered through text or automated messaging. I see this as a major privacy and security problem. There are some benefits to AI in the therapeutic environment, but do they outweigh the risks to the field? Should non-therapists (i.e., computer algorithms) be determining whether you are a threat to yourself or others? Will they pick up on schizophrenia or trauma?

I am disturbed by all of this as someone who has spent most of her life in some kind of therapy. I am concerned that, due to the lack of practitioners, this will become pervasive, and that the most vulnerable people, particularly BIPOC and disabled individuals, will be the most impacted.

Inventors say rural communities are desperately understaffed and that this is the only way to get services to the most vulnerable. What they fail to acknowledge is that patients need not only services but also quality care. I am not sure this is the solution.

Some of the promises of AI therapy that inventors espouse include:

  • Reaches more people
  • Could be cheaper
  • Insurance could cover it as a cost-saving measure
  • Eases people into therapy
  • Location will no longer limit access
  • Less time off work is needed
  • Some of the programs are available 24/7
  • You can just use your phone

The drawbacks and risks of AI therapy:

  • Session recordings will be stored on company servers forever
  • Artificial intelligence could be analyzing sessions to improve the algorithm
  • Algorithms could be analyzing sessions to identify high-risk patients, and may be inaccurate
  • Computers could be making recommendations to therapists
  • AI cannot assess affect and tone
  • AI cannot serve those with complicated diagnoses and may not be able to assess for them
  • Your insurance carrier has all of your recordings and could potentially use them to deny you coverage
  • Your employer could assess you and make decisions about your employment status
  • Your employer could gain access to your mental health records and sensitive information
  • Your employer could limit your mental health insurance options to AI
  • AI may not catch someone in a crisis
  • Who is liable if there is a negative outcome?
  • What if the computer recommends something dangerous or insensitive?
  • There will not be constant real-time monitoring by a therapist
  • Your recordings could be subpoenaed for child custody or criminal court cases
  • The company could be hacked or they could deliberately or accidentally share your data
  • Would the AI have to be licensed?
  • Would the AI have malpractice insurance?

What can we do instead of AI therapy?

  • Free college and graduate programs for future therapists
  • Larger cohorts in graduate programs
  • Countrywide licensing so counselors can practice in all states
  • Peer support models such as Certified Peer Specialist (also pay them a livable wage)
  • Incentives to work in rural communities and other underserved areas
  • Telehealth expansion
  • Expansion of free/low-cost broadband
  • Culturally competent graduate programs (more supportive of BIPOC and LGBTQ+ students so they complete the program and continue into the field)
  • Federal funding and state funding to subsidize salaries
  • Better insurance reimbursements
  • Graduate professors encouraging clinical work instead of research
  • Mental health parity (a therapy appointment costs the same as a general office visit)
  • More free/low-cost public mental health programs
  • More access to government-funded home Internet and cell phone services

The data security risks of artificial intelligence-based mental health treatment

There has already been a data breach at Cerebral, a company specializing in mental health. They admitted to sharing patient data with Meta, TikTok, and Google. Even companies like GoodRx and BetterHelp have been fined by the FTC for selling private client information to third parties. This all scares me, and it seems these companies are taking advantage of those who do not read the fine print, or are outright deceiving and exploiting patients. Congress seems to think so as well and has asked online therapy companies to provide information about whom they are sharing and selling patient data to and what kinds of data sets they are sharing.

Flaws and biases in AI

AI is notorious for being biased against Black people and other marginalized groups. Available data on chatbot therapy is heavily weighted toward white males. Who is going to address these biases? Who is going to identify the gaps in treatment and rectify this? Will it be cost-effective? Will we rely on the companies to do this?

The U.S. Department of Veterans Affairs (VA) has a major problem getting veterans access to its services, and the suicide rate among veterans is high. Now the agency has resorted to using AI to assess suicidality in veterans throughout its system of care. A recent article by Dhruv Khullar in The New Yorker outlines the impact of AI on veterans. The VA has already run into problems assessing suicide risk in women and Black veterans. This has not stopped it from using this system of treatment and assessment.

If the VA can hire top-of-the-line AI providers and still have these kinds of problems, what about the run-of-the-mill doctor’s office or insurance company? What level of quality services will they provide?

Why AI would harm me as a person who receives therapy

I have been in therapy for 14 years (this time) building a rapport and relationship with my therapist. I have been in trauma therapy. AI cannot provide me with trauma therapy. It cannot listen to my stories and help me tease out the past from the present and come up with ways to cope. I have done somatic healing therapy, which involves a practitioner observing your unspoken body language. AI can't do that. I can be vulnerable in a therapy session because I have a level of trust. What if I say things that may indicate suicidal ideation, but the computer program considers them an indicator of a suicide plan? Could I be involuntarily hospitalized based on the “judgment” of a computer?

I cannot imagine my entire therapy session being recorded. What about a data breach, or the recording being subpoenaed and used against me? I cannot imagine being able to open up freely knowing this is going on. I am already concerned about my therapy sessions on platforms like Zoom being hacked. I would find it easier to keep important facts from my therapist if we were only communicating by text. And how is a computer program going to pick up tone and affect?

My psychiatrist's office just decided to sell out to an AI company in Silicon Valley. The company intends to track client files and make treatment recommendations in the hopes of improving patient outcomes. Needless to say, I have opted out and will be going elsewhere. I do not feel comfortable with my notes being analyzed by an AI program that makes recommendations for my care. I have built a multi-year relationship with my practitioner and do not feel I need a computer program overseeing her treatment of me based on an algorithm.

The long view of AI mental health treatment

As you can see, I am not in favor of AI mental health treatment, but I also see that we face both a mental health crisis and a shortage of service providers. We must take some kind of action, and soon. Especially after the pandemic, people are suffering, and we are telling them to get help, but that help does not exist.

AI is here to stay. We must learn how to cope in a world with it and make sure it does not compromise our well-being. If you are offered AI-driven therapy, be skeptical and do your homework. I understand this may be your only choice. I just ask that you go in with your eyes wide open and protect yourself the best way you know how.

As an aside: As I am writing this, the AI on my Word program is making sentence suggestions. How creepy is that?

Image by meshcube via Deposit Photos

Alisa Michele, a Black woman with curly hair pulled up in back, standing outside by a painted brick wall.

I am Black, lesbian, disabled, mentally ill, fat, a birth mom, mom and grandmom (grand ma Coco to be exact) and Funny. I am a woman who is constantly fighting for my and your liberation.

I have a history of working for those living at the margins, mostly in activist and nonprofit spaces. I currently work in the mental health field serving those who have been convicted of felonies and are in mental health court. I am also a writer. I write about disabilities, chronic illness, mental health, racial trauma, sexual violence, and disordered eating. I am also a public community speaker on the same topics. Hit me up if you need my writing or speaking skills.

Please use she or her pronouns when referring to or about me.
