How AI Can Enhance Your Counseling Practice

    12:54PM May 12, 2025

    Speakers:

    Donna Sheperis, PhD

    Keywords:

    AI in counseling, telehealth practice, ethical use, AI documentation, mental health chatbots, privacy concerns, informed consent, AI training, AI efficiency, AI bias, AI supervision, AI treatment plans, AI in education, AI in mental health, AI future trends

    [Music]

    Hello, and welcome to The Thoughtful Counselor, a podcast dedicated to bringing you innovative and evidence-based counseling and mental health content designed to enhance your life, whether you're a clinician, supervisor, educator, or a person wanting to learn more about the counseling process. We are here to demystify mental health through conversations with a wide range of counseling professional powerhouses. In each episode, you'll learn about current issues in the field, new science, and real-life lessons learned from the therapy room. Thank you for joining us on our journey through the wide world of counseling. There's a lot to explore here, so sit back, take a deep breath, and let's get started.

    Welcome to The Thoughtful Counselor. I'm Margaret Lamar, and today I am here with Dr. Donna Sheperis. Welcome!

    Thank you. This is fun. Glad to be here.

    Yes, so excited to have you. So we're going to talk today about AI in counseling, which I'm so thrilled to learn more about. But I want to start by asking you about your journey into the profession. Tell us what got you here, what's keeping you here, all of those things.

    Probably, yeah, the same thing that got me here is keeping me here, and that is absolutely zero planning on my part. I'm not one of those people who went through life thinking, "I can't wait to be a counselor; I can't wait to work with people." My undergrad is in business. I had no intention of coming in this direction. I have a marketing degree, and when I definitely could not proceed through the MBA program, after hitting that dreaded graduate-level accounting course, I started looking around for what else I could do, because I wasn't enjoying the work I could get. I spoke with the chair of the psychology and counseling department about the fact that I really loved my consumer behavior classes in the marketing series, the "why do people buy things, and what gets your attention?" material, and he said, "Look, you already have an undergraduate degree, and we don't have a graduate degree in psychology. Go talk to the counseling folks in my department and look into that." So I took Intro and Theory, just as a lark to see if I liked it. Fell in love, absolutely fell in love, and was happily working for a decade in a community mental health agency in the local area when I was invited to come back to the institution where I got my master's and run the clinic. Then they said, why don't you go down the road to this other institution and get a PhD, and then you can teach some classes? Okay, I'll give it a try. Also fell in love with that. So since then, I have been teaching and training and maintaining a little bit of private practice. I've been teaching using technology, distance teaching, since 2009; 2010 was my first fully distance education course, and then I started my clinical practice fully online in 2016. That was the time when people were telling me, "This isn't ethical; this won't work." And then COVID, and now everybody's like, "This is so great, and it works so well. Everyone should be using telehealth." So I went looking for something new, and I fell into AI, and now I'm having a blast with that.

    Oh my gosh. I know, you were so ahead of the curve. And it's funny, because I was just at the ACA conference at a couple of round tables, and everybody was seeing most of their clients via telehealth. We've also come such a long way.

    Even I don't want to go see my regular doctor. Let's do a telehealth session. That's great. That's fine. I'll just tell you how I'm feeling.

    Yeah, exactly. That's what I do. It's great when I'm able to do that. Okay, so let's get into AI in counseling. I will say for myself that I am maybe a normal-ish AI user. I ask ChatGPT what my meal plan for the week is going to be, or, "Can you help me write a recommendation letter or outline something?" That's primarily what I use it for, that kind of thing. But I feel like I have no concept of what AI in counseling even means. So could you start there?

    Sure. I mean, I think the way you're describing your use of AI is the normal way. When I start with that, I meet counselors who aren't even doing that, who aren't even leveraging the types of platforms that are available to do more than talk to Siri or Alexa. They're not even using ChatGPT or Claude to build things. So even just starting there, I think from a clinical perspective we have opportunities to do really low-hanging-fruit types of work with AI, all the way up to proprietary systems that are expensive and have a lot of bells and whistles. The bottom line is, we are in a tech revolution that we have never been in before. Even the onset of the internet moved slower from conception to wide use than AI. We have had wide availability of AI since 2023, when ChatGPT became freely available, and here we are in early 2025 and it's already taking over our EHRs. It's taking over our teaching practices. It's taking over our treatment planning. So use AI in a way that is ethical; what I like to talk about is optimizing the beneficence of AI, because it is just a product. It's not good or evil; it just is. So what are some good, ethical things that we can do in our settings as counselors and counselor educators to provide a better environment for our clients and their progress, and for our students in their progress?

    Mm-hmm.

    So can you give us some examples? Let's start with community mental health counselors. How are practitioners out in the field using AI? What are the opportunities, and what are the things coming up?

    The most common form of AI use in clinical practice is a scribe AI that aids documentation. Scribe AI is the type of transcription and analysis that you can get even when you use Zoom. Right now, a company called Otter.ai works with Zoom, so every Zoom session is being transcribed; you can get the transcript and do a content analysis of it. But what we're seeing now in the proprietary systems that clinical agencies are buying is AI that converts a therapy session into a progress note, and the clinician can accept or reject the elements of that note to create their own. For myself, when I finish a session, the platform says, "Would you like me to create a smart note?" Well, yes, I would, please do. After about a minute, I have a SOAP note that is generated based on what the AI, and I'm going to use air quotes here, "heard" in the session. But it's not my voice, and not always my priorities with that client or what I saw in the session, so I'm going to edit the heck out of it. Still, I have a starting point. And what we're seeing is about a 70% reduction in documentation turnaround time for clinics that have this kind of scribe ability. There's no audio or video saved at the end of the session, and there's no transcript saved. The only thing that is saved, if you will, is that starter progress note. You can also do this with treatment plans. There are a lot of products where you can put in a general overview of a client, and it will give you ideas for interventions and goals, using SMART goals, much like we would refer to the psychotherapy treatment planner book series that we all had back in the day. It's still available: I want The Adult Psychotherapy Treatment Planner, or the child and adolescent one, or the anxiety treatment planner, and we would pull those sentences and make them ours for a particular treatment plan with a client.
And AI is doing that for us now, because it is informed by that type of research.
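    [The scribe workflow Donna describes, transcript in, draft SOAP note out, clinician edits before signing, can be sketched in miniature. This is a hypothetical toy for illustration only, not any vendor's product: real scribe systems use language models rather than the keyword rules below, and the clinician still reviews every field.]

```python
# Toy sketch of a scribe-style pipeline: session transcript -> draft SOAP note.
# Hypothetical logic for illustration; real products use language models.

SECTION_CUES = {
    "subjective": ["i feel", "i've been", "i am worried"],   # client self-report
    "objective": ["appeared", "observed", "affect"],          # clinician observation
}

def draft_soap_note(transcript_lines):
    """Return a draft SOAP note (dict) built from a list of transcript lines."""
    note = {"S": [], "O": [], "A": "", "P": ""}
    for line in transcript_lines:
        lowered = line.lower()
        if any(cue in lowered for cue in SECTION_CUES["subjective"]):
            note["S"].append(line)
        if any(cue in lowered for cue in SECTION_CUES["objective"]):
            note["O"].append(line)
    # Assessment and Plan are deliberately left to the clinician:
    # the draft is a starting point, not a finished note.
    note["A"] = "[clinician: add clinical assessment]"
    note["P"] = "[clinician: add plan / next steps]"
    return note

session = [
    "Client: I feel anxious most mornings before work.",
    "Counselor: Client appeared tense; affect was constricted.",
]
draft = draft_soap_note(session)
```

    [The key design point survives even in the toy: the AI only proposes; the S and O buckets are suggestions, and the A and P slots force the clinician to "edit the heck out of it" before the note is signed.]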

    I know. I think you're starting to see this in the medical field too. In my little doctor's office, I think it's in the last year or so that I've had a doctor come in and say, "Are you okay with me wearing this little microphone on my neck? It's going to help me do notes later." It's interesting to see where it can all integrate, but I love hearing about how we can be so efficient.

    And AI doesn't have subjectivity. It can have bias, but it doesn't have subjectivity in what words are used. The transcript is the transcript is the transcript, and that objectivity can help us. I've been talking to medical educators, and what they're hoping for is an AI product that would help them in an ER setting during a crisis, one that very quickly suggests the appropriate pharmacological interventions: what meds do I need to start right now, and how much, based on all of this data. Now, physicians can think that through, but realistically, we're human. We're going to be tired some days; we're going to not see something some days; we're very, very subjective and have our own biases too. So those kinds of products, I think, are going to really streamline that side of things, which, as a clinician, gives me way more time to take care of my client, or, imagine this, myself, because I'm not doing documentation all night and I'm not burdened by progress notes or treatment plans.

    Something that always comes up for me when we think about efficiency tools is how those tools are sometimes a sneaky way for us to just do more. So I wonder if you have thoughts about that. Are we going to use that time to take care of ourselves, or is it going to be, "Oh, now we can see more clients; we can make more money for our agencies or for our practices"?

    And I can't say that won't happen. Just because we want to reduce the after-hours time on task doesn't mean that the reduction in work hours going toward this type of documentation won't be filled by our employers, or by another client if we're in private practice. That is beyond a discussion of AI; that's just what we do, right? We get more space and we fill it. We're not really good, in a Western culture, at slowing down, right? But one area where I think we have an opportunity is how we use AI as an adjunctive tool to our work as therapists. Right now, all therapy is provided by me. It's me: you have an hour with me, and I'll see you in another week or two weeks, and there's another hour with me, and that's how therapy goes. But with AI, we have the opportunity to help our clients use mental health chatbots or mental health AI agents to do some work between sessions, work that I could even have visibility into as a therapist if I have a dashboard. Oh, look, I prescribed that they do some psychoeducation and then some other type of journaling, and I can see that before the session. Then I'm not spending the session doing psychoeducation, and I am not spending the session reviewing homework or the lack of homework; we go right into the meat of it. That is what's really exciting to me. The research is still really limited on the efficacy, but boy, it looks good: clients reach their anticipated outcomes more quickly. Imagine reducing our wait lists. Our mental health crisis is creating these long wait lists, and there are people who just can't afford a whole lot of therapy sessions, or who have an EAP that won't allow it, right? That's what I get excited about.

    That's so cool. Okay, so talk to us about the counselor training side. Where are you seeing it used in really cool ways?

    Oh, how are we using it? Well, first of all, I have to say, in my master's program, back in the, you know, 1900s, I was not trained in the use of AI, and I would hazard to guess that most faculty are not trained in any use of AI. And yet we're sending these clinicians out into a world where agencies are using it and practices are using it, but we haven't trained them, because we weren't trained ourselves. So that is one hurdle in our work, because if anything is going to be slow to adopt something, it's going to be education, particularly higher education: a bunch of PhDs with opinions, and it's hard for us to move. We can't even edit a mission statement without 50 people weighing in. With this, there are tremendous opportunities. And I honestly think, and if someone listening has something I don't know, please share, I say this every time I do a podcast or an interview or a presentation: to my knowledge, the PAU e-clinic was the first university training clinic to use AI as a supported supervision tool while training our students how to use AI in an adjunctive co-therapist model. I think there are others doing it now, but no one was doing this in 2020, when we launched. So our students had the opportunity to work with clients in a telehealth setting. We taught them all the things that I think faculty are teaching now about how to conduct a therapy session over Zoom: how to set up your background and your lighting and your audio, what to do if tech fails, how to know where your client is physically located. Those are things we do train on.
    With the AI we were using, we were actually scribing the session, so it was being recorded in a transcript, and it did not automatically go away at the end of the session; we held on to it for six months, because these are students. And the AI gave us metrics about the session. It's able to analyze the transcript and say, here are your keywords. That's pretty basic content analysis stuff: do the keyword search, then, looking at the keywords and the phrasing, here are some themes that seem to come up for this client. There were this number of mentions of relationships, this number about father, this number about mother, this number about boyfriend. Then anxiety: here's how many times the client said stress or fear or other associated words. Quickly, the clinician or the supervisor could eyeball these metrics and have a sense of what happened in the session. The platform could also tell the supervisor where the clinician used certain interventions, where they did some values affirmation, rapport building, agenda setting, a CBT intervention, and as a supervisor I could go forward in the video to that point. I don't know about you, but when you have a lot of supervisees and they see a lot of clients, you don't watch 60-minute sessions in their totality. So this gives me, as a supervisor, the opportunity to see some of the big things that are happening and check them against what the video, or the audio, or the transcript is actually showing, to make sure it's accurate. Is that really what's happening here? Was this really risk? Was this really an intervention? When we can use that to make our university-level supervision more targeted, our students then learn how to make their clinical work more targeted. Where am I doing well? What could I improve upon? That's visible, not just in a videotape.
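    [The supervision metrics described here, keyword counts, theme tallies, and timestamps a supervisor can jump straight to, amount to basic content analysis over the session transcript. A minimal sketch, with hypothetical theme lexicons and intervention markers standing in for the platform's actual logic:]

```python
from collections import Counter
import re

# Hypothetical theme keywords; a real platform would use much richer lexicons.
THEMES = {
    "relationships": ["father", "mother", "boyfriend", "partner"],
    "anxiety": ["anxious", "stress", "fear", "worried"],
}
INTERVENTION_MARKERS = ["agenda for today", "let's reframe", "homework"]

def analyze_session(utterances):
    """utterances: list of (seconds_into_session, speaker, text).
    Returns per-theme keyword counts, plus timestamps where intervention
    language appears so a supervisor can jump straight to those moments."""
    counts = {theme: Counter() for theme in THEMES}
    intervention_times = []
    for t, speaker, text in utterances:
        words = re.findall(r"[a-z']+", text.lower())
        for theme, keywords in THEMES.items():
            for w in words:
                if w in keywords:
                    counts[theme][w] += 1
        if speaker == "counselor" and any(m in text.lower() for m in INTERVENTION_MARKERS):
            intervention_times.append(t)
    return counts, intervention_times

session = [
    (30, "counselor", "Let's set an agenda for today."),
    (95, "client", "My father and I argued again; I felt so anxious."),
    (140, "client", "The stress with my mother is constant."),
]
counts, jumps = analyze_session(session)
```

    [Eyeballing `counts` gives the "this number about father, this number about mother" summary, and `jumps` is the list of video positions the supervisor can skip to instead of watching the full 60 minutes.]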

    Yeah. So let me play the part of the skeptic for a moment. The pushback, I think, is, "Oh, Donna, we're taking away counselors' ability to actually learn how to do it." Are they really going to learn the skills the same way? And you've probably heard other pushback; you could add your own, I'm sure.

    The biggest pushback is not about the skills that are happening in session, because those have to be evident to be tracked, so the students still have to know how to do that. The biggest pushback is, "Well, the student doesn't really understand a progress note if we don't have them hand-write it from start to finish. Every student should know what goes in a SOAP note and how to write it." I don't disagree. And yet, if EHRs are moving to a scribe format where you're going to be given some suggestions for your note, I need to make really sure that my student knows exactly what is contained in the S part and what should not be part of the S, what should be part of the O and what should not be part of the O, because I can't rely on AI to know that. My student has to know that. So they become much more critical of what goes in each element, which also means that when I get it as a supervisor, I have less work to do to help them get up to speed, because they are choosing what goes in each part. Oftentimes, when you start from scratch with a new supervisee, I am rewriting progress notes in full with them; everything needs to be changed. But they still have to learn what goes in each slot, just like with treatment planning. If we took away the Jongsma series and said, "Nope, you can't use these adult treatment planners anymore," I think most of us would find that to be foolish. We want to have some parts of a treatment plan that are somewhat standardized. And this really goes back to the fact that none of us hand-calculates our statistics anymore, right? Thank goodness. But we're not...

    Handwriting our dissertations.

    We're not. But we're believing our students should hand-write? Okay, great: do that in a class, not in the training clinic. Hand-write some in a class, sure; learn what goes where. But once you get to the training clinic, would you rather have documentation, or would you rather spend hours and hours and hours training handwritten SOAP notes?

    Yeah. Okay, so the other piece, which you're also an expert in, is ethics. I want to hear your thoughts about the ethical issues that are already arising, and the things you anticipate coming up, around AI and counseling.

    Yeah. The biggest ones, of course, are privacy and confidentiality. If you're using something in your life like ChatGPT or Claude, those are open, general-purpose platforms. When ChatGPT was developed, it was developed as an engine: "That's it. We are a motor. This is what we do. This is how we were created." Then it's up to us to take that motor and put it in different vehicles. So what companies who sell products do is take the motor, put it in their vehicle, and wrap the car around it so that nothing gets in that they don't put in, and nothing gets out. You build an algorithm that is customized to whatever you're doing, whether it's education, clinical work, or interior design, and you inform the algorithm with that field's best practices. Then when, in our case, a client transcript is analyzed, it stays inside the vehicle. It never leaves. Regular ChatGPT? Sure, it kind of dissolves the transcript, but it sends it out into the world, and that could include PHI, which is a HIPAA violation, or a reasonable association between a name and an issue, which is going to be a problem. So, how is the data collected and encrypted? I've had to learn to ask where it's encrypted. Is it encrypted in transit? At rest? Where does it live, on what server? If it's a cloud server, who owns it, and in what country is that server? All of this in order to understand if it's truly safe enough to use. So that's one huge ethical thing. Interestingly, right now, having a scribe like the one your doctor was using legally requires no informed consent. There's no law that says that if you're using that type of technology you have to have informed consent. Right now, legally, it is synonymous with bringing another person into the room who takes notes, maybe a medical student or a nurse or a PA. But in our profession, there's an ethical responsibility to do that. So how we provide consent around AI matters, and so does what we share with our clients. Because frankly, if I said to a client, "Look, I'm going to have this AI observer, and it's going to document everything that happens, including the tone of your voice and your eye movements, and it's going to help me understand you better," my clients would freak out. No, thank you. So I need to know a little bit more than that. I need to know how to tell them it's going to be safe. I need to help people feel secure about what they're saying in session. So those are two big ones. I think one of the others will be the idea that when we build the engine, bias comes in. It goes back to, I mean, when I had computer classes back in the day, the motto was "garbage in, garbage out." So if that algorithm gets built on bad knowledge, culturally insensitive knowledge, or a lack of culturally sensitive knowledge, it's going to be biased. ChatGPT is hugely biased, because it pulled everything from the internet. But we can have proprietary programs that are culturally sensitive. We do already.
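    [One concrete safeguard implied by this discussion is stripping obvious identifiers before any text leaves the "vehicle." The sketch below is a naive illustration, regex redaction of emails, phone numbers, and names you already know; real de-identification, and HIPAA compliance generally, requires far more than this.]

```python
import re

def redact(text, known_names):
    """Naive PHI scrub before text is sent to any external service.
    Catches emails, US-style phone numbers, and names you already know.
    It will NOT catch everything, which is why where the data travels
    and how it is encrypted still matter."""
    # Replace email addresses first, so the phone pattern never touches them.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # US-style phone numbers: 3-3-4 digits with -, ., or space separators.
    text = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE]", text)
    # Known client names, matched case-insensitively.
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

note = "Jordan Smith (jsmith@example.com, 555-867-5309) reports panic attacks."
clean = redact(note, ["Jordan Smith"])
```

    [Even after redaction, the "reasonable association between a name and an issue" problem remains: a date, a workplace, or a rare diagnosis can re-identify someone, which is why the questions about servers, encryption, and data retention still have to be asked.]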

    So I'm just thinking, this feels like a lot of information. If I'm a counselor and I think, "Oh, great, I'm sold, I want to start using this to help me do SOAP notes and treatment plans and all of this," are there resources? How would one know, when looking at a program, that it is doing all the right things with the engine and the car, in that metaphor?

    That is such a good question. It took decades for us to develop a board certification in the provision of telemental health, just the basic use of videoconferencing software. I think it's going to be a minute before we come up with the basic requirements for someone using AI in a clinical setting. So until then, we have to create those basic minimums for ourselves, which is going to require a whole lot of continuing education. There are all kinds of free AI courses; Harvard has a bunch, Vanderbilt has a bunch, that you can take to become more knowledgeable about the types of AI that exist and what they do. Then, when it comes to picking a program itself, you're going to look at the best practices for our profession: what do we need to know before we use it with a client? And that's going to be just like every other ethical part of counseling. The first folks who deal with it will be medical, and medical ethics will trickle down to counseling ethics. So we will ask the same questions the medical folks are asking now, about data, about encryption, about knowing how to share it with our clients. And that also will come in CEs; there are a lot of AI-in-counseling ethics CEs available.

    That's great. I'm curious, I have this question around advocacy, maybe legislative advocacy, or maybe professional advocacy. Where do you see opportunities for us to sort of get ahead of wherever we're going with AI?

    I think one of the things that dovetails with our use of AI is the Counseling Compact. The Counseling Compact is going to open doors for a whole lot of distance provision that might not be happening already, where I would have the privilege to practice in states I've never lived in. That's going to be inherently telemental health, and telemental health platforms are being embedded with AI right now; it's going to be hard to find a telemental health platform or an electronic health record that doesn't contain it. So the compact sets the stage for our use. Now, in our legislative arenas, we need more than the fact that the provider needs to have knowledge. It probably will end up in our educational standards first. I mean, CACREP basically says you just need to know about it in the 2024 standards, but they don't get specific about the role of AI in mental health or any other type of counseling, and I'm not sure I want them to. But we do need to keep an eye out for what happens when our data is breached and how our privacy considerations are addressed. And then, well, this is not a legislative thing, and you haven't really asked me this yet, but I think we have opportunities in how we leverage it in the future that, if we have really strong ethical and legal privacy controls, can give us more chances to help more people.

    Yeah, yeah, any specific ideas around that?

    100 of them.

    Give me one. I think that's really thought-provoking.

    Okay, my favorite one is already happening, and that is holographic telepresence. When we are on a telemental health session and we're in our little Brady Bunch boxes, it's 2D; there's only the ability for me to read nonverbal cues from whatever's in the box. We actually have AI that reads tone and other nonverbal cues for risk assessment, to tag or flag depression, but we miss out on the real experience of being with a person. Holographic telepresence is me sending myself to you and looking like a real person in your environment. We're seeing this now: several cancer clinics in the South, where patients are very rural, are doing it. Instead of the patient going all the way to the big city where the main clinic is, they're asked, "Would you like us to holographically send your doctor to you at the rural clinic?" It is full 3D, and it is your doctor, your person. You know them, you talk to them. It's in-person-ish, but it's live. The next level is the fully AI therapist that is a little more attuned to sentiment and attuned to the client. Is it possible to have only fully AI therapy? Probably. Is it going to take our jobs? Absolutely not.

    Absolutely not. That was also a question I had for you: is that something we need to worry about, somebody just going to ChatGPT to get their therapy? Because they are; I know they are.

    They are, and that's another reason we need to be trained in it: our clients are already using it. The idea that it's going to take our jobs dates back to telemedicine using a telephone in the 1940s; we have always been afraid of this. The idea that our job is going to become obsolete would make sense if our job were managing horses, the car were invented, and we were carriage drivers; at some point, our job would become relatively obsolete. In this case, the need is so great, the demand is so high, and there are so few of us to meet that need. Most counties in the US are, medically and from a mental health perspective, considered underserved. We can't get enough options out to people, and we can't produce clinicians from long graduate programs quickly. So taking our jobs is not something that I see, or that is foreseen, as happening. There's always going to be a need for that human to go in and say, "Is this accurate?" Context is so important, and you know your client way better than a machine can know your client. It'll make mistakes, so your responsibility is to correct them, just like now we use statistical software instead of, thank goodness, hand-calculating everything. We can also use that AI therapist as a way to reduce the time spent in therapy with me and address long wait lists. We're going to do a project in our e-clinic where we'll take waitlisted families from a children's clinic into our e-clinic to work with the parents while their child is waitlisted for a psychologist; the wait list is 600 families long. So if we can start making a dent in some of those things, then the people who need care get care quicker.

    yeah, yeah. Wow, that's amazing.

    That's so funny. You have a lot of fun with it, just playing with it and doing fun things with it.

    Yeah, and as a therapist, I do too. I can develop activities or things to think about for my clients using AI. I think that's really fun; you can get way more creative, way more personal, in what you develop. I mean, I go back to the day when I was handing sheets of paper out to clients to do a daily mood tracker, for example. And now we all have one on our watches, right?

    Right, yeah, absolutely. Oh, wow, that is fun. I just asked ChatGPT to tell me what it thinks I look like, and it was fascinating; it was not wrong. And then I said, "I want to see an actual photo. Do you have one?" And it did, and I tell you, it was pretty spot-on. It was wild. Maybe that means it knows too much about me.

    Maybe. I hope you asked it, "What kind of questions should I ask Donna?"

    Oh, you know what? I didn't. I did write those on, like, three Post-it notes. Love a Post-it note. Okay, so, sort of last thoughts: what would you say to someone who's hesitant about jumping into the AI thing?

    Okay. AI is not going to replace us, but counselors who use AI will, at some level, likely replace counselors who don't use it at all. So get a little bit ahead of that: take some trainings, become comfortable with the language, and understand what an open platform like ChatGPT is versus the kind your EHR company is describing when it says, "We're about to embed this in SimplePractice," or whatever you're using. That way, before it comes at you, you know what it's about, how it can be helpful, and you have language to communicate to your clients. Something like: "We have our sessions, and I have my knowledge of what happens, but we also use an AI-augmented transcription to help with recall and help me write progress notes." Clients need to hear it in language they can understand and feel safe in, and they need you to understand and feel safe in the product, or they're not going to.

    Absolutely. Well, this is so fascinating, and I feel like we could talk again in 18 months and there would be a whole slew of new things. It's funny to think it hasn't been here that long, but it feels like such an integrated part of our lives.

    Every time I think I've learned something, it's old news and I have to learn the next thing. But really, this is the time to be involved in it, because we're never going to have a before-AI and after-AI generation again.

    Yeah, amazing. Well, thank you so much. I so appreciate this conversation and getting to learn from you.

    Absolutely. I'll see you in 18 months!

    Thanks again for tuning in to The Thoughtful Counselor today. We hope you enjoyed the show. This podcast is made possible through our partnership with CONCEPT, Palo Alto University's Division of Continuing and Professional Studies. Learn more about The Thoughtful Counselor and some of the other amazing continuing education offerings provided by CONCEPT at paloaltou.edu/concept. As always, if you are a fan of the show, we would love to hear your feedback and reviews on Apple Podcasts or wherever you subscribe.