From search to experience: how AI is changing consumer behavior and safety in healthcare

Episode 11 | December 05, 2024 | 00:47:10

PG Pulse

Hosted By

Thomas H. Lee, MD

Show Notes

AI has matured from buzzword to building block of highly successful healthcare organizations. It's fundamentally reshaping how consumers find and choose care, and empowering organizations to deliver safer, higher-quality experiences. But while its potential is immense, so too are the challenges.

In this episode of PG Pulse, Press Ganey Chief Safety and Transformation Officer Dr. Tejal Gandhi sits down with Reed Mollins, SVP, Digital Services, to discuss AI’s impact on safety and the consumer experience—and more.

 


Episode Transcript

[00:00:02] Speaker A: Welcome to PG Pulse, Press Ganey's podcast on all things healthcare, tech, and human experience. In this podcast, we'll be joined by some of the best and brightest minds in the industry to discuss challenges, share insights, and innovate the future of healthcare. Thanks for tuning in. We hope you enjoy the conversation. I'm Dr. Tejal Gandhi, the Chief Safety and Transformation Officer at Press Ganey, and with me is my colleague Reed Mollins, who is SVP of Digital Services at Press Ganey. So Reed, thanks for joining me, and I'm looking forward to our conversation. We're going to focus today on AI's role in shaping the future of healthcare, which is a very hot topic these days. So I'll kick it off very broadly: what are you hearing out there in the industry about AI and the human experience, and where are organizations getting the most excited about the opportunities? [00:01:07] Speaker B: It's a great question. I'm glad to be here. Thanks for having me. You know, the topic of AI is so big and so broad it almost feels like you can't talk about anything without AI becoming one of the centerpieces in our understanding of both what's happening now, what we think is going to come near term, and especially what we think of as the aspirational long-term vision. When I think about where AI is actually making day-to-day impacts, the place my mind goes is the very beginning of the patient journey, people making their decisions about where they're going to go for care, because we're really watching it impact search broadly. You know, when somebody goes online to try to make a decision about basically anything, there's an AI tool there to help them. From the moment ChatGPT launched in November of '22, we actually saw the first drop in Google search in almost a decade as people started using this other tool. From then till now, Perplexity AI, a search engine, has launched; SearchGPT has launched; there's a version coming out of Anthropic. And so what we're finding is that when people are looking for information, AI is finding it for them, and it's changing what people are seeing and how they're seeing it. That's been the most powerful near-term impact that I've witnessed. That's not a particularly safety-oriented concept yet, though. I think it can be. Where are you seeing immediate uses out in the world? [00:02:34] Speaker A: Well, given my focus on patient safety, that's definitely where I've been focusing. And interestingly, it's actually in connection with safety but also with workforce engagement, because I think people are really seeing the opportunity to help with our workforce issues around burnout, for example: how can we reduce the workload of our clinical staff, in particular, leveraging AI? Some of the earliest use cases, I think, have been around how we reduce documentation burden and really use AI to help with physician documentation in the chart, in primary care, for example. And there's been some really good work there to measure the impact that it can have. And as I think about that from a patient safety standpoint, we know that burnout can lead to more errors. And so anything to try to drive down burnout is going to be helpful. So that initial use, I think, is documentation, but we've been seeing a lot of other uses as well.
So, for example, very good predictive tools to help you understand patients who might be at risk for things like falls or adverse drug events. Those are starting to come to the forefront. And then I think there's going to be tons of opportunity with summarization and synthesis of all the stuff that's in the EHR. The notes and the volume of data in the EHR are huge. And so for AI to be able to pull out and say, as I'm trying to make a diagnosis, tell me all the things in the chart related to this particular symptom and summarize that for me, can be very helpful, so I don't have to go hunting and gathering to find that information. So I think helping to summarize and synthesize is going to be really important as well. [00:04:38] Speaker B: I love this. What's interesting about what you said is there are actually a few flavors of AI that you just mentioned. And this is one of those parts of the topic that I think is not being explored as thoroughly as it could be at this point in 2024. If you say AI, people think you're talking about generative AI. They think you mean just large language models, you know, producing quirky images or writing poetry or making songs or whatever. But the reality is that there are so many flavors of artificial intelligence at play in healthcare, and some of the ones that have been the most thoroughly baked, that I think have had impact already and are now expanding their impact, are things like machine learning. You get into the prediction of where a safety event may occur: based on the reviews that exist in the unit, based on the patient experience surveys that flow through the unit, based on the engagement of the staff, sort of the survey environment in that unit, you can start to predict safety outcomes. That's all machine learning stuff, where you're not just asking the language model to come up with the next best phrase; you're actually using a statistically rigorous mechanism on definitive and discrete data elements to try to predict the next thing that's going to occur. I love machine learning as an AI tool to drive forward some of these initiatives. And I think we'll be able to blend it with some of these large language model mechanisms of summarization, these large language model mechanisms that help drive understanding of vast data sets, surfacing things that are sort of hard to find or hard to understand. You know, two things come to mind that I've seen really elegantly used. One is, I'm the parent of a child with a rare disease, and the documentation that comes out of my medical experiences is not particularly legible unless you're a professional. And so I've gotten into the habit of taking whatever I get, spinning it through ChatGPT, and having it try to summarize it for me at a high school level or a college level rather than the doctoral level at which it's often written. That is one of the ways I think we can drive consumers toward accurate, understandable information about their diagnoses, prognoses, next steps. And there's been some really interesting, elegant work out there on that front. Obviously there are some concerns because it doesn't know everything, and a hallucination in developing my understanding of my diagnosis could create some pretty negative downstream impact. So some kind of human-in-the-loop mechanism feels like an important strategy.
But I love the idea that we can blend the consumer experience of healthcare with the safety opportunity, helping people understand their care by translating this kind of jargony (for important reasons) language into something that people can actually understand and wrap their heads around. [00:07:30] Speaker A: You know, I think that's such an important use case. And as I was talking through the safety benefits, I was thinking about it from the clinician standpoint. But there's a huge list of areas where it can help us better engage and partner with our patients. And so I've been thinking about literally the same thing. We've had so many challenges, for example, at transitions of care, at hospital discharge, with patients not necessarily knowing what the next steps are in their care, et cetera. You look at the discharge summary that we hand to patients and it's this long thing that is very hard to read. And I've seen examples of asking the AI to take this 10-page discharge summary and turn it into a fourth-grade-level summary. Oh, and by the way, do it in Spanish. And voila, you have this amazing tool for patients. So that's just one example. But I do think that the opportunity to communicate with patients is really important. And we're seeing that not just in things like discharge summaries, but certainly with inbox management, answering patient questions on consumer websites, et cetera. And I know you and I have talked about the JAMA study that actually showed that maybe chatbot responses have a little more empathy than clinician responses, which is not surprising, because clinicians don't have a lot of time to craft something with that empathy built in, whereas the chatbot has plenty of time to do that. So I think the patient engagement, patient communication component will be very. [00:09:17] Speaker B: Valuable, and that kind of patient engagement side drives, again, this place where the consumer experience of healthcare can help impact the quality and safety of the outcome that they receive. One of the things I saw was post-discharge notes being read out loud by an avatar of the doctor. It was such a good way to communicate the discharge instructions in a voice that the patient trusts. You look at the survey results in medical practice, and everybody loves and trusts their doctors at a rate that is just incredibly high. When you look at the database, you're in the 50th to 70th percentile even when you're in the 90% top-box range. It's wild how much people love and trust their doctors. And so if your doctor could be the one to read you your discharge instructions at a fourth-grade level and then potentially answer questions you might have, except it isn't actually your doctor, it's an avatar of your doctor who has access to your complete medical record as well as a data set around content, there's a chance we can bring people to the place where they actually follow their care instructions, and we get to the position where that's no longer such an endemic issue out there. I think it's a really cool use case.
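To make the discharge-summary idea above concrete, here is a minimal sketch of what that reading-level translation might look like in code. It assumes the OpenAI Python client; the model name and prompt wording are illustrative, not a Press Ganey product, and any real deployment would keep a clinician in the loop, as both speakers note.

```python
# Minimal sketch: rewrite a dense discharge summary for a lay reader.
# Assumptions: the OpenAI Python client is installed, OPENAI_API_KEY is set,
# and the model name is illustrative. A clinician must review the output
# before it reaches a patient, given the hallucination risk discussed above.
from openai import OpenAI

client = OpenAI()

def simplify_discharge_summary(summary_text: str, language: str = "Spanish") -> str:
    """Return a fourth-grade-level rewrite of a clinical discharge summary."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever model you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "You rewrite clinical documents at a fourth-grade reading "
                    "level. Keep every medication, dose, and follow-up "
                    "instruction. Do not add anything not in the source text."
                ),
            },
            {
                "role": "user",
                "content": f"Rewrite this in {language}:\n\n{summary_text}",
            },
        ],
    )
    draft = response.choices[0].message.content
    # Human in the loop: route the draft to a clinician for sign-off
    # before it is ever shown to the patient.
    return draft
```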
[00:10:41] Speaker A: So you mentioned something I think is very important, which is this issue of trust: the trust that patients and providers may or may not have in AI as they start using these kinds of tools. So how are you thinking about that? Because there are a lot of conversations around how transparent we have to be with patients when we're using AI. That being said, for things like clinical decision support, which leverage AI, we've never really talked about having to tell patients we're using decision support in the EHR in their care. But this is maybe the next level: oh, your inbox message was responded to using AI, as opposed to your physician typing the notes. So how are you thinking about this trust issue and transparency issue? [00:11:34] Speaker B: It's an interesting and almost dangerous ethical question. There's a study (I'm sort of looking off to the side here) that I saw the other day, from over the summer, which found that if you use the word AI as a label on your product, it increases the perceived risk of that product, and people are less likely to purchase it. And in that context it's like, okay, does that mean if I label truly great instruction that they need to follow as AI, are they less likely to follow it? Does it make it less trustworthy even though it kind of should be more trustworthy? And is this a situation where, if we as an industry know that it's the right instructions and patients are more likely to follow them by not labeling it AI, is it actually in everybody's best interest not to label it? But is it ethical not to label it? And where is it acceptable for us to have that consideration? Maybe it's okay if you're talking to the chatbot on the health system website about where care is going to be delivered or the credentials of the doctor you're about to see; maybe it's okay not to label that AI and let them think it's someone in the contact center. But maybe when it's your follow-up instructions, a response to a MyChart message that you think is coming from your doctor but is actually the AI answering, maybe then labeling is ethically required. But we need to keep in mind that it will reduce people's trust in that response and make people less likely to buy what they see when it's labeled like that. [00:13:13] Speaker A: Well, and I think we have to see that over time, too, because I do think that will evolve, and people will get more and more used to that being just part of how things are done. It's new now, but a year from now, maybe patients will just be expecting to see that little annotation that says AI drafted it and your doctor reviewed it, or whatever. Certainly the transparency issue worries me, as does making sure that we are bringing patient perspectives to the table as we think about this; as opposed to being paternalistic about what patients would want, we should be asking patients. But I also worry about this oversight function that is being added to the clinician burden, because we know this is not a function we're particularly good at: reviewing a summary and finding the one thing that might be a hallucination when the rest of it is totally fine.
I mean, that's a very particular kind of detail that's pretty hard for somebody to catch. And we already know that when we ask physicians to review their discharge summary or their operative note or whatever, they also don't catch mistakes if it was dictated, or anything else. So we can't expect perfection, because the current state is not perfection. I think we're going to have to build better tools to help clinicians optimally do that oversight function, so that when you see "generated by AI, but reviewed by a physician," a physician actually did a decent job reviewing. And even teaching and training: what does that oversight really look like? That isn't really part of the curriculum to date. So there's going to be some reskilling, or new skilling, that needs to come to our folks who are using these kinds of tools. [00:15:18] Speaker B: You mentioned bringing patients into the loop, bringing consumers into the loop on some of the decision-making. To plug something that's coming from Press Ganey shortly: we're wrapping our 2024 consumer trends survey in healthcare, and this is the first of these trend surveys where we actually asked a bunch of questions around how consumers feel about AI, the use of AI, where they'd want to use it, where they wouldn't want to use it. We really want to know, because then we're able to help inform our clients and partners: where is it acceptable for people? Where are people excited about it? One of the things that interests me, excites me: I've worked on the search side of life for a really long time. As one of the founders of Doctor.com, the whole mission was to make it easier for people to find the information they need to make a great decision about where to go to receive care. And one of the things that always, I don't know, "saddened" me is maybe the right way to say it, is that there's all this data out there that's hard to match up and is difficult to put in front of patients. And patients don't really know where to go to find certain elements and aspects that are public knowledge but are not easily accessible. So, for example, we talk about where the convergence of safety and consumerism is. One of the places I think is really appropriate: when you're searching for care, you find out whether this is a Magnet-designated facility. And the reality is that today, most consumers don't even know that Magnet exists. They don't know what it means, they don't know what it's for. But we in the industry know how crucial that is as an indicator that you're going to a safe environment. My hope is that things like Perplexity AI and SearchGPT and the new Google Gemini versions are going to end up digging out some of these public safety ratings that exist about these facilities and making them part of the consumer search experience, because I think it's an important way to decide where you're going to go and get care, and today it's little known and underused. I do think it presents some dangers for health systems. I don't think anybody's ready for the amount of existing public data that's been buried in weird government websites to become front and center when somebody Googles your name as a doctor or Googles your facility or brand name. I do not think marketers are ready for this, but it is coming like a freight train.
And things like the connection between the PubMed ID, where somebody is on the research side, and their NPI number, where they're practicing as a physician about to bill insurance: there's a weird mix in there. I think very soon, when you Google your doctor's name, you're going to easily see everything they've ever published. And when you ask who's the best doctor near me for this rare disease, this rare condition, it's going to be able to say, oh, these people are published, this is how recently they published, here's how many people are citing their publications, and these are the institutions they work at. These kinds of search opportunities are only available when large data at scale can get consumed, reorganized, and put in front of a consumer in a way that they can really engage with. So that's another place where I'm really hoping AI can help make safety information a more important part of the consumer search experience. But that's yet to be seen. [00:18:32] Speaker A: Well, there are a lot of websites out there that have safety data and safety grades and all that sort of stuff, and there's no way consumers would know about all these places they could go look. The other challenge, though, is that you may have pretty variable ratings depending on the website and the methodology, so it can paint a pretty confusing picture. But it's still better to at least know what these sites have than not. And as someone who's tried to find a physician for a family member in another state: I'm a physician, and trying to figure out who's good is a total black box, let alone for somebody who's not. I can at least try to reach out to colleagues in that part of the world and see what advice they have. But their advice, too, is based on, well, they're a nice colleague, they seem good. It's not usually data-driven when you're really making these choices. So having it be more data-driven, and having that accessible to everyone, I think would be a big step forward. [00:19:34] Speaker B: Yeah, 100%. And that's just sort of the first stage. The next place I think about, and maybe this isn't safety-related, though we could probably get there, is the engagement with contact centers and with the scheduling process, intake process, follow-up process, the more procedural communication overhead that happens between a patient and the health system they're engaged with, which we find often limits people's ability to get in to see their physicians in the first place. Maybe it's not a safety issue per se, but people wait too long to go see their doctor because it's too hard. So if we can smooth some of the operational overhead processes, it could be that we can get ahead of some issues that would come up later. [00:20:26] Speaker A: Well, I 100% think it's a safety issue, because we know a delayed diagnosis a lot of times starts with not being able to get in to see your clinician in a timely way, right? And there's lots of safety potential in the things that happen after the visit, too, that are more in that operational piece but are tied to safety. As clinicians, we have had trouble tracking the follow-up plan to make sure it's actually happening. Did they go to that referral? Did they get the test?
Did I get the result of that test, or did it fall through the cracks somewhere? I think we can really leverage these kinds of AI tools to help us find these things. A lot of health systems track that stuff very manually. They may have a nurse who's trying to track those things. They may have some trigger in the electronic health record that says the patient didn't go to X. But then they have to do a chart review and understand: was that intentional or not intentional? Did it get rescheduled? They go through the whole thing and then figure out, oh, it really was dropped, we'll have to call the patient. But it's a nurse or somebody like that who's spending a lot of time figuring that out. I think we can build tools to do a lot of that chart review component through automation and really find these patients before they have a delayed diagnosis that comes back with a terrible outcome. So I think there's a huge opportunity with these process-y type things that have real implications for safety outcomes. [00:22:04] Speaker B: You know, a funny thing that just came to mind: imagine that in the medical record, or the CRM, or some connection between the medical record and CRM, you started capturing people's Facebook pages and LinkedIn and Instagram and their TikToks, and you used AI to process the things they're saying publicly about where they're going and what they're doing. We've mostly done this in healthcare as a social listening mechanism to understand reputation. But I know when I see people in my community post online, some amount of it is about their health. Not everybody does this, but there are a lot of people who talk openly about how they feel and what their health situations look like. Imagine being able to dynamically pull in anything anybody is saying publicly about themselves and their health into some aspect of their medical record. It could start to make very clear what their follow-on experience of care looks like. You don't need to wait for them to contact you and reschedule the next appointment to tell you their health is feeling worse. If they went on Facebook or Instagram the next day and posted that they feel worse, you can actually start extracting some of that and making it usable as a view into what the person's life is. Maybe it's the positive flip side of what I think of as chronic cultural oversharing, but that's more about how I feel about social media than reality. And the flip side of this conversation is about privacy, exactly. When we think about all of this, we haven't really had the discussion around what's private and what's public. It's just sort of organically created itself, and it's done it in a way that almost everything is kind of public. And AI is just going to exacerbate all of that. I've been thinking about this from the consumer side, but I wonder how you think about it as a clinician. This concept of privacy and AI, it must weigh on you. [00:23:57] Speaker A: I think there are a lot of open questions around things like security and privacy. And we've talked about hallucinations and bias and all of these things. I guess the good news is that what I'm hearing from health systems, well, at the national level, is that there are organizations starting to really think about guidance and standards around some of these issues, because these are big, thorny issues and we need to get ahead of them.
Or maybe we're a little behind, but we can hopefully start to get a little more in front of them. And I'm also seeing that organizations, as they're bringing these tools in, are really setting up robust governance structures to answer a lot of these questions as well, like: how are we evaluating privacy, security, bias? And on the bias front, I do think there's potential for AI to perpetuate bias, but there's also potential for AI to reduce bias. It's always spoken of in a negative way, but it goes both ways; if we do it well, we might actually be able to reduce some bias with AI. But anyway, I think health systems are really starting to think about what governance and structure they need to bring these tools into their organizations. What I worry about is that a lot of that's happening at larger systems, larger hospitals, et cetera. If you think about smaller hospitals that don't have the resources to do that kind of robust governance, I don't want us to end up in a world where some organizations implement a bunch of stuff because they have all these resources, tools, governance, et cetera, and others don't implement, or implement poorly, because they don't have the ability to do that. We don't want to create that two-tier system. But I think it's a concern, for sure. [00:26:00] Speaker B: It's a very valid concern. I was just reading again the Harvard Business Review piece from Tom and Pat about the divergence of health systems coming out of COVID, where many have gotten much better, but many have continued downward; where there used to be this thick band in the middle of health system averages, it's diverging across these two paths in the woods. And so the concern you're expressing is super valid, it's very real. There will be institutions that have the money, resources, and understanding to deploy these tools really well, and then there's going to be a bunch of institutions that don't have that and are going to try anyway, and it's going to be really interesting to see what comes out. I mean, we live in a world where almost all of our private data has already leaked. You know that National Public Data breach? They're the company that does all of the background checks in the U.S., Canada, the UK, and a bunch globally. They leaked 2.9 billion pieces of personal information, including everyone's Social Security numbers, what car you drove in 1997, what color it was, where you've lived. They're the background check company. And then you look at Change Healthcare: it gets hacked in April, May, and they leak the almost complete medical records of 220 million Americans. So all of that data is already out there, and AI is going to allow for the mining of it and for the creation of spear-phishing attacks, and for institutions that aren't prepared with the kind of security needed to prevent future breaches, it's going to turn into a potential cascade of breaches. So those systems on the downward trajectory, who are going to still try to implement these tools but do it without the resources and expertise they need, are going to continue to expose data in ways they don't intend. It creates a bit of a strange, almost dystopian world where there is no more private information. We're rapidly approaching that very real possibility.
And so while this, again, is very much a safety issue, it's safety on a sort of bigger, global scale. It's really the flip side, and it just ensures that we should be thinking about both the upside and downside risk of all of these deployments. [00:28:18] Speaker A: 100%. And I am encouraged. I was reflecting back on electronic health record adoption 20-odd years ago, and people were not as focused on some of these unintended consequences. I think we've learned a lot from that, and just a lot in the last 20 years about technology and healthcare. So I'm, I guess, cautiously optimistic that there are organizations, entities, that are really worrying about privacy, security, et cetera, and trying to figure out how to make this work. But it's not an easy task to solve, by any stretch, and I think your points are really well taken. One thing you said that I thought was interesting, when you were talking about mining Instagram or Facebook for what patients are doing: one of the other use cases that ties into safety and patient engagement is really capturing patient voice around safety. And I think that's another really interesting thing we can think about AI for. So, for example, if somebody in a review online says, I don't like this doctor because they missed my diagnosis of X, right now that doesn't really go to anybody on the safety team in an organization to say, hey, this patient thinks they got a misdiagnosis. Or in a patient experience survey: oh, communication was poor, I didn't know the reason for this medication, so I didn't take it. You might see all these comments, and all of these are safety concerns, right? And so they can be lost when they come in through these other channels. I think there could be ways to have this very broad listening strategy for patients around safety that can leverage AI to pull out themes, et cetera, that could go to folks who are leading safety efforts. Because right now, safety focuses predominantly on reports that come in from the workforce. We know that the workforce maybe reports 5 to 10% of what's really happening, so there's a huge gap in what we identify. And we know that if you ask patients and they tell you about their safety concerns, there's actually very little overlap between what they say and what the workforce says. But they're all generally valid, right? So it's just another piece of the pie that we need to capture. And then AI and large language models could potentially even be mining the chart to find other safety issues that patients and the workforce haven't reported. So there are all these ways we can augment the input into our safety programs, to say, we're listening better, we're hearing more about the harms that are occurring, or the near misses. And then hopefully we can use AI to help us say, okay, now we're inundated with all this information, but how do we really get the insights out that we need to prioritize what to focus on? Because we can't just exponentially increase the inputs without having better tools to analyze and help us prioritize and focus. [00:31:50] Speaker B: Totally. A person talking about a safety concern or a safety event in a review on Google or Vitals or Healthgrades, like, that's a marketing problem. That's the way it's been perceived.
Like, that's a marketing problem: what do we do about it from a marketing perspective? So we've got something like 160 million reviews from patients about doctors, facilities, institutions, and playing with dynamic ontologies to try to extract specific concepts, like safety issues, has been almost the entire focus of the R&D team in Q3, because we are recognizing that it isn't just a marketing problem. I mean, yes, it is a marketing problem, but it's not just a marketing problem. So how do we bring forward the things we're finding in those domains to the appropriate people to solve for them? And maybe I'll flip it the other way, because almost all of these things have two sides of the coin. Right now, marketers aren't really using safety metrics, safety data, safety content to promote the safety of their orgs from a marketing perspective. We're not doing a great job surfacing great work as a way to encourage folks to go there as opposed to other places. And I think marketers are missing a whole slice of why people choose to go where they're going to go. Marketing has been very focused on convenience, for example, but quality and safety and outcomes are marketable concepts if you have the right content in there. And so it could be that AI could help us surface what's happening in the existing safety mechanisms to create content that brings people in for those reasons. [00:33:37] Speaker A: Well, and we do have data that says if patients don't feel safe, their likelihood to recommend drops to the lowest percentile and does not recover. So patients care about safety. I always say, if you don't feel safe, nothing else matters when you're receiving your healthcare. And so I think it is something that patients care about. And there's been kind of a fear of talking about safety, like, oh, we don't want to scare patients. Well, I think any patient who has gone through anything in healthcare knows that there are things that don't go right and that there are safety challenges. So I think we shouldn't be scared to talk about it, and we should actually be promoting the things we're doing to advance safety, because of the connection between patient perceptions of safety and that likelihood to recommend, which I always equate with patients' trust in us. That they would recommend us to somebody else means they trust us. So, yeah, I agree that it's an opportunity to really speak more transparently about the importance of safety and what we as organizations are doing. There's a great quote from Harriet Washington, who wrote the book Medical Apartheid, and she said: we're talking a lot about trust, how do we get patients to trust us, and really what we should be asking is, how do we show that we are trustworthy? And I think that's where talking about the things we're doing to advance patient and workforce safety is the way to show that we are trustworthy. [00:35:21] Speaker B: Another great place for marketing and safety to hang out together. Who's better at telling the story than the folks whose job is to tell the story? It's an interesting concept in here. So in our 2023 consumer trend survey, one of the questions we asked was: if you read reviews online before you went to the office to see the physician or visit the practice, did your experience match what you read? And it was universal.
There was super high correlation. People were basically like, yeah, the reviews that I read, that's the experience I had when I got there. And we thought two things about that. One: great. It's actually a good way to surface what the experience is like. We should be doing as good a job as we're doing, and better, of getting more reviews, making sure there's enough content out there, because it's actually helping people make great decisions. But the other part of me kind of wonders if there's an inception happening in there, that you have, to some extent, the experience you expect to have. When you think about the power of cognitive dissonance: somebody has decided to go and visit this doctor at this particular facility, in this particular health system. They are inclined to want to think they made the right decision. So they read good reviews, and they went and had the good experience they expected. Is there an opportunity for us to use a similar strategy to, to your point, show them that we're trustworthy? Create the understanding of what has been happening from a safety perspective in a positive light, because it's still marketing and story, and present it to patients in a way that creates an environment where they feel safer when they get there. They feel that level of trust because you've done the work of incepting the idea of trust from real data. [00:37:06] Speaker A: Yeah. And even explaining why we do some of the things we do. Because sometimes I think patients get annoyed, understandably: why are you asking me my name and date of birth again and again and again, all these things? And we don't really explain that the reason we're doing this is to make sure that you are the person who's supposed to get this medication or this procedure; we want to be safe and make sure no mistakes are happening. I definitely think safety has a marketing problem and could probably benefit from some marketing expertise: how do we really communicate about these things and help build that perception of trustworthiness? But it has to be reality, too. I don't want a facade of trustworthiness; I want it to be the real deal. So, anyway, back to AI. I am curious, as you look forward, what are the emerging trends that get you excited? [00:38:15] Speaker B: So the new ChatGPT model is fundamentally different from the previous models. I'm not sure how much you've been down this path, but o1 is different in that previously, you'd ask a question, it would reach into the large language model, and it would bring you back an answer. But with o1, you ask it a question, and it immediately asks itself a question: how would I answer this question? What would the steps be? And so it lays out somewhere between 20 and 200 steps and then goes and runs those individual queries. It brings them all back, reads them all together itself, kind of smell-tests confidence scores, and then brings you back that final answer. So you asking one question used to be one query; you asking one question is now going to turn into several hundred queries that follow what they call chain of thought.
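For the technically curious, here is a rough external approximation of the plan-then-answer pattern Reed describes. o1 does this internally; this sketch only fakes the idea with a generic chat API, the model name is illustrative, and the decomposition is deliberately simplified.

```python
# Sketch of the "chain of thought" pattern: plan sub-questions first,
# answer each one, then synthesize an answer that shows its work.
# Assumptions: OpenAI Python client installed, OPENAI_API_KEY set,
# illustrative model name. This is an approximation, not how o1 works inside.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

def answer_with_plan(question: str) -> str:
    # Step 1: ask the model how it would answer before it answers.
    plan = ask(
        "List the discrete research steps (one per line) you would take "
        f"to answer this question. Question: {question}"
    )
    steps = [s.strip() for s in plan.splitlines() if s.strip()]

    # Step 2: run each step as its own query.
    findings = [f"{step}\n{ask(step)}" for step in steps]

    # Step 3: synthesize, citing the steps so the answer stays auditable,
    # which is the transparency benefit discussed in the conversation.
    return ask(
        "Using only the findings below, answer the original question and "
        f"note which step supports each claim.\n\nQuestion: {question}\n\n"
        + "\n\n".join(findings)
    )
```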
And when you do that, you can simultaneously make it more transparent, because when you get your answer back from the machine, it shows you its work: this is what I did, this is how I was thinking about it. And it gives it the opportunity to do more complex functions. That gets me really inspired, because it can start to do some of the more complex work, especially, again, on the research side that we know people are going through on a day-to-day basis, and it will be more trustworthy because it shows its work. It'll actually just be more accurate because it's doing it in a smarter and deeper way, and it's going to be able to reach into more and different places to surface that. So from a consumer-experience-of-healthcare perspective, those things are so deeply required in order to create an environment where you can actually have the kind of advances that people are already aspirationally chasing. Making it easier to get questions answered: when you write that note to the inbox about your diagnosis, if o1 presents the first answer, all of a sudden you have something that is deep and rich and comes back right away. You're going to do your research: who's the best orthopedic surgeon in San Jose, California? It can do a level of research that would take someone literally hours, and it'll do it in 70 seconds. And so you can actually start to facilitate these better and deeper connections. And it gets to the place where we know the healthcare workforce is overburdened and we need these AI tools to fulfill their promise. I think some of these new models actually can start to fulfill some of the aspirational vision we're seeing people talk about. [00:40:50] Speaker A: Well, I think that's also where I get excited, because we spent the last 20 years implementing electronic health records, and there were all these problems with them. I constantly talked about how we have to move from getting these implemented to implementing them well, so we can get all the benefits. And I actually think AI is going to be the way we start to get a lot of those benefits, because what we've seen are the unintended consequences, like data overload; it is overwhelming, with too many alerts, too many of this, too many of that. But I feel like the opportunity with AI is to really get us to that optimization mode with all of these systems we've spent the last 20 years implementing. And the fears around, oh, it's a black box and I don't know where this answer is coming from, et cetera: what you're describing may be a way to mitigate some of those black-box concerns, to have more transparency. You know, Tom Lee posed this question to me about a year ago: when will we feel unsafe if we are not using AI? [00:42:05] Speaker B: Yeah. [00:42:06] Speaker A: And I think that time is here, because we're already using it. We're using it in imaging, we're using it in pathology, we're using it in a lot of places. So we are using AI, and we have to continue to use it. I think it's going to keep getting better, but we always have to be looking for these unintended consequences, and bias and hallucination, all these things we're going to have to pay attention to. But the possibilities to really take us to the next level are immense.
[00:42:37] Speaker B: You know, you remind me of one other aspect that's aspirational in a way but very nerdy and tactical in another way, which is: you have a health system that has a bunch of different systems of record. They've got a CRM, they've got an EMR, they've got an HRIS system running, they're using the Experience Cloud from Press Ganey; they've got all of these tools in place. And there's this issue where you need to match the data that comes out of those systems: unit name concepts, matching the doctor from the HRIS to the credentialing file to the EVS file over into the CRM. Those kinds of brute-force projects, making sure all of your unique IDs and unique concepts match between the systems to generate real correlative insights and real long-term insights, are a huge amount of work for people. And right now, only people can do much of that work. The mapping of unit names from disparate systems is a nightmare. I think there is a very real possibility that AI can improve that, and I think that because we've been playing with it, and it works. So there are ways to get at some of these more challenging cross-domain hierarchy, cross-domain correlative insight problems that have plagued the industry, because it has all these different systems of record that don't talk to each other. They don't talk to each other because it's hard to do, and I think AI might make it easier to do. We're seeing some pretty good, interesting early work there. [00:44:10] Speaker A: Well, and I think that totally gets to that point of data, to insights, to drive improvement, right? We're now overloaded with that data, but this can help us get to those insights. [00:44:22] Speaker B: The CEO of Klarna has made this big proclamation where he fired both Workday and Salesforce. And he's like, you don't need systems of record. I don't need Salesforce, a place where people are typing in which clients are paying me; Stripe, my payment processor, already has all the data. Let me feed all that raw data into the language model. Workday isn't the one cutting the checks to my employees; my payment processor is cutting the checks. Let me just run all that raw data into the large language model. Why do I need the system of record, with people typing everything in, when the machine can just read the underlying activities and we can follow behind, sweep up the leavings, and turn them into intelligence? That is a fundamental shift, a game changer. And, you know, from Reed's perspective: I don't know that Klarna is necessarily going to be able to pull that off. That sounds really hard, based on what we're seeing in the language models today. But that vision of not needing all these additional layers of technology would just make everything simpler. [00:45:29] Speaker A: Well, and that's the example of ambient listening right in the practice. Instead of me having the conversation with the patient and then having to go document it after the visit at 10:00 at night, which is what often was happening, we can just have this thing synthesize it in real time and be more efficient, and not require all those additional steps.
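As a toy illustration of the unit-name mapping problem described above, here is a stdlib-only Python sketch. The unit names and the similarity threshold are made up; real systems would use much richer signals (embeddings, org hierarchies, an LLM pass) plus human review, and this just shows the shape of the task.

```python
# Toy cross-system matching: reconcile unit names between, say, an EHR
# export and an HRIS export using simple string similarity, flagging weak
# matches for the human-in-the-loop review this kind of mapping still needs.
from difflib import SequenceMatcher

# Hypothetical unit names from two systems of record.
ehr_units = ["4 West Med/Surg", "ICU - Cardiac", "Emergency Dept"]
hris_units = ["Med Surg 4W", "Cardiac Intensive Care Unit", "ED Main Campus"]

def similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between two unit names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Propose the best HRIS candidate for each EHR unit; low scores get flagged.
for unit in ehr_units:
    best = max(hris_units, key=lambda h: similarity(unit, h))
    score = similarity(unit, best)
    flag = "" if score >= 0.6 else "  <- needs human review"
    print(f"{unit!r} -> {best!r} ({score:.2f}){flag}")
```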
[00:45:54] Speaker B: So, you know, the great thing about being a technologist in healthcare is that when you do something cool, you can actually have an impact on saving someone's life. It's very different from being a technologist in any other space. And so it's always a pleasure to get a chance to hang out with you and get closer to the clinical needs and the work that we do. [00:46:13] Speaker A: Well, thanks for joining me. As someone who started my career in patient safety 25 years ago, studying how technology can improve quality and safety, you can imagine my excitement as we enter this new world, because we were converting people from paper notes and paper prescriptions to EHRs and getting a lot of pushback at the time. So it's a whole new day, which is very exciting. [00:46:44] Speaker B: Can't wait to see what happens next. [00:46:46] Speaker A: That's a wrap. Thank you for joining us today, and special thanks to our guests for sharing their time and insights. Stay tuned for our next episode, which will be released soon. In the meantime, visit our website, where you'll find more information on the human experience and a lot more.
