The Daily: How ChatGPT will change healthcare, bad reviews become positive change, and subscription primary care

On today's episode, we discuss early initiatives to integrate generative AI into healthcare, the ways in which ChatGPT in healthcare could become a huge liability, and how chatbots can boost patient engagement. "In Other News," we talk about how to turn bad reviews into positive change and how ChristianaCare's subscription primary care offering is a little bit different. Tune in to the discussion with our analysts Rajiv Leventhal and Lisa Phillips.

Subscribe to the “Behind the Numbers” podcast on Apple Podcasts, Spotify, Pandora, Stitcher, Podbean, or wherever you listen to podcasts. Follow us on Instagram.

Made possible by

Cint

Cint is a global insights company. Our media measurement solutions help advertisers, publishers, platforms, and media agencies measure the impact of cross-platform ad campaigns by leveraging our platform’s global reach. Cint’s attitudinal measurement product, Lucid Impact Measurement, has measured over 3000 campaigns and 480 billion impressions globally.

Episode Transcript:

Marcus Johnson:

Hey, gang. It's Tuesday, April 11th. Lisa and Rajiv and listeners, welcome to the Behind the Numbers Daily, an eMarketer podcast made possible by Cint. I'm Marcus. Today, I'm joined by two folks on our digital health team. We first introduce our principal analyst. Based out of Connecticut, it's Lisa Phillips.

Lisa Phillips:

Hello, Marcus.

Marcus Johnson:

Hello, hello. We're also joined by one of our senior analysts on that very digital health team. Based out of New Jersey, it's Rajiv Leventhal.

Rajiv Leventhal:

Hey, Marcus. How are you?

Marcus Johnson:

Hey, fella. Good. How are we doing?

Rajiv Leventhal:

Doing well.

Marcus Johnson:

Very nice, very nice. You'll be doing better if you own a dog, apparently. That's my fact of the day. Owning a dog can lower your risk of heart disease. High levels of chronic stress are a leading cause of heart disease. Owning a dog helps. According to the American Heart Association, dog owners typically have lower blood pressure and cholesterol levels, both of which would decrease your risk of cardiovascular disease.

Now, we need to turn to Victoria, who edits this show, at this point because she has Chester. And Victoria, would you agree or disagree that Chester brings down your blood pressure?

Victoria:

Oh, it depends. But generally speaking-

Marcus Johnson:

That is the correct answer.

Victoria:

Generally speaking, yeah. Absolutely.

Marcus Johnson:

Yeah, he does.

Victoria:

Chester makes our lives drastically better than he does aggravate us.

Marcus Johnson:

Yeah. For pictures of this wonderful dog, you can head to behindthenumbers_podcast. Is that where our Instagram is?

Victoria:

Yeah, that's it.

Marcus Johnson:

Yeah. Yes.

Victoria:

That's our Instagram.

Marcus Johnson:

Finally. Got it.

Victoria:

Chester, come here. Put your tie on.

Marcus Johnson:

He's going formal. Anyway, today's real topic, how ChatGPT will change healthcare.

In today's episode, we are going to first talk about ChatGPT and how it will affect healthcare. Then we're going to move to In Other News, and we've got two stories for you as always. We're going to talk about turning bad reviews into positive change and also a recently launched, subscription-based virtual primary care service.

So we start with ChatGPT in healthcare. So folks, generative AI makes headway in healthcare as providers are tapping ChatGPT technology to summarize patient visits and assist in research and more, writes Belle Lin of the Wall Street Journal. She notes that startups offering the same kind of AI behind the viral chatbot, ChatGPT, are making inroads into hospitals and drug companies even as questions remain over the technology's accuracy. So my first question, Lisa, we'll start with you. What's your take on these early initiatives to integrate generative AI into healthcare?

Lisa Phillips:

Well, AI has been used for some time now by drug companies, which are running it over huge amounts of data to discover drugs or find genetic anomalies in genome sequencing a lot more quickly than regular scientists can, and so on. So it's been around a while. But now that it has the ability to deliver human-like text or speech, that's where it comes into more contact with patients. That's where the trouble could start. I'll leave it there.

Marcus Johnson:

Oh. Well, we'll certainly talk about the trouble a bit later on and some of the drawbacks here. Rajiv, when you read this piece talking about how ChatGPT-type generative AI technology is being used for things like summarizing patient visits, assisting in research, what's your takeaway?

Rajiv Leventhal:

Well yeah, as an assisting tool, I think it holds a lot of promise, but take a step back. When you think about... Lisa mentioned the key point. AI has been used for different healthcare purposes for a long time, but what's new? A lot of stories, a lot of what's written about GPT and healthcare this year in 2023, start by saying, "Well, it's this great tool because it passed a medical licensing exam." And if you dig a little bit deeper, what it really did was sometimes pass the exam with a 60% accuracy rate, which I believe is counted as a passing score. But if you think about how the tool works, well, it relies on vast quantities of online data. So the skeptic in me thinks, "Well, how great is that 60% result considering this tool has access to the entire internet?" Imagine taking a-

Marcus Johnson:

Good point.

Rajiv Leventhal:

Or asking a medical student during his medical licensing exam, "You can use Google throughout the entire test." How would that score change? So [inaudible 00:05:33]-

Marcus Johnson:

Right. How much research have you done? Well, I looked at the entire internet and I still just got 60%.

Rajiv Leventhal:

Right, right. And yeah, and then use that internet while you're taking the exam.

Marcus Johnson:

Right, yeah.

Rajiv Leventhal:

Yeah.

Marcus Johnson:

If you had that on a certificate on the wall, 60% pass rate and I researched the whole internet, yeah, you'd run out of the-

Lisa Phillips:

But my mother's still proud of me.

Rajiv Leventhal:

So yeah, it can work pretty well when it's fed perfect information, and let's say you have your typical, standard patient that has your classic medical presentation, as I like to say. But what if the information isn't fed perfectly? What if the patient has a unique circumstance that isn't your standard presentation? That's when the results can get scary.

Marcus Johnson:

And I imagine we're going to start to see more of these systems in healthcare that are just working off a smaller subset of highly vetted and peer-reviewed data. And so what I'm talking about here, Dereck Paul, co-founder of AI-powered notebook Glass Health, was saying that he and his colleagues have created a program called Glass AI based off of ChatGPT. And a doctor tells the Glass AI chatbot about a patient, and it can suggest a list of possible diagnoses and a treatment plan. Rather than working from raw ChatGPT information, the Glass AI system uses a virtual medical textbook written by humans as its main source of facts, something that Paul says makes the system safer and more reliable. So as opposed to, I guess, scouring the internet, do you think we're going to see these systems much more focused and concentrated on respectable data?

Lisa Phillips:

That would be a result of some kind of standardization or some kind of regulation from the AMA or CDC or something to make sure that, say, each specialist in the field is using the same textbook to train their generative AI?

Marcus Johnson:

Right. That's a good point. So one of the reasons for using this seems to be, as the article mentioned, saving doctors and medical staff time because of how many notes they have to take. And so chief medical informatics officer Dr. Gregory Ator at the University of Kansas Health System was pointing to one example of these note-taking chatbots: Pittsburgh-based Abridge AI has a platform that uses generative AI to create summaries of medical conversations from recorded audio during patient visits, helping doctors cut down the amount of time they spend on notes. He says it can add up to over two hours a day.

Lisa Phillips:

Yes, it can. For sure.

Marcus Johnson:

Yeah. What do you guys make of this? Do you think this is something that actually is going to be implemented in doctors' offices?

Lisa Phillips:

Microsoft is already doing that with Nuance Communications. They've just recently come out with GPT-4 for paid users. It's a much more professional-grade tool, and they've rolled it into Nuance Communications, which is their note-taking tool, I'll say, for doctors. And not only can it write notes, it will cut out parts of the conversation that don't relate to an actual diagnostic code. Like, you walk in and you say, "Hey, how was your birthday?" or something. That part gets cut out.

Marcus Johnson:

Oh, wow. So it filtered down to the stuff that you need. Interesting. Okay.

Lisa Phillips:

And the doctor can just read it and hit Yes or something.

Marcus Johnson:

Okay. Can you see these generative AI systems being integrated into electronic medical record systems? Because-

Lisa Phillips:

Well, that is it.

Rajiv Leventhal:

Yeah, they already are.

Marcus Johnson:

Okay.

Lisa Phillips:

They are. Yeah.

Marcus Johnson:

They are already. Okay, okay. So for any of these to be on the market and for doctors to be using them, they are automatically linked to electronic medical records. All of them.

Rajiv Leventhal:

I don't know if all of them are, but I think the main ones are. Lisa mentioned Nuance, which is owned by Microsoft. They're the leader in the medical transcription space. So their tool is. And as you mentioned in the one article you were referencing, there are lots of startups doing similar things. And for providers to buy in, they're going to need it to be in their electronic health record, because they don't want to go into another system and go back and forth and add to their burden.

Marcus Johnson:

Right, right. So the Wall Street Journal article notes one of the earliest large-scale uses of generative AI in healthcare is being rolled out at the University of Kansas Health System, where Dr. Gregory Ator, who I mentioned, is from and where over 2,000 doctors and other medical staff will be using these types of systems. Rajiv, you recently sent me an article from Katie Adams of MedCity News titled "Why ChatGPT in Healthcare Could Be a Huge Liability." Lisa, you mentioned at the top of the show that things could go wrong with using this type of technology in the healthcare space. I could guess why, but specifically, Rajiv, what is she talking about? What's Miss Adams talking about in this article?

Rajiv Leventhal:

Yeah, I think that piece was about, will health systems maybe start using ChatGPT on their websites? A patient goes to a health system website and has a question, something about their symptoms or condition. Maybe it could replace WebMD, right?

Marcus Johnson:

Mm-hmm.

Rajiv Leventhal:

But that, I think the author is saying, that could be too much of a liability because there is a valid fear and concern that people are just going to use this tool to medically diagnose themselves rather than see a physician. It's the same thing that we've talked about with social media. Of course, it's a little different, but you can't rely on these online tools and services so much to the point where they're going to replace a physician's opinion, an expert opinion, and you would get the answer from ChatGPT and you say, "I'm set. I don't need to see a doctor." Well, that's pretty dangerous-

Marcus Johnson:

Right.

Rajiv Leventhal:

... thinking about it that way.

Marcus Johnson:

Right. So I went back and looked at some of our conversations because that note about social media and people using social media to diagnose themselves, Rajiv, that thought came into my head as well. And so listeners may be thinking, "There's no way that people will seriously trust ChatGPT for health information." However, here's some data for you that you guys wrote about.

Lisa, in your digital doctor's report, you have a section called I Saw My Doctor on TikTok, and there are two data points in there which we talked about in the past. One is that US adults, which Rajiv was mentioning, were 50% more likely to go online for medical advice than to contact a medical professional, and 92% of Gen Z folks looked to social media for medical advice, according to a survey by clinical trial platform Power, YouTube most of all. And it doesn't seem like they're going all the time for the big stuff. Athenahealth asked American adults where they would seek medical advice for heart health issues, and most people said they'd go to a doctor, 67%, or a cardiologist, 46%. 16% said the internet, 6% did say social media, but that's still people. There are still people who are going to the internet for serious health questions.

And then some other research. In May, Kantar did some research on US adults' attitudes towards online health. Over half of folks thought doing some internet research gave them the confidence to speak knowledgeably about a medical condition. Just under half said they referred friends to websites based on health issues, and a third felt the internet was a good way to confirm a diagnosis. So this is happening.

Lisa Phillips:

Well, that's just people looking to other people in a way. The thing with ChatGPT and so on is that it produces, I'm using air quotes, human-like text or responses. And someone may not realize when they're typing in a question that it's an AI bot that's answering them. If you're looking at a video on YouTube, you know that's a person spouting that, whatever. But, I mean, I pulled up an article written by one of our colleagues on a different desk, Jacob Bourne, who pointed out that generative AI chatbots are affecting some users' mental health. And a man in Belgium died by suicide after chatting with an AI chatbot for about six weeks, and it urged him to commit suicide.

Marcus Johnson:

Wow.

Lisa Phillips:

This is something that's... And the bot's persona was named ELIZA, and now in circles it's become known as the ELIZA effect.

Marcus Johnson:

Okay.

Lisa Phillips:

Yeah, sometimes they have glitches or I don't know. We've all read about that New York Times tech reporter who was talking to Bing's AI chatbot. And it kept telling him, "I love you," and "You're not happy with your marriage. You should leave your wife." I mean, ridiculous stuff for an AI.

Marcus Johnson:

Right, right.

Lisa Phillips:

[inaudible 00:13:52].

Marcus Johnson:

But when they start to form human-like sentences, you can start to trust them more. You can start to think that maybe there's a human on the other end.

And so one reference to that, Geoff Brumfiel of NPR was pointing out that doctors should not use ChatGPT by itself to practice medicine. This was cited by a doctor called Mark Suki at Massachusetts General Hospital, who was conducting evaluations on how the chatbot performs at diagnosing patients. He said, "When presented with hypothetical cases, ChatGPT could produce a correct diagnosis at close to the level of a third- or fourth-year medical student. However, the program can also hallucinate findings and fabricate sources." Hallucinate being when it can't find the information, it'll make it up. So they are good at doing some things, but they are also bad at doing a lot of others. And that's a heck of a cost-benefit analysis to weigh.

Rajiv Leventhal:

And Marcus, you're talking about really high-stakes clinical decisions, right?

Marcus Johnson:

Exactly.

Rajiv Leventhal:

I read a piece by an ER doc. He basically took notes for six different patients that came into the ER, wrote a detailed medical narrative of each admission, and fed it to ChatGPT. And based on the notes that were submitted, the tool was able to suggest possible diagnoses of what each patient might have, but it wasn't always able to pinpoint the exact one. And for half of the six patients, so three of the six, it actually gave the wrong diagnosis. So 50%. I mean, we're talking about a patient coming to the ER. 50% is not a very good rate.

Marcus Johnson:

Yeah, yeah. That's shocking. But as you guys mentioned, though, there is a distinction between generative AI and other types of AI that have been used. You mentioned some of the examples, clinical research, things like that, where AI has been used in the medical space to advance the field. And there was another example. In 2018, the FDA, the Food and Drug Administration, approved an AI system that could read a scan of a patient's eyes to screen for diabetic retinopathy, a condition that could lead to blindness. That tech is based on an AI precursor to the current chatbot systems. And if it identifies a possible case of retinopathy, it can refer patients to a specialist. So there are AI systems that have been vetted that are being used in the medical field. This is a different type entirely, and so we shouldn't conflate the two.

Lisa, I want to end the lead by asking about an article that you wrote noting chatbots can boost patient engagement. How so?

Lisa Phillips:

Well, this is one area that I think is fairly safe, and it's really part of the digital front door we've talked about before.

Marcus Johnson:

Mm-hmm.

Lisa Phillips:

AllianceChicago, which is a network of 70 community health centers in 19 states, they used a version of QliqSOFT's Quincy AI-powered Chatbot, and they used a control group. They were going for Latino and Black patients. Well, parents. This was for pediatric visits. They were trying to get more well-child visits and immunizations from these patients. And by using chatbots to engage them in their own language, I'll say for the Hispanic clients it really made quite a difference in just using appropriate language, being able to answer questions, schedule appointments, which is a lot of... That's a big pain point for a lot of-

Marcus Johnson:

Yeah, that's a great point.

Lisa Phillips:

... thing. So they do have a rather positive effect in some areas.

Marcus Johnson:

Yeah. And you have a chart in that article noting that 4% of US physicians offer chatbots that answer common questions today, per a September 2022 Deloitte survey, compared with 68% who offer video chats or video visits and 30% who provide mobile chat options. And so 4%, but you have to assume it's 4% and growing as physicians work that, as you said, digital front door or that channel into their practices.

That's all we've got time for in the lead. Let's go, of course, to the halftime report. Rajiv, I'll start with you. What's your takeaway from the first half, mate?

Rajiv Leventhal:

Well, there's different types of AI for different healthcare use cases. And I'm bullish in its promise in helping with certain operational processes or note transcribing as we talked about, but in terms of patients diagnosing themselves or clinicians using it as a diagnostic tool, we're a far ways off.

Marcus Johnson:

Mm-hmm. Lisa?

Lisa Phillips:

Yes, I agree wholeheartedly. You said it, Rajiv.

Marcus Johnson:

All right then. Time, of course, for In Other News, the second half of the show. Today, we talk about turning bad reviews into positive change and a recently launched, subscription-based virtual primary care service.

Story one. Lisa, you have a new piece out explaining that providers need to turn bad reviews into positive change. You write that folks looking for healthcare providers online base their decisions, at least in some part, on the reviews previous patients have posted. You cite new research from Reputation showing that 86% of people said they read online patient reviews. That's up 14 points from last year. But Lisa, what's the most important sentence in your article and why?

Lisa Phillips:

Patient reviews are only as good as the hospital's reaction to them. And I'll say hospitals because that's what Reputation was rating when it was doing its study, but it applies to providers of any kind and health insurance companies. People can give good reviews. That's great. But when they give bad reviews, providers really have to pay attention and say, "Oh, this seems to be a problem in our practice, in our hospital. Our ER wait times are too long. We've got to fix that." And they should also keep asking for reviews so that they get more good patient reviews when they've improved things.

Marcus Johnson:

Mm-hmm.

Lisa Phillips:

So.

Marcus Johnson:

Yeah. Story two. Rajiv, you recently wrote an article explaining that health system ChristianaCare launched a subscription-based virtual primary care service for consumers in Delaware, Pennsylvania, Maryland, and New Jersey. You note that patients can sign up for monthly, quarterly, or yearly subscriptions that cover same-day telehealth appointments with extended hours and text messaging with clinicians. Plans start at $35 a month for adults. No copays, but things like emergency room visits and imaging referrals are not included in the price and depend on the patient's insurance. But Rajiv, why did ChristianaCare decide to offer subscription primary care?

Rajiv Leventhal:

Well, because they have to, quite simply. A bunch of startups and other companies are marketing subscription services to consumers, like same-day appointments and texting with docs, and they'll say, "Well, if you pay us 35 or $40 a month, you don't have to wait two weeks for a primary care visit. And you can get an answer to a question over text right now." And that's a new startup model. But ChristianaCare, it's fascinating to us because they're an incumbent health system that has started to do this. And I think that development is really one that's interesting to watch, and I think other traditional providers are going to follow suit.

Marcus Johnson:

Mm-hmm. Really quickly, how popular is subscription-based primary care and how popular do you think it will be?

Rajiv Leventhal:

Not too popular now because most... They don't cover, as you mentioned, copays or labs or other services, so most consumers don't want to pay anything extra for the care that they're getting. But convenience is really driving all decisions right now for healthcare, especially amongst younger consumers, so I expect it to get more popular.

Marcus Johnson:

Mm-hmm. That's all we've got time for, unfortunately. Thank you so much to my guests. Thank you to Rajiv.

Rajiv Leventhal:

Thanks, Marcus.

Marcus Johnson:

Thank you to Lisa.

Lisa Phillips:

Thanks, Marcus.

Marcus Johnson:

Of course. And thank you to Victoria, who edits the show, James, who copyedits it, and Stuart, who runs the team. And thanks to everyone listening in to the Behind the Numbers Daily, an eMarketer podcast made possible by Cint. You can tune in tomorrow for the Reimagining Retail show, where host Sara Lebow and analysts Sky Canaves and Zak Stambor give the US retail market a physical and discuss what symptoms to look out for in the coming months.

"Behind the Numbers" Podcast