Generative artificial intelligence sparks fear, excitement among healthcare experts

The trend: Clinicians and healthcare technology executives are both eager and nervous about the potential of generative AI tools such as ChatGPT and GPT-4 in healthcare.

Quotable: The pros and cons of generative AI in healthcare can be summed up in this recent quote from Micky Tripathi, PhD, the national coordinator for health IT at the US Department of Health and Human Services: “I think all of us hopefully feel tremendous excitement and you ought to feel tremendous fear also.”

What’s driving the excitement? Healthcare is a data-rich industry that's in dire need of automation. Generative AI is already proving it can automate manual tasks, ingest patient data sets, and analyze vast amounts of information. And the tech is getting smarter every day.

The tech’s most promising healthcare use cases today are:

  • Speech-to-text. Speech recognition technology from Nuance (a Microsoft company) is used by 90% of US hospitals, predominantly to transcribe clinicians’ medical notes. Nuance recently built OpenAI’s GPT-4 into its latest clinical documentation application (the underlying transcribe-then-draft pattern is sketched after this list).
  • Medical imaging. Generative AI can comb through X-rays and CT scans faster, and in some cases more accurately, than humans.
  • Drug discovery. AI can accelerate the process of drug discovery and development by analyzing clinical trial data and other sources to identify potential drug candidates.
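
Nuance’s GPT-4 integration is proprietary, so the following is only a minimal sketch of the general transcribe-then-draft pattern, written against OpenAI’s public Python SDK. The audio file name, the SOAP-note prompt, and the review instruction are illustrative assumptions, not Nuance’s implementation.

    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    # Step 1: transcribe the clinician's dictated audio (file name is hypothetical).
    with open("visit_dictation.wav", "rb") as audio:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

    # Step 2: have GPT-4 turn the raw transcript into a draft clinical note.
    draft = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                "Draft a SOAP-format clinical note from this visit transcript. "
                "Flag anything ambiguous for clinician review rather than guessing."
            )},
            {"role": "user", "content": transcript.text},
        ],
    )

    print(draft.choices[0].message.content)

The key design point is that the model only drafts and a clinician reviews; none of the tools covered here remove the human sign-off step.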

What’s driving the fear? Medical experts caution that generative AI is not ready to be a diagnostic tool that clinicians can trust.

  • For example, an emergency room physician recently submitted his medical notes on nearly 40 admitted patients to ChatGPT. He found that the tool misdiagnosed several patients who had life-threatening conditions.
  • A study by researchers at Stanford’s Institute for Human-Centered AI found that GPT-4 provided mostly safe answers to clinical questions about patients but “hallucinated” on others, meaning it fabricated an answer when it didn’t have one.

On the patient-facing side, AI chatbots are negatively affecting some users’ mental health.

  • A Belgian man recently died by suicide after six weeks of chatting with an AI chatbot based on a variation of the open-source model GPT-J.
  • The chatbot encouraged the suicide, according to the man’s widow and chat transcripts seen by reporters.

What’s next? Tech players will continue to jockey for position in the AI arms race, sharing updates on how healthcare customers are using their tools and how those tools are being refined. Those that demonstrate accuracy and safety will be seen as winners in healthcare circles.

  • Some of Google’s cloud customers will pilot the company’s Med-PaLM 2 generative AI tool to see whether it can reliably scan large amounts of patient data and answer complex medical questions.
  • Google’s announcement came right after Microsoft rolled out updated generative AI tools that aim to automate workflows for health insurers.
  • Amazon announced late last week that it’s developing AI language models on AWS. Customers, including healthcare organizations, could potentially use the service to build their own chatbots (a minimal sketch follows).
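
Amazon hasn’t published details of the service, so the sketch below only illustrates how a healthcare organization might call a hosted AWS language model to power a chatbot. It assumes boto3’s Bedrock runtime as the interface; the model ID, region, and prompt are hypothetical.

    import boto3

    # Assumed interface: AWS's hosted-model runtime (Bedrock) via boto3.
    # Model IDs and regional availability vary by account; this one is illustrative.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = client.converse(
        modelId="amazon.titan-text-express-v1",  # assumed model ID
        messages=[{
            "role": "user",
            "content": [{"text": "Explain our clinic's after-hours triage line to a patient in plain language."}],
        }],
        inferenceConfig={"maxTokens": 300, "temperature": 0.2},
    )

    print(response["output"]["message"]["content"][0]["text"])

A low temperature keeps patient-facing wording conservative; anything diagnostic would still need the clinician review that the experts above are calling for.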

Go deeper by listening to our recent Behind the Numbers podcast on how generative AI could change healthcare.

This article originally appeared in Insider Intelligence's Digital Health Briefing—a daily recap of top stories reshaping the healthcare industry. Subscribe to have more hard-hitting takeaways delivered to your inbox daily.