The data: Some two-thirds (65%) of US hospitals use predictive AI models, but fewer evaluate those models for accuracy (61%), and fewer still for bias (44%), according to a January 2025 study published in Health Affairs. The researchers analyzed survey responses from 2,425 acute care hospitals.
How hospitals are using predictive AI models: The most common clinical use cases in hospitals include:
The problem: Many predictive AI models aren’t being tested for accuracy or bias.
Most hospitals lack the resources to develop AI models in-house, and hospitals that rely on externally developed models tend to do less internal testing and evaluation of them. That gap could end up harming patients by perpetuating or exacerbating health inequities.
For example, a patient might not receive appropriate follow-up care or treatment if a hospital relies on recommendations from an AI model trained on data reflecting only white men, or one built on race-based medical misconceptions.
The final word: The study’s findings highlight the need for more rigorous testing and oversight of AI models in clinical settings. That could become a more difficult undertaking following President Trump’s recent decision to rescind a Biden administration executive order that included rules designed to ensure the healthcare industry implements AI responsibly.