The promise (and perils) of AI in healthcare
Healthcare is highly regulated. Can artificial intelligence be deployed responsibly?
In the U.S., patients make more than 1 billion doctor visits every year, according to the CDC. That’s a tremendous volume of data being collected about each patient and the types of care they receive. However, all that information is currently siloed: mired in legacy technology and stuck in heterogeneous structures (or no structure at all, in the case of information-rich clinical notes).
So it’s no surprise that healthcare leaders are buoyed by the opportunities AI presents, given its capacity to process, normalize, and synthesize vast amounts of semi-structured data. Potential benefits of AI in healthcare include increasing access to care, improving personalization and patient experiences, and reducing provider burnout. In the life sciences, AI could help bring life-saving treatments to market faster.
Yet, despite more than 900 FDA-approved AI medical devices (more than 10x the number in 2020), reports show that U.S. healthcare is slow to come around to AI. A survey by the Center for Connected Medicine of executives from hospitals and health systems found that only 16% have system-wide AI governance policies in place. Those that are making progress are starting by piloting lower-risk applications, such as internal operational efficiencies.
Epic, the country’s leading electronic health record (EHR) system provider, recently unveiled more than 100 new AI features for doctors and patients. Epic’s AI tools, developed in collaboration with Microsoft, are designed to alleviate burnout; one of the top complaints about EHRs is the amount of time clinicians spend on them. One development in particular, “ambient listening,” claims to remove the need for physicians to take notes during patient visits. Legitimate concerns are already surfacing about the speech recognition tool’s data privacy and accuracy.
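To make the concept concrete, here is a minimal sketch of the general ambient-documentation pattern (transcribe the visit, draft a note, require physician sign-off), using the open-source Whisper speech-to-text model. This illustrates the pattern only, not Epic’s implementation; the summarize_to_note step is a hypothetical placeholder.

```python
# Minimal sketch of an "ambient listening" pipeline: transcribe visit audio,
# draft a note, and route it to the physician for review. This is NOT Epic's
# implementation; it illustrates the general pattern only.
import whisper  # open-source speech-to-text: pip install openai-whisper

def draft_visit_note(audio_path: str) -> str:
    """Transcribe a recorded visit and return a draft note for human review."""
    model = whisper.load_model("base")            # small general-purpose model
    transcript = model.transcribe(audio_path)["text"]
    return summarize_to_note(transcript)

def summarize_to_note(transcript: str) -> str:
    # Hypothetical step: real systems would apply summarization models plus
    # clinical templates to turn a raw transcript into a SOAP-style note.
    return f"DRAFT NOTE (physician review required):\n{transcript[:500]}"
```

Whatever the vendor, the key design choice is the last line of the pattern: the output is a draft that a human signs off on, not a finished record.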
As healthcare cautiously evaluates the AI-powered solutions landscape, the ROI for “AI as doctor” commercial applications has yet to become clear, and the risks are high.
Lessons from Dr. Watson
Let’s rewind, for a moment, to a little over a decade ago. AI was a buzzy topic in healthcare thanks to IBM’s investment in an oncology program. In 2012, IBM partnered with Memorial Sloan Kettering (MSK) to train its natural language processing (NLP) technology, Watson Health, as a clinical decision support tool for treating cancer patients. The promise was that the system would be able to examine and analyze multiple sources of information, including the medical literature, large databases of clinical records (including unstructured doctors’ notes), and genomic data, with unprecedented depth, breadth, and speed, to make its recommendations.
In 2013, the company joined forces with MD Anderson Cancer Center in Texas in one of the more publicized deployments of the technology. By 2016, MD Anderson canceled the project after spending $62M on it. So, what went wrong? A number of things:
- Marketing before the technology was ready. This was a classic case of a hammer looking for a nail. Rather than start small and iterate with feedback from end users, IBM threw everything it had at Watson in an attempt to get ahead of competitors like Google and Microsoft. When IBM defended the MD Anderson Oncology Expert Advisor (OEA), it stated that Watson’s recommendations matched expert opinion 90% of the time. I don’t know about you, but when it comes to cancer, I feel like that number needs to be higher for real-world deployment.
- The AI was trained on narrow data. One of the biggest controversies surrounding Watson Health was the revelation that it was trained on synthetic data from MSK, a New York hospital with a fairly wealthy patient population. Many complaints about Watson surfaced when the technology was applied outside the U.S. In India, physicians found Watson’s advice matched their treatment recommendations 73% of the time. In South Korea, Watson’s recommendations for colon cancer patients matched experts’ only 49% of the time.
- Healthcare is inherently complex. It’s one thing to create an AI to play chess in a highly contained, structured environment; it’s quite another to integrate with complex data sources that are notoriously opaque and constantly shifting. For example, during the OEA project, MD Anderson updated its EHR system, which broke the integrations that had been put in place and required rework. Between strict privacy regulations and legal compliance issues, simply accessing the data AI requires can be daunting.
Fast forward to today, and AI technology has improved exponentially. The failure of Watson Health doesn’t mean AI can’t significantly modernize care and improve healthcare outcomes, but this cautionary tale holds lessons worth applying now. To avoid shiny-object syndrome, we need an approach grounded in business strategy and human needs, with a demonstrable impact on outcomes.
Healthcare will need infrastructure to use AI
Before the healthcare industry can apply AI to fuel productivity and better patient engagement, many health systems first need to establish solid AI governance and risk management, as well as modernize their data management. Both are huge lifts. A study by data provider Hakkoda found that executives at large healthcare organizations face several data management and operational challenges: 46% said creating a data-driven culture was a significant barrier, followed by ensuring data quality and governance (44%) and integrating data across silos (42%).
Sixty percent of Americans are wary of providers relying on AI to diagnose and treat them.
There’s another major hurdle for healthcare to clear when it comes to AI: patient buy-in. A 2023 survey by Pew Research Center found that 60% of Americans would be uncomfortable with their healthcare provider relying on AI to diagnose and treat them. Providers are also wary; nurses have been especially vocal about fears that AI could usurp skilled decision-making.
Currently, for many organizations, questions about how to procure and deploy AI sit with strategic leadership. One of the major challenges we see with our clients is that technology is moving so fast that by the time their committees draft recommendations, the policies are already out of date.
This is why it’s important for governance policies to align with organizational philosophies. Regardless of the tools and applications, responsible and ethical implementation of AI and ML begins with regulatory requirements, strategic goals, transparency, broad data representation, and continuous training, auditing, and monitoring. Human oversight, testing, and validation are critical to vetting the performance and reliability of these powerful tools.
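As one concrete example of what “continuous auditing and monitoring” can look like in practice, here is a minimal sketch of a performance-drift check that flags a model for human review when its live accuracy falls below its validated baseline. The metric choice and tolerance value are illustrative assumptions, not a standard.

```python
# Minimal sketch of one governance control: continuous performance monitoring.
# The AUC metric and 0.05 tolerance are illustrative values, not a standard.
from dataclasses import dataclass

@dataclass
class AuditResult:
    metric: str
    baseline: float
    current: float
    flagged: bool

def audit_model(baseline_auc: float, current_auc: float,
                tolerance: float = 0.05) -> AuditResult:
    """Flag the model for human review if live AUC drops below baseline."""
    flagged = (baseline_auc - current_auc) > tolerance
    return AuditResult("auc", baseline_auc, current_auc, flagged)

# Example: a diagnostic model validated at 0.91 AUC now scores 0.84 on
# recent cases; the drop exceeds tolerance, so it is flagged for review.
result = audit_model(baseline_auc=0.91, current_auc=0.84)
if result.flagged:
    print(f"{result.metric} drifted: {result.baseline} -> {result.current}; "
          "route to governance committee for review.")
```

The point of a check like this is not the specific threshold; it is that the escalation path ends with people, which is what separates a governance policy from a dashboard.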
Where is AI showing promise in healthcare?
So, can AI replace doctors in places where healthcare staff are stretched thin, hard to access, or nonexistent? The technology and the data accuracy are not quite there yet; AI’s outputs are still not consistent enough. But with each evolution of AI, it does feel like we’re on the cusp of profound breakthroughs. AI and machine learning are starting to provide real value in healthcare in two main areas:
AI is improving diagnosis accuracy
In the U.S., an estimated 15-20% of medical visits result in a diagnostic error each year. Studies show that the primary source of these mistakes is that physicians are rushed for time and forced to rely on “System 1” thinking (fast, intuitive snap judgments) rather than more analytical “System 2” thinking grounded in data and the literature.
Enter AI-assisted medical imaging, which is making waves due to its ability to recognize patterns in seconds. Consider acute stroke care, where every second matters. UC Davis Health has deployed Viz.ai to analyze patients’ CT scans and alert care teams to a potential stroke seconds after images are taken, much faster than a human radiologist could. Hospital leaders say physicians will still review the scans, but AI can help prioritize cases.
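The triage pattern such tools follow is straightforward to sketch: score the scan, alert the team above a threshold, and keep the radiologist in the loop. Here is a minimal illustration, where the classifier model and the paging hook are hypothetical stand-ins, not Viz.ai’s actual system:

```python
# Minimal sketch of the AI triage pattern described above: a classifier
# scores a CT scan for a suspected stroke and, above a threshold, pages the
# stroke team. The model and alerting hook are hypothetical; the radiologist
# still reads every scan -- the AI only reorders the queue.
import torch

def triage_ct_scan(scan: torch.Tensor, model: torch.nn.Module,
                   threshold: float = 0.8) -> bool:
    """Return True (and alert) if the scan looks like a potential stroke."""
    with torch.no_grad():
        # Assumes a binary classifier that outputs a single logit.
        probability = torch.sigmoid(model(scan.unsqueeze(0))).item()
    if probability >= threshold:
        page_stroke_team(probability)   # hypothetical alerting integration
        return True
    return False

def page_stroke_team(probability: float) -> None:
    # Placeholder: real deployments push to pagers or secure messaging.
    print(f"STROKE ALERT: suspected stroke, p={probability:.2f}")
```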
Earlier this year, researchers at Northeastern University unveiled an AI-powered solution that delivers faster, more accurate diagnoses of prostate cancer, before announcing a system designed to detect breast cancer that they report is nearly 100% accurate. Developments like these are emerging in the diagnosis of conditions such as sepsis, pneumonia, melanoma, and retinal disease, to name a few.
AI is streamlining healthcare operations
Even as EHR vendors seem to have a hold on providers’ desktop computers (which is controversial in and of itself), EHR data can be integrated with AI solutions through APIs and plug-ins. This opens the door to innovation without an expensive rip-and-replace of legacy software.
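To illustrate what such an integration can look like, here is a minimal sketch that pulls lab results from an EHR through a standard FHIR REST API. The base URL, access token, and downstream summarize call are placeholders; real deployments use SMART on FHIR authorization and vendor-specific scopes.

```python
# Minimal sketch of EHR-to-AI integration via a standard FHIR REST API.
# The server URL and token below are hypothetical placeholders.
import requests

FHIR_BASE = "https://ehr.example.com/fhir"            # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <access-token>"}  # obtained via OAuth2

def fetch_observations(patient_id: str) -> list[dict]:
    """Pull a patient's lab results as FHIR Observation resources."""
    response = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "laboratory"},
        headers=HEADERS,
        timeout=10,
    )
    response.raise_for_status()
    bundle = response.json()  # a FHIR Bundle wrapping Observation resources
    return [entry["resource"] for entry in bundle.get("entry", [])]

# An AI layer can then consume this normalized data without touching the
# EHR's internals, e.g. summarize(fetch_observations("12345")), where
# summarize() is whatever model or service sits downstream.
```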
For the least skilled and least experienced customer support agents, an AI chatbot boosted productivity by 35%.
While the FDA and other government agencies grapple with how to regulate patient-facing AI tools, one area showing promise in terms of productivity improvements is tools designed for customer support agents. A 2023 study by the National Bureau of Economic Research found that an AI-based conversational assistant rolled out to 5,000 customer support agents boosted productivity by 14% and also improved customer sentiment and employee retention. Remarkably, the AI tool was most helpful for the least skilled and least experienced workers, lifting their productivity by 35%.
AI and the future of healthcare
In 2021, reflecting on the demise of Watson Health, IBM chief executive officer Arvind Krishna said, “Healthcare is always going to turn out to be more subtle, as well as more regulated, for all the right reasons… It’s a decision that may impact somebody’s life or death. You’ve got to be more careful. So in healthcare, it turns out maybe we were too optimistic.”
AI and machine learning algorithms are expensive, and the questions of who pays for AI-based tools and who is liable when things go wrong remain gray areas. The federal government may need to play a bigger role, similar to the one it played in rolling out rapid testing solutions during the pandemic.
This is not to say that researchers should delay work on refining AI platforms’ ability to democratize expert care and assist in the battle against cancer and other life-threatening diseases; that experimentation should continue through small pilot programs.
But in terms of where to spend millions, there are other, more immediately impactful (albeit less hype-worthy) applications of these technologies to improve healthcare. Or we could simply focus on improving user experiences for patients and caregivers around existing systems. Fighting cancer is great; it’s just not elementary, Dr. Watson.