The Problem (and Promise) of AI in Healthcare

Jay Erickson, Partner and Chief Innovation Officer

[Illustration: a patient lying in bed, with a doctor and a robot standing beside him]

After world chess champion Garry Kasparov lost a game to IBM’s Deep Blue in 1996, he recalled:

“I had played a lot of computers but had never experienced anything like this. I could feel — I could smell — a new kind of intelligence across the table.”

Two decades later, artificial intelligence (AI) is beginning to make its way into healthcare with all the ingredients to make a real impact. Today’s AI systems (a family that spans big data, machine learning, and deep learning) can:

  • Quickly comb through massive amounts of data in disparate places
  • Model a dizzying number of possibilities in seconds
  • Make sense of unstructured data including voice, text and medical images (see the sketch after this list)
  • Dynamically adjust and chain together algorithms based on results and outcomes
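
As a minimal illustration of the third capability, here is a toy text-classification pipeline in Python with scikit-learn. This example is mine, not from the original piece; the notes and triage labels are invented.

    # Toy sketch: turning unstructured clinical text into features a
    # model can act on. Notes and triage labels are invented; real
    # systems train on large, curated, de-identified corpora.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    notes = [
        "patient reports chest pain radiating to the left arm",
        "routine follow-up, no complaints, vitals stable",
        "shortness of breath and elevated troponin",
        "annual physical, labs within normal limits",
    ]
    labels = ["urgent", "routine", "urgent", "routine"]  # hypothetical triage labels

    # TF-IDF converts free text into weighted term vectors; the linear
    # model then learns which terms predict each label.
    triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
    triage.fit(notes, labels)

    print(triage.predict(["new onset chest pain with dyspnea"]))  # likely ['urgent'] on this toy data

The mechanics are simple; the hard part, as the Watson story below shows, is doing this reliably at clinical scale.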

In 2013, IBM and MD Anderson Cancer Center joined forces to use Watson, Deep Blue’s progeny and IBM’s latest AI platform, to tackle cancer diagnosis and treatment recommendations in one of the more publicized deployments of this technology. The promise was that the system would examine and analyze multiple sources of information, including the medical literature, large databases of clinical records (including unstructured doctors’ notes), and genomic data, with unprecedented depth, breadth and speed to make its recommendations. I recently attended HIMSS, the massive healthcare IT conference in Orlando, where MD Anderson confirmed that the project had been placed on indefinite hold after more than $62M had been spent.

What Went Wrong?

1. It's just not there yet. IBM recently defended the MD Anderson Oncology Expert Advisor (OEA), stating that its recommendations matched expert opinion 90% of the time. Wait, 90%? I don’t know about you, but when it comes to cancer I feel like this needs to be higher for real-world deployment.

2. Contract mismanagement. According to a University of Texas review, OEA fell victim to the same patterns of contract and project (mis)management that led to $600M being spent on a nonfunctional healthcare.gov, including poorly structured contracts, disregard for IT best practices, and overbilling.

3. Inherent complexity of healthcare IT. It’s one thing to play chess in a highly contained and structured environment, but it’s quite another to integrate with complex data sources that are notoriously opaque and constantly shifting. For example, during the OEA project, MD Anderson updated its electronic health record (EHR) system, which broke the integrations that had been put in place and forced a rework.

There is also a host of unknown risks associated with AI-led diagnosis and treatment plans, including potential issues with compliance, fair balance, and liability. Some have argued that shifting liability away from practitioners could reduce the burden of malpractice insurance and claims on the healthcare system overall, but who (or what) does one sue when a thinking machine causes a patient harm? And when algorithms or the underlying medical science are updated, should providers be obligated to re-run them on stored data to correct errors or uncover new diagnoses? Regulators and industry leaders may need more time to develop guidelines around these questions before the technology can be widely used.

The bottom line is that, while this technology has great promise in the clinic, the ROI for these “AI as doctor” commercial applications has yet to become clear and the risks are high.

What's Next For AI in Healthcare?

Big data, AI and machine learning are starting to provide real value in healthcare in two main areas: medical research and population health.

Medical research is benefiting from cheap computing power and large, merged data sets with more subjects and richer data (especially genomic data from low-cost gene sequencing). These tools are allowing researchers to more quickly understand the pathology of diseases and develop better treatments, and they are helping drive the rise of immunotherapy options for cancer.

One of the core practices of population health involves overlaying social, environmental, cultural and physical data (collectively referred to as social determinants of health, or SDOH) with clinical data to identify high-risk groups or individuals and intervene on their behalf. It is estimated that 40% of all avoidable mortalities in the U.S. are caused by SDOH. (1) Because this is an emerging practice, fueled in part by the shift to value- and outcome-based incentives, it lacks the well-established protocols of a field like oncology; and because it is essentially data-driven, it is prime territory for emerging big data technologies.
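
To make the overlay concrete, here is a hedged sketch of a risk-scoring model that combines SDOH flags with clinical features. Every column, value, label, and threshold below is hypothetical; real programs use far richer, validated data.

    # Sketch of a population-health risk model overlaying SDOH flags
    # with clinical data. All features, values, and labels are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: housing_insecure, food_insecure, HbA1c, ER_visits_last_year
    # (first two are SDOH signals; last two come from clinical records)
    X = np.array([
        [1, 1, 9.1, 3],
        [0, 0, 5.4, 0],
        [1, 0, 7.8, 2],
        [0, 1, 6.2, 1],
        [0, 0, 5.9, 0],
        [1, 1, 8.5, 4],
    ])
    y = np.array([1, 0, 1, 0, 0, 1])  # 1 = avoidable hospitalization occurred

    model = LogisticRegression().fit(X, y)

    # Score a new individual and flag for outreach above a chosen threshold.
    risk = model.predict_proba([[1, 0, 8.9, 2]])[0, 1]
    if risk > 0.5:  # threshold would be tuned to care-team capacity
        print(f"Flag for intervention (risk = {risk:.2f})")

The model itself is unremarkable; the value comes from the intervention workflow it feeds, and from having the SDOH and clinical data joined in the first place.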

Of course, data interoperability is foundational to both of these arenas and to efforts like OEA. We need standardized ways of sharing data that are simple, secure, and scalable. Some significant progress has been made here, but too much data remains locked in too many systems.
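
FHIR (HL7’s Fast Healthcare Interoperability Resources) is one real standard of the kind described here, though it is my example rather than the original piece’s. A minimal sketch of pulling a patient record from a FHIR-style REST endpoint, with a placeholder server URL and patient ID:

    # Sketch of retrieving a patient record over a FHIR-style REST API.
    # The server URL and patient ID are placeholders, not a real deployment.
    import requests

    BASE_URL = "https://example-fhir-server.org/fhir"  # placeholder endpoint

    resp = requests.get(
        f"{BASE_URL}/Patient/12345",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    patient = resp.json()

    # FHIR resources are standardized JSON, which is what makes them
    # simple and scalable to share across otherwise siloed systems.
    print(patient.get("resourceType"), patient.get("id"))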

This is not to say that researchers shouldn’t continue to refine Watson’s and other AI platforms’ ability to democratize expert care and assist in the battle against cancer through experimentation and small pilot programs. But in terms of where to spend $62M, there are other more immediately impactful, albeit less press-worthy, applications of these technologies for improving healthcare (or simply for improving the user experience for patients and caregivers around existing systems). Fighting cancer is great; it’s just not elementary, Dr. Watson.

(1) McGinnis JM, Williams-Russo P, Knickman JR. 2002. “The Case for More Active Policy Attention to Health Promotion.” Health Affairs 21(2):78–93.

This article originally appeared on Healthcare Business Today.
