


Why AI And Healthcare Must Learn To Play Together


Because artificial intelligence (AI) has become so buzzy, and applied so indiscriminately–AI for pot, AI for beer brewing, AI for horse care, AI for sex ed (all examples courtesy of CB Insights)–it’s easy to dismiss as just another passing trend, like slap bracelets, Fitbits or a dignified presidency.

Rising popularity of "AI for X" in media articles, courtesy of CB Insights.

As Venrock VC and physician Bob Kocher recently commented:

I get pitched at least two companies per day claiming to have AI. I always ask them, "Tell me what AI is," and they have yet to say the same thing. I have not seen one [product] that is truly learning or is truly intelligent.

That’s the bear view, and fairly representative of how many in healthcare see AI–just the latest bright, shiny object that’s unlikely to meaningfully impact their daily work.

What’s so striking to me out here in Silicon Valley, however, is not so much the brash claims of disruption–of which there are many–but rather what I’m hearing in more considered conversations with experts who’ve been involved in AI for years, through previous bursts of optimism followed by long periods of disappointment. Almost to a person, these veterans see the emergence of the discipline from the latest “AI winter” as both hard-won and well-deserved. These experts believe the discipline has made incredible progress and is truly, finally positioned to deliver on its extraordinary and long-awaited promise.

“Take any old classification problem where you have a lot of data, and it’s going to be solved by deep learning,” computer scientist Geoffrey Hinton told New Yorker writer Sid Mukherjee. “There’s going to be thousands of applications of deep learning.”

In fact, some technologists and data scientists believe, ardently, that the future is already here–that we already have the ability to use AI to solve important problems in healthcare, and that it’s intransigent docs who are standing in the way.

It would only be a slight exaggeration to say that from the perspective of many of these data jocks, an ideal healthcare system would consist of a front end of diligent data gatherers, to collect as much information as possible. These data would then be fed into a giant data warehouse, where they could be thoughtfully analyzed, and result in significantly improved clinical recommendations that would either be returned directly to the patients themselves (ideally), or to a health provider (reluctantly) who could then relay the information, with requisite empathy, back to the patient. At best, the health professional in this context would be a friendly and agreeable customer service representative, while the insight would be provided by the data scientists and the computational horsepower behind them.

Surprisingly enough, most health professionals haven’t been particularly enthralled by this scenario, and have questioned many of its fundamental assumptions. For starters, many wonder whether the technology is as good as promised; as MedCityNews recently reported:

Although medical image analysis has proved a popular area of investment for investors, [Venrock VC] Kocher doesn’t believe the technology from these companies has reached the point where it’s better at assessing these images than radiologists.

A second group of concerns, raised with characteristic eloquence by Mukherjee in the New Yorker, relates to the idea that reducing medical problems to data that can be fed into a computer inevitably removes critically important elements. For instance, in eliciting a patient’s history, a skilled dermatologist may pull out a relevant feature that rote data analysis might have missed. The interaction an engaged doctor has with her patient, including “laying on of hands,” may also have important therapeutic value in itself, as Mukherjee nicely conveys.

Presumably, data scientists would respond that skilled healthcare customer service representatives could continue to perform these functions, working in harmony with sophisticated computers–ultimately applying best practices from around the world, so every patient would effectively receive the benefit of engaging with an expert diagnostician.

But what about the role of inquisitive clinician-scientists in exploring the relationship between clinical presentation and disease pathology–the route for so many important medical advances? If you turn curious, frontline physician-scientists into decerebrate data collectors, won’t you be missing critically important opportunities for insight? Effectively, won’t you be deadening the inquisitive mind?

My suspicion is that data scientists would argue they’re not seeking to obliterate clinical scientists, but instead, empower them and catalyze their evolution, enabling them to make far greater progress by relying not only on their own experience and review of the literature, but also the accumulated insights from within all available data, not to mention the entire corpus of published knowledge.

This sounds lovely enough, but until this promise is a lot more palpable–and until engaging with these data is a lot more intuitive and a lot more productive than it seems today–I suspect even interested clinician-scientists may not immediately sign on.

Is the answer, then, simply for data scientists to do a better job of proving their value–with the idea that once this happens, their methods will be adopted with gratitude by the medical establishment?

If only. While healthcare professionals may accurately critique techno-optimism (or techno-fantasy), data scientists who are trying to establish the utility of their techniques face a serious problem: the health professionals their techniques often challenge tend to be the judge, jury and, too often, executioner.

Healthcare professionals, it seems, are assiduously avoiding (for as long as they can) the fatal mistake French winemakers made in the famous 1976 Judgment of Paris, when underdog California wines beat out the French wines in a blinded tasting, thus establishing California as a world-class wine region. Pathologists and radiologists, for example, will often participate in such head-to-head comparisons only under extremely restrictive conditions.

This response is somewhat more understandable when you consider that in a surprising number of cases, the gold standard in medicine is far more fragile than you might think; evaluators often disagree with each other, and even agreement doesn’t mean the classification is particularly useful, informative or prognostically accurate, even on those occasions when it is precise. Yet, as I’ve discussed in detail, emerging technologies generally need to meet these often dubious standards.

A related issue around gold standards relates to the icons who permeate our clinical narratives. For example, Mukherjee describes an expert diagnostician, able to discern subtleties in coughs and skillfully work his way to the underlying cause, “usually” correctly. Mukherjee and I trained together, and while I knew many of the same experts and watched many of the same demonstrations–and would have referred a family member to any of these experts in an instant–I confess I was always a bit more skeptical of their physical exam pronouncements, and was less convinced that they (or anyone) could make diagnoses as reliably as they believed. And for all the rhapsodizing about the skills of doctors in an age before technology was readily accessible, it turns out you generally do better with a history, physical exam and great technology than you do with a history and physical exam alone. Even the best clinicians may not be as good as they, or we, want to believe; and technology gives most physicians the opportunity to perform a lot better.

Besides the challenge of persuading entrenched incumbents, technology innovators also complain, justifiably, about the relatively feeble data collection and aggregation that occurs in medicine, and are shocked by the profession’s tolerance for incredibly slow innovation cycles. Technologists are also quick to point out the many perverse incentives not to improve care, especially under fee-for-service systems. Cardiogram app co-founder Brandon Ballinger makes many of these points with particular eloquence here.

My Take

I’m more optimistic about the utility of the emerging technology than many in healthcare, and more optimistic about the motives and behavior of doctors than many technologists are, and I see at least four reasons to be hopeful.

First, as I’ve been arguing in the case of precision medicine (and reminder/disclosure: I work at DNAnexus, a genomic data company), it will be important to identify specific areas of impact for AI, promising use cases where the benefit will be both apparent and of conspicuous value to relevant stakeholders. I believe most doctors are passionate about providing their patients with the best care possible, and if the use of an AI-based algorithm demonstrably and persuasively improves the care and treatment of patients, I suspect most physicians will enthusiastically adopt it–faster still if it’s readily incorporated into existing workflows.

Second, I continue to believe in the value of data-focused pioneer medical centers–what I’ve described as “data-inhaling clinics of the future.” These organizations would be explicitly built around the shared commitment to data gathering and analysis, and founded on the core values of empathy and patient partnership.

Third, I suspect it might be useful to apply some of the lessons I’ve seen in the context of precisionFDA (disclosure: DNAnexus partnered with the FDA in 2015 to build this platform). The basic idea–conceived by former Chief Health Information Officer Dr. Taha Kass-Hout, who left the agency last year–was to create a trusted environment where a community of stakeholders–mostly companies and academic scientists–could develop and refine quality standards; the FDA would essentially create a level playing field, but the heavy lifting would be done by the stakeholders themselves, the ones most invested in the outcome. I can envision a similar transparent approach applied to emerging areas of disease diagnosis, to ensure that diagnostic performance, not professional incumbency, carries the day.

Finally, and perhaps most hopefully, I am confident that time is actually on our side here. As young, brash programmers age, they are likely to discover that health and disease are more complex than they originally conceived; similarly, thoughtful physicians navigating the illnesses of patients and loved ones will increasingly ask whether there isn’t a better way.

Soon, I trust, there will be.