
As friends who work as pediatricians, we’ve had several conversations recently about artificial intelligence and its growing role in medicine. Machine learning and computer algorithms, it seems, are on the cusp of changing the medical profession forever.

Prestigious medical journals publish a steady stream of studies demonstrating the potential of deep learning to take over tasks that are currently the bread and butter of highly trained physicians, such as reading CT scans of the head.


We are attuned to artificial intelligence in part because our city, Montreal, has become an international AI hub. One of the city’s biggest teaching hospitals, the Centre hospitalier de l’Université de Montréal, just launched an AI school for health professionals. And south of the border, U.S. presidential candidate Andrew Yang has built a political campaign around planning for a future disrupted by AI.

We see the writing on the wall, yet it’s hard for us in our day-to-day clinical work to imagine a time when artificial intelligence will have a real impact on what we do.

It might be that, because we are physicians working in a public health care system that still relies on fax machines, carbon copies, and actual rubber stamps, a system run by AI seems like science fiction. But the prospect of a purely technological health care system also strikes us as undesirable.


We aren’t alone in feeling this way. According to a recent Harvard Business Review report, “patients believe that their medical needs are unique and cannot be adequately addressed by algorithms.”

So is an AI-driven health care future one that patients will want and accept?

There is no question that machine-learning algorithms are increasingly rapid and accurate in analyzing medical data. Diagnostic specialties like radiology, which deal more with the interpretation of data than with direct patient interactions, are the first to see artificial intelligence integrated into their practice. But no clinical discipline will be unaffected: tools such as counseling chatbots and machine-learning algorithms that draw on data from patient monitors are being developed for more hands-on fields, such as mental health and intensive care.

Artificial intelligence clearly has the potential to pitch in where physician judgment falters. When doctors are tired or behind schedule, they are more likely to prescribe opioids and unnecessary antibiotics, and they achieve lower vaccination and cancer screening rates. The reason is decision fatigue: doctors who make countless decisions each day lose steam as the day unfolds and increasingly fall back on suboptimal default options. Algorithms, which never get tired or hungry, do not experience decision fatigue.

Still, techno-utopianism does not appear to be what patients want. In the Harvard Business Review report, people preferred receiving health advice from a human, even when a computer was more accurate. This makes intuitive (and self-serving) sense to us as clinicians. Seeking care is built around receiving the undivided attention of real human beings, flawed as they may be. Patients, it seems, see doctors’ humanity less as a bug and more as a feature.

As a medical student, one of us (O.D.) saw an 83-year-old man in a surgical oncology clinic who was recovering from prostate cancer. After being greeted and asked the simplest of questions, “How are you doing today?” the man began to cry. He confessed feeling that, even though his cancer was cured, his life would never be the same. He felt he had been forever scarred by his illness. Who knows if Siri or Alexa would have triggered such a response?

An algorithm could have calculated the man’s life expectancy and determined exactly how frequently he needed follow-up imaging and blood tests to detect a recurrence of his cancer. But would that have been health care?

It probably depends on who you ask. In our experience, feeling properly cared for means more than just receiving scripted advice and prescriptions for tests and medications, something a smart, Watson-like AI physician could conceivably do.

We have both experienced moments as clinicians when an apparently cursory or incomplete encounter left patients dissatisfied, even when our approach was rational and evidence-based. For instance, patients expect doctors to perform elements of the physical exam that are not, strictly speaking, necessary and will have no impact on the final treatment plan. Once you’ve diagnosed an ear infection in a child with fever, you usually don’t need to look for strep throat as well, but many parents feel reassured only once everything has been checked.

The truth is that every medical encounter contains an element of ritual and performance that is therapeutic. Patients want us to ask, look, and touch in response to their concerns, their bodies, and their unique circumstances. Few people appreciate a physician who seems to be working from a script, in the room but not truly present or connected. Who is, in other words, behaving like a machine.

It’s worth asking where high-performing AI “clinicians” would ultimately lead us. Potentially to a less expensive and more accurate health care system that leaves human carers in the dust, relegated in their imperfection to simpler, lower-stakes tasks they can’t mess up. Perhaps, in an act of self-preservation, doctors would demand to be repurposed as medical bureaucrats, rubber-stamping the machines’ indefatigably consistent and unimpeachable work.

Although it can be exciting to think about the outcomes that a minimally flawed and maximally accurate artificial intelligence system would yield, such as a 100% detection rate for potentially deadly skin cancers like melanoma, it’s still worth considering whether that’s the health care we really want.

How much of health care is about outcomes, and how much is about humans’ deeply social nature? Is healing someone a purely technical endeavor, or is it part of human culture, something we have done since we decided we were better off banding together than simply killing one another to get the next meal and sleep in the better cave?

These questions echo broader anxieties about the impending shocks that artificial intelligence will visit upon society in the form of massive disruptions to major industries and fundamental transformations to how people make their living and spend their time. In the face of these changes, Andrew Yang speaks of the importance of promoting a “human-centered capitalism” in which the basic unit of value is each person, not each dollar. His message is that we must remain focused on what the human project is supposed to be about: taking care of each other.

We aren’t arguing for a dismissal of AI in medicine. Missing this opportunity to harness powerful tools to improve patient care and outcomes would be foolish. But we are arguing to keep artificial intelligence in its place, and to stake a claim to ours.

Physicians should embrace technology that will help us make better decisions and give us more time to focus on what we do best. Whether it is delivering babies, accompanying individuals through their dying days, or announcing a new diagnosis of leukemia to the parents of a 4-year-old, physicians’ greatest contribution resides in informing, guiding, and supporting patients.

Human doctors still have a monopoly on what people want most: human care.

Olivier Drouin, M.D., is a pediatrician and researcher at Sainte-Justine University Health Center in Montreal. Samuel Freeman, M.D., is a pediatrician and writer in Montreal.
