Artificial Intelligence Is Infiltrating Medicine -- But Is It Ethical?

Artificial intelligence (AI) is being embraced by hospitals and other healthcare organizations, which are using the technology to do everything from interpreting CT scans to predicting which patients are most likely to suffer debilitating falls while being treated. Electronic medical records are scoured and run through algorithms designed to help doctors pick the best cancer treatments based on the mutations in patients’ tumors, for example, or to predict how likely patients are to respond well to a treatment regimen, based on the past experiences of similar patients.

But do algorithms, robots and machine learning cross ethical boundaries in healthcare? A group of physicians at Stanford University contends that AI does raise ethical challenges that healthcare leaders must anticipate and address before they embrace the technology. “Remaining ignorant about the construction of machine-learning systems or allowing them to be constructed as black boxes could lead to ethically problematic outcomes,” they wrote in an editorial published this week in the New England Journal of Medicine.

Their warning was timely, considering developments such as this one, announced today with a rather breathless headline: “Smart software can diagnose prostate cancer as well as a pathologist.” A group of researchers from Drum Tower Hospital in Nanjing, China, who are attending the European Association of Urology congress in Copenhagen, said they have developed an AI system that can identify prostate cancer from human tissue samples and classify each case according to how malignant the cancer is.

“This may be very useful in some areas where there is a lack of trained pathologists. Like all automation, this will lead to a lesser reliance on human expertise,” an Italian researcher who reviewed the Chinese team’s work said in a statement.

Few medical experts expect AI to completely replace doctors, at least not in the short term. Instead, machine learning is mostly being used for “decision support,” helping guide physicians toward accurate diagnoses and tailored treatment plans. Such tools can be quite useful. Forbes contributor Robert Pearl, a professor at Stanford, wrote earlier this week about an AI application developed by Permanente Medical Group that uses data compiled from 650,000 hospital patients to identify which people admitted to hospitals today are at risk of needing intensive care. The system alerts physicians to the at-risk patients so they can try to intervene before those patients end up in the ICU.
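
For illustration only, here is a minimal Python sketch of how that kind of decision-support alert might work, assuming a simple logistic-regression risk model trained on synthetic stand-in data. It is not the Permanente system, whose actual design is not described here; the features, threshold and patient data are invented.

```python
# Hypothetical sketch of a "decision support" risk alert, NOT the Permanente
# system described above: a classifier trained on past admissions flags newly
# admitted patients whose predicted risk of needing intensive care exceeds a
# threshold, so clinicians can review them early.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in features for historical admissions (e.g., age, vitals, lab score).
# In practice these would come from the electronic medical record.
X_history = rng.normal(size=(5000, 3))
# Stand-in labels: 1 = the patient later required intensive care.
y_history = (X_history @ np.array([0.8, 0.5, 1.2]) + rng.normal(size=5000) > 1.5).astype(int)

model = LogisticRegression().fit(X_history, y_history)

# Score today's admissions and alert on anyone above a chosen risk threshold.
X_today = rng.normal(size=(20, 3))
risk = model.predict_proba(X_today)[:, 1]
for patient_id in np.flatnonzero(risk > 0.7):
    print(f"Alert: patient {patient_id} predicted ICU risk {risk[patient_id]:.2f}")
```

The point of the sketch is that the model only ranks patients; the clinical decision still rests with the physician who receives the alert.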

But one reason all this AI concerns the physicians who published the NEJM editorial is that biases can inadvertently be introduced into algorithms. That has already been demonstrated in other industries that use AI, they report. For example, AI systems designed to help judges make sentencing decisions by predicting whether a criminal is likely to commit more crimes “have shown an unnerving propensity for racial discrimination,” they wrote. “It’s possible that similar racial biases could inadvertently be built into healthcare algorithms.”

Algorithms could be written to avoid such biases, but doing so is extraordinarily challenging, the authors argue, especially in the U.S., where healthcare systems often face profit pressures. The motives of the people programming AI systems may not match those of physicians and other caregivers, and that mismatch invites bias.

"What if the algorithm is designed around the goal of saving money?” asks senior author David Magnus, director of the Stanford Center for Biomedical Ethics, in a statement. “What if different treatment decisions about patients are made depending on insurance status or their ability to pay?"

Stanford itself is experimenting with AI and is currently running a pilot study of an algorithm to predict which patients should be offered palliative care—pain management, counseling and other services often provided to chronically ill patients. Physicians and software engineers are working together to design the system, taking precautions meant to prevent machines from misinterpreting data.

The authors urge other healthcare leaders to adopt a similar system of checks and balances when designing and implementing AI. If they don’t, “…the electronic collective memory may take on an authority that was perhaps never intended. Clinicians may turn to machine learning for diagnosis and advice about treatments—not simply as a support tool,” they warned in the NEJM article. “If that happens, machine-learning tools will become important actors in the therapeutic relationship and will need to be bound by the core ethical principles, such as beneficence and respect for patients, that have guided clinicians.”
