FINGER POINTING

When artificial intelligence botches your medical diagnosis, who’s to blame?

Finding fault instead of help.
Image: Reuters/Francois Lenoir

Artificial intelligence is not just creeping into our personal lives and workplaces—it’s also beginning to appear in the doctor’s office. The prospect of being diagnosed by an AI might feel foreign and impersonal at first, but what if you were told that a robot physician was more likely to give you a correct diagnosis?

Medical error is currently the third leading cause of death in the US, and as many as one in six patients in the British NHS receive incorrect diagnoses. With statistics like these, it’s unsurprising that researchers at Johns Hopkins University believe diagnostic errors to be “the next frontier for patient safety.”

Enter artificial intelligence. AI has the potential to transform medical practice and drastically reduce the number of medical errors, while offering a host of other benefits. In some areas, AI systems are already capable of matching, or even exceeding, the performance of experienced clinicians. In an interview with Smart Planet, MIT scientist Andrew McAfee, co-author of The Second Machine Age, said he is convinced: “If it’s not already the world’s best diagnostician, it will be soon.”

Recent reports show systems already capable of matching specialists when diagnosing skin cancer or identifying a rare eye condition responsible for around 10% of childhood vision loss worldwide. AI systems can even exceed human doctors at identifying certain types of lung cancer. These successes will only grow as the technology matures. Add the benefits of faster diagnoses, reduced costs, and more personalized medicine, and the case for adopting AI throughout medical practice becomes compelling.

Of course, there are downsides. AI raises profound questions about medical responsibility. Usually, when something goes wrong, determining blame is fairly straightforward. A misdiagnosis, for instance, would likely be the responsibility of the presiding physician. A faulty machine or medical device that harms a patient would likely see the manufacturer or operator held to account. But where does the blame fall when an AI is involved?

Doctors using AI today are expected to treat it as an aid to clinical decision-making, not as a replacement for standard procedure. In this sense, the doctor remains responsible for any errors that occur. However, it is unclear whether doctors can actually assess the reliability or usefulness of the information an AI provides, or meaningfully understand the consequences of acting on it.

This inability arises from the opacity of AI systems, which, as a side effect of how machine-learning algorithms work, operate as black boxes. It is impossible to understand why an AI has made the decision it has; we know only that it did so based on the information it was fed. Even if a technically literate doctor could inspect the process, many AI algorithms are unavailable for review because they are treated as protected proprietary information. The data used to train them is often similarly protected, or otherwise publicly unavailable for privacy reasons. The problem will only deepen as doctors come to rely on AI more heavily and challenging an algorithm’s result becomes less common.

So if a doctor cannot be held fully responsible for a decision made using AI, who fills the gap? There are several options. One would be to hold the AI system itself responsible, though this is tricky given the inability to punish, reprimand, or seek compensation from software. It is also likely to leave affected patients unsatisfied and to invoke deep, potentially unanswerable philosophical questions about the nature of machine intelligence.

A second option would be to hold the designers of the AI responsible. This too has its problems, not least the difficulty of pinning down the individuals responsible for particular features, since the teams creating these systems can number in the hundreds. The temporal and physical distance between research, design, and implementation also often precludes any awareness of later use. Consider IBM’s Watson, now employed in clinical decision-making: it was originally designed to compete on the quiz show Jeopardy. It would be unreasonable to hold the researchers who built Watson to answer trivia questions responsible for its potential failings as a medical aid. And even if it were reasonable to hold designers to account, doing so would likely dissuade many from entering the field, delaying the many benefits AI promises.

A final option would be to hold the organization running the system accountable. For Watson, that would mean IBM. While this has the benefit of providing a clear target for redress, it is unclear whether this path would work either. In the absence of design failures or other misconduct, it is unlikely one could hold such an organization responsible for how others use its product; that would be like holding car manufacturers responsible for every accident involving one of their cars. And even if it were reasonable, doing so would likely drive many organizations out of the field, since it would carry considerable risk with little reward.

To fully realize the benefits of AI in healthcare, we need to know who will be responsible when something goes wrong. Not knowing undermines patient trust, places doctors in difficult positions, and potentially deters investment in the field. Incomplete models of responsibility benefit no one. There is no easy solution, but failing to find one will only delay the further development and use of this lifesaving technology.