Can Tech 'Objectively' Assess Pain?

Some combination of data and machinery may finally provide an objective marker of pain—but beware of bias.
Photograph: Jeff Pachoud/AFP/Getty Images

Pain flickers across people’s faces in inconsistent, contradictory ways. Charles Darwin, ever the meticulous observer, noticed this problem early: “The mouth may be closely compressed, or more commonly the lips are retracted,” he wrote in The Expression of the Emotions in Man and Animals. “The eyes stare wildly as in horrified astonishment, or the brows are heavily contracted.” And the experience of pain differs just as widely as its expression—tolerance is a matter of genetics and life experience. What’s agony for you may be merely uncomfortable for someone else.

Ambiguity has always made pain assessment an inexact science for health care providers, which in turn frustrates the sufferers themselves. A doctor’s assessment may not line up with the patient’s own sense of the problem; in some cases, patients are told there’s no apparent explanation for their pain at all. Many of these patients, hoping for a second opinion, are turning not to other doctors but to technology.

Pain diaries and tracking apps are all over the App Store and Google Play, advertised to chronic pain patients as ways to identify trends in their symptoms. Other apps render pain as animations that change in intensity and saturation in place of a 1-to-10 scale, in the hope that a more visual metaphor makes pain easier to describe.

A word you’ll encounter often in this area—not only in these apps and services but in research investigating ways to apply technology to pain assessment and in pain science in general—is “objectivity.” It’s an inherently Silicon Valley notion: Take the subjectivity out of something by applying ostensibly impartial, data-driven technology. Inevitably, the buzzwords have followed, everything from facial recognition and machine learning to the blockchain. This isn’t just classic disruption, though. The call to bring objectivity to the experience of pain comes from the National Institutes of Health, in part as an effort to curb the overprescription of opioids. Some combination of data and machinery may, go the pronouncements of the tech world, do what millennia of humans have been unable to: accurately feel someone else’s pain.

At the moment, the best way to precisely gauge someone’s pain is, quite simply, to ask them about it. But tech can provide some assistance there, too. Janet Van Cleave, whose research at NYU’s Rory Meyers College of Nursing centers on improving cancer patient care, has developed an Electronic Patient Visit Assessment for patients with head and neck cancers. Essentially, the ePVA is a survey on an iPad—tap where it hurts and answer yes-or-no questions about your pain and quality of life. Doesn’t sound that impressive, but the results are. “In patients who are highly symptomatic, web-based measures can help improve survival,” she says. “It’s a powerful tool.”

The reasons why have to do with the physical ways pain is reported. Head and neck cancer patients have difficulty speaking and are frequently tired from treatment. Their doctors get more and better-quality information from them because lifting a single finger to a touch screen is easier than verbally answering questions or writing things down. It’s still a challenge for some, though. “It’s like going through hell,” Van Cleave says. “Their hands shake when they’re pressing on the screen, so we’ve made it extra sensitive.”
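
To picture what a tap-and-answer survey like this captures, here is a minimal sketch in Python. It is a hypothetical illustration, not the actual NYU instrument: the field names and yes-or-no questions are invented, and a real ePVA would live inside a clinical system rather than a script.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an ePVA-style response: one tapped body region plus
# a few yes/no questions about pain and quality of life. Field names and
# question ids are invented for illustration; they are not the NYU instrument.

@dataclass
class SurveyResponse:
    body_region: str                             # where the patient tapped, e.g. "jaw"
    answers: dict = field(default_factory=dict)  # question id -> True/False

    def flagged(self):
        """Questions the patient answered 'yes' to, for the care team to review."""
        return [question for question, yes in self.answers.items() if yes]

# Example: a patient taps the jaw and answers three yes/no items.
response = SurveyResponse(
    body_region="jaw",
    answers={
        "pain_interferes_with_eating": True,
        "pain_interferes_with_sleep": False,
        "wants_help_managing_pain": True,
    },
)
print(response.flagged())
# ['pain_interferes_with_eating', 'wants_help_managing_pain']
```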

According to Van Cleave, their preference for the iPad might extend beyond physical ease. She suspects some patients feel more comfortable telling a machine about their pain and symptoms than another person. This is one of the central arguments for telemedicine—that something about tech as an intermediary increases comfort. On its own, it’s a sensible, testable idea. But—especially in more complicated or algorithmic applications—machines can be just as biased as the humans they’re designed to improve upon and replace.

Stated more starkly: Tech-enabled bias could devastate pain assessment. According to Ran Goldman, a pediatrician and pain investigator at the University of British Columbia, pain assessment is already deeply biased, something that’s hard to fight because it’s so multifaceted. The first layer comes from the individual patients themselves, who might, because of their upbringing, be concerned about appearing weak or, because of their addiction, be seeking drugs. Then there are cultural confusions. “In my practice, children from different cultures respond differently,” Goldman says. “Some will cry, others will be stoic, and that’s based on what their culture is saying.” A child from, say, war-torn Syria (or even just more-reserved Japan) might remain in shocked silence despite having sustained an injury that would have had an American kid bawling.

The touchiest bit of bias is also the best documented: the preconceptions of the doctors themselves. Some of those biases are personal: what the individual doctor considers to be painful, what specific cues they’re expecting from the patient. Others are cultural. As Goldman puts it by way of segue: “We need to talk about race, ethnicity, and gender.”

Doctors routinely underestimate the pain experienced by women and people of color. Studies have found that doctors perceive women to be more emotional when describing their symptoms and are more likely to misdiagnose women’s chronic pain as mental illness. It’s also been shown that emergency rooms make women wait longer than men to receive medications. When treating people of color, and especially black people, doctors rate their pain as lower, make less accurate treatment recommendations, are more likely to read the patient’s behavior as “drug seeking,” and are more likely to deny them pain medication. These biases persist even when the patients are simply described, not seen. (Not all doctors, it should be said, think physician bias is an issue. One I spoke to got spitting mad at the very suggestion, insisting that the routine bias training doctors go through was enough to prevent poor or unevenly applied treatment.)

For a doctor like Goldman, the inescapability of bias has shaped his career. “I’ve been studying pain for 20 years. Finding an objective measure would be like finding the Holy Grail,” he says. Goldman sees promise, specifically, in facial recognition technology. In a pilot study, he ran pictures of children’s faces, taken while their blood was being drawn, through Microsoft’s AI-enabled emotion tracker, the Emotion API, and compared the results to an existing pain scale. The API identified the children’s faces as primarily displaying sadness, so Goldman hopes that someday this kind of system can be fine-tuned to objectively measure the emotions associated with pain. He and other scientists think using facial expressions for pain assessment could be especially useful in treating young children, elderly people with dementia, and others who are unable to express their pain verbally.
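
To make the shape of that comparison concrete, here is a minimal Python sketch that lines up per-image “sadness” scores against an observational pain scale. The emotion-scoring function is a mocked stand-in rather than Microsoft’s actual API, and every number is invented for illustration, not drawn from Goldman’s study.

```python
import statistics

# Mocked stand-in for a cloud emotion-analysis call. A real pipeline would send
# each photo to a face-analysis service; here the scores are canned, invented
# values so the sketch runs on its own.
def mock_emotion_scores(image_path: str) -> dict:
    canned = {
        "child_01.jpg": {"sadness": 0.72, "neutral": 0.28},
        "child_02.jpg": {"sadness": 0.31, "neutral": 0.69},
        "child_03.jpg": {"sadness": 0.55, "neutral": 0.45},
    }
    return canned[image_path]

# Each record pairs one photo with a clinician-rated score on a 0-10 pain scale
# (again, invented numbers standing in for an existing observational scale).
observations = [
    ("child_01.jpg", 7),
    ("child_02.jpg", 3),
    ("child_03.jpg", 5),
]

sadness = [mock_emotion_scores(path)["sadness"] for path, _ in observations]
pain_scores = [score for _, score in observations]

# A crude first question: does the service's "sadness" score track the pain
# scale at all? (Pearson's r; statistics.correlation needs Python 3.10+.)
r = statistics.correlation(sadness, pain_scores)
print(f"sadness vs. pain scale: r = {r:.2f}")
```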

Trouble is, systems designed by biased individuals—which all humans are—tend to pick up those selfsame biases. Facial recognition systems, including Microsoft’s, are notorious for their inaccuracy in analyzing the faces of people of color. Systems trained by biased doctors will only replicate today’s problems and algorithmically amplify them under the guise of techno-objectivity. In pain medicine circles, the hunt for objective measures of pain comes with similar caveats. “We have two crises,” Van Cleave says. “Opioids, and pain.” The NIH is encouraging research aiming to find objective biomarkers of pain as part of its opioid-fighting HEAL initiative. One doctor I spoke to fears that if a marker were ever found, it would become a way for profit-driven companies to deny patients medication. In that late-capitalist nightmare universe, your pain might be assessed by an algorithm trained by an insurer. That might stamp out opioids, but it won’t end human suffering.

The problem with pain assessment, and with every effort to make it objective, is people. Communication across age, gender, race, and class lines remains poor throughout American society. Tech can step in to help—Goldman speaks inspiringly of AI sifting through pain patients’ data, finding patterns and connections humans can’t—but it won’t fix what’s broken.

Correction (May 21, 2019, 9:00 am PT): This article has been updated to correct the spelling of NYU Rory Meyers College of Nursing.

