
SAN FRANCISCO — Digital health apps, which let patients chat with doctors or health coaches or even receive likely medical diagnoses from a bot, are transforming modern health care. They are also — in practice — being used as suicide crisis hotlines.

Patients are confessing suicidal thoughts using apps designed to help them manage their diabetes or figure out why they might have a headache, according to industry executives. As a result, many digital health startups are scrambling to figure out how best to respond and when to call the police — questions that even suicide prevention experts don’t have good answers to.


“To be honest, when we started this, I didn’t think it was as big an issue as it obviously is,” said Daniel Nathrath, CEO of Ada Health.

The European company built a chatbot to provide smartphone users with possible explanations for their medical complaints. Since the app launched in late 2016, people around the world have used it to complete more than 10 million health assessments. In about 130,000 of those cases, users have told Ada that they’re struggling with suicidal thoughts or behaviors, the company said.

For digital health startups, suicidal patients present just one of a number of unforeseen crisis situations. At the telemedicine company American Well, a physician once conducted a video visit with a woman who said she had been punched by her spouse; the spouse could be heard shouting in the background of the call as the concerned physician dialed 911. Another company, which spoke on condition of anonymity, said it had to get authorities involved when one patient using its service threatened to hurt her own child.


Admissions of suicidal thinking and planning, though, come up more often. And it’s not just digital health startups that are grappling with what to do. After Facebook developed an algorithm to flag posts from potentially suicidal users for review, the company called first responders about 3,500 times in 12 months to check in on users deemed to be at high risk. Hospitals and doctors’ offices are dealing with their own new challenges, as patients post suicidal comments on their social media accounts or voice suicidal thoughts on clinics’ private online portals.

The phenomenon is, in some respects, no surprise: There’s a large body of research showing that people are more willing to confess potentially taboo thoughts to a computer than to a fellow human a few feet away.

“People are going to express their suicidality. We’ve destigmatized it. What we’ve not done is prepared everybody” to respond, said April Foreman, a psychologist who works on digital crisis care at the veterans crisis line run by the Department of Veterans Affairs.

[Image: STAT screen capture of the Ada app]

If you open Ada’s app and tell it you’ve been having difficulty concentrating for months, the chatbot will ask you: Have you been feeling low or depressed? Is it impacting your daily life? Say yes, and you’ll be asked: Any thoughts or urges related to ending your own life?

Say yes again, and the chatbot will ask you: Might you try to end your life today? If you say you’ve already tried today, the chatbot will tell you to call an ambulance right away. And if you say you have plans to do so today? You’ll be encouraged to call someone and tell them about your location and emotional state.
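The exchange described above amounts to a short, escalating triage flow: each affirmative answer unlocks a more direct question, and the final answer determines what guidance the user sees. Purely as an illustration of that kind of branching logic, here is a minimal sketch in Python; the function, field names, and messages are hypothetical and are not Ada’s actual implementation.

```python
# Illustrative sketch of an escalating suicide-risk triage flow like the one
# described above. All names and messages are hypothetical, not Ada's code.

def triage(answers: dict) -> str:
    """Return guidance based on the user's answers to the escalating questions."""
    if not answers.get("feeling_low"):
        return "Continue the standard symptom assessment."
    if not answers.get("thoughts_of_ending_life"):
        return "Continue, noting possible low mood or depression."
    # Escalate: ask about immediacy of risk today.
    risk_today = answers.get("risk_today")  # "already_tried", "has_plan", or None
    if risk_today == "already_tried":
        return "Call an ambulance right away."
    if risk_today == "has_plan":
        return "Call someone you trust and tell them where you are and how you feel."
    return "Consider contacting a crisis hotline or a trusted person."


if __name__ == "__main__":
    print(triage({"feeling_low": True,
                  "thoughts_of_ending_life": True,
                  "risk_today": "has_plan"}))
```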

That’s where the conversation about suicidality ends with Ada’s chatbot. But some users don’t stop there. They send emails expressing suicidal thoughts that go to Ada’s customer support team — which often then takes on the task of researching and recommending local mental health resources.

All of the digital health companies that STAT surveyed for this story said they have a response plan in place that they follow when patients express suicidal thoughts on their platforms. Such incidents may be rare relative to the total volume of cases they handle, but they come up often enough, and the stakes are high enough, to demand protocols, executives said. In situations deemed to be lower risk, that often means getting patients on the phone, directing them to a crisis hotline, or encouraging them to contact a friend.

And in situations in which a patient is deemed to be at higher suicide risk? Some startups say they don’t hesitate to get emergency medical services or police involved to conduct a wellness check when they think it’s in a patient’s best interest — whether or not the patient wants it.

San Francisco-based Omada Health approaches things differently. The startup relies on a network of coaches to exchange messages with patients with chronic conditions like hypertension and type 2 diabetes. While Omada makes a phone call to check in on patients who express suicidal thoughts, calling the authorities in these cases “isn’t within our scope of practice,” said Dr. Carolyn Jasik, the company’s vice president of medical affairs.

In many of these cases, startups are effectively trying to assess the likelihood that patients will try to harm themselves. The trouble is, not even the experts know.

“This has been a problem that people have been struggling with for a really long time — and there’s just no science on this,” said Matthew Nock, a Harvard psychologist who studies suicide and self-injury. “People are largely winging it and using their clinical wisdom to try and figure out when and how to intervene.”

Nock was part of a team of researchers that published a 2016 analysis of the past 50 years of studies trying to predict suicidal thinking, suicide attempts, and suicide deaths. Nearly all of those studies looked at how to know if a person is at risk for suicidality using relatively long prediction windows, such as one year from now. Just 0.1 percent of those studies looked at a window of less than a month. Even fewer looked at a window of days or hours — the type of data that might be particularly useful for digital health startups trying to help their suicidal patients.

Nock said he wants to see digital health companies conduct research and evaluate their own practices to gather data on what works and what doesn’t, at different levels of suicide risk. Just as a decision not to respond carries risk, so too does a decision to respond too aggressively. Calling an ambulance for patients who are not at immediate risk could backfire, if it makes them hesitate to seek help in the future.

Still, as digital health companies and traditional clinicians alike try to assess and mitigate a patient’s suicide risk, not everyone is convinced that chatbots and messaging apps are ready to play a useful role.

Dr. Peter Antall is chief medical officer at Boston-based American Well, a 13-year-old company where telemedicine visits between a patient and a physician happen exclusively by video, or, when the internet connection is poor, by phone. Antall is excited about the potential of medical chatbots — he even advises one such startup, called Gyant, in San Francisco — but he’s worried about what gets lost when a clinician can’t see patients’ faces or hear their voices when they might be suicidal.

“Given the acuity and the seriousness of somebody potentially trying to kill themselves, I don’t believe that any of those other technologies are there at this point,” Antall said.

By way of contrast, Antall pointed to a visit that took place a few years ago on American Well’s platform. A patient video-conferenced in from her home, complaining of chronic pain and seeking pain medication. The physician on the other side of the video chat started the conversation from that vantage point, but quickly picked up on concerning signs that pointed in another direction: The patient had a depressed affect. She spoke in a low, monotone voice. She described other signs of major depression and eventually admitted to having active thoughts of suicide, along with a plan to act on them.

It took some persuading, but the physician was able to convince the patient to go to the emergency department. They ended the video call so the patient could head out on her way. But the physician called back by phone within the hour to check in. The news was good: The patient was already being assessed at the hospital.
