Artificial Intelligence Is Learning to Predict and Prevent Suicide

Doctors at research hospitals and even the US Department of Veterans Affairs are piloting new, AI-driven suicide-prevention platforms.

For years, Facebook has been investing in artificial intelligence fields like machine learning and deep neural nets to build its core business---selling you things better than anyone else in the world. But earlier this month, the company began turning some of those AI tools to a more noble goal: stopping people from taking their own lives. Admittedly, this isn’t entirely altruistic. Having people broadcast their suicides from Facebook Live isn’t good for the brand.

But it’s not just tech giants like Facebook, Instagram, and China’s up-and-coming video platform Live.me that are devoting R&D to flagging self-harm. Doctors at research hospitals and even the US Department of Veterans Affairs are piloting new, AI-driven suicide-prevention platforms that capture more data than ever before. The goal: build predictive models to tailor interventions earlier. Because preventative medicine is the best medicine, especially when it comes to mental health.

If you’re hearing more about suicide lately, it’s not just because of social media. Suicide rates surged to a 30-year high in 2014, the last year for which the Centers for Disease Control and Prevention has data. Prevention measures have historically focused on reducing people’s access to things like guns and pills, or educating doctors to better recognize the risks. The problem is, for more than 50 years doctors have relied on correlating suicide risk with depression and drug abuse. And the research says those markers make their predictions only slightly better than a coin flip.

But artificial intelligence offers the possibility of identifying suicide-prone people more accurately, creating opportunities to intervene long before thoughts turn to action. A study publishing later this month used machine learning to predict, with 80 to 90 percent accuracy, whether someone will attempt suicide as far as two years in the future. Using anonymized electronic health records from 2 million patients in Tennessee, researchers at Florida State University trained algorithms to learn which combination of factors, from pain medication prescriptions to number of ER visits each year, best predicted an attempt on one’s own life.
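To make the approach concrete, here is a minimal sketch of that kind of model: a classifier trained on tabular health-record features to flag patients at elevated risk. The file name, column names, and gradient-boosted model below are illustrative assumptions, not the Florida State team's actual pipeline.

```python
# Illustrative sketch: predicting suicide-attempt risk from tabular health-record
# features. The file, columns, and model choice are assumptions for demonstration,
# not the published study's actual data or algorithm.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical extract of anonymized records, one row per patient.
records = pd.read_csv("ehr_extract.csv")  # assumed file
features = records[["er_visits_per_year", "pain_med_prescriptions",
                    "prior_diagnoses", "age"]]           # assumed columns
labels = records["attempted_within_2_years"]             # assumed label column

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Score held-out patients and measure discrimination on unseen data.
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
```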

Their technique is similar to the text mining Facebook is using on its wall posts. The social network already has a system that lets users report posts suggesting someone is at risk of self-harm. Using those reports, Facebook trained an algorithm to recognize similar posts, which it is now testing in the US. Once the algorithm flags a post, Facebook makes the option to report the post for “suicide or self injury” more prominent on the screen. In a personal post, Mark Zuckerberg described how the company is integrating the pilot with other suicide prevention measures, like the ability to reach out to someone during a live video stream.
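The underlying technique is ordinary text classification: learn from posts users have already reported, then score new posts for similarity. This toy sketch shows the idea; the tiny example data, the TF-IDF plus logistic regression model, and the threshold are assumptions, not Facebook's production system.

```python
# Sketch of the general technique: train a text classifier on posts users have
# reported for self-harm concerns, then flag similar new posts for review.
# Not Facebook's implementation; the data and threshold here are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reported_posts = ["i can't do this anymore", "nobody would miss me"]   # flagged
ordinary_posts = ["great game last night", "anyone up for coffee?"]    # not flagged
texts = reported_posts + ordinary_posts
labels = [1] * len(reported_posts) + [0] * len(ordinary_posts)

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression())
classifier.fit(texts, labels)

# New posts scoring above a threshold would get the "suicide or self injury"
# reporting option surfaced more prominently, as the article describes.
new_post = "i just want it all to stop"
risk = classifier.predict_proba([new_post])[0, 1]
if risk > 0.5:   # threshold is illustrative
    print("surface prominent reporting option / route to human review")
```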

The next step would be to use AI to analyze video, audio, and text comments simultaneously. But that’s a much trickier engineering feat. Researchers have a pretty good handle on the kind of words people use when they’re talking about their own pain and emotional states. But in a live stream, the only text comes from commenters. In terms of the video itself, software engineers have already figured out ways to automatically tell when someone is naked on-screen, so they’re using similar techniques to detect the presence of a gun or knife. Pills would be way harder.
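For the video piece, one plausible building block is frame-by-frame object detection with a pretrained model. The sketch below uses a COCO-pretrained detector from torchvision, whose label set happens to include “knife” but no firearms, so a real system would need custom training; the file name and confidence threshold are assumptions.

```python
# Sketch of frame-level object detection on a recorded stream segment, using a
# COCO-pretrained detector. COCO includes "knife" but no firearms, so this is
# illustrative only; a production system would need a custom-trained model.
import cv2                      # OpenCV, for reading video frames
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
KNIFE_LABEL = 49                # COCO category id for "knife" in the 91-class mapping

capture = cv2.VideoCapture("stream_segment.mp4")   # assumed local recording
ok, frame = capture.read()
while ok:
    # Convert the BGR uint8 frame to the RGB float tensor the model expects.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]
    for label, score in zip(detections["labels"], detections["scores"]):
        if label.item() == KNIFE_LABEL and score.item() > 0.8:
            print("possible weapon on screen; escalate for human review")
    ok, frame = capture.read()
capture.release()
```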

Prediction Before Prevention

Ideally, though, you could intervene even earlier. That’s what one company is trying to do by collecting totally different kinds of data. Cogito, a Darpa-funded MIT spinoff, is currently testing an app that builds a picture of your mental health just by listening to the sound of your voice. Called Companion, the (opt-in) software passively gathers everything users say in a day, picking up on vocal cues that signal depression and other mood changes. Rather than the content of your words, Companion analyzes your tone, energy, fluidity of speech, and level of engagement in a conversation. It also uses your phone’s accelerometer to gauge how active you are, a strong indicator of depression.
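Cogito hasn’t published Companion’s feature set, but the kinds of cues described, energy, fluidity, and tone, map onto standard audio measurements. Here is a rough, assumed sketch of how such features might be computed with librosa; the file name and pitch range are placeholders.

```python
# Rough sketch of the kind of vocal-cue extraction a system like Companion
# might perform. Companion's actual features aren't public; the measures and
# parameters below are assumptions chosen for illustration.
import numpy as np
import librosa

audio, sr = librosa.load("daily_sample.wav", sr=None)   # assumed opt-in recording

# Energy: average loudness across the recording.
energy = float(np.mean(librosa.feature.rms(y=audio)))

# Fluidity: how much of the clip is speech versus pauses.
voiced_intervals = librosa.effects.split(audio, top_db=30)
voiced_seconds = sum(end - start for start, end in voiced_intervals) / sr
speaking_fraction = voiced_seconds / (len(audio) / sr)

# Tone: variability of pitch; flat affect tends toward a narrow range.
f0 = librosa.yin(audio, fmin=65, fmax=300, sr=sr)
pitch_variability = float(np.nanstd(f0))

print({"energy": energy,
       "speaking_fraction": speaking_fraction,
       "pitch_variability": pitch_variability})
```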

The VA is currently piloting the platform with a few hundred veterans---a particularly high-risk group. They won’t have results until the end of this year, but so far the app has been able to identify big life changes---like becoming homeless---that significantly increase one’s risk for self-harm. Those are exactly the kinds of shifts that might not be obvious to a primary care provider unless they were self-reported.

David K. Ahern is leading another trial at Brigham and Women’s Hospital in Boston, Massachusetts, where researchers are using Companion to monitor patients with known behavioral disorders. So far it has been rare for the app to trigger a safety alert, which would prompt doctors and social workers to check in on the patient. But the real benefit has been the steady stream of information about patients’ shifting moods and behaviors.

Unlike a clinic visit, this kind of monitoring offers more than just a snapshot of someone’s mental state. “Having that kind of rich data is enormously powerful in understanding the nature of a mental health issue,” says Ahern, who heads up the Program of Behavioral Informatics and eHealth at BWH. “We believe in those patterns there may be gold.” In addition to Companion, Ahern is evaluating lots of other types of data streams---like physiological metrics from wearables and the timing and volume of your calls and texts---to build into predictive models and provide tailored interventions.
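As a sketch of what building those data streams into predictive models could look like in practice, the snippet below merges assumed daily wearable, phone, and voice summaries into one feature table and fits a simple risk model. Every column name, file, and label here is hypothetical, not the BWH pipeline.

```python
# Illustrative sketch of folding multiple passive data streams into one daily
# feature vector per patient, the kind of input a predictive model could use.
# File names, columns, label, and model are assumptions, not Ahern's system.
import pandas as pd
from sklearn.linear_model import LogisticRegression

wearable = pd.read_csv("wearable_daily.csv")     # e.g. resting_hr, sleep_hours
phone = pd.read_csv("phone_daily.csv")           # e.g. calls_made, texts_sent
voice = pd.read_csv("companion_daily.csv")       # e.g. energy, speaking_fraction

daily = (wearable.merge(phone, on=["patient_id", "date"])
                 .merge(voice, on=["patient_id", "date"]))

feature_cols = ["resting_hr", "sleep_hours", "calls_made",
                "texts_sent", "energy", "speaking_fraction"]
X = daily[feature_cols]
y = daily["clinician_flagged"]    # assumed label from chart review

risk_model = LogisticRegression(max_iter=1000).fit(X, y)
daily["risk_score"] = risk_model.predict_proba(X)[:, 1]

# Days with unusually high scores could prompt a tailored check-in
# rather than waiting for the next clinic visit.
print(daily.sort_values("risk_score", ascending=False).head())
```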

Think about it. Between all the sensors in your phone, its camera and microphone and messages, that device’s data could say a lot about you. Potentially more than you can see about yourself. To you, maybe it was just a few missed trips to the gym, a few times you didn’t call your mom back, a few times you just stayed in bed. But to a machine finely tuned to your habits and warning signs, one that gets smarter the more time it spends with your data, that might be a red flag.

That’s a still-far-off future for tomorrow’s personal-privacy lawyers to figure out. But as far as today’s news feeds go, pay attention while you scroll, and notice what the algorithms are trying to tell you.