Over the past few years, Facebook has stepped up its efforts to prevent suicide, but its attempt to help people in need has opened the tech giant to a series of issues concerning medical ethics, informed consent, and privacy. It has also raised a critical question: Is the system working?

Facebook trained an algorithm to recognize posts that might signal suicide risk and gave users a way to flag posts. A Facebook team reviews those posts and contacts local authorities if a user seems at imminent risk. First responders have been sent out on “wellness checks” more than 3,500 times.

“We don’t have any data on how well this works. Is it safe? Does it cause harm?” asked Dr. John Torous, the director of the digital psychiatry division at Beth Israel Deaconess Medical Center.

Experts have acknowledged that Facebook’s efforts can connect people with help they need — but without data, the benefits aren’t clear. Neither are the risks. Experts argue that what Facebook is doing should be considered medical research, but the tech giant isn’t protecting its users like scientists would a study’s participants.

“It’s important to have innovative approaches. But just because people are suicidal and in crisis doesn’t mean they don’t deserve rights,” said Torous, the co-author of a new paper on Facebook’s suicide prevention efforts published Monday in Annals of Internal Medicine.

Facebook said it is using technology to proactively detect posts that might express suicidal thoughts and quickly get people help.

“Facebook is in a unique position to help because of the friendships people have on our platform — we can connect those in distress with friends and organizations who can offer support,” Antigone Davis, Facebook’s global head of safety, said in a statement to STAT. Davis also said the company is “committed to being more transparent” about its suicide prevention work.

Here’s a look at the key questions researchers are asking about that effort:

Should it be considered research?

Torous argues that Facebook’s suicide prevention efforts, however innovative, are a type of medical intervention: “They’re evaluating information, making a decision about someone’s mental state, and activating the health care system,” he said. It’s precisely because Facebook’s efforts are so unprecedented that Torous believes they should also be considered medical research.

“The fact that it’s innovative, the fact that it’s novel, is in essence research,” he said.

Mason Marks, a health law scholar and a visiting fellow at Yale Law School who has written extensively about Facebook’s suicide prevention efforts, echoed that argument.

“This type of massive suicide screening has never really been done ever,” he said. But Marks said Facebook isn’t sticking to any of the standard steps for conducting research, like publishing its data for peer review.

Emily Cain, a spokesperson for Facebook, pushed back on the idea that the algorithm is compiling health data.

“This does not amount to us collecting health data,” she said. “The technology does not measure overall suicide risk for an individual nor anything about a person’s mental health.”

How does the algorithm work?

Facebook trained its algorithm on posts that users had flagged as potentially containing thoughts of suicide. Some of those were false examples — Facebook has cited “I have so much homework I want to kill myself” as an example — that taught the algorithm what to ignore. The tool also takes into account the comments on a post, as well as the day and time it’s posted. Cain said the algorithm can recognize keywords and phrases in English, Spanish, Portuguese, and Arabic.
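To make that general approach concrete, here is a minimal, hypothetical sketch of the kind of supervised text classifier the article describes: a model trained on posts users flagged as concerning, with benign phrases as negative examples, whose score routes a post to human reviewers. The data, threshold, and library choices below are illustrative assumptions, not Facebook's actual system, and the sketch omits signals the article says the real tool also weighs, such as comments and the time a post goes up.

```python
# Illustrative sketch only: a generic flagged-post text classifier of the kind
# described in the article. All data, thresholds, and model choices here are
# hypothetical placeholders, not Facebook's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts previously flagged by users (label 1) and
# superficially similar but benign posts (label 0), which teach the model what to ignore.
posts = [
    "I can't go on anymore, nobody would miss me",
    "I have so much homework I want to kill myself",  # benign example cited in the article
    "Saying goodbye to everyone tonight",
    "This traffic is killing me lol",
]
labels = [1, 0, 1, 0]

# Word and two-word n-gram features feeding a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(posts, labels)

# Score a new post; above some (placeholder) threshold it would be routed to
# human review rather than acted on automatically.
score = model.predict_proba(["I don't want to be here anymore"])[0][1]
if score > 0.5:  # hypothetical cutoff
    print(f"Route to human review (score={score:.2f})")
```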

Marks said he has serious concerns about whether those posts can serve as an accurate predictor of suicide risk, given that they’re a “proxy” for suicide data. And Torous also wants to know whether the algorithm works equally across different races, genders, nationalities, and other categories of Facebook users.

Without more information about the algorithm, they said, there’s no way to be sure.

“They shouldn’t be treating this as a proprietary, black box algorithm,” Marks said. “They should be opening it up, if not to the public, then to the scientific community that is doing great work in this area.”

What are the risks of Facebook’s suicide prevention efforts?

Experts say there are clear potential risks with Facebook’s suicide monitoring program, starting with false positives. People who are not suicidal might be forced to undergo psychiatric evaluation against their will. People might be arrested in the process of a wellness check.

“There are numerous people who have been shot and killed by police in response to a mental health call,” Marks said. He added that police don’t need a warrant to enter a person’s home if they believe there is a risk of bodily injury.

“They are relying on Facebook’s predictions to make that determination,” Marks said.

Facebook has said it does not keep track of what happens when first responders, acting on information from the company, conduct a wellness check on a user.

“Privacy considerations mean emergency responders rarely provide information about the outcome of a case, but we do regularly get thank you notes from emergency responders who were able to reach these people in time,” said Cain.

Torous said he’s also concerned that, without transparency about how their posts are being monitored and used, people might become wary of talking about mental health concerns on Facebook. He and others also pointed to potential privacy concerns if Facebook were to use the data collected from the program for other purposes, such as targeted advertisements. Cain said the posts are not used for any unrelated purposes and added that the company has collaborated with experts across the globe on its efforts.

“We work with experts and seek their input on current research and best practices to ensure everyone’s safety is being considered,” she said.

What does informed consent look like?

Facebook’s data policy lays out how the company uses the data it collects from users’ posts, including to detect “when someone needs help.” That document points users to a blog post that explains how the company’s suicide prevention tools work.

But experts who argue Facebook’s program constitutes medical research say there’s a need for an informed consent process — and burying the information in a long document that many people might not read doesn’t cut it.

“You want people to know what they’re partaking in,” Torous said. In standard medical studies, researchers have to walk potential participants through the ins and outs of how an experimental intervention works, the possible benefits, and the risks it poses. It’s a critical step designed to protect people who are participating in research.

Experts say Facebook should be doing the same. That might look like showing each user a video that explains how the suicide monitoring program works. Facebook could also give users the choice to opt in — or opt out — of the program, experts said. They also called for an independent board to oversee the program, like the institutional review boards that pore over research proposals to make sure they’re ethical.

“People may be vulnerable [or] in crisis, but that doesn’t mean you can abnegate basic ethical principles,” Torous said.
