Facebook says it will crack down on health misinformation

The company has had longstanding problems with posts containing promises of miracle cures, unproven scientific claims, and outright lies.


You’ll soon see fewer “miracle cures” in your News Feed, according to Facebook. The company vowed last week to moderate posts touting unproven health claims, make it harder to find products and treatments ballyhooed as cure-alls, and “minimize low-quality health content.”

“We know that people don’t like posts that are sensational or spammy,” Facebook product manager Travis Yeh said in a press release, “and misleading health content is particularly bad for our community.” The update follows recent reports on private Facebook groups touting, for instance, bleach as a “cure” for autism and baking soda injections as an “alternative treatment” for cancer.

Tackling the rampant online spread of health-related misinformation is a tall order. Some 2.38 billion people worldwide use Facebook monthly, roughly a third of the world’s population, and 1.56 billion of them use it daily. According to a 2018 study from the Pew Research Center, two-thirds of American adults turn to Facebook for news. Estimating exactly how many Facebook posts are “sensational or spammy” would be a fool’s errand.

Facebook said it deleted 1.23 billion spam posts in the third quarter of 2018. But an April report from Talos, Cisco’s cybersecurity division, found that Facebook had left at least 74 spamming, phishing, and hacking groups, with a combined membership of 385,000, “to police themselves” after the clampdown, as The Verge reported. While those groups themselves didn’t contain spam posts targeting consumers, they were what Talos called “criminal hangouts,” where joining one prompted Facebook’s algorithm to suggest joining another.

Spam and “false news” fall into a category that Facebook’s community standards call “Integrity & Authenticity.” The company’s existing policy rationale states, “[W]e don’t remove false news from Facebook but instead, significantly reduce its distribution by showing it lower in the News Feed.” (Requests to Facebook for further comment were not answered by publication time.)

The company announced in March that it would make vaccine misinformation harder to find by “reducing its distribution” and downranking groups and pages that are repeat offenders. It later added that it would consider barring such pages from using Facebook’s fundraising platform, following the World Health Organization and UNICEF’s joint declaration of a “global measles crisis” amid a 300 percent increase in reported cases.

“By the time you finish reading this,” according to a WHO and UNICEF opinion piece for CNN in April, “we estimate that at least 40 people — most of them children — will be infected by this fast-moving, life-threatening disease.” Last year, the Atlantic coined the term “misinfodemics,” giving “viral misinformation” a double meaning: faulty reporting that spreads rapidly, thereby hastening the spread of disease.

Last month, ahead of Facebook’s announcement, the Washington Post uncovered a bevy of private Facebook groups, with memberships ranging from the thousands to the hundreds of thousands, promoting debunked alternative cancer treatments like baking soda, colloidal silver, and frankincense. NBC News reported in May on parents in private Facebook groups recommending life-threatening “cures” for autism, including turpentine, urine, and industrial bleach.

Facebook says its updated flagging procedure for health content is twofold: it addresses whether “a post about health exaggerates or misleads” and whether “a post promotes a product or service based on a health-related claim,” much as the company tackled clickbait in years past. The algorithm change effectively gives health misinformation, as the Washington Post puts it, the spam treatment.

But as the Wall Street Journal reports in its coverage of vaccine misinformation on social media, Facebook’s flagging protocol can easily miss inaccurate reports masquerading as opinion. “For Facebook, this system is only in place for pages that make explicitly false claims,” says WSJ’s Spencer Macnaughton. “That means that many groups that use anti-vaccine language, but don’t include a specifically false claim, might not get flagged.” Likewise, opinions and hashtags on Facebook-owned Instagram are not flagged.

Nowhere does Facebook’s July statement suggest that it will remove groups or pages known to spread health misinformation. The changes are described only as “ranking updates,” specifically meant to target News Feed. Private groups may be harder to find, but Facebook’s updates won’t necessarily end some of the most dangerous sources of misinformation.
