Facebook Will More Closely Monitor How It Makes Money

Sheryl Sandberg, Facebook’s chief operating officer, delivers a speech in Paris earlier this year. (Philippe Wojazer / Reuters)

In the last week, one of Facebook’s greatest business strengths became a source of tremendous embarrassment to the company. Its famed money-making engine—which ceaselessly converts user data and content into advertising dollars, all underpinned by algorithmic plumbing—was found to have some glaring issues.

Specifically, as ProPublica revealed Thursday, advertisers could target self-described anti-Semites as an audience for their ads. Facebook’s algorithmic ad tool allowed buyers to target users who had publicly entered phrases like “Jew hater,” “How to burn the Jews,” or “History of ‘why Jews ruin the world’” into their Facebook profiles as their educational background or professional interests.

Facebook apologized for the feature last week, pulling every algorithmically defined ad-targeting category. On Wednesday afternoon, Sheryl Sandberg, the company’s chief operating officer, outlined further steps.

Facebook would be tightening its “enforcement processes” to make sure that content in violation of its community standards could not be used to target ads, Sandberg said in a public note on her profile. The company would also make it easier for users to report offensive or abusive ads directly to Facebook employees.

Facebook’s community standards, which have been in effect for years, already prohibit “anything that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender or gender identity, or disabilities or diseases.”

The company is also reinstating roughly 5,000 audience-targeting categories after reviewing them individually. “After manually reviewing existing targeting options, we are reinstating the roughly 5,000 most commonly used targeting terms—such as ‘nurse,’ ‘teacher,’ or ‘dentistry,’” Sandberg said.

She also suggested that the company would manually vet new target-audience terms in the future.

This final change—that Facebook should check the audience categories that underpin its central business—was one of the main reforms experts called for after last week’s revelations.

“We don’t need to be too awed by this problem,” Aaron Rieke, a technologist at the civil-rights firm Upturn, told me last week about the anti-Semitic targeting. “You have a finite list of categories, many of which were generated automatically. Take a look, and see what matches up with your community standards and the values of the company. Facebook can monitor the things it does that make it money.”

Other suggested changes—like a public clearinghouse of all ad categories available to purchasers—were not embraced by the company.

Sandberg, writing on the eve of Rosh Hashanah, the Jewish new year, took an unusually personal and contrite tone in her note of apology. “The fact that hateful terms were even offered as options was totally inappropriate and a fail on our part,” she said:

Seeing those words made me disgusted and disappointed—disgusted by these sentiments and disappointed that our systems allowed this. Hate has no place on Facebook—and as a Jew, as a mother, and as a human being, I know the damage that can come from hate. [...]

We never intended or anticipated this functionality being used this way—and that is on us. And we did not find it ourselves—and that is also on us.

Like so many of Facebook’s recent struggles, the whole story turns on the intersection of automation, algorithms, and users acting in bad faith. It also turns on the company’s consistent inability to imagine the worst applications of its software.

Facebook, like many other tech companies, allows advertisers to “self-serve” their own ad purchases. There’s little human interference with—or oversight of—this process: A buyer can write a post or upload a video, select a target audience, and take out an ad with no humans otherwise involved. Many of the systems that maintain Facebook’s plumbing are similarly algorithmic.

This automation has come back to bite the company twice in the past week—at both the targeting and the purchasing ends of its pipeline. In addition to the anti-Semitic targeting categories, Facebook also appears to have allowed Russian shell companies to buy political ads aimed at influencing the 2016 election.

Facebook may be able to plug some of the simplest holes in its targeting software. But the Russia-purchased ads—and the surprise with which Facebook discovered the anti-Semitic targeting in the first place—suggest that it is facing an arduous, important, and possibly Sisyphean task in trying to understand the darkest corners of its own business.

Robinson Meyer is a former staff writer at The Atlantic and the former author of the newsletter The Weekly Planet.