Get Ready for the Next Big Privacy Backlash Against Facebook

Privacy watchdogs think a damning leaked document about Facebook targeting insecure teens could help usher in a new era of privacy protections.

Data mining is such a prosaic part of our online lives that it’s hard to sustain consumer interest in it, much less outrage. The modern condition means constantly clicking against our better judgment. We go to bed anxious about the surveillance apparatus lurking just beneath our social media feeds, then wake up to mindlessly scroll, Like, Heart, Wow, and Fave another day.

But earlier this month, The Australian uncovered something that felt like a breach in the social contract: a leaked confidential document prepared by Facebook that revealed the company had offered advertisers the opportunity to target 6.4 million younger users, some only 14 years old, during moments of psychological vulnerability, such as when they felt “worthless,” “insecure,” “stressed,” “defeated,” “anxious,” and like a “failure.”

The 23-page document had been prepared for a potential advertiser and highlighted Facebook’s ability to micro-target ads down to “moments when young people need a confidence boost.” According to The Australian’s report, Facebook had been monitoring posts, photos, interactions, and internet activity in real time to track these emotional lows. (Facebook confirmed the existence of the report, but declined to respond to questions from WIRED about which types of posts were used to discern emotion.)

The day the story broke, Facebook quickly issued a public statement arguing that the premise of the article was “misleading” because “Facebook does not offer tools to target people based on their emotional state.” The social network also promised that the research on younger users “was never used to target ads.” The analysis on minors did not follow Facebook’s research review protocols, the company wrote, so Facebook would be “reviewing the details to correct the oversight,” implying that the analysis had not been sanctioned by headquarters in Menlo Park.

A spokesperson for Facebook tells WIRED that the research had been commissioned by an advertiser. But Facebook’s public statement did not make that clear or explain how the research on minors ended up in a presentation to potential advertisers.

The statement said only that the analysis had been conducted by “an Australian researcher.” But the leaked presentation obtained by The Australian was prepared by two Australian Facebook employees, both managers who connect Facebook to ad agencies.

Privacy advocates and social media researchers, some of whom signed a public letter to Mark Zuckerberg about the ethical implications of tracking minors, tell WIRED the leak arrived at a crucial time in their campaign for stricter guidelines around consumer surveillance. Between the political fallout of psychographic profiling on Facebook and recent fines against the social network for breaking European laws about data collection, they hope this controversy could have lasting implications for the way the $400 billion behemoth tracks sensitive data.

Welcome to the next phase of Facebook privacy backlash, where the big fear isn't just what Facebook knows about its users but whether that knowledge can be weaponized in ways those users cannot see, and would never knowingly allow.

Dear Mark Zuckerberg

Five years ago, Facebook conducted a mass experiment in manipulating emotions on nearly 700,000 unsuspecting users. The company tweaked News Feeds to show random users more positive or negative content, to see if it made those users happy or sad. In that case, there was no leaked document, no smoking gun: The results were published openly in an academic journal in 2014. In response, there was an outcry over what seemed like social engineering; the company said it had been “unprepared for the reaction” and strengthened its research review process accordingly.

A spokesperson for Facebook tells WIRED that the research referenced in the newly surfaced document complied with Facebook’s privacy and data policies (the data was anonymized, with any personally identifiable information removed), but it did not meet those enhanced research protocols, which are supposed to require additional review for studies of “sensitive groups,” like minors.


Nonetheless, The Australian’s report claimed that the psychological insights had been culled from a database of 6.4 million younger Facebook users from Australia and New Zealand, which included 1.9 million high school students with an average age of 16, some as young as 14.

A week after the document was leaked, more than two dozen nonprofits from the US, Europe, Brazil, and Mexico wrote a blistering public letter to Zuckerberg arguing that Facebook should release the document because the health and ethical implications were “far too concerning to keep concealed.” Facebook has become a “powerful cultural and social force in the lives of young people,” they wrote, and the mega-corporation could not just chalk up the mistake to a deviation from its research protocols. Marketers “and others” could use this research to “take advantage of young people by tapping into unique developmental vulnerabilities for profit,” the letter warned. (WIRED reached out to The Australian’s media editor, Darren Davidson, who obtained the leaked document, to see if the paper has plans to publish it in full, but did not receive an immediate response.)

“We take the concerns raised by these organizations seriously,” a Facebook spokesperson said in response to questions from WIRED. “Last week we reached out to several of these groups to discuss the research, and together agreed to set a meeting. We look forward to working with them.”

Jeff Chester, executive director of the Center for Digital Democracy, one of the nonprofits that signed the letter, will be present at the Facebook meeting. “I’ll be interested to see how honest they are,” he tells WIRED. “Are they going to acknowledge all of the similar research that they already do? Or what it means for Facebook and Instagram users worldwide? Are they going to talk about the fact that they are continually expanding the ability of their platform to identify and track consumers on behalf of powerful advertisers?”

Chester keeps close tabs on Facebook’s increasingly sophisticated marketing capabilities, a toolkit that includes neuro-marketing and biometric research techniques that can be used to test bodily reactions to ads, like responses in the brain, heart, eye movement, or memory recall. Chester pointed to a recent report from Facebook IQ, a research division within the social network designed to help marketers, which used an EEG headset to measure social connections and feelings in virtual reality.

“When Facebook said this was an aberration, we knew that was not true, because it squarely fits into what Facebook does all the time in terms of analyzing the emotional reactions of individuals,” including vulnerable groups like young people, black people, and Latinos, Chester says. “Facebook is one big sentiment-mining apparatus.”

If the users in question weren’t teenagers, or if the emotion wasn’t insecurity, Facebook’s public statement might have been sufficient; the uproar from privacy advocates might have been duly noted, then promptly forgotten.

Instead, as Kathryn Montgomery, a professor at American University and the director of the school’s communications studies division—who is married to Chester—tells WIRED, The Australian’s report served as “a flashpoint that enables you to glimpse Facebook’s inner workings, which in many ways is about monetization of moods.”

A New Advertising Age

This may sound like a lot of Sturm und Drang for making money off of teenage insecurity, a mass-market practice that has been around since at least World War II. The entire advertising industry is, after all, premised on the ability to leverage a consumer’s emotional state. But it's one thing to show makeup ads to people who follow Kylie Jenner on Instagram; it's another to use computational advertising techniques to sell flat-tummy tea to 14-year-olds at the exact moment they're feeling their worst.

In fact, Montgomery and Chester have been fighting to protect young people’s digital privacy for decades. The couple helped pass the Children’s Online Privacy Protection Act (COPPA) in 1998, which restricts how companies can collect data from and market online to children under 13. The legislation was created to prevent the first wave of dotcom companies from engaging in deceptive practices, such as using games and contests to collect information about children without parental permission. The same year COPPA passed, the FTC filed its first internet privacy complaint against GeoCities, for misleading both child and adult consumers about how it was using their personal information. Since then, companies big, small, and fictional have racked up fines.

For its part, Facebook has been open and cooperative in responding to concerns about minors in the past. After The Wall Street Journal reported in 2012 that Facebook was considering allowing children younger than 13 to open accounts, the company met with privacy advocates, who helped convince it to continue barring those younger children from the platform.

Facebook also understands that minors require additional protections. By default, it turns off location sharing for minors, and offers warnings before young people share a post publicly. Indeed, Facebook sometimes uses its tracking capabilities to safeguard users, such as newly released artificially intelligent suicide prevention tools that “help people in real time.”

“We do, of course, want to try to help people in our community who are at risk, including if their friends report to us that they may be considering self-harm, but that’s not related to the incorrect allegations that were made in The Australian’s piece,” a Facebook spokesperson tells WIRED.

Regardless, advances in ad targeting may require more default protections. Marketers want to pinpoint people in an “intimate, ongoing, interactive way,” says Chester. As people use more and more devices across different networks, companies that collect this information have amassed bank vaults of data on users' locations, recent life events, affinity groups, or, theoretically, emotional states.

“This is the holy grail of advertising,” says Saleem Alhabash, an assistant professor at Michigan State University. A consumer has “a particular need or motivation at this particular moment in time, and you are giving them messages that feed exactly to what they’re feeling. The return on investment is huge.”

To that end, Alhabash believes companies should, for the most part, have the freedom to conduct business. “I do not think that advertising in general is manipulative,” he says. “Where it becomes manipulative is when certain parts of our personal information get used against us to make us crave and want things that we do not want.” (Alhabash worked on a study about how Facebook ads for alcohol can increase the desire to drink.)

Amid a swirl of recent concerns over how Facebook can influence our actions in the real world and the ways that micro-targeting can be weaponized—such as voter-suppression campaigns targeting African Americans—the leaked document seems like another sign that fears about the company have taken on a different shape.

“We’ve entered a new phase because of the controversy in promoting fake news, in disseminating hate speech, in Facebook’s outsized influence in campaigns that resulted in Brexit, the election of Trump, and other political developments,” Chester explains.

Europe Plays Hardball

Unfortunately for Facebook, the Australian ad targeting controversy cropped up just as European regulators have been cracking down on social networks, charging that they “aren’t taking complaints from their users seriously enough.” That’s the reason Germany’s justice minister cited in March when he proposed a law that would fine social media companies up to €50 million if they don’t respond quickly enough to reports of illegal content or hate speech.

This week, the focus has shifted to Facebook’s privacy violations. On Tuesday, data protection authorities (DPAs) from France, the Netherlands, Spain, Germany, and Belgium issued a joint statement detailing the results of national investigations into Facebook for privacy issues, including processing personal data for advertising purposes.

France and the Netherlands handed down what amounted to a slap on the wrist and a small fine, but this is just the preview. Europe’s strict privacy laws are about to get even stricter. It’s all part of a growing sense in the EU that it’s time to throw a bridle on Silicon Valley.

In 368 days (regulators have posted a handy countdown clock), the General Data Protection Regulation will go into effect for the European Union. Once the new rules are in place, companies will be forced to take privacy more seriously, if only because of the fines, David Martin, senior legal officer at the European Consumer Organization, tells WIRED by email. France fined Facebook €150,000 for unlawfully tracking internet users to display targeted advertising, the maximum it can currently impose. But once the new rules are in place, the fines could be as high as €20 million, or 4 percent of the company’s global revenue, whichever is higher, Martin says.
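To give a rough sense of scale, here is a minimal sketch of that “whichever is higher” rule; the revenue figure is purely hypothetical and stands in for any large platform’s global annual revenue.

def max_gdpr_fine(global_annual_revenue_eur: float) -> float:
    # Upper bound of a GDPR fine: the greater of a flat 20 million
    # euro cap or 4 percent of global annual revenue.
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

# Hypothetical example: a platform with 25 billion euros in global annual
# revenue could face a fine of up to 1 billion euros, several thousand
# times the 150,000 euro maximum France could impose in 2017.
print(f"{max_gdpr_fine(25_000_000_000):,.0f} EUR")  # -> 1,000,000,000 EUR

At that scale, the 4 percent prong, not the flat cap, is the figure that would matter for a company the size of Facebook.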

For companies like Google and Facebook, with market capitalizations in the hundreds of billions, compliance might be a bigger issue than fines. But American advocates hope that some of that momentum will be contagious, pressuring Silicon Valley’s oligarchy into creating stronger safeguards for sensitive data. Says Chester, “The feedback I got from my colleagues in Europe was, ‘Look, you guys have that letter. We have laws and rules that need to be enforced.’”

In the joint statement on Tuesday, the Dutch authorities reported that Facebook violated data protection laws for its 9.6 million users in the Netherlands by using sensitive personal data without the users' explicit consent, including serving targeted ads based on users' sexual preferences. Facebook changed its practices to comply, and the Dutch DPA said it will issue a sanction if it finds out the violations have not stopped.

In response to questions from WIRED about the sanctions, a different Facebook spokesperson says that the company respectfully disagrees with the findings by the French and Dutch authorities. Facebook maintains that its practices have been compliant, but the spokesperson says that Facebook welcomes the dialogue.

“At Facebook, putting people in control of their privacy is at the heart of everything we do,” the spokesperson tells WIRED. “Over recent years, we've simplified our policies further to help people understand how we use information to make Facebook better. We’ve built teams of people who focus on the protection of privacy—from engineers to designers—and tools that give people choice and control.”

And yet the findings from the investigations don’t sound that far off from the leaked Australian document, which is partly what made the specter of preying on teen insecurity so unsettling.

It’s not a dystopian nightmare. It’s just a few clicks away from the status quo.