
Surveillance-based manipulation: How Facebook or Google could tilt elections

From Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World.

Bruce Schneier is a cryptographer and security expert who has been blogging on these topics since 2004. He is the author of numerous books, including Carry On and Liars and Outliers. The following is an excerpt from his latest book, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. Copyright © 2015 by Bruce Schneier. With permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.

Someone who knows things about us has some measure of control over us, and someone who knows everything about us has a lot of control over us. Surveillance facilitates control.

Manipulation doesn’t have to involve overt advertising. It can be product placement that makes sure you see pictures that have a certain brand of car in the background. Or just increasing how often you see those cars. This is, essentially, the business model of search engines. In their early days, there was talk about how an advertiser could pay for better placement in search results. After public outcry and subsequent guidance from the FTC, search engines began visually differentiating between “natural” results produced by the ranking algorithm and paid results: paid results are framed in yellow on Google and in pale blue on Bing. This worked for a while, but recently the trend has shifted back. Google is now accepting money to insert particular URLs into search results, and not just in the separate advertising areas. We don’t know how extensive this is, but the FTC is again taking an interest.

When you’re scrolling through your Facebook feed, you don’t see every post by every friend; what you see has been selected by an automatic algorithm that’s not made public. But someone can pay to increase the likelihood that their friends or fans will see their posts. Corporations paying for placement is a big part of how Facebook makes its money. Similarly, a lot of those links to additional articles at the bottom of news pages are paid placements.
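To make the mechanism concrete, here is a minimal, hypothetical sketch of pay-to-boost feed ranking. Facebook’s actual algorithm is not public; the Post fields, the paid_boost multiplier, and the scoring rule below are assumptions for illustration only.

```python
# Hypothetical sketch of pay-to-boost feed ranking (not Facebook's actual
# algorithm, which is not public). Each post gets a relevance score, and a
# paid "boost" multiplier raises the chance that friends or fans see it.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement: float   # predicted interest for this viewer, 0..1
    paid_boost: float   # 1.0 = organic; >1.0 = promoted placement

def rank_feed(posts: list[Post], slots: int = 10) -> list[Post]:
    """Order posts by score and return only the top `slots` the viewer will see."""
    return sorted(posts, key=lambda p: p.engagement * p.paid_boost, reverse=True)[:slots]

feed = [
    Post("friend_a", engagement=0.9, paid_boost=1.0),
    Post("brand_x", engagement=0.4, paid_boost=3.0),   # paid placement outranks a more relevant post
    Post("friend_b", engagement=0.7, paid_boost=1.0),
]
for post in rank_feed(feed, slots=2):
    print(post.author)   # brand_x, friend_a: the paid post wins a slot, friend_b is never shown
```

The point of the sketch is only that whoever sets the scoring rule, and whoever pays to change it, decides what you see.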

The potential for manipulation here is enormous. Here’s one example. During the 2012 election, Facebook users had the opportunity to post an “I Voted” icon, much like the real stickers many of us get at polling places after voting. There is a documented bandwagon effect with respect to voting; you are more likely to vote if you believe your friends are voting, too. This manipulation had the effect of increasing voter turnout by 0.4% nationwide. So far, so good. But now imagine if Facebook had manipulated the visibility of the “I Voted” icon based on either party affiliation or some decent proxy for it: ZIP code of residence, blogs linked to, URLs liked, and so on. It didn’t, but if it had, the effect would have been to increase voter turnout on one side only. It would be hard to detect, and it wouldn’t even be illegal. Facebook could easily tilt a close election by selectively manipulating what posts its users see. Google might do something similar with its search results.
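Some back-of-the-envelope arithmetic shows how a one-sided nudge of that size could flip a close race. The electorate size, vote shares, and baseline turnout below are made-up illustrative numbers; only the 0.4% turnout effect comes from the text above.

```python
# Illustrative arithmetic: a 0.4-percentage-point turnout boost applied to only
# one side of a near-even electorate. All figures are hypothetical.
electorate = 10_000_000            # hypothetical number of eligible voters
share_a, share_b = 0.4985, 0.5015  # hypothetical near-even split among those who vote
turnout = 0.60                     # baseline turnout for both sides
boost = 0.004                      # turnout bump from seeing the icon (the 0.4% effect)

votes_a = electorate * share_a * turnout
votes_b = electorate * share_b * turnout
print(f"Baseline margin (A minus B): {votes_a - votes_b:,.0f} votes")   # about -18,000: A loses

# Show the icon only to likely Party A supporters, so only their turnout rises.
votes_a_boosted = electorate * share_a * (turnout + boost)
print(f"Margin after one-sided boost: {votes_a_boosted - votes_b:,.0f} votes")  # about +1,900: A wins
```

With these made-up numbers, a race that Party A would have lost by roughly 18,000 votes becomes a narrow win, and nothing about the intervention is visible to voters.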

A truly sinister social networking platform could manipulate public opinion even more effectively. By amplifying the voices of people it agrees with, and dampening those of people it disagrees with, it could profoundly distort public discourse. China does this with its 50 Cent Party: people hired by the government to post comments on social networking sites supporting party positions and to challenge comments opposing them. Samsung has done much the same thing.

Many companies manipulate what you see based on your user profile: Google search, Yahoo News, even online newspapers like The New York Times. This is a big deal. The first listing in a Google search result gets a third of the clicks, and if you’re not on the first page, you might as well not exist. The result is that the Internet you see is increasingly tailored to what your profile indicates your interests are. This leads to a phenomenon that political activist Eli Pariser has called the “filter bubble”: an Internet optimized to your preferences, where you never have to encounter an opinion you don’t agree with. You might think that’s not too bad, but on a large scale it’s harmful. We don’t want to live in a society where everybody only ever reads things that reinforce their existing opinions, where we never have spontaneous encounters that enliven, confound, confront, and teach us.

In 2012, Facebook ran an experiment in control. It selectively manipulated the newsfeeds of 680,000 users, showing them either happier or sadder status updates. Because Facebook constantly monitors its users (that’s how it turns its users into advertising revenue), it was easy to monitor the experimental subjects and collect the results. It found that people who saw happier posts tended to write happier posts, and vice versa. I don’t want to make too much of this result. Facebook only did this for a week, and the effect was small. But once sites like Facebook figure out how to do this effectively, doing so will be very profitable. Advertisers have already taken note of research finding that women feel less attractive on Mondays, and also when they feel lonely, fat, or depressed. We’re already seeing the beginnings of systems that analyze people’s voices and body language to determine mood; companies want to better determine when customers are getting frustrated and when they can be most profitably upsold. Manipulating those emotions to better market products is the sort of thing that’s acceptable in the advertising world, even if it sounds pretty horrible to us.
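For intuition, here is a toy sketch of the kind of sentiment filtering the experiment describes. The word lists, scoring, and sample posts are stand-ins; the classifier, data, and methodology actually used in the study are not reproduced here.

```python
# Toy sketch of sentiment-filtered feeds, in the spirit of the 2012 experiment
# described above. Everything here is a simplified stand-in.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def sentiment(text: str) -> int:
    """Crude word-count sentiment score: positive words minus negative words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def filtered_feed(posts: list[str], suppress: str) -> list[str]:
    """Return the feed with either negative or positive posts withheld."""
    if suppress == "negative":
        return [p for p in posts if sentiment(p) >= 0]
    return [p for p in posts if sentiment(p) <= 0]

posts = ["Such a great day, I love it", "This is awful and I hate it", "Lunch was fine"]
print(filtered_feed(posts, suppress="negative"))  # the "happier feed" condition
print(filtered_feed(posts, suppress="positive"))  # the "sadder feed" condition
```

The users in each condition see a feed that looks complete; only the operator knows which posts were withheld, and only the operator can measure the effect.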

This is all made easier because of the centralized architecture of so many of our systems. Companies like Google and Facebook sit at the center of our communications. This gives them enormous power to manipulate and control.

There are unique harms that come from using surveillance data in politics. Election politics is very much a type of marketing, and politicians are starting to use personalized marketing’s capability to discriminate as a way to track voting patterns and better “sell” a candidate or policy position. Candidates and advocacy groups can create ads and fundraising appeals targeted to particular categories: people who earn more than $100,000 a year, gun owners, people who have read news articles on one side of a particular issue, unemployed veterans... anything you can think of. They can target outraged ads to one group of people, and thoughtful policy-based ads to another. They can also finely tune their get-out-the-vote campaigns on Election Day and more efficiently gerrymander districts between elections. This will likely have fundamental effects on democracy and voting.
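As a sketch of how that kind of audience segmentation might look in practice, here is a hypothetical filter over made-up voter profiles. The fields, thresholds, and ad categories are illustrative only and are not drawn from any real ad platform.

```python
# Hypothetical audience segmentation for targeted political ads.
# Profile fields and thresholds are invented for illustration.
profiles = [
    {"id": 1, "income": 120_000, "gun_owner": True,  "articles_read": ["tax_cut_oped"]},
    {"id": 2, "income": 45_000,  "gun_owner": False, "articles_read": ["climate_report"]},
    {"id": 3, "income": 110_000, "gun_owner": False, "articles_read": ["climate_report"]},
]

def segment(profiles, predicate):
    """Return the IDs of everyone matching an arbitrary targeting rule."""
    return [p["id"] for p in profiles if predicate(p)]

# Outraged ads to one group, thoughtful policy ads to another.
outrage_audience = segment(profiles, lambda p: p["gun_owner"] and "tax_cut_oped" in p["articles_read"])
policy_audience  = segment(profiles, lambda p: p["income"] > 100_000 and not p["gun_owner"])
print(outrage_audience, policy_audience)   # [1] [3]
```

Each group sees only the message crafted for it, and neither group knows what the other was shown.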

Psychological manipulation—based both on personal information and control of the underlying systems—will get better and better. Even worse, it will become so good that we won’t know we’re being manipulated.
