
The Algorithms Aren’t Working for the Rest of Us

Code carries human bias and mistakes.

I used to enjoy Instagram. Photos in the feed were displayed in reverse-chronological order. The newest posts were at the top, and the oldest posts at the bottom. It was easy to scroll through. Then, last year, Instagram changed the algorithm. Now, pictures appear in an order I don't understand. A snapshot from a birthday party three days ago is displayed next to a selfie only hours old. Time on Instagram now feels distorted.
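For what it's worth, the old ordering was about as simple as feed logic gets. Here's a minimal sketch in Python of a reverse-chronological feed; the Post fields and sample data are my own illustration, not Instagram's actual code:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        caption: str
        posted_at: datetime

    def chronological_feed(posts):
        # Newest first: sort by timestamp, descending. No mystery.
        return sorted(posts, key=lambda p: p.posted_at, reverse=True)

    feed = chronological_feed([
        Post("friend_a", "birthday party", datetime(2017, 9, 22, 18, 30)),
        Post("friend_b", "selfie", datetime(2017, 9, 25, 9, 15)),
    ])
    for post in feed:
        print(post.posted_at, post.author, post.caption)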

The algorithm stopped working for me. The change was a benign reminder that the engineers behind Instagram's algorithm (and the algorithms of other social networks like it) ultimately decide what information I get to see, and in what order.

Now, other algorithms aren't working, and this time it's worse. Oversights uncovered by journalists over the last month in the systems that help run Facebook, Instagram, Google, and Twitter prove again that code carries human bias and mistakes. Algorithms aren't perfect mathematical equations; they're designed by people, and like the people who design them, they have flaws.

It's likely the algorithms haven't been working for some time. But now, we're taking notice:

  • Facebook's advertising algorithm blocked an ad for a march against white supremacy in Portland, Oregon. The ad was up for two hours last month, then got removed. The company won't say why.
  • The company's advertising algorithm also let marketers target users who expressed interest in topics like "Jew hater." Facebook automatically generated the ad category based on users' online activity. In an ironic twist, Facebook then sent an automated email to a ProPublica reporter asking if she wanted to buy an ad to promote her story exposing Facebook's anti-Semitic advertising categories.
  • Facebook also let marketers aim their ads at users who expressed interest in other hateful ideas, like "killing bitches" or "threesome rape."
  • Most startlingly, a troll farm with ties to the Russian government bought $100,000 worth of Facebook ads targeted at US voters from 2015 to 2017. The ads were designed to escalate social tensions and sow political unrest.
  • Twitter's advertising platform let users target people who used racist phrases like the n-word.
  • Google let marketers reach people using key phrases such as "black people ruin neighborhoods."
  • Google's search tool unmasked the names of young criminal offenders and their victims, which are sealed by law in Canada. A search for the name of an offender or victim returned media coverage of their court cases, even though their names do not appear in the articles themselves.
  • Instagram's algorithm used an image with the text "I will rape you before I kill you, you filthy whore!" to advertise its platform on Facebook. The post, made nearly a year ago by the Guardian tech writer Olivia Solon, was a screenshot of a threatening email she received; it was one of her most "engaged" posts, which might be why the algorithm selected it.

These won't be the last news stories to chronicle the algorithms' blind spots. The only reason a company like Facebook can accommodate more than two billion users, and generated more than $9 billion in ad revenue last quarter, is that it automates its services to some degree. That automation is essential to its business, and to that of every major internet platform.

But as the avalanche of examples uncovered over the last month shows, the algorithms just aren't designed with every bias or potential pitfall in mind. Facebook's automated programs are intended to generate eyeballs for advertisements. They aren't built to positively shape our society, or to ensure we learn more about the world around us.
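To make the contrast concrete, here's a toy sketch of what engagement-optimized ranking looks like next to the old chronological sort. The scoring weights are invented for illustration and are nothing like Facebook's actual formula, which uses far more signals:

    # Toy engagement ranker: score posts by interactions, ignore everything else.
    # The weights below are made up for illustration, not any platform's real ones.
    def engagement_score(post):
        return post["likes"] + 3 * post["comments"] + 5 * post["shares"]

    def engagement_feed(posts):
        return sorted(posts, key=engagement_score, reverse=True)

    posts = [
        {"caption": "quiet landscape", "likes": 40, "comments": 2, "shares": 0},
        {"caption": "outrage bait", "likes": 90, "comments": 60, "shares": 30},
    ]
    print([p["caption"] for p in engagement_feed(posts)])
    # -> ['outrage bait', 'quiet landscape']: the provocative post wins.

Optimize for interactions and the inflammatory post floats to the top, whether or not it's good for the people reading it.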

Clearly, Facebook is an incredibly successful company. Mark Zuckerberg is the fifth richest person in the world. To be fair, it seems like the algorithms are working for him just fine. They're just not working for the rest of us.