There are many horrifying things posted to social media. Can artificial intelligence find and block them?

Teaching Machines to Recognize (and Filter) Humanity’s Dark Side

Facebook has a problem, a very significant one, with the violent and gruesome content that has repeatedly found its way onto the social network and its Facebook Live feature, which was introduced to American users in January 2016.

The disturbing litany of murders, suicides, and assaults has already produced a series of macabre technological milestones. These include Robert Godwin Sr., the 74-year-old father of nine and grandfather of 14 who was selected at random by a gunman and murdered in a video posted to Facebook in mid-April. One week later, a man in Thailand streamed the murder of his 11-month-old daughter on Facebook Live before taking his own life. The beating and torture of an 18-year-old man with intellectual and developmental disabilities was live-streamed on the service in January, and the tragic shooting death of two-year-old Lavontay White Jr. followed a month later, on Valentine’s Day.

“At least 45 instances of violence — shootings, rapes, murders, child abuse, torture, suicides, and attempted suicides — have been broadcast via Live [since] December 2015,” BuzzFeed’s Alex Kantrowitz reported this month. “That’s an average rate of about two instances per month.”


Facebook CEO Mark Zuckerberg announced in early May that the company would add 3,000 reviewers to its community operations team to help filter offensive, unwanted, and violent content, on top of the 4,500 team members already in place around the globe.

The sheer volume of content to review is staggering. The Menlo Park, California-based company is by far the world’s largest social media network, and it has become an almost ubiquitous presence in our daily lives. Facebook now boasts 1.94 billion monthly users, and 1.28 billion daily users, according to the company’s most recent tally. And the challenge of monitoring the media shared by those billions of users is not unique to Facebook. Google is addressing similar concerns with its YouTube service, for example, and Twitter has had similar problems with its live streaming application Periscope.

A 2014 exposé in Wired magazine estimated that the number of content moderators working to scrub social media sites of the worst content humanity has to offer — violence, gore, hardcore sexual imagery — was over 100,000 and growing. That award-winning article has now evolved into a documentary film that brings viewers inside the work of this beleaguered army of often low-paid workers in India or the Philippines as they collectively scan and flag millions of posts daily. The film is harrowing and graphic, but what’s become clear from the many disturbing and wantonly criminal video streams that do get published is that human eyeballs can only do so much.

This grim reality now has tech and social media giants, along with video streaming services everywhere, scrambling to develop and deploy automated content moderation systems capable of flagging, reviewing, and removing offensive posts with more speed and precision — and they are leaning on machine learning and other forms of artificial intelligence, or AI, to do it. It’s a tall order, and despite a good deal of breathless press about the promise and prospects of these automated technologies, large hurdles remain.

The short documentary ‘The Moderators’ takes viewers inside the dark world of content moderation. It’s not pretty.


The working theory, of course, is that layered networks and predictive algorithms can be trained to mimic the behavior of neurons in the human brain and, eventually, cognitive processing. You see basic examples of deep learning every day when Amazon or Netflix offers suggestions based on your buying and browsing history. Deep learning is also the foundational technology behind self-driving cars, although, as a tragic death involving Tesla Motors’ “Autopilot” mode demonstrated last year, much work remains to be done. (The car’s on-board brain, it seems, failed to distinguish between the broadside of a tractor-trailer crossing the road ahead and the brightly lit sky behind it.)

“The algorithms used in deep learning have been around since the 1980s, but it’s taken off in the last three or four years for two reasons,” said Matt Zeiler, the founder and CEO of Clarifai, a New York City-based deep learning startup, in a phone interview. The company builds a platform for understanding visual content.

“The first is data. We have computers, tablets and mobile devices creating a huge amount of data. That creates large training sets to create these training models. The second reason: You also need massive amounts of computation power. We typically use graphics cards. Those have become programmable and very powerful in the last three or four years. I expect Facebook could build this [solution] soon,” he added.

Zeiler would know something about the social media behemoth’s AI strategy. He launched Clarifai in 2013 after completing his doctorate at New York University, where he focused on applying deep learning to image recognition. At NYU, Zeiler worked with two of the biggest names in artificial intelligence: Yann LeCun and Rob Fergus. They now lead Facebook’s AI research and operations. Zeiler also studied under cognitive psychologist and computer scientist Geoffrey Hinton at the University of Toronto. Hinton — who now splits his time between U of T and Google — is known as the “godfather” or “guru” of deep learning for his pioneering research on artificial neural networks.

As it relates to social media, the goal is simple: Train artificial intelligence systems to detect offensive imagery by feeding them massive amounts of known data, such as video clips of fighting, blood, and weapons. The systems would learn to recognize such images within seconds, ideally within a fraction of a second, and assign a sensitivity score between 0 and 1. The higher the score, the greater the confidence that something untoward appears in the footage. (The scoring approach is similar to that of the facial recognition algorithms I wrote about last month.)
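To make the mechanics concrete, here is a minimal sketch, in Python, of the thresholding step described above. The frame source and the scoring model (`score_frame`) are hypothetical stand-ins for whatever a platform actually runs, and the 0.8 cutoff is an assumed value rather than anything the companies have disclosed.

```python
from typing import Callable, Iterable, List, Tuple


def flag_frames(
    frames: Iterable,                        # decoded video frames from a clip or live stream
    score_frame: Callable[[object], float],  # hypothetical model: maps one frame to a score in [0, 1]
    threshold: float = 0.8,                  # assumed cutoff; a real system would tune this per category
) -> List[Tuple[int, float]]:
    """Return (frame_index, score) for every frame whose sensitivity score meets the threshold."""
    flagged = []
    for i, frame in enumerate(frames):
        score = score_frame(frame)
        if score >= threshold:
            flagged.append((i, score))
    return flagged


# Example with a dummy scorer standing in for a trained classifier.
if __name__ == "__main__":
    fake_scores = [0.05, 0.12, 0.91, 0.88, 0.30]
    print(flag_frames(range(len(fake_scores)), lambda i: fake_scores[i]))
    # -> [(2, 0.91), (3, 0.88)]
```

In practice, frames that clear the threshold would most likely be routed to human reviewers rather than removed automatically, consistent with how the companies describe their workflows.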

Clarifai is building a gore and violence classifier that is now in beta testing, Zeiler told me, though he suggested that it will be extremely difficult to cover every possible scenario. “Let’s say that someone may have drowned and there is an image of a person laying on a beach,” Zeiler said. “You don’t know if they have drowned or are just laying on a beach. It’s easier to classify if there is blood and gore.”

Nudity is often easier for automated detection systems — although even here, context is key. “What might be viewed as objectionable can be very different across various cultures and regions,” explained Abhijit Shanbhag, the founder and CEO of Singapore-based Graymatics, in a video call. Graymatics is another artificial intelligence startup that specializes in image and video recognition. “For instance, in the Middle East, what is viewed as objectionable can be very different from what is viewed as objectionable in the United States, Australia or Scandinavia,” he said.

Shanbhag, who launched Graymatics in 2011, has an academic background (his doctorate from the University of Southern California focused on signal processing) and has worked for Qualcomm, Ericsson Wireless, and other major tech companies. Graymatics’ software is being used to detect nudity for several “very large” social media platforms across the Middle East and Asia, he told me. The Singapore prime minister’s office has also invested in the company.

Training an artificial intelligence system to detect violent acts could involve teaching it to score “modalities” within each video frame, Shanbhag told me. For instance, consider a live feed or video that shows one person hitting another. A single frame of video would be scored on three metrics. “The fist hitting the body [is] the object-level analytic,” he explained. “The second is the activity involved or how the movement is happening. The third is audio content when it is available.”
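A rough sketch of how per-frame scores from those three signals might be combined appears below. The individual scores, the weights, and the simple weighted average are illustrative assumptions for this column, not a description of Graymatics’ actual system.

```python
from dataclasses import dataclass


@dataclass
class FrameScores:
    objects: float   # object-level signal, e.g., a weapon or a fist striking a body
    activity: float  # motion signal, e.g., how the movement unfolds across recent frames
    audio: float     # audio signal, e.g., screams or gunshots on the accompanying track


def combined_score(s: FrameScores, weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted average of the three modality scores, clamped to [0, 1]."""
    w_obj, w_act, w_aud = weights
    total = w_obj * s.objects + w_act * s.activity + w_aud * s.audio
    return max(0.0, min(1.0, total))


# A strong object-level hit with supporting motion but no useful audio:
print(combined_score(FrameScores(objects=0.9, activity=0.7, audio=0.0)))  # roughly 0.64
```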

The company has access to “a very large number” of closed-circuit television feeds, such as security and law enforcement surveillance video, that are used to train its AI systems. It also stages and records its own scenarios, providing more fodder to teach the system how to “see,” and how to understand what it is seeing.


Of course, not all violence is physical, and a massive sub-genre of internet abuse involves psychological violence like bullying, threats, and intimidation. Could AI identify video footage of such acts? Shanbhag seems to think so. “This becomes more critical as some people are becoming more innovative, unfortunately, in carrying out different types of violence,” he said. “There is a different sensitivity score for different kinds of violence. Psychological torture might rank very high on the audio modality, but may not rank very high with respect to activities or images. That’s why the system will be perpetually self-learning to track new kinds of activities.”

But again, such evolution will take time, and even Zuckerberg has said that truly automated video content monitoring is still years away. In a public statement this month, the company responded to the recent spate of terrorist attacks, and to the criticism that social media companies have sustained for serving as breeding grounds of extremism, by describing the myriad technologies and systems that Facebook deploys to combat the problem. These include artificial intelligence for image and language analysis, the detection of relationship patterns, and other algorithmic wizardry.

But for now, the company conceded, it will also take human eyes and brains — and that might well be the case for a long time.

“AI can’t catch everything,” wrote Monika Bickert, Facebook’s director of global policy management, and Brian Fishman, the company’s counterterrorism policy manager. “Figuring out what supports terrorism and what does not isn’t always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context. A photo of an armed man waving an ISIS flag might be propaganda or recruiting material, but could be an image in a news story,” the executives noted.

“To understand more nuanced cases,” they added, “we need human expertise.”

Rod McCullom is a Chicago-based science journalist and senior contributor to Undark whose work has been published by Scientific American, Nature, The Atlantic, and MIT Technology Review, among other publications.