Instagram CEO Kevin Systrom on Free Speech, Artificial Intelligence, and Internet Addiction.

Instagram's Kevin Systrom wants to clean up the %!@$ internet.

I sat down with Kevin Systrom, the CEO of Instagram, in June to interview him for my feature story, "Instagram's CEO Wants to Clean Up the Internet," and for "Is Instagram Going Too Far to Protect Our Feelings?," a special that ran on CBS this week.

It was a long conversation, but here is a 20-minute overview in which Systrom talks about the artificial intelligence Instagram has been developing to filter out toxic comments before you even see them. He also discusses free speech, the possibility of Instagram becoming too bland, and whether the platform can be considered addictive. Our conversation occurred shortly before Instagram introduced the AI to the public.

A transcript of the conversation follows.

Nicholas Thompson, Editor-in-Chief: Morning, Kevin.

Kevin Systrom, CEO of Instagram: Morning! How are you?

NT: Doing great. So what I want to do in this story is I want to get into the specifics of the new product launch and the new things you're doing and the stuff that's coming out right now and the machine learning. But I also want to tie it to a broader story about Instagram, and how you decided to prioritize niceness and how it became such a big thing for you and how you reoriented the whole company. So I'm gonna ask you some questions about the specific products and then some bigger questions.

KS: I'm down.

NT: All right, so let's start at the beginning. I know that from the very beginning you cared a lot about comments. You cared a lot about niceness and, in fact, you and your co-founder Mike Krieger would go in early on and delete comments yourselves. Tell me about that.

KS: Yeah. Not only would we delete comments but we did the unthinkable: We actually removed accounts that were being not so nice to people.

NT: So for example, whom?

KS: Yeah, well, I don't remember exactly whom, but the back story is that my wife is one of the nicest people you'll ever meet. And that bleeds over to me, and I try to model it. So when we were starting the app, we watched this video, basically about how to start a company. And it was by this guy who started the LOLCats meme, and he basically said, "To form a community you need to do something," and he called it "Prune the trolls." And Nicole would always joke with me; she's like, "Hey, listen, when your community is getting rough, you gotta prune the trolls." And that's something she still says to me today, to remind me of the importance of community, but also of how important it is to be nice. So back in the day we would go in, and if people were mistreating people, we'd just remove their accounts. I think that set an early tone for the community to be nice and welcoming.

NT: But what's interesting is that this is 2010, and 2010 is a moment where a lot of people are talking about free speech and the internet, and Twitter's role in the Iranian revolution. So it was a moment where free speech was actually valued on the internet, probably more than it is now. How did you end up being more in the "prune the trolls" camp?

KS: Well, there's an age-old debate about free speech—what is the limit of free speech, and is it free speech to just be mean to someone? And I think if you look at the history of the law around free speech, you'll find that generally there's a line you don't want to cross, where you're starting to be aggressive or mean or racist. And in a closed community that's trying to grow and thrive, you want to make sure that you actually optimize for overall free speech. So if I don't feel like I can be myself, if I don't feel like I can express myself because if I do I will get attacked, that's not a community we want to create. So we just decided to be on the side of making sure that we optimized for speech that was expressive, where you felt like you had the freedom to be yourself.

NT: So, one of the foundational decisions at Instagram that helped make it nicer than some of your peers was the decision not to allow re-sharing—to not allow something that I put out there to be appropriated by someone else and sent out into the world by someone else. How was that decision made, and were there other foundational design and product decisions that were made because of niceness?

KS: We debate the re-share thing a lot. Because obviously people love the idea of re-sharing content that they find. Instagram is full of awesome stuff. In fact, one of the main ways people communicate over Instagram Direct now is actually by sharing content that they find on Instagram. So that's been a debate over and over again. But really that decision is about keeping your feed focused on the people you know, rather than on the people you know finding other stuff for you to see. And I think that is more a testament to our focus on authenticity and on the connections you actually have than to anything else.

NT: So after you went to VidCon, you posted an image on your Instagram feed of you and a bunch of celebrities.

KS: Totally, in fact it was a Boomerang.

NT: It was a Boomerang, right! So I'm going to read some of the comments on @kevin's post.

KS: Sure.

NT: These are the comments: "Succ," "Succ," "Succ me," "Succ," "Can you make Instagram have auto-scroll feature? That would be awesome and expand Instagram as a app that could grow even more," "#memelivesmatter," "you succ," "you can delete memes but not cancer patients," "I love #memelivesmatter," "#allmemesmatter," "succ," "#MLM," "#memerevolution," "cuck," "mem," "#stopthememegenocide," "#makeinstagramgreatagain," "#memelivesmatter," "#memelivesmatter," "mmm," "gang," "melon gang"—I'm not quite sure what all this means. Is this typical?

KS: It was typical, but I'd encourage you to go to my last post, which I posted for Father's Day.

NT: Your last post is all nice!

KS: It's all nice.

NT: They're all about how handsome your father is.

KS: Right? Listen, he is taken. My mom is wonderful. But there are a lot of really wonderful comments there.

NT: So why is this post from a year ago full of "cuck" and "#memelivesmatter" and the most recent post is full of how handsome Kevin Systrom's dad is?

KS: Well, that's a good question. I would love to be able to explain it, but the first thing is that back then there were a bunch of people who I think were unhappy about the way Instagram was managing accounts. And there are groups of people that like to get together and band up and bully people, but it's a good example of how someone can get bullied, right? The good news is I run the company and I have a thick skin and I can deal with it. But imagine you're someone who's trying to express yourself about depression or anxiety or body image issues and you get that. Does that make you want to come back and post on the platform? And if you're seeing that, does that make you want to be open about those issues as well? No.

So a year ago I think we had much more of a problem. But over that year we focused on comment filtering, so now you can go in and enter your own words and basically filter out comments that include those words. We have spam filtering that works pretty well, so probably a bunch of those would have been caught by our spam filter because they were repeated comments. And also just a general awareness of kind comments. We have this awesome campaign that we started called #kindcomments. I don't know if you know the late-night show where they read off mean comments from another social platform; we started kind comments to basically set a standard in the community that it was better and cooler to actually leave kind comments. And now there is this amazing meme that has spread throughout Instagram about leaving kind comments. You can see the marked difference between the post about Father's Day and that post a year ago, and what technology can do to create a kinder community. And I think we're making progress, which is the important part.

NT: Tell me about sort of steps one, two, three, four, five. How do you — you don’t automatically decide to launch the seventeen things you’ve launched since then? Tell me about the early conversations.

KS: The early conversations were really about what problem we were solving, and we looked to the community for stories. We talked to community members. We have a giant community team here at Instagram, which I think is pretty unique for technology companies. Literally, their job is to interface with the community, get feedback, and highlight members who are doing amazing things on the platform. So getting that type of feedback from the community about what types of problems they were experiencing in their comments led us to brainstorm about all the different things we could build. And what we realized was there was this giant wave of machine learning and artificial intelligence—and Facebook had developed this thing that basically—it's called DeepText—

NT: Which launches in June of 2016, so it’s right there.

KS: Yup. So they have this technology, and we put two and two together and we said: You know what? I think if we get a bunch of people to look at comments and rate them good or bad—like when you go on Pandora and you listen to a song, is it good or is it bad—get a bunch of people to do that. That's your training set. And then what you do is you feed it to the machine learning system and you let it train on 80 percent of it while you hold out the other 20 percent of the comments. And then you say, "Okay, machine, go and rate these comments for us based on the training set," and we see how well it does and we tweak it over time. Now we're at a point where this machine learning can detect a bad comment or a mean comment with amazing accuracy—basically a 1 percent false positive rate. So throughout that process of brainstorming, looking at the technology available, training this filter over time with real humans who are deciding this stuff, and gathering feedback from our community and from our team about how it works, we were able to create something we're really proud of.
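Instagram hasn't published its pipeline, but the loop Systrom describes (collect human ratings, train on 80 percent, hold out 20 percent, then tune a threshold for a 1 percent false positive rate) can be sketched in a few lines. Everything below, from the toy comments to the TF-IDF model, is a stand-in for illustration, not Instagram's actual system, which is built on Facebook's DeepText.

```python
# A minimal sketch of the train/hold-out/threshold loop described above,
# using scikit-learn. All data and model choices are illustrative stand-ins.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy "training set": comments rated by humans (1 = bad, 0 = good),
# repeated so the 80/20 split has enough samples to fit a model.
comments = ["you succ", "cuck", "#memelivesmatter",
            "what a beautiful photo", "congrats on the launch", "love this"] * 50
labels = [1, 1, 1, 0, 0, 0] * 50

# Train on 80 percent, hold out 20 percent, exactly as described.
X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.2, random_state=0)

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(X_train), y_train)

# Score the held-out comments from 0 to 1, then pick the threshold at which
# only about 1 percent of genuinely good comments would be flagged as bad.
scores = clf.predict_proba(vec.transform(X_test))[:, 1]
y_test = np.array(y_test)
good_scores = scores[y_test == 0]
threshold = np.quantile(good_scores, 0.99)
false_positive_rate = np.mean(good_scores >= threshold)
print(f"threshold={threshold:.3f}, false positive rate={false_positive_rate:.1%}")
```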

NT: So when you launch it you make a very important decision: Do you want it to be aggressive, in which case it'll probably knock out some stuff it shouldn't? Or do you want it to be a little less aggressive, in which case a lot of bad stuff will get through?

KS: Yeah, this is the classic problem. If you go for accuracy, you will misclassify a bunch of stuff that actually was pretty good. So you know, if you're my friend and I go on your photo and I'm just joking around with you and giving you a hard time, Instagram should let that through, because we're friends and I'm just giving you a hard time and that's funny banter back and forth. Whereas if you don't know me and I come on and I make fun of your photo, that feels very different. Understanding the nuance between those two is super important, and the thing we don't want to do is have any instance where we block something that shouldn't be blocked. The reality is it's going to happen. So the question is, is that margin of error worth it for all the really bad stuff that gets blocked? That's a fine balance to figure out, and something we're working on. We trained the filter basically to have a 1 percent false positive rate. So that means 1 percent of things that get marked as bad are actually good. And that was a top priority for us, because we're not here to curb free speech, we're not here to curb fun conversations between friends, but we want to make sure we are largely attacking the problem of bad comments on Instagram.

NT: And so you go, and every comment that goes in gets sort of run through an algorithm, and the algorithm gives it a score from 0 to 1 on whether it's likely a comment that should be filtered or a comment that should not be filtered, right? And then that score is combined with the relationship of the two people?

KS: No, the score actually is influenced by the relationship of the people.

NT: So the original score is influenced by that. And Instagram, I believe—if I have this correct—has something like a karma score for every user, where the number of times they've been flagged or the number of critiques made of them is added into something on the back end. Does that go into this too?

KS: So without getting into the magic sauce—you're asking, like, Coca-Cola to give up its recipe—I'll tell you that there's a lot of complicated stuff that goes into it. But basically it looks at the words, it looks at our relationship, and it looks at a bunch of other signals, including account age, account history, and that kind of stuff. It combines all those signals and then it spits out a score from 0 to 1 of how likely the comment is to be bad. And then basically you set a threshold that optimizes for a 1 percent false positive rate.
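To make the shape of that concrete (and only the shape, since Systrom is explicitly keeping the recipe secret), here is one invented way such signals could fold into a single 0-to-1 score with a flagging threshold. Every signal name, weight, and constant below is hypothetical.

```python
import math

# Hypothetical signals and weights, invented for illustration; the real
# "magic sauce" combining words, relationship, and account history is secret.
def badness_score(text_score: float, sender_follows_author: bool,
                  account_age_days: int, prior_flags: int) -> float:
    """Fold several signals into one score in (0, 1); higher = likely bad."""
    z = (4.0 * text_score                       # what the words themselves say
         - 1.5 * sender_follows_author          # friends get benefit of the doubt
         - 0.002 * min(account_age_days, 1000)  # older accounts, less suspicion
         + 0.5 * prior_flags                    # history of being flagged
         - 1.0)                                 # bias term
    return 1.0 / (1.0 + math.exp(-z))           # squash to (0, 1)

THRESHOLD = 0.90  # in the real system, tuned for a 1 percent false positive rate

score = badness_score(text_score=0.8, sender_follows_author=False,
                      account_age_days=3, prior_flags=2)
print(f"{score:.2f}", "hidden" if score >= THRESHOLD else "shown")
```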

NT: When do you decide it's ready to go?

KS: At the point where the accuracy gets to a level that internally we're happy with. So one of the things we do here at Instagram is this thing called dogfooding—not a lot of people know this term, but in the tech industry it means, you know, eat your own dog food. So we take the products and we always apply them to ourselves before we go out to the community. And there are these amazing groups on Instagram—I would love to take you through them, but they're actually all confidential—where employees give feedback about how they feel about specific features.

NT: So this is live on the phones of a bunch of Instagram employees right now?

KS: There are always features that are not launched that are live on Instagram employees' phones, including things like this.

NT: So there's a critique of a lot of the advances in machine learning: that the corpus on which they're based has biases built into it. DeepText analyzed all Facebook comments—analyzed some massive corpus of words that people have typed into the internet. When you analyze those, you get certain biases built in. So for example, I was reading a paper in which someone had taken a corpus of text and created a machine learning algorithm to rank restaurants—to look at the comments people had written about restaurants and then try to guess the quality of the restaurants. He went through and ran it, and he was like, "Interesting," because all of the Mexican restaurants were ranked badly. Why is that? Well it turns out, as he dug deeper into the algorithm, it's because in a massive corpus of text the word "Mexican" is associated with "illegal"—"illegal Mexican immigrant"—because that phrase is used so frequently. So there are lots of slurs attached to the word "Mexican," and the word has negative connotations in the machine-learned corpus, which then drags down the rankings of Mexican restaurants.
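The failure mode Thompson describes is easy to reproduce in miniature. In the toy sketch below, with a corpus and model made up purely for illustration, a bag-of-words sentiment model learns a negative weight for the word "Mexican" because the training text pairs it only with negative contexts, and an innocuous review pays the price.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented corpus in which "mexican" appears only in negative contexts.
texts = ["illegal mexican immigrant", "mexican cartel violence",
         "great italian food", "lovely italian dinner"]
labels = [0, 0, 1, 1]  # 0 = negative sentiment, 1 = positive

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# Two equally innocent reviews; the one mentioning "mexican" scores lower
# because the model has absorbed the corpus's bias into that single word.
for review in ["great mexican food", "great italian food"]:
    positive = clf.predict_proba(vec.transform([review]))[0, 1]
    print(f"{review!r}: P(positive) = {positive:.2f}")
```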

KS: That sounds awful.

NT: So how do you deal with that?

KS: Well, the good news is we're not in the business of ranking restaurants.

NT: But you are ranking sentences based on this huge corpus of text that Facebook has analyzed as part of DeepText.

KS: It's a little bit more complicated than that. All of our training comes from Instagram comments. We have hundreds of raters, and it's actually pretty interesting what we've done with this set of raters. They're human beings who sit there—and by the way, human beings are not unbiased, that's not what I'm claiming—but you have human beings, and each of those raters is bilingual. So they speak two languages, they have a diverse perspective, they're from all over the world. And they rate those comments, basically thumbs up or thumbs down, across the Instagram corpus, right?

So you feed it a thumbs up or thumbs down from an individual. And you might say, "But wait, isn't a single individual biased in some way?" Which is why we make sure every comment is actually seen and given a rating by at least two people, to make sure there is as little bias in the system as possible. And then on top of that, we also get feedback from not only our team but also the community, and we're able to tweak things on the margin to make sure things like that don't happen. I'm not claiming that it won't happen—that's of course a risk—but the biggest risk of all is doing nothing because we're afraid of these things happening. And I think it's more important that we are A) aware of them, B) monitoring them actively, and C) making sure we have a diverse group of raters who not only speak two languages but are from all over the world and represent different perspectives, to make sure we have an unbiased classifier.
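A minimal sketch of that two-rater rule follows, under one assumption Systrom doesn't spell out: what happens when raters disagree (here, the comment is simply dropped from the training set).

```python
from collections import defaultdict

# Each comment is rated independently by at least two bilingual raters.
# Tuples: (comment_id, rater_id, label); label 1 = bad comment, 0 = good.
ratings = [
    ("c1", "rater_a", 1), ("c1", "rater_b", 1),   # unanimous: keep
    ("c2", "rater_a", 0), ("c2", "rater_c", 1),   # disagreement: drop
]

by_comment = defaultdict(list)
for comment_id, _rater, label in ratings:
    by_comment[comment_id].append(label)

# Keep only comments rated by two or more people who all agree.
training_labels = {
    comment_id: labels[0]
    for comment_id, labels in by_comment.items()
    if len(labels) >= 2 and len(set(labels)) == 1
}
print(training_labels)  # {'c1': 1}
```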

NT: So let's take a sentence like "These hoes ain't loyal," which is a phrase that I believe a previous study on Twitter had a lot of trouble with. Your theory is that some people will say, "Oh, that's a lyric, therefore it's okay," some people won't know it and it will get through, but enough raters looking at enough comments over time will allow lyrics to get through. So "These hoes ain't loyal"—I can post that on your Instagram feed if you post a picture that deserves that comment.

KS: Well, what I would counter is: if you showed that sentence to any person watching this, not a single one of them would say that's a mean-spirited comment to any of us, right?

NT: Right.

KS: So I think that one's pretty easy to get right. But there are more nuanced examples, and I think that's the spirit of your question: there are grey areas. The whole idea of machine learning is that it's far better at understanding those nuances than any algorithm in the past, or any single human being could be. And I think what we have to do over time is figure out how to get into that grey area, and judge the performance of this algorithm over time to see if it actually improves things. Because by the way, if it causes trouble and it doesn't work, we'll scrap it and start over with something new. But the whole idea here is that we're trying something. A lot of the fears that you're bringing up are warranted, but that is exactly what keeps most companies from even trying in the first place.

NT: And so first you're going to launch this filtering of bad comments, and then the second thing you're going to do is the elevation of positive comments. Tell me about how that is going to work and why that's a priority.

KS: The elevation of positive comments is more about modeling in the system. We've seen a bunch of times in the system this thing we call the mimicry effect. If you raise kind comments, you actually see more kind comments, or you see more people leaving kind comments. It's not that we ever ran this test, but I'm sure if you raised a bunch of mean comments you would see more mean comments. Part of this is the piling-on effect, and I think by modeling what great conversations are, more people will see Instagram as a place for that, and less for the bad stuff. And it's got this interesting psychological effect: people want to fit in and people want to do what they're seeing, and that means people are more positive over time.

NT: And are you at all worried that you're going to turn Instagram into the equivalent of an East Coast liberal arts college?

KS: I think those of us who grew up on the East Coast might take offense to that *laughs* I'm not sure what you mean exactly.

NT: I mean a place where there are trigger warnings everywhere, where people feel like they can't have certain opinions, where people feel like they can't say things. Where you put this sheen over all your conversations, as though everything in the world is rosy, and the bad stuff, we're just going to sweep it under the rug.

KS: Yeah, that would be bad. That's not something we want. I think in the range of bad, we're talking about the lowest five percent. Like the really, really bad stuff. I don't think we're trying to play anywhere in the area of grey—although I realize there's no black and white, and we're going to have to play there at some level. But the idea here is to take out, I don't know, the bottom five percent of nasty stuff. And I don't think anyone would argue that that makes Instagram a rosy place; it just makes it not a hateful place.

NT: And you wouldn't want all of the comments on your—you know, on your VidCon post, it's a mix of jokes, and nastiness, and vapidity, and useful product feedback. You're getting rid of the nasty stuff, but wouldn't it be better if you raised the best product feedback and the funny jokes to the top?

KS: Maybe. And maybe that's a problem we'll decide to solve at some point. But right now we're just focused on making sure that people don't feel hate, you know? And I think that's a valid thing to go after, and I'm excited to do it.

NT: So the thing that interests me the most is that it's like Instagram is a world with 700 million people, and you're writing the constitution for the world. When you get up in the morning and you think about that power, that responsibility, how does it affect you?

KS: Doing nothing felt like the worst option in the world. Starting to tackle it means that we can improve the world; we can improve the lives of the many young people in the world who live on social media. I don't have kids yet; I will someday, and I hope that kid, boy or girl, grows up in a world where they feel safe online, where I as a parent feel like they're safe online. And you know the cheesy saying: with great power comes great responsibility. We take on that responsibility, and we're going to go after it. There are all sorts of issues that come with acting—you've highlighted a number of them today—but that doesn't mean we shouldn't act. It just means we should be aware of them and monitor them over time.

NT: One of the critiques is that Instagram, particularly for young people, is very addictive. And in fact there's a critique being made by Tristan Harris, who was a classmate of yours and of Mike's. He says that the design of Instagram deliberately addicts you. For example, when you open it up it just—

KS: Sorry, I'm laughing just because I think the idea that anyone inside here tries to design something that is maliciously addictive is just so far-fetched. We try to solve problems for people, and if by solving those problems people like to use the product, I think we've done our job well. This is not a casino; we are not trying to eke money out of people in a malicious way. The idea of Instagram is that we create something that allows people to connect with their friends, and their family, and their interests—positive experiences—and I think any criticism of building that system is unfounded.

NT: So all of this is aimed at making Instagram better. And it sounds like the changes so far have made Instagram better. Is any of it aimed at making people better? Is there any chance that the changes that happen on Instagram will seep into the real world, and that maybe, just a little bit, the conversations in this country will be more positive than they've been?

KS: I sure hope we can stem any negativity in the world. I'm not sure we would have signed up for that on day one. But I actually want to challenge the initial premise, which is that this is about making Instagram better. I actually think it's about making the internet better. I hope someday the technology that we develop, the training sets we develop, and the things we learn, we can pass on to startups, pass on to our peers in technology, and together we actually build a kinder, safer, more inclusive community online.

NT: Will you open source the software you've built for this?

KS: I'm not sure. I'm not sure. I think a lot of it comes back to how well it performs, and the willingness of our partners to adopt it.

NT: But what if this fails? What if people actually get kind of turned off by Instagram, and they say, "Instagram's becoming like Disneyland, I don't want to be there," and they share less?

KS: The thing I love about Silicon Valley is that we've bear-hugged failure. Failure is what we all start with, what we go through, and what we hopefully don't end on, on our way to success. I mean, Instagram wasn't Instagram initially. It was a failed startup before. I turned down a bunch of job offers that would have been really awesome along the way. That was failure. I've had numerous product ideas at Instagram that were total failures. And that's okay. We bear-hug it, because when you fail at least you're trying. And I think that's actually what makes Silicon Valley different from traditional business: our tolerance for failure here is so much higher. And that's why you see bigger risks, and also bigger payoffs.