Facebook's Head of AI Wants to Teach Chatbots Common Sense

Facebook is already disconcertingly good at recognizing faces in photos. But the company's director of artificial intelligence wants to push AI even further
Facebook's Yann LeCun speaks with Cade Metz at WIRED's 2016 Business Conference.
Bryan Derballa for WIRED

Facebook is already disconcertingly good at recognizing faces in photos. But the company's director of artificial intelligence research, Yann LeCun, wants to push AI even further. Today at the 2016 WIRED Business Conference, he said he wants to teach chatbots common sense.

That's an important part of Facebook's goal of enabling its Facebook M virtual assistant to actually understand the things you ask it to do. Today, Facebook M is powered in part by humans. But eventually Facebook wants to power the entire thing with AI.

LeCun is one of the founding fathers of deep learning, one of the most important branches of artificial intelligence today. Deep learning techniques are used for everything from the algorithms that filter your Facebook feed to Android's voice recognition system to Skype's cutting-edge real-time translation tool. But while machines have gotten really good at recognizing voice commands and translating one human language into another, AIs still can't really understand language, LeCun explained.

Making that happen means teaching computers to learn in much the same way humans do. LeCun points out that babies learn to associate words with objects simply by observing the world around them. It takes at least a couple of years, but we humans are able to learn all this with relatively few examples, at least compared to the number of images that LeCun and company feed their computers. "So there's something we're missing about human and animal learning," he says. That missing thing, LeCun explains, is what we might call common sense.

The Missing Piece

To fill in that missing piece, he and his colleagues are working on what's called predictive learning. Today, the most popular way of training an AI is what's called supervised learning. Basically, if you want to teach an AI to recognize cars, you'll show it thousands or millions of pictures of cars, and eventually it will figure out the common attributes of a car—like wheels—and be able to spot cars in other photos. That's much easier than the old way of doing things, which involved trying to manually program the system to recognize wheels and other common features of cars. But what LeCun and his team would rather do is let machines observe the world and figure out what cars are simply by seeing lots of them and noticing that people call them "cars." That's what humans do, after all.
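To make that contrast concrete, here is a minimal sketch of the supervised approach LeCun describes, written in Python with PyTorch. The folder layout, label names, and choice of a small pretrained network are illustrative assumptions, not Facebook's actual pipeline; the point is simply that every training image arrives with a human-supplied label.

```python
# Minimal supervised-learning sketch: learn "car" vs. "not car" from labeled photos.
# Assumptions (not from the article): a local folder "photos/" with subfolders
# "car/" and "other/", and a small pretrained ResNet as the model.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Every image comes paired with a human-provided label -- that pairing is what
# makes this "supervised" learning.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
labeled_photos = datasets.ImageFolder("photos/", transform=preprocess)
loader = DataLoader(labeled_photos, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet and retrain its final layer
# to output two classes: car / not car.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # compare predictions to human labels
        loss.backward()                        # nudge weights to reduce the error
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

Predictive learning, by contrast, would dispense with most of those hand-applied labels and let the system infer categories like "car" from raw observation, which is exactly the part researchers haven't solved yet.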

Facebook is approaching these twin challenges, understanding language and predictive learning, together. LeCun explained that the company is trying to teach its AI systems to understand human language by essentially having them watch over the shoulders of the real humans who respond to queries on the Facebook M virtual assistant. But making it work will require more than just lots of conversations for the software to eavesdrop on. LeCun and company are working hard to figure out the mathematical and conceptual pieces that are missing from their model.

And LeCun says Facebook can't do this alone. Predictive learning is more of a scientific problem than a technological one, he says. And that means doing research out in the open, the way scientists do. "Doing research in secret just doesn't work," he says.