Special report | Technology

From not working to neural networking

The artificial-intelligence boom is based on an old idea, but with a modern twist

HOW HAS ARTIFICIAL intelligence, associated with hubris and disappointment since its earliest days, suddenly become the hottest field in technology? The term was coined in a research proposal written in 1955, which suggested that significant progress could be made in getting machines to “solve the kinds of problems now reserved for humans…if a carefully selected group of scientists work on it together for a summer”. That proved to be wildly overoptimistic, to say the least, and despite occasional bursts of progress, AI became known for promising much more than it could deliver. Researchers mostly ended up avoiding the term, preferring to talk instead about “expert systems” or “neural networks”. The rehabilitation of “AI”, and the current excitement about the field, can be traced back to 2012 and an online contest called the ImageNet Challenge.

ImageNet is an online database of millions of images, all labelled by hand. For any given word, such as “balloon” or “strawberry”, ImageNet contains several hundred images. The annual ImageNet contest encourages those in the field to compete and measure their progress in getting computers to recognise and label images automatically. Their systems are first trained using a set of images where the correct labels are provided, and are then challenged to label previously unseen test images. At a follow-up workshop the winners share and discuss their techniques. In 2010 the winning system could correctly label an image 72% of the time (for humans, the average is 95%). In 2012 one team, led by Geoff Hinton at the University of Toronto, achieved a jump in accuracy to 85%, thanks to a novel technique known as “deep learning”. This brought further rapid improvements, producing an accuracy of 96% in the ImageNet Challenge in 2015 and surpassing humans for the first time.

The 2012 results were rightly recognised as a breakthrough, but they relied on “combining pieces that were all there before”, says Yoshua Bengio, a computer scientist at the University of Montreal who, along with Mr Hinton and a few others, is recognised as a pioneer of deep learning. In essence, this technique uses huge amounts of computing power and vast quantities of training data to supercharge an old idea from the dawn of AI: so-called artificial neural networks (ANNs). These are biologically inspired networks of artificial neurons, or brain cells.

In a biological brain, each neuron can be triggered by other neurons whose outputs feed into it, and its own output can then trigger other neurons in turn. A simple ANN has an input layer of neurons where data can be fed into the network, an output layer where results come out, and possibly a couple of hidden layers in the middle where information is processed. (In practice, ANNs are simulated entirely in software.) Each neuron within the network has a set of “weights” and an “activation function” that controls the firing of its output. Training a neural network involves adjusting the neurons’ weights so that a given input produces the desired output. ANNs were starting to achieve some useful results in the early 1990s, for example in recognising handwritten numbers. But attempts to get them to do more complex tasks ran into trouble.
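
For the technically minded, those mechanics fit in a few lines of Python. The sketch below is our own illustration, not any production system: it uses only NumPy and a toy problem (the XOR function), wiring up an input layer, one hidden layer and an output, then repeatedly adjusting the weights so that the given inputs produce the desired outputs.

```python
import numpy as np

# Toy training data: the XOR function, a classic task that a network
# with no hidden layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))   # weights: input layer -> hidden layer
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))   # weights: hidden layer -> output layer
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))       # the "activation function"

lr = 0.5
for step in range(5000):
    # Forward pass: each layer's output feeds into the next.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every weight to reduce the error between
    # the network's output and the desired output.
    grad_out = out - y
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hidden
    b1 -= lr * grad_hidden.sum(axis=0)

print(out.round(2))   # approaches [0, 1, 1, 0] as training proceeds
```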

In the past decade new techniques and a simple tweak to the activation function have made training deep networks feasible. At the same time the rise of the internet has made billions of documents, images and videos available for training purposes. All this takes a lot of number-crunching power, which became readily available when several AI research groups realised around 2009 that graphics processing units (GPUs), the specialised chips used in PCs and video-games consoles to generate fancy graphics, were also well suited to running deep-learning algorithms. An AI research group at Stanford University led by Andrew Ng, who subsequently moved to Google and now works for Baidu, a Chinese internet giant, found that GPUs could speed up its deep-learning system nearly a hundredfold. Suddenly, training a four-layer neural network, which had previously taken several weeks, took less than a day. It is a pleasing symmetry, says Jen-Hsun Huang, the boss of NVIDIA, which makes GPUs, that the same chips that are used to conjure up imaginary worlds for gamers can also be used to help computers understand the real world through deep learning.

The ImageNet results showed what deep learning could do. Suddenly people started to pay attention, not just within the AI community but across the technology industry as a whole. Deep-learning systems have since become more powerful: networks 20 or 30 layers deep are not uncommon, and researchers at Microsoft have built one with 152 layers. Deeper networks are capable of higher levels of abstraction and produce better results, and these networks have proved to be good at solving a very wide range of problems.

“What got people excited about this field is that one learning technique, deep learning, can be applied to so many different domains,” says John Giannandrea, head of machine-intelligence research at Google and now in charge of its search engine too. Google is using deep learning to boost the quality of its web-search results, understand commands spoken into smartphones, help people search their photos for particular images, suggest automatic answers to e-mails, improve its service for translating web pages from one language to another, and help its self-driving cars understand their surroundings.

Learning how to learn

Deep learning comes in many flavours. The most widely used variety is “supervised learning”, a technique that can be used to train a system with the aid of a labelled set of examples. For e-mail spam filtering, for example, it is possible to assemble an enormous database of example messages, each of which is labelled “spam” or “not spam”. A deep-learning system can be trained using this database, repeatedly working through the examples and adjusting the weights inside the neural network to improve its accuracy in assessing spamminess. The great merit of this approach is that there is no need for a human expert to draw up a list of rules, or for a programmer to implement them in code; the system learns directly from the labelled data.
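
A stripped-down sketch of the spam example follows, using the scikit-learn library and a handful of made-up messages in place of an enormous database; it is our illustration, not any particular firm's filter.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

# A toy labelled dataset: 1 means "spam", 0 means "not spam".
messages = [
    "win a free prize now", "cheap loans click here",
    "free offer limited time only", "meeting moved to 3pm",
    "see you at lunch tomorrow", "quarterly report attached",
]
labels = [1, 1, 1, 0, 0, 0]

# Turn each message into a vector of word counts...
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(messages).toarray()

# ...then repeatedly adjust a small neural network's weights until its
# outputs match the labels. No hand-written spam rules are involved.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
clf.fit(X, labels)

print(clf.predict(vectoriser.transform(["claim your free prize"]).toarray()))
# -> most likely [1], i.e. spam
```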

Systems trained using labelled data are being used to classify images, recognise speech, spot fraudulent credit-card transactions, identify spam and malware, and target advertisements—all applications in which the right answer is known for a large number of previous cases. Facebook can recognise and tag your friends and family when you upload a photograph, and recently launched a system that describes the contents of photographs for blind users (“two people, smiling, sunglasses, outdoor, water”). There is a huge reservoir of data to which supervised learning can be applied, says Mr Ng. Adoption of the technology has allowed existing firms in financial services, computer security and marketing to relabel themselves as AI companies.

Another technique, unsupervised learning, involves training a network by exposing it to a huge number of examples, but without telling it what to look for. Instead, the network learns to recognise features and cluster similar examples, thus revealing hidden groups, links or patterns within the data.
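
A toy illustration of the idea, using simple k-means clustering from scikit-learn as a stand-in for the far larger systems described here: the data are generated from three hidden groups, and the algorithm recovers them without ever being told they exist.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Unlabelled data drawn from three groups the algorithm is never told about.
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.3, size=(100, 2)),
    rng.normal(loc=[3, 3], scale=0.3, size=(100, 2)),
    rng.normal(loc=[0, 4], scale=0.3, size=(100, 2)),
])

# Ask only for three clusters; which points belong together is
# discovered from the data itself, with no labels provided.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_.round(1))   # roughly recovers the three hidden centres
```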

Unsupervised learning can be used to search for things when you do not know what they look like: for monitoring network traffic patterns for anomalies that might correspond to a cyber-attack, for example, or examining large numbers of insurance claims to detect new kinds of fraud. In a famous example, when working at Google in 2011, Mr Ng led a project called Google Brain in which a giant unsupervised learning system was asked to look for common patterns in thousands of unlabelled YouTube videos. One day one of Mr Ng’s PhD students had a surprise for him. “I remember him calling me over to his computer and saying, ‘look at this’,” Mr Ng recalls. On the screen was a furry face, a pattern distilled from thousands of examples. The system had discovered cats.

Reinforcement learning sits somewhere in between supervised and unsupervised learning. It involves training a neural network to interact with an environment with only occasional feedback in the form of a reward. In essence, training involves adjusting the network’s weights to search for a strategy that consistently generates higher rewards. DeepMind is a specialist in this area. In February 2015 it published a paper in Nature describing a reinforcement-learning system capable of learning to play 49 classic Atari video games, using just the on-screen pixels and the game score as inputs, with its output connected to a virtual controller. The system learned to play them all from scratch and achieved human-level performance or better in 29 of them.
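
DeepMind's system used a deep network reading raw pixels, but the core loop can be sketched with something far smaller. In the toy Python example below, which is ours and uses a three-armed bandit rather than an Atari game, the weights of a trivial policy are nudged so that actions which occasionally earn a reward become more likely.

```python
import numpy as np

rng = np.random.default_rng(0)
true_payoffs = [0.2, 0.5, 0.8]   # hidden chance that each of three actions pays off
prefs = np.zeros(3)              # the "weights" of a trivial one-layer policy
lr = 0.1

for step in range(5000):
    # A softmax turns the weights into a probability of choosing each action.
    probs = np.exp(prefs) / np.exp(prefs).sum()
    action = rng.choice(3, p=probs)
    reward = float(rng.random() < true_payoffs[action])   # occasional feedback

    # Adjust the weights so that actions which earned a reward
    # become more likely next time (a basic policy-gradient update).
    grad = -probs
    grad[action] += 1.0
    prefs += lr * reward * grad

print(probs.round(2))   # probability mass ends up concentrated on the best action
```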

Gaming the system

Video games are an ideal training ground for AI research, says Demis Hassabis of DeepMind, because “they are like microcosms of the real world, but are cleaner and more constrained.” Gaming engines can also generate large quantities of training data very easily. Mr Hassabis used to work in the video-games industry before taking a PhD in cognitive neuroscience and starting DeepMind. The company now operates as an AI research arm for Google, from offices near King’s Cross station in London.

DeepMind made headlines in March when its AlphaGo system defeated Lee Sedol, one of the world’s best Go players, by 4-1 in a five-game match in Seoul. AlphaGo is a reinforcement-learning system with some unusual features. It consists of several interconnected modules, including two deep neural networks, each of which specialises in a different thing—just like the modules of the human brain. One of them has been trained by analysing millions of games to suggest a handful of promising moves, which are then evaluated by the other one, guided by a technique called Monte Carlo tree search, which works by random sampling. The system thus combines biologically inspired techniques with non-biologically inspired ones. AI researchers have argued for decades over which approach is superior, but AlphaGo uses both. “It’s a hybrid system because we believe we’re going to need more than deep learning to solve intelligence,” says Mr Hassabis.
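
That division of labour can be sketched on a toy game. The Python sketch below is ours, played on noughts and crosses rather than Go, with a hand-written preference list standing in for the trained policy network and plain random playouts standing in for AlphaGo's value network and tree search; it shows only the shape of the idea, with one component proposing a handful of moves and another evaluating them by sampling.

```python
import random

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def suggest_moves(board, k=4):
    # Stand-in for the first network: propose a handful of promising moves
    # (here just a hand-written preference for the centre and the corners).
    prefs = {4: 3, 0: 2, 2: 2, 6: 2, 8: 2}
    return sorted(legal_moves(board), key=lambda m: -prefs.get(m, 1))[:k]

def sampled_value(board, player, n=200):
    # Stand-in for the evaluation step: score a position by the average
    # outcome of random playouts from it.
    total = 0
    for _ in range(n):
        b, turn = board[:], "O" if player == "X" else "X"
        while winner(b) is None and legal_moves(b):
            b[random.choice(legal_moves(b))] = turn
            turn = "O" if turn == "X" else "X"
        w = winner(b)
        total += 1 if w == player else (-1 if w is not None else 0)
    return total / n

def choose_move(board, player="X"):
    # Propose a few candidate moves, evaluate each by sampling, pick the best.
    scored = [(sampled_value(board[:m] + [player] + board[m + 1:], player), m)
              for m in suggest_moves(board)]
    return max(scored)[1]

print(choose_move(list(".O.......")))   # O has opened; X typically takes the centre (4)
```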

He and other researchers are already looking to the next step, called transfer learning. This would allow a reinforcement-learning system to build on previously acquired knowledge, rather than having to be trained from scratch every time. Humans do this effortlessly, notes Mr Hassabis. Mr Giannandrea recalls that his four-year-old daughter was able to tell that a penny-farthing was a kind of bicycle even though she had never seen one before. “Computers can’t do that,” he says.

MetaMind, an AI startup recently acquired by Salesforce, is pursuing a related approach called multitask learning, where the same neural-network architecture is used to solve several different kinds of problems in such a way that experience of one thing makes it better at another. Like DeepMind, it is exploring modular architectures; one of them, called a “dynamic memory network”, can, among other things, ingest a series of statements and then answer questions about them, deducing the logical connections between them (Kermit is a frog; frogs are green; so Kermit is green). MetaMind has also combined natural-language and image-recognition networks into a single system that can answer questions about images (“What colour is the car?”). Its technology could be used to power automated customer-service chatbots or call-centres for Salesforce’s customers.

In the past, promising new AI techniques have tended to run out of steam quickly. But deep learning is different. “This stuff actually works,” says Richard Socher of MetaMind. People are using it every day without realising it. The long-term goal to which Mr Hassabis, Mr Socher and others aspire is to build an “artificial general intelligence” (AGI)—a system capable of solving a wide range of tasks—rather than building a new AI system for each problem. For years, AI research has focused on solving specific, narrow problems, says Mr Socher, but now researchers are “taking these more advanced Lego pieces and putting them together in new ways”. Even the most optimistic of them think it will take another decade to attain human-level AGI. But, says Mr Hassabis, “we think we know what the dozen or so key things are that are required to get us close to something like AGI.”

Meanwhile AI is already useful, and will rapidly become more so. Google’s Smart Reply system, which uses two neural networks to suggest answers to e-mails, went from being a deep-learning research project to a live product in just four months (though initially it had to be discouraged from suggesting the reply “I love you” to nearly every message). “You can publish a paper in a research journal and literally have a company use that system the next month,” says Mr Socher. There is a steady flow of academic papers from AI companies both large and small; AI researchers have been allowed to continue publishing their results in peer-reviewed journals, even after moving into industry. Many of them maintain academic posts alongside working for companies. “If you won’t let them publish, they won’t work for you,” explains Chris Dixon of Andreessen Horowitz.

Google, Facebook, Microsoft, IBM, Amazon, Baidu and other firms have also made some of their deep-learning software available free on an open-source basis. In part, this is because their researchers want to publish what they are doing, so it helps with recruitment. A more cynical view is that big internet firms can afford to give away their AI software because they have a huge advantage elsewhere: access to reams of user data for training purposes. This gives them an edge in particular areas, says Shivon Zilis of Bloomberg Beta, an investment fund, but startups are finding ways into specific markets. Drone startups, for example, can use simulation data to learn how to fly in crowded environments. And lots of training data can be found on the internet, says Sam Altman, president of Y Combinator, a startup incubator. He notes that humans can learn from modest amounts of data, which “suggests that intelligence is possible without massive training sets”. Startups pursuing less data-hungry approaches to AI include Numenta and Geometric Intelligence.

Pick and mix

Companies are lining up to supply shovels to participants in this AI gold rush. The name that comes up most frequently is NVIDIA, says Mr Dixon; every AI startup seems to be using its GPU chips to train neural networks. GPU capacity can also be rented in the cloud from Amazon and Microsoft. IBM and Google, meanwhile, are devising new chips specifically built to run AI software more quickly and efficiently. And Google, Microsoft and IBM are making AI services such as speech recognition, sentence parsing and image analysis freely available online, allowing startups to combine such building blocks to form new AI products and services. More than 300 companies from a range of industries have already built AI-powered apps using IBM’s Watson platform, says Guru Banavar of IBM, doing everything from filtering job candidates to picking wines.

To most people, all this progress in AI will manifest itself as incremental improvements to internet services they already use every day. Search engines will produce more relevant results; recommendations will be more accurate. Within a few years everything will have intelligence embedded in it to some extent, predicts Mr Hassabis. AI technology will allow computer interfaces to become conversational and predictive, not simply driven by menus and icons. And being able to talk to computers will make them accessible to people who cannot read and write, and cannot currently use the internet, says Mr Bengio.

Yet steady improvements can lead to sudden changes when a threshold is reached and machines are able to perform tasks previously limited to humans. Self-driving cars are getting better fast; at some point soon they may be able to replace taxi drivers, at least in controlled environments such as city centres. Delivery drones, both wheeled and airborne, could similarly compete with human couriers. Improved vision systems and robotic technology could allow robots to stack supermarket shelves and move items around in warehouses. And there is plenty of scope for unexpected breakthroughs, says Mr Dixon.

Others are worried, fearing that AI technology could supercharge the existing computerisation and automation of certain tasks, just as steam power, along with new kinds of machinery, seemed poised to make many workers redundant 200 years ago. “Steam has fearfully accelerated a process that was going on already, but too fast,” declared Robert Southey, an English poet. He worried that “the discovery of this mighty power” has come “before we knew how to employ it rightly”. Many people feel the same way about artificial intelligence today.

