Don't believe the hype when it comes to AI

Artificial intelligence may be subject to more hype than any other field. While this creates funding opportunities, it could also damage AI's long-term potential

Thanks to neural networks - digital approximations of the way the human brain learns - artificial intelligence has made enormous breakthroughs in everything from creating machines that can recognise faces more accurately than a human, to building cars capable of driving themselves, to, most recently, a computer "Turing test for sound": a system that watches silent videos and predicts the sounds that should accompany them.

But it very nearly didn't happen like this. Forty years ago, research into neural networks almost stopped altogether. Budgets were slashed, plugs were pulled and students were advised by their teachers that researching neural networks was a bit like dating the loser in school: they'd never amount to anything and you'd just get hurt in the process. Certainly there were things neural networks couldn't do at the time, but it's equally true that much of the backlash the field suffered came down to the massive hype it had received.

Researchers, particularly in the rival, more established field of symbolic AI, were perturbed by articles like the one Science magazine published in 1958 about neural nets, entitled "Human Brains Replaced?" Reading it today, the crazy thing is how accurate the article was: it predicted machine learning capable of making decisions and translating languages. But neural networks couldn't do all of that just then, and the vitriolic response to those stories helped crush the hopes of people interested in the field. It was only a group of strong-willed researchers in the 80s, willing to work in relative obscurity for years, who pulled neural networks back from the brink. Today, many of them are top experts in the field and enjoy high-level jobs at companies such as Google.

Neural networks, of course, aren't the only technology to prompt overhype and, inevitably, disappointment. Robotics has had a similarly challenging time. In the 60s, a groundbreaking robot called Shakey set benchmarks in fields such as pattern recognition, information representation, problem solving and natural-language processing. It has, rightly, been described as the world's first general-purpose robot capable of reasoning about its own actions. But when it was profiled in Life magazine in 1970, Shakey was called the "first electronic person" and was (wrongly) said to be capable of travelling "about the Moon for months at a time without a single beep of direction from the Earth".

So are journalists to blame? Possibly, but not exclusively. Overhype, like success, has many fathers. Researchers, for instance, have benefitted from hype when it comes to funding. At an AI conference in Boston during the 70s, one researcher told the press that it would take just five more years until all of us had smart robots in our homes picking up stray socks. He was confronted by a furious colleague who said, "Don't make those predictions! You're underestimating how long this will take." Without pausing, the researcher responded, "I don't care. Notice all the dates I've chosen were after my retirement date."

As AI became big business in the 80s - initially thanks to the boom in what are called "expert systems" - we began to encounter a new species: the venture capitalist. Although many VCs believe in the transformative power of technology, it would be naïve to think that big business doesn't bring with it a certain "pump and dump" mentality, whereby promises are inflated until the metaphorical balloon finally pops under the pressure. Neural networks recovered from this effect, but there are plenty of other technologies whose macro story was correct, yet whose failure to live up to the hype in the short term proved a fatal blow.

AI may be more subject to hype than any other field. It is a discipline which exists perpetually on the brink of science fiction, sometimes described as "cool things that computers aren't yet capable of". AI is the only subject I've come across where successes are shuffled out of the field altogether: no longer considered "AI proper", but rather some lesser sub-field of it. It's a bit like a magician dismissing illusions as simple tricks the moment he discovers there's no real magic involved, just a trapdoor on the stage.

There are plenty of short-term benefits to hype but, ultimately, it can bring more problems than it solves. And with an army of excitable journalists, eager VCs and perpetually optimistic computer scientists, it's not the kind of thing that can easily be lifted out of the field like a dodgy line of code. We need a more thoughtful approach to the subject of building thinking machines - meaning less sensationalism, more stability and, ultimately, more satisfying progress.

Of course, the other risk of hype is that in our eagerness to look to the future and to all that machines are not yet capable of, we overlook the massive strides that have already been made.

This article was originally published by WIRED UK