Calm down, Elon. Deep learning won't make AI generally intelligent

Professor Mark Bishop on the dangers of deep stupidity

Minds Mastering Machines Mark Bishop, a professor of cognitive computing and a researcher at the Tungsten Centre for Intelligent Data Analytics (TCIDA) at Goldsmiths, University of London, celebrated the successes of deep learning during a lecture at the Minds Mastering Machines conference on Monday, but stressed the limits of modern AI.

Elon Musk, Stephen Hawking and Nick Bostrom have repeatedly warned of the dangers of AI as the technology becomes more ubiquitous.

The analogy comparing neural networks to human brains has been overworked, giving the impression that machines can "think", "imagine" or "dream". But the idea that creating AI is like summoning a demon goes a step too far: machines don't actually understand the world.

Bishop used the example of a book that reached a whopping $23,698,655.93 on Amazon thanks to duelling pricing algorithms.

In 2011, two Amazon resellers, Profmath and Bordeebook, automatically repriced a book on fly biology to compete with one another. Profmath's algorithm nudged its number so it was always slightly cheaper than its rival's price, while Bordeebook's increased its cost so it was always a little more expensive than Profmath's.

As these algorithms duelled each other, the prices skyrocketed to absurd figures. A human would realise that the book would never sell at those prices, but the machines had no idea: they were simply doing what they were programmed to do.
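To make the feedback loop concrete, here's a minimal Python sketch of two such repricing rules compounding against each other. The multipliers and starting prices are illustrative assumptions, not the sellers' actual figures; the point is the compounding, not the constants.

```python
# A minimal sketch of the duelling repricing loop. The multipliers and
# starting prices are illustrative assumptions, not the sellers' actual
# figures.

UNDERCUT = 0.9983  # assumed: reprice to just below the rival
MARKUP = 1.2706    # assumed: reprice to just above the rival

profmath, bordeebook = 35.54, 45.00  # assumed starting prices
cycles = 0

while bordeebook < 23_698_655.93:  # the figure the book actually reached
    profmath = round(UNDERCUT * bordeebook, 2)  # always slightly cheaper
    bordeebook = round(MARKUP * profmath, 2)    # always a little dearer
    cycles += 1

print(f"${bordeebook:,.2f} after {cycles} repricing cycles")
# Each cycle multiplies the price by UNDERCUT * MARKUP ~= 1.27; neither
# rule ever asks whether a biology textbook should cost millions.
```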

"These things don't understand anything they're doing. They might be manipulating apples, bananas or books, but they don't understand anything," Bishop said.


It's this lack of understanding of the real world that makes AI more artificial idiot than artificial intelligence. It also means the chances of building artificial general intelligence are quite low, because it's so difficult for computers to truly comprehend knowledge, Bishop told The Register.

Deep learning relies on building mathematical functions that learn to map features extracted from input training data to some output, whether that's recognising an image, language or text.
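As a hypothetical illustration of that point (not an example Bishop gave), the toy Python network below learns the XOR mapping from four training pairs, purely by nudging the parameters of a mathematical function until inputs map to the right outputs:

```python
import numpy as np

# Hypothetical toy example: a one-hidden-layer network fitted to map
# four input pairs to the XOR of their bits -- nothing more than a
# parameterised function adjusted to match inputs to outputs.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden-layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output-layer parameters
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.1  # learning rate

for _ in range(10_000):
    h = np.tanh(X @ W1 + b1)               # hidden features
    p = sigmoid(h @ W2 + b2)               # predicted outputs
    grad_p = p - y                         # cross-entropy loss gradient
    grad_h = (grad_p @ W2.T) * (1 - h**2)  # backprop through tanh
    W2 -= lr * h.T @ grad_p; b2 -= lr * grad_p.sum(0)
    W1 -= lr * X.T @ grad_h; b1 -= lr * grad_h.sum(0)

print(np.round(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2), 2))  # ~ 0, 1, 1, 0
```

The network ends up computing XOR without "understanding" anything about it, which is precisely Bishop's point.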

But not everything can be represented computationally, Bishop argues. Take tiling: some sets of shapes, such as hexagons, equilateral triangles or squares, can be repeatedly combined to completely fill a plane infinitely, leaving no gaps.

Different shapes, like pentagons and diamonds, can also fit together to create intricate tiling patterns. How would a machine know whether such a combination could continue indefinitely? It would use pattern recognition, Bishop said. A computer would spot regular repeating shapes sweeping across the tiles, and conclude that as long as the tiles kept forming those shapes, the structure could continue.

But there are exceptions: some shapes, such as Penrose tiles, slot together to fill a space yet never form a regular, repeating pattern, and the general question of whether a given set of shapes tiles the plane is formally undecidable. In these instances a computer fails; it lacks the mathematical insight to arrive at a correct proof.
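Here's an illustrative Python sketch of that pattern-recognition strategy, under the simplifying assumption that a tiling patch can be encoded as a grid of labels. It can confirm a repeat when one exists, but it has no way to turn "no repeat found so far" into a proof:

```python
# Illustrative sketch only: a naive "spot the repeat" checker over a
# finite patch of tiling encoded as a grid of tile labels. It finds a
# translation period when one exists, but a None result on a finite
# patch proves nothing -- which is where aperiodic tilings defeat it.

def find_period(grid):
    """Return a shift (dr, dc) under which the patch matches itself
    wherever the shifted copy overlaps, or None if no repeat is found."""
    rows, cols = len(grid), len(grid[0])
    for dr in range(rows):
        for dc in range(cols):
            if dr == dc == 0:
                continue
            if all(grid[r][c] == grid[r + dr][c + dc]
                   for r in range(rows - dr)
                   for c in range(cols - dc)):
                return dr, dc
    return None  # no repeat found -- not the same thing as a proof

square_patch = [["sq"] * 4 for _ in range(4)]  # a periodic square tiling
print(find_period(square_patch))               # -> (0, 1)

# A patch cut from a Penrose tiling returns None at every window size:
# the machine can keep looking, but can never settle the question.
```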

The third thing that separates machine and human intelligence is sensation. It's impossible for physical feelings and emotions to arise naturally out of computation, Bishop said.

Computers have no conscious grounding, so why would they prefer one task over another? "Nothing matters to them. Taking over the world? Why would they want to do that?"

Machines may be built to computationally model the brain, but that doesn't necessarily mean they'll have minds. ®
