The ‘uncanny valley’ effect means that many people find robots creepy when their features look and move almost, but not exactly, like humans. Photograph: Blutgruppe/Corbis

Rise of the robots: how long do we have until they take our jobs?

science correspondent
Google’s Ray Kurzweil predicts robots will reach human levels of intelligence by 2029 – if they overcome current limitations

They have mastered the art of poker, helped write a cookbook and can cope with a basic conversation. The decision by a Japanese bank to staff its front desk with a bevy of robots is just the latest in a series of advances and predictions that at times appear to suggest we will all be replaced, professionally and socially, by automatons.

Ray Kurzweil, director of engineering at Google, has estimated that robots will reach human levels of intelligence by 2029, purportedly leaving us about 14 years to reign supreme. So, how far are we along this trajectory?

The increase in computing prowess during the past decade has expanded the kinds of tasks computers can undertake independently. IBM’s Watson computer, which won the US quiz show Jeopardy! in 2011, is being successfully applied to medical diagnosis.

By mining medical research papers available online and analysing diagnostic images, it can outperform doctors in some tasks. Most recently, the same machine has been transformed into an “artificial lawyer”, which can search legal databases and correspondence for possibly relevant information.

Before mourning or rejoicing over the imminent demise of the legal and other professions, it is worth noting that these machines only do well at responding to certain predictable questions. Like the iPhone’s Siri, they sound quite competent if you ask the right things, but a lot of the time the responses are plain silly.

The next step, elusive thus far, is developing a program that actually understands the meaning of words and phrases. Computer scientists generally concede that jokes and sarcasm are still utterly beyond computers.

Beyond purely intellectual tasks, the physical capability of robots is also rapidly advancing. Improved processing of visual information means that driverless cars are now on the horizon and a glance at some of the galloping and armoured machines developed by Boston Dynamics gives a sinister hint of the military potential.

But the best technology today still performs far worse at skills such as dexterity and balance – attributes that come naturally to humans. Engineers can just about build a robot capable of loading a dishwasher or taking out the dustbin, for instance, but only at colossal expense, making such machines a very distant prospect in the home.

There is also the question of what we want our robotic companions to look like. While films, from Terminator to Ex Machina, tend to portray essentially “souped up” humans, in reality we may be more comfortable with entities that look a bit less like us.

The so-called “uncanny valley” effect means that many people find robots creepy when their features look and move almost, but not exactly, like humans.

Perhaps with this in mind, when Japanese scientists developed a robot to provide emotional support to the sick and elderly, they chose to make it in the shape of a baby seal, called Paro. Similarly, scientists at the University of Lincoln went for plain white plastic features when designing a “printable companion”, Marc, that can be screwed together from 3D-printed components. Previous research has shown that humans readily interact with not much more than a pair of eyebrows and a smile, so our aesthetic requirements may be the easiest to meet.
