RQ-4 Block 10 Global Hawk unmanned drones. Photograph: Northrop Grumman/EPA

We can’t ban killer robots – it’s already too late

Philip Ball

Telling international arms traders they can’t make killer robots is like telling soft-drinks makers that they can’t make orangeade

One response to the call by experts in robotics and artificial intelligence for a ban on “killer robots” (“lethal autonomous weapons systems”, or Laws in the language of international treaties) is to say: shouldn’t you have thought about that sooner?

Figures such as Tesla’s CEO, Elon Musk, are among the 116 specialists calling for the ban. “We do not have long to act,” they say. “Once this Pandora’s box is opened, it will be hard to close.” But such systems are arguably already here, such as the “unmanned combat air vehicle” Taranis developed by BAE and others, or the autonomous SGR-A1 sentry gun made by Samsung and deployed along the South Korean border. Autonomous tanks are in the works, while human control of lethal drones is becoming just a matter of degree.

Yet killer robots have been with us in spirit for as long as robots themselves. Karel Čapek’s 1920 play RUR (Rossum’s Universal Robots) gave us the word, from the Czech robota, meaning forced labour. His humanoid robots, made by the eponymous company for industrial work, rebel and slaughter the human race. They’ve been doing it ever since, from Cybermen to the Terminator. Robot narratives rarely end well.

It’s hard even to think about the issues raised by Musk and his co-signatories without a robot apocalypse looming in the background. Even if the end of humanity isn’t at stake, we just know that one of these machines is going to malfunction, with the messy consequences of Omni Consumer Products’ police droid in RoboCop.

Such allusions could seem to make light of a deadly serious subject. OK, so a robot Armageddon might not be exactly frivolous, but these stories, for all that they draw on deep-seated human fears, are ultimately entertainment. It’s all too easy, though, for a debate like this to settle into the polarisation of good and bad technologies that science-fiction movies can encourage, with the attendant implication that, so long as we avoid the really bad ones, all will be well.

The issues – as specialists on Laws doubtless recognise – are more complex. On the one hand, they concern the wider, and increasingly pressing, matter of robot ethics; on the other, they are about the very nature of modern war, and its commodification.

How do we make autonomous technological systems safe and ethical? Avoiding robot-inflicted harm to humans was the problem explored in Isaac Asimov’s I, Robot, a collection of short stories so seminal that Asimov’s three laws of robotics are sometimes discussed now almost as if they have the force of Isaac Newton’s three laws of motion. The irony is that Asimov’s stories were largely about how such well-motivated laws could be undermined by circumstances.

In any event, the ethical issues can’t easily be formulated as one-size-fits-all principles. The historian Yuval Noah Harari has pointed out that driverless vehicles will need some principles for deciding how to act when faced with an unavoidable and possibly lethal collision: who does the robot try to save? Perhaps, Harari says, we will be offered two models: the Egoist (which prioritises the driver) and the Altruist (which puts others first).

‘Mightn’t a robot make a better assessment using biometrics than a frightened soldier using instincts?’ Terminator Genisys. Photograph: Melinda Sue Gordon/Allstar/Paramount Pictures

There are shades of science-fictional preconceptions in a 2012 report on killer robots by Human Rights Watch. “Distinguishing between a fearful civilian and a threatening enemy combatant requires a soldier to understand the intentions behind a human’s actions, something a robot could not do,” it says. Furthermore, “robots would not be restrained by human emotions and the capacity for compassion, which can provide an important check on the killing of civilians”. But the first claim is a statement of faith – mightn’t a robot make a better assessment using biometrics than a frightened soldier using instincts? As for the second, one feels: sure, sometimes. Other times, humans in war zones wantonly rape and massacre.

This is not to argue against the report’s horror at autonomous robot soldiers, which I for one share. Rather, it brings us back to the key question, which is not about technology but warfare.

Already our sensibilities about the ethics of war are arbitrary. “The use of fully autonomous weapons raises serious questions of accountability, which would erode another established tool for civilian protection,” says Human Rights Watch. It is a fair point, but one impossible to place in any consistent ethical framework while nuclear weapons remain internationally legal. Besides, there’s a continuum between drone war, soldier enhancement technologies and Laws that can’t be broken down into “man versus machine”.

This question of automated military technologies is intimately linked to the changing nature of war itself, which, in an age of terrorism and insurgency, no longer has a start or an end, battlefields or armies. As the American strategic analyst Anthony Cordesman puts it: “One of the lessons of modern war is that war can no longer be called war.” However we deal with that, it’s not going to look like the D-day landings.

Warfare has always used the most advanced technologies available; “killer robots” are no different. Pandora’s box was opened with the invention of steel smelting if not earlier (and it was almost never a woman who did the opening). And you can be sure someone made a profit from it.

By all means let’s try to curb our worst impulses to beat ploughshares into swords, but telling international arms traders that they can’t make killer robots is like telling soft-drinks manufacturers that they can’t make orangeade.

Philip Ball is a science writer. His latest book is The Water Kingdom: A Secret History of China
