South Korean professional Go player Lee Sedol puts his first stone against Google’s artificial intelligence program, AlphaGo. Photograph: Handout/Getty Images

The momentous advance in artificial intelligence demands a new set of ethics

Jason Millar

In a dramatic man versus machine encounter, AlphaGo has secured its third, decisive victory against a renowned Go player. With scientists amazed at how fast AI is developing, it’s vital that humans stay in control

Let us all raise a glass to AlphaGo, mark another big moment in the advance of artificial intelligence, and then perhaps start to worry. AlphaGo, Google DeepMind’s Go-playing AI, has just bested the best Go-playing human currently alive, the renowned Lee Sedol. This was not supposed to happen. At least, not for a while. An artificial intelligence capable of beating the best humans at the game was predicted to be 10 years away.

But as we drink to its early arrival, we should also begin trying to understand what the surprise means for the future – with regard, chiefly, to the ethics and governance implications that stretch far beyond a game.

As AlphaGo and AIs like it become more sophisticated – commonly outperforming us at tasks once thought to be uniquely human – will we feel pressured to relinquish control to the machines?

The number of possible moves in a game of Go is so massive that, in order to win against a player of Lee’s calibre, AlphaGo was designed to adopt an intuitive, human-like style of gameplay. Relying exclusively on more traditional brute-force programming methods was not an option. Designers at DeepMind made AlphaGo more human-like than traditional AI by using a relatively recent development – deep learning.

Deep learning uses large data sets, “machine learning” algorithms and deep neural networks – artificial networks of “nodes” that are meant to mimic neurons – to teach the AI how to perform a particular set of tasks. Rather than programming complex Go rules and strategies into AlphaGo, DeepMind designers taught AlphaGo to play the game by feeding it data based on typical Go moves. Then, AlphaGo played against itself, tirelessly learning from its own mistakes and improving its gameplay over time. The results speak for themselves.
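To make that two-stage recipe concrete, here is a toy sketch in Python using only numpy. It is emphatically not DeepMind’s system: the board, the “expert” and the reward signal are all invented stand-ins, and the network has a single hidden layer rather than a deep stack of them. It exists only to show the shape of the idea: a policy network first imitates example moves, then keeps learning from moves it samples itself.

import numpy as np

# Toy sketch of the two-stage recipe described above -- NOT AlphaGo's
# actual architecture. The board is 3x3, the "expert" is a stand-in
# rule, and the reward is invented; everything here is illustrative.

rng = np.random.default_rng(0)
BOARD_CELLS = 9     # a 3x3 toy board instead of Go's 19x19
HIDDEN = 32

# One-hidden-layer policy network: board features -> move probabilities.
W1 = rng.normal(0, 0.1, (BOARD_CELLS, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, BOARD_CELLS))

def policy(board):
    """Return move probabilities and hidden activations for a board."""
    h = np.tanh(board @ W1)
    logits = h @ W2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum(), h

def train_step(board, target_move, lr=0.05):
    """Cross-entropy update nudging the network towards target_move."""
    global W1, W2
    probs, h = policy(board)
    grad_logits = probs.copy()
    grad_logits[target_move] -= 1.0            # dLoss/dlogits
    grad_h = (W2 @ grad_logits) * (1 - h**2)   # backprop through tanh
    W2 -= lr * np.outer(h, grad_logits)
    W1 -= lr * np.outer(board, grad_h)

# Stage 1: imitation. AlphaGo learned from millions of human moves;
# here a trivial rule plays the "expert" on random synthetic boards.
for _ in range(2000):
    board = rng.integers(-1, 2, BOARD_CELLS).astype(float)
    train_step(board, int(np.argmax(board)))

# Stage 2: self-play flavour. Sample the network's own moves and
# reinforce the ones a pretend reward signal calls good.
for _ in range(2000):
    board = rng.integers(-1, 2, BOARD_CELLS).astype(float)
    probs, _ = policy(board)
    move = int(rng.choice(BOARD_CELLS, p=probs))
    if board[move] == board.max():             # stand-in for winning
        train_step(board, move)

The real AlphaGo paired much deeper networks with tree search and a separate value network, but the division of labour is the one described above: learn from human games, then improve through self-play.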

Possessing a more intuitive approach to problem-solving allows artificial intelligence to succeed in highly complex environments. For example, actions with high levels of unpredictability – talking, driving, serving as a soldier – that were previously unmanageable for AI are now considered technically solvable, thanks in large part to deep learning.

AI is also increasingly able to manage complex, data-intensive tasks, such as monitoring credit card systems for fraudulent behaviour, high-frequency stock trading and detecting cyber security threats. Embodied as robots, deep-learning AI is poised to move and work among us – in the form of service, transportation, medical and military robots.

Deep learning represents a paradigm shift in the relationship humans have with their technological creations. It results in AI that displays genuinely surprising and unpredictable behaviour. Commenting after his first loss, Lee described being stunned by an unconventional move he claimed no human would ever have made. Demis Hassabis, one of DeepMind’s founders, echoed the sentiment: “We’re very pleased that AlphaGo played some quite surprising and beautiful moves.”

Alan Turing, the visionary computer scientist, predicted we would someday speak of machines that think. He never predicted this.

This also made me think back to my time as an engineer, when surprises in the lab were rarely occasions for celebration. In a more traditional design environment, the goal is to anticipate, and control for, as many of the states a device could find itself in as possible. Surprises typically meant that our design had deviated from its intended behaviour and required a fix. But this just underscores the difference between traditional design paradigms and deep learning.

When it comes to deep learning, unpredictability and surprises are – or can be – a good thing. They can serve as indicators that a system is working well, perhaps better than the humans that came before it. Such is the case with AlphaGo. In the coming years, it will probably continue to learn and to improve, surprising and teaching its human competitors with new moves and strategies along the way.

Other artificial intelligence, designed to benefit humanity by surpassing our abilities in highly complex tasks – diagnosing illness, researching pharmaceuticals, managing power grids, protecting against cyber threats – could rely for its success on deep learning and the unpredictability that seems to be a necessary part of it.

However, unpredictability indicates a loss of human control. That Hassabis is genuinely surprised at his creation’s behaviour betrays a lack of control inherent in the design. And though some loss of control might be fine in the context of a game such as Go, it raises pressing ethics and governance questions elsewhere.

How much (and what kind of) control should we relinquish to driverless cars, artificial diagnosticians, or cyber guardians? How should we design appropriate human control into sophisticated AI that requires us to give up some of that very control? Is there some AI that we should just not develop if it means any loss of human control?

How much of a say should corporations, governments, experts or citizens have in these matters? These important questions, and many others like them, have emerged in response to AI’s rapid advance, but remain unanswered. They require human, not human-like, solutions.

Answering these questions will also require input from the right mix of people; AI researchers alone can only hope to contribute partial solutions. As we’ve learned throughout history, scientific and technical solutions don’t necessarily translate into moral victories.

Organisations such as the Open Roboethics initiative and the Foundation for Responsible Robotics were founded on this understanding. They bring together some of the world’s leading ethicists, social scientists, policymakers and technologists to work towards meaningful and informed answers to uniquely human questions surrounding robotics and AI. Drafting ethics standards for robotics and AI must be just such an interdisciplinary effort.

Because of deep learning, AI is surprising us with the speed of its own advancement. Expertise is no longer a 10,000-hour proposition when the would-be expert’s “brain” expands with every improvement to Amazon’s and Google’s clouds. This new pace of innovation is precisely what lends urgency to our challenge.

So as we drink to the passing of a milestone in AI, let’s also drink to the understanding that the time to answer deeply human questions about deep learning and AI is now.

Dr Jason Millar is an engineer and philosopher. He teaches robot ethics at Carleton University in Ottawa
