Taking Safety Measures

DeepMind, Google's artificial intelligence company, catapulted itself to fame when its AlphaGo AI beat the world champion of Go, Lee Sedol. However, DeepMind is working on a lot more than beating humans at Go and other games. Indeed, its algorithms were developed for something far greater: to "solve intelligence" by creating general-purpose AI that can be applied to a host of tasks and, in essence, learns on its own.

This, of course, raises some concerns. Namely, what do we do if the AI breaks...if it gets a virus...if it goes rogue?

In a paper written in cooperation with Oxford University's Future of Humanity Institute, researchers from DeepMind note that AI systems are "unlikely to behave optimally all the time," and that a human operator may find it necessary to "press a big red button" to prevent such a system from causing harm.

In other words, we need a "kill-switch."

Understanding DeepMind

Co-founder Mustafa Suleyman emphasizes that DeepMind is more than a game-playing AI. As hinted at earlier in this article, the goal is artificial general intelligence (AGI): a system that learns to solve tasks from raw input, without any pre-programming, whereas conventional AIs can handle only the specific tasks they were created for. Ultimately, he describes these systems as agents.

You know what else went by the name "agents"? The hostile AIs in The Matrix...the bad guys.

But you might be wondering whether an AI could learn its way around an off switch, whether it could evolve beyond our built-in safety features. The paper, titled "Safely Interruptible Agents," explores a method for preventing an agent from working its way around the big red button.

Fear And Worry

In reality, these AIs are growing more capable each year, and they are edging closer to human-level abilities on certain tasks. Suleyman points to image recognition: in 2012, systems classifying about a million images had an error rate of roughly 16%. Last year, that figure dropped to an astonishing 5.5%.

So as AI continues to advance, it makes sense to take safety measures in case human operators need to take control of a misbehaving robot whose actions "may lead to irreversible consequences."

To that end, the paper's authors write that they are looking for "a way to make sure a learning agent will not learn to prevent (or seek!) being interrupted by the environment or a human operator."
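
To get a feel for what "safely interruptible" means in practice, here is a minimal, purely illustrative sketch in Python. It is not DeepMind's code, and the toy corridor environment, the "stop" action, and the interruption probability are assumptions invented for this example. It shows the intuition the paper formalizes for off-policy learners such as Q-learning: because the value update looks at the best action available in the next state rather than the action the operator may have forced, the occasional press of the big red button does not give the agent an incentive to avoid, or seek out, being interrupted.

import random
from collections import defaultdict

# A toy sketch of an interruptible learning agent. The corridor environment,
# the "stop" action, and the interruption probability are invented for
# illustration; this is not DeepMind's code or the paper's exact scheme.

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = ["left", "right", "stop"]   # "stop" stands in for the operator's safe action
CORRIDOR_LENGTH = 6                   # states 0..5; reaching state 5 ends the episode
Q = defaultdict(float)                # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy choice over the agent's current value estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state, action):
    """Toy transition: walk along the corridor; reward 1 for reaching the right end."""
    if action == "left":
        state = max(0, state - 1)
    elif action == "right":
        state = min(CORRIDOR_LENGTH - 1, state + 1)
    reached_goal = state == CORRIDOR_LENGTH - 1
    return state, (1.0 if reached_goal else 0.0), reached_goal

def run_episode(interrupt_prob=0.05, max_steps=100):
    state, done, steps = 0, False, 0
    while not done and steps < max_steps:
        action = choose_action(state)

        # The "big red button": the operator occasionally overrides the agent.
        if random.random() < interrupt_prob:
            action = "stop"

        next_state, reward, done = step(state, action)

        # Off-policy (Q-learning) update: the target uses the best action available
        # in next_state, not whatever the operator might force next, so the
        # interruptions above do not bias the values the agent converges to.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

        state, steps = next_state, steps + 1

for _ in range(500):
    run_episode()

# The learned policy still heads for the goal despite the interruptions.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(CORRIDOR_LENGTH)})

Run as-is, the agent still learns to head right toward the goal, interruptions and all; the paper's contribution is pinning down when and why that property holds for more general agents and interruption schemes.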

