
Are we risking a planetary AI intelligence explosion?

Or are our problems with AI the usual boring stuff we prefer to avoid?

In 2015, a Harvard artificial intelligence (AI) and statistics researcher offered a grim vision of the AI world to come. It was recently recirculated by the Boston-based Future of Life Institute, which is “working to mitigate existential risks facing humanity.”

Viktoriya Krakovna begins by quoting an early authority, computer scientist I. J. Good, who said in 1965, “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” Krakovna argues that even if that doesn’t happen, existential risk looms:

The incentives are to continue improving AI systems until they hit physical limits on intelligence, and those limitations (if they exist at all) are likely to be above human intelligence in many respects.

Sufficiently advanced AI systems would by default develop drives like self-preservation, resource acquisition, and preservation of their objective functions, independent of their objective function or design. This was outlined in Omohundro’s paper and more concretely formalized in a recent MIRI paper. Humans routinely destroy animal habitats to acquire natural resources, and an AI system with any goal could always use more data centers or computing clusters.

Viktoriya Krakovna, “Risks From General Artificial Intelligence Without an Intelligence Explosion” at Future of Life Institute (November 30, 2015)

She also worries that intelligent computers might not have the same values we do:

Even an AI system with perfect understanding of human values and goals would not necessarily adopt them. Humans understand the “goals” of the evolutionary process that generated us, but don’t internalize them – in fact, we often “wirehead” our evolutionary reward signals, e.g. by eating sugar.

Viktoriya Krakovna, “Risks From General Artificial Intelligence Without an Intelligence Explosion” at Future of Life Institute (November 30, 2015)

Another of her concerns is keeping superintelligent AI away from the internet:

A general AI system with access to the internet would be able to hack thousands of computers and copy itself onto them, thus becoming difficult or impossible to shut down – this is a serious problem even with present-day computer viruses. When developing an AI system in the vicinity of general intelligence, it would be important to keep it cut off from the internet. Large scale AI systems are likely to be run on a computing cluster or on the cloud, rather than on a single machine, which makes isolation from the internet more difficult. Containment measures would likely pose sufficient inconvenience that many researchers would be tempted to skip them.

Viktoriya Krakovna, “Risks From General Artificial Intelligence Without an Intelligence Explosion” at Future of Life Institute (November 30, 2015)

Is all this imminent? Is it even possible?

Two recent books, featuring a total of 45 experts, offer estimates ranging from two decades to two centuries ahead for the dawn of generally intelligent AI. The spread suggests underlying fundamental uncertainties. Mind Matters News asked some of our house computer science experts for comment:

Jonathan Bartlett emphasizes that the danger with AI lies not in what it can do but in what we falsely think it can do: “Thinking that your tool is making inferences, making decisions, and acting virtuously will lead you to incorrect expectations and an inability to use the machine and its results properly.”

For instance, the biggest danger machine learning presents is the illusion of knowledge. We mistake the output of machine learning systems for knowledge. Knowledge can be recombined and reused, but machine learning output, being merely a statistical correlation, is unlikely to mean much beyond that. Using it for more is, by definition, a logical fallacy. Machine learning can harm humanity by becoming a replacement for thinking instead of a tool to assist thinking.
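
Bartlett's point about statistical correlation versus knowledge can be made concrete with a toy sketch (hypothetical data only, not any particular system): a model can look authoritative on its training set by latching onto a correlation that holds only there, and the apparent “knowledge” evaporates as soon as the data shift.

    # Minimal sketch, assuming nothing beyond NumPy and scikit-learn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000

    # Training set: a weak genuine signal plus a "background" feature that
    # happens, in this data set only, to track the label almost perfectly.
    y_train = rng.integers(0, 2, n)
    signal = y_train + rng.normal(0, 1.0, n)
    background = y_train + rng.normal(0, 0.1, n)
    X_train = np.column_stack([signal, background])

    model = LogisticRegression().fit(X_train, y_train)

    # Fresh data in which the accidental correlation is gone.
    y_new = rng.integers(0, 2, n)
    X_new = np.column_stack([y_new + rng.normal(0, 1.0, n),
                             rng.normal(0, 0.1, n)])

    print("training accuracy:", model.score(X_train, y_train))   # near 1.0
    print("shifted accuracy: ", model.score(X_new, y_new))       # much lower

The drop is the whole point: the fitted weights record a correlation in one data set, not reusable knowledge about the world.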

In a similar vein, AI systems are dangerous to humanity when we think that programming moral values into them makes them morally responsible agents. Moral responsibility lies solely with the programmer or the user. Thinking that we have somehow removed ourselves from the moral equation by programming some sort of morality into a computer is dangerous for society.

Brendan Dixon adds, “There is, in my mind, so much wrong here that it’s surprising that otherwise intelligent people fall for it. Here are a few quick thoughts:”

● The collection of worries is predicated on the development of General AI. Not only have we made little to no progress toward that goal, it’s also clear that the goal is much harder to achieve than we realized. For example, if experts are claiming we’ll have to wait at least a decade for autonomous cars (i.e., until ~2030), how is it that by 2050 we’ll have AI overtaking humans?

● On one hand, some claim we’ll achieve General AI; on the other, they mutter about the “value function that omits variables.” So which is it? Are these machines just fancy programs operating on the predicates we give them? Or are they truly intelligent machines? If the underlying assumption is that our own values operate as “value functions,” that is a statement only a strict materialist could believe.

● My core disagreement with the dire prophecies is this: Worry about a sci-fi future of generally intelligent AI occludes our vision of the real problems AI creates for us now. These include all the data bias and “lack of common sense” issues. Consider, for example, “AI” scans of resumés, which became a PR disaster for Amazon due to sexist bias. The same would likely hold for the AI/algorithm-assisted employee reviews in its warehouses (if reports are to be believed). These real problems hurt real people. Their roots are in the misguided notion that these machines somehow embody intelligence when, in fact, they are programs transmitting the biases of their authors and/or training data.

The problem is that we’ll stupidly cede control (driven by “efficiency” and/or philosophical blindness) to machines.

Eric Holloway thinks that the massive gains in artificial intelligence have already peaked:

I have argued previously that we are probably near “peak AI.”

AI improvement appears to be almost entirely due to hardware improvement, and the basis for that improvement, Moore’s Law, has stopped holding true. In that case, we may actually face an “intelligence implosion,” in which AI cannot keep up with projected demands and becomes overwhelmed. I believe the only way forward is to learn how to use human-in-the-loop approaches effectively.

So, what if we get an intelligence explosion with a human in the loop? In the pure AI scenario, that is scary, because the AI does not need humans and so can optimize humans out of the picture. In the human-in-the-loop scenario, however, an intelligence explosion could be a fantastic thing, because there the AI “realizes” that humans are essential to its well-being, in which case it would optimize for human flourishing.

Perhaps the eventual crisis will be this: AI hits a ceiling predicted by the Halting Problem. Then we are left with a Robogeddon that can’t happen and a crowd of experts who need it to. That could be a volatile situation.
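
For readers unfamiliar with the Halting Problem, the “ceiling” Holloway points to is Turing’s classic result that no program can decide, for every program and input, whether it will ever stop. A minimal Python sketch of the diagonalization argument (the function names here are purely illustrative):

    # Suppose, hypothetically, that halts(program, argument) always returned
    # True or False correctly. The program below would then defeat it.
    def halts(program, argument):
        """Hypothetical general halting decider -- cannot actually be written."""
        raise NotImplementedError

    def troublemaker(program):
        # Do the opposite of whatever halts() predicts about program(program).
        if halts(program, program):
            while True:          # predicted to halt, so loop forever
                pass
        else:
            return "done"        # predicted to loop, so halt at once

    # Asking halts(troublemaker, troublemaker) forces a wrong answer either
    # way, which is the contradiction that rules out a general decider.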

See also: Will artificial intelligence design artificial superintelligence?

Artificial intelligence is impossible

and

Human intelligence as a Halting Oracle

Also: Why I Doubt That AI Can Match the Human Mind (Jonathan Bartlett): Computers are exclusively theorem generators, while humans appear to be axiom generators

and

Can an algorithm be racist?


