Superforecasting the end of the world

What are the chances of an AI apocalypse?

Professional “superforecasters” are more optimistic about the future than AI experts


In 1945, just before the test of the first nuclear bomb in the New Mexico desert, Enrico Fermi, one of the physicists who had helped build it, offered his fellow scientists a wager. Would the heat of the blast ignite a nuclear conflagration in the atmosphere? If so, would the firestorm destroy only New Mexico? Or would the entire world be consumed? (The test was not quite as reckless as Fermi’s mischievous bet suggests: Hans Bethe, another physicist, had calculated that such an inferno was almost certainly impossible.)

These days, worries about “existential risks”—those that pose a threat to humanity as a species, rather than to individuals—are not confined to military scientists. Nuclear war; nuclear winter; plagues (whether natural, like covid-19, or engineered); asteroid strikes and more could all wipe out most or all of the human race. The newest doomsday threat is artificial intelligence (AI). In May a group of luminaries in the field signed a one-sentence open letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

This article appeared in the Science & technology section of the print edition under the headline "Bringing down the curtain"
