
The findings of this research were first covered by The Conversation as part of its Insights series.

From shamanic ritual to horoscopes, humans have always tried to predict the future. And while some of those practices may sound arcane, modern life still relies on prophecy. From the weather forecast to the time the GPS says we’ll reach our destination, our lives are built around futuristic fictions.

Of course, while we may sometimes feel betrayed by our local meteorologist, trusting their foresight is a lot more rational than putting the same stock in a TV psychic. This shift toward more evidence-based guesswork came about in the 20th century: futurologists began to see what prediction looked like when based on a scientific understanding of the world, rather than the traditional bases of prophecy (religion, magic, or dreams). Genetic modification, space stations, wind power, artificial wombs, video phones, wireless internet, and cyborgs were all foreseen by “futurologists” from the 1920s and 1930s. Such visions seemed like science fiction when first published.

They all appeared in the brilliant and innovative “To-Day and To-Morrow” books from the 1920s, which signal the beginning of our modern conception of futurology, in which prophecy gives way to scientific forecasting. This series of more than 100 books provided humanity—and science fiction—with key insights and inspiration. I’ve been immersed in them for the last few years while writing the first book about these fascinating works, and have found that these pioneering futurologists have a lot to teach us.

In their early responses to the technologies emerging at the time—aircraft, radio, recording, robotics, television—the writers grasped how those innovations were changing our sense of who we are. And they often gave startlingly canny previews of what was coming next, as in the case of Archibald Low, who in his 1924 book “Wireless Possibilities” predicted the mobile phone: “In a few years time we shall be able to chat to our friends in an aeroplane and in the streets with the help of a pocket wireless set.”

Looking at this collection of sparkling projections can teach us a lot about current prediction attempts, which are dominated by methodologies claiming scientific rigour, such as “horizon scanning,” “scenario planning,” and “anticipatory governance.” Most of this professional future gazing takes place within government, think-tanks, and corporations, resulting in bland and narrowly targeted projections. But the scientists, writers, and experts who wrote these futurology books produced very individual visions.

They were committed to thinking about the future on a scientific basis, but they were also free to imagine worlds that would emerge for reasons other than corporate or governmental advantage. The resulting narratives are sometimes fanciful, but this whimsy occasionally gets them further than today’s more cautious and methodical projections.

Forecasting future discoveries

Take J B S Haldane, the brilliant mathematical geneticist, whose 1923 book “Daedalus; or: Science and the Future” inspired the rest of the series. It ranges widely across the sciences, trying to imagine what remained to be done in each.

Haldane thought physics had wrapped up most of its mysteries with the theory of relativity and the development of quantum mechanics. The main tasks remaining, it seemed to him, were matters of engineering: faster travel and better communications.

Chemistry, too, he saw as likely to be concerned more with practical applications, such as inventing new flavors or developing synthetic food, rather than making theoretical advances. He also realized that alternatives to fossil fuels would be necessary, and forecast the use of wind power. Most of his predictions have been fulfilled.

It’s chastening, though, how much even such a clear-sighted and ingenious scientist missed, especially in the future of theoretical physics. He doubted nuclear power would be viable. He couldn’t know about future discoveries of new particles leading to radical changes to the model of the atom. Nor, in astronomy, could he see the theoretical prediction of black holes, the theory of the big bang, or the discovery of gravitational waves.

But, at the dawn of modern genetics, he saw that biology held some of the most exciting possibilities for future science. He foresaw genetic modification, arguing that “We can already alter animal species to an enormous extent, and it seems only a question of time before we shall be able to apply the same principles to our own.” If this sounds like Haldane supported eugenics, it’s important to note that he was vocally opposed to forced sterilization, and didn’t subscribe to the overtly racist and ableist eugenics movement that was in vogue in America and Germany at the time.

A stack of To-Day and To-Morrow volumes. Max Saunders

The development that caught the eye of so many readers was what Haldane called “ectogenesis”—his term for growing embryos outside the body in artificial wombs. Many other futurologists and thinkers took up the idea, the most notable being Haldane’s close friend Aldous Huxley, who was to use it in “Brave New World,” with its human “hatcheries” cloning the citizens and workers of the future. It was also Haldane who coined the word “clone.”

Ectogenesis still seems like science fiction, but the reality is getting closer. It was announced in May 2016 that human embryos had been successfully grown in an “artificial womb” for 13 days—just one day short of the legal limit, which prompted an inevitable ethical row. And in April 2017 an artificial womb designed to nurture premature human babies was successfully trialled on sheep. So even that prediction of Haldane’s may well be realized soon, perhaps within a century after he dreamed it up. Artificial wombs will probably be used first as a prosthesis to cope with medical emergencies, but they could eventually become as routine as caesareans or surrogacy.

Science, then, was not just science for these writers. It had social and political consequences. Many of the contributors to this series were social progressives, in sexual as well as political matters. Haldane looked forward to the doctor taking over from the priest, with science finally separating sexual pleasure from reproduction. In ectogenesis, he foresaw that women could be relieved of the pain and inconvenience of bearing children. As such, the idea could be seen as a feminist thought experiment.

What this reveals is how shrewd these writers were about the controversies and social currents of the age. At a time when too many thinkers were seduced by the pseudoscience of eugenics, Haldane was scathing about it. He had better ideas about how humanity might want to transform itself. While many of those musing on eugenics were really defending white supremacy, Haldane’s motives suggest he’d be delighted at the advent of technologies like CRISPR—a method by which humankind could better itself in ways that mattered, like curing congenital disease.

Alternate futures

Some of To-Day and To-Morrow’s predictions of technological developments are impressively accurate, such as video phones, space travel to the moon, robotics, and air attacks on capital cities. But others are charmingly misguided.

The Dornier Do X was the largest, heaviest, and most powerful flying boat in the world when it was produced by the Dornier company in Germany in 1929. Wikipedia, CC BY

Oliver Stewart’s 1927 volume, “Aeolus or: The Future of the Flying Machine,” argued that British craftsmanship would triumph over American mass production. He was excited by the autogiro—a small aircraft with a propeller for thrust and a freewheeling rotor on top—for which there was a craze at the time. He thought travelers would use those for short-haul flights, transferring for long-haul journeys to flying boats: passenger planes with boat-like bodies that could take off from, and land on, the sea. Flying boats certainly had their vogue for glamorous voyages across the ocean, but disappeared as airliners became bigger and longer-ranged and as more airports were built.

The To-Day and To-Morrow series, like all futurology, is full of such parallel universes. In the rousing 1925 feminist volume “Hypatia or: Woman and Knowledge,” activist Dora Russell (wife of the philosopher Bertrand) proposed that women should be paid for household work. Unfortunately, this has not come to pass (though modern science is at least interested in calculating how traditionally feminine tasks cut into productivity and wellbeing).

The film critic Ernest Betts, meanwhile, wrote in 1928’s “Heraclitus; or The Future of Films” that “the film of a hundred years hence, if it is true to itself, will still be silent, but it will be saying more than ever.” His timing was terrible, as the first “talkie,” The Jazz Singer, had just come out. But Betts’s vision of film’s distinctiveness and integrity—the expressive possibilities open to it when it brackets off sound—and of its potential as a universal human language, cutting across different linguistic cultures, remains admirable.

It’s difficult to guess which of the forking paths before us leads to our real future. In most of the books, moments of surprisingly accurate prediction are tangled up with false prophecies. This isn’t to say that the accuracy is just a matter of chance. Take another of the most dazzling examples, “The World, the Flesh and the Devil” by the scientist J D Bernal, one of the great pioneers of molecular biology. This has influenced science fiction writers, including Arthur C Clarke, who called it “the most brilliant attempt at scientific prediction ever made.”

Bernal sees science as enabling us to transcend limits. He doesn’t think we should settle for the status quo if we can imagine something better. He imagines humans needing to explore other worlds, and to get them there he envisions the construction of huge, life-supporting space stations called biospheres, now named “Bernal spheres” after him. Imagine the International Space Station, scaled up to the size of a small planet or asteroid.

Brain in a vat

When Bernal turns to the flesh, things get rather stranger. A lot of the To-Day and To-Morrow writers were interested in how we use technologies as prostheses, to extend our faculties and abilities through machines. But Bernal takes it much further. First, he thinks about mortality, or more specifically about the limit of our lifespan. He wonders what science might be able to do to extend it.

In most deaths, the person dies because the body fails. So what if the brain could be transferred to a machine host, which could keep it—and therefore the thinking person—alive much longer?

Bernal’s thought experiment is the first elaboration of what philosophers now call the “brain in a vat” hypothesis. Modern discussions of said brains in said vats are usually concerned with questions of perception and illusion (if my brain in a vat were sent electrical signals identical to the ones sent by my legs, would I think I was walking? Would I be able to tell the difference?). But Bernal has more pragmatic ends in view. Not only would his Dalek-like machines be able to extend human brain life, but they’d also be able to extend our capabilities. They would give us stronger limbs and better senses.

Bernal wasn’t the first to postulate what we’d now call the cyborg. It had already appeared in pulp science fiction a couple of years earlier—talking, believe it or not, about ectogenesis.

But it’s where Bernal takes the idea next that is so interesting. Like Haldane’s, his book is one of the founding texts of transhumanism: the idea that humanity should use science to improve itself as a species. He envisions a small sense organ for detecting wireless frequencies, eyes for infrared, ultraviolet and X-rays, ears for supersonics, detectors of high and low temperatures, of electrical potential and current.

With that wireless sense, Bernal imagined how humanity could stay in touch regardless of distance. Even fellow humans across the galaxy in separate biospheres could be within reach. And, like several of the series’ authors, he imagines such interconnection as augmenting human intelligence, producing what science fiction writers have called a hive mind, or what Haldane calls a “super-brain.”

It’s not AI, exactly, because its components are natural: individual human brains. And in some ways, coming from Marxist intellectuals like Haldane and Bernal, what they’re imagining is a particular realization of solidarity—workers of the world uniting, mentally. Bernal even speculates that if thoughts could be broadcast to other minds in this way, then they would continue to exist even after the brain that thought them had died. In this he offers a form of immortality guaranteed by science instead of religion.

Blind spots

Bernal also imagined the world wide web more than 60 years before its invention by Tim Berners-Lee. What neither Bernal nor any of the To-Day and To-Morrow contributors could imagine, though, was the computers needed to run it—even though they were only about 15 years away when he was writing. And it is these inconceivable computers that have so ramped up and transformed early attempts at futurology into the industry it is today.

How can we account for this computer-shaped hole at the center of so many of these prophecies? It was partly that mechanical or “analogue” computers such as punched card machines and anti-aircraft gun “predictors” (which helped gunners aim at rapidly moving targets) had gotten extremely good at calculation and information retrieval. So good, in fact, that inventor and To-Day and To-Morrow author H Stafford Hatfield thought what was needed next was a “mechanical brain.”

A Colossus codebreaking computer, 1943. Wikimedia Commons

So these thinkers could see that some form of artificial intelligence was required. But even though electronics were developing rapidly, in radios and even televisions, it didn’t seem to occur to people that if you wanted to make something that functioned like a brain, it would need to be electronic rather than mechanical or chemical. This was exactly the moment in history when neurological experiments by Edgar Adrian and others in Cambridge began to show that electrical impulses actually made the human brain tick.

Just 12 years later, in 1940—before the development of the first digital computer, Colossus at Bletchley Park—it was possible for Haldane (again) to see that what he called “Machines that Think” were beginning to appear, combining electrical and mechanical technologies. In some ways our situation is comparable, as we sit poised just before the next great digital disruption: AI.

Bernal’s book is a fascinating example of just how far extended future thinking can go. But it also shows where it reaches its limits. If we can understand why the To-Day and To-Morrow authors were able to predict biospheres, mobile phones, and special effects, but not the computer, the obesity crisis, or the resurgence of religious fundamentalisms, then maybe we can catch some of the blind spots in our own forward vision and horizon scanning.

Yesterday and today

The pairing of scientific knowledge and imagination in these books created something unique—a series of hypotheticals lodged somewhere between futurology and science fiction. It is this sense of hopeful imagination that I think urgently needs to be injected back into today’s predictions.

As I have mentioned, computer modeling of the future mainly happens in businesses or organizations. Banks and other financial companies want to anticipate shifts in the markets. Retailers need to be aware of trends. Governments need to understand demographic shifts and military threats. Universities want to drill down into the data of these or other fields to try to understand and theorize what is happening.

To do this kind of complex forecasting well, you have to be a fairly large corporation or organization with adequate resources. The bigger the data pool, the hungrier the exercise becomes for computing power. You need access to expensive equipment, specialist programmers, and technicians. Information that citizens freely offer to companies such as Facebook or Amazon is sold on to other companies for their market research—as many were shocked to discover in the Cambridge Analytica scandal.

The main techniques which today’s governments and industries use to try to prepare for or predict the future—horizon scanning and scenario planning—are all well and good. They may help us nip wars and financial crashes in the bud (though rather obviously, they don’t always get it right either). But as a model for thinking about the future more generally, such methods are profoundly reductive.

They’re all about maintaining the status quo. Any interesting ideas or innovative speculations about anything other than risk avoidance are likely to get pushed aside. The group nature of think-tanks and foresight teams also has a leveling-down effect. Future thinking by committee has a tendency to come out in bureaucratese: bland, impersonal, insipid. The opposite of science fiction.

Which is perhaps why science fiction needs to put its imagination in hyperdrive: to boldly go where the civil servants and corporate drones are too timid to venture. To imagine something different. Some science fiction is profoundly challenging in the sheer otherness of its imagined worlds.

That was the effect of “2001” or “Solaris,” with their imagining of other forms of intelligence, as humans adapt to life in space. Kim Stanley Robinson takes both ideas further in his novel “2312,” imagining humans with implanted quantum computers and different colony cultures as people find ways of living by building mobile cities to keep out of the sun’s heat on Mercury, terraforming planets, and even hollowing out asteroids to create new ecologies as art works.

When we compare To-Day and To-Morrow with the kinds of futurology on offer nowadays, what’s most striking is how much more optimistic most of the writers were. Even those, like Haldane and Vera Brittain (the author of a superb 1929 volume on women’s rights), who had witnessed the horrors of modern technological war saw technology as the solution rather than the problem.

Imagined futures nowadays are more likely to be shadowed by risk and anxieties about catastrophes, whether natural (asteroid collision, mega-tsunami) or man-made (climate change and pollution). The damage industrial capitalism has inflicted on the planet has made technology seem like the enemy. Certainly, until anyone has any better ideas, reducing carbon emissions, energy waste, pollution, and industrial growth seems like our best bet for survival.

Imagining positive change

The only thing that looks likely to convince us to change our ways is the dawning conviction that we have left it too late; that even if we cut emissions to zero now, global warming has almost certainly passed the tipping point and will continue to rise to catastrophic levels regardless of what we do to try to stop it.

That realization is beginning to generate new ideas about technological solutions, like ways of extracting carbon from the atmosphere or of artificially reducing sunlight over the polar ice caps. Such proposals are controversial, and sometimes attacked as encouragement to carry on with Anthropocene vandalism while someone else clears up our mess.

But they might also show that we are at an impasse in future thinking, and are in danger of losing the ability to imagine positive change. That too is where comparison with earlier attempts to predict the future might be able to help us. Where the modernism of the 1920s and 30s was very much oriented towards the future, we are more obsessed with the past, with nostalgia. Ironically, the very digital technology that came with such a futuristic promise is increasingly used in the service of heritage and the archive. Cinematic special effects are more likely to deliver feudal warriors and dragons than rockets and robots.

But if today’s futurologists could get back in touch with the imaginative energies of their predecessors, perhaps they would be better equipped to devise a future we could live with.

Max Saunders is a Professor of English at King’s College London.
