Humans, not AIs, will save us from the endless slurry of fake news

Artificial intelligence isn't yet discerning enough to intervene in the bot war damaging our democracy

In 2019, many of the first drafts of history will be written by artificial intelligence. Rather than spending tens or hundreds of hours synthesising information from thousands of sources, analysts will have a personalised AI that generates written briefings for them in minutes, auto-updating as data inputs change. AI will become a core layer of the stack.

That’s the good news. The bad news is that these same technologies are also very good at generating propaganda and disinformation – meaning they are on the verge of pressure-testing some of our most closely held democratic processes and norms. The bot-generated propaganda we saw in the 2016 US presidential election was primitive at best; to the extent there was any automation at all, it was crude. In 2019, AI will allow content to be targeted, personalised and optimised to prey on our anxieties and hijack our attention for maximum political advantage. The end result could be enormous – and possibly irreparable – disruption of democratic processes.

Text has always been one of the final frontiers of AI, because building machines that read and write is hard. But the same neural networks used to generate text in high-value commercial contexts can also be spun into engines for fake news. Technologically, it’s easier to build a computer that writes something fake than one that writes something true.
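
Why? Because fluent text requires only pattern-matching against a corpus, while truthful text requires checking claims against the world. As a toy illustration, the sketch below uses a word-level Markov chain, a crude stand-in for a neural language model, trained on an invented two-sentence corpus; everything about it is made up for the example, but it shows how a generator can produce confident, grammatical-sounding output with no notion of truth at all.

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: a crude stand-in for a neural text model.
corpus = ("the minister said the report showed the economy grew "
          "the report said the minister knew the economy shrank").split()

# Record, for every word, the words that follow it in the corpus.
chain = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    chain[current].append(nxt)

# Generate by repeatedly sampling a plausible next word.
random.seed(3)
word, output = "the", ["the"]
for _ in range(12):
    followers = chain.get(word)
    if not followers:           # dead end: the corpus ends here
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))         # fluent-sounding, and unmoored from any fact
```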

As the complexity of our world increases, so too does human susceptibility to manipulation. Imagine a scenario in which thousands of different anti-gun-control or pro-fascism headlines are written by an AI trained to understand the key beliefs of each faction. The headlines are auto-matched with highly graphic images, tested on social media to determine the most viral combinations and then personalised using the Likes, tweets and digital breadcrumbs that users leave scattered across the internet.

As these messages propagate through the network, they bring together like-minded people who then radicalise others along the way. The predictable mathematical patterns of opinion formation take hold and shape the dominant narratives. Repeat, speed this up to superhuman scale, and we have a new form of warfare.
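
Those predictable patterns are well studied. The sketch below is a minimal, illustrative version of one classic formalism, DeGroot-style opinion averaging, with an invented random network and a handful of automated accounts that never update their position; the population size, network density and bot count are all made up for the example. Even a tiny fixed-opinion minority eventually drags the whole network toward its view.

```python
import numpy as np

# Minimal DeGroot-style opinion dynamics: each person repeatedly moves
# toward the weighted average opinion of the people they listen to.
rng = np.random.default_rng(0)
n = 100                                   # population size (illustrative)
opinions = rng.uniform(-1, 1, size=n)     # opinions on a -1..+1 scale

# Random sparse "who listens to whom" network, rows normalised to weights.
adjacency = (rng.random((n, n)) < 0.05).astype(float)
np.fill_diagonal(adjacency, 1.0)          # everyone keeps some self-weight
weights = adjacency / adjacency.sum(axis=1, keepdims=True)

# Five automated accounts that never update and always push +1.
bots = rng.choice(n, size=5, replace=False)

for _ in range(200):
    opinions = weights @ opinions         # average those you listen to
    opinions[bots] = 1.0                  # bots hold a fixed extreme position

# The mean starts near 0; the stubborn minority pulls it towards +1.
print(f"population mean after 200 rounds: {opinions.mean():+.2f}")
```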

This year we discovered that Russia had spent $1.25 million per month on influence campaigns in the run-up to the US presidential election. Imagine what the combination of $100 million and even the halfway-decent AI we have at the moment will be capable of. The science and the computing power are there and, in some cases, the core algorithms and code to execute this at scale are available to download directly onto a laptop.

Armed with these technologies, a small group of well-funded people will be able to launch an attack that ultimately breaks democracy, driving people to hate each other more than they love their country. Nor should we expect AIs to come to the rescue and solve this problem for us: the current world-leading algorithms perform little better than random on real-world “real news vs. fake news” classification tasks.
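
The difficulty is structural, not just a question of bigger models. A classifier sees only the text, never the world it describes. The sketch below frames the task the way most published systems do, as supervised text classification; the four headlines, their labels and the bag-of-words baseline are invented placeholders. Everything such a model can learn lives in surface statistics, so a falsehood written in sober newsroom prose sails straight past it.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical corpus standing in for a labelled news dataset.
texts = [
    "Central bank raises interest rates by a quarter point",   # real
    "Senate committee publishes annual budget report",         # real
    "Miracle pill lets you read minds, doctors furious",       # fake
    "Secret memo proves moon landing was filmed in a studio",  # fake
]
labels = [1, 1, 0, 0]  # 1 = real, 0 = fake

# Bag-of-words baseline: word statistics only, no fact-checking anywhere.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model scores style, not truth: a fabrication phrased like a wire
# report will be waved through as "real".
print(model.predict(["Officials confirm sea levels fell by three metres"]))
```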

One of the reasons computational propaganda has been so successful is that the naïve, popularity-based filtering systems employed by today’s leading social networks have proven fragile and susceptible to targeted fake-information attacks.
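
That fragility is easy to reproduce in simulation. The sketch below implements the simplest possible popularity-only ranker, a rich-get-richer feedback loop in which users preferentially upvote whatever is already popular; the item count, audience size and scale of the fake push are invented for illustration. A seed of 20 fake votes, negligible next to the organic audience, is routinely enough to carry an arbitrary item to the top of the feed.

```python
import numpy as np

# Popularity-only ranking with rich-get-richer feedback: each organic
# user upvotes an item with probability proportional to its vote count.
rng = np.random.default_rng(7)
n_items = 50
votes = np.ones(n_items)        # every item starts with a single vote

# A coordinated botnet gives one arbitrary item a small head start.
target = 13
votes[target] += 20             # 20 fake accounts

# 10,000 organic users then follow the popularity signal.
for _ in range(10_000):
    item = rng.choice(n_items, p=votes / votes.sum())
    votes[item] += 1

rank = np.argsort(votes)[::-1].tolist().index(target) + 1
print(f"seeded item finishes at rank {rank} of {n_items}")  # almost always 1
```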

To solve this problem, we will need to design algorithms that amplify our intelligence when we’re interacting together in large groups. The good news is that the latest research into such systems looks promising. A team from University College London has shown that when you break a large group into many smaller groups and then allow human-to-human debate within those smaller groups, you can improve collective intelligence (more effective decision-making through collaboration) by up to 30 per cent.
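
The exact protocol matters less than the shape of the idea: collect verdicts from many small deliberating groups instead of pooling every individual directly. The toy simulation below is not the UCL team’s experiment; it is a sketch under invented assumptions (a numeric estimation task, debate modelled as each group converging on its median member, and ten per cent of the crowd anchored on a planted wrong figure), but it shows why the split-then-aggregate structure resists injected misinformation better than one big pool.

```python
import numpy as np

# Toy estimation task: a crowd of 1,000 guesses a quantity whose true
# value is 100. Most guesses are honest but noisy; 10 per cent have been
# anchored on a wildly wrong planted figure.
rng = np.random.default_rng(42)
truth = 100.0
honest = truth + rng.normal(0, 5, size=1000)
fooled = rng.random(1000) < 0.10
guesses = np.where(fooled, 600.0, honest)

# One large crowd, simple averaging: the planted figure drags the mean.
big_crowd_error = abs(guesses.mean() - truth)

# Small-group variant: split into groups of five, model debate as each
# group settling on its median member (extreme claims get argued down),
# then average the group verdicts.
group_verdicts = np.median(guesses.reshape(-1, 5), axis=1)
small_group_error = abs(group_verdicts.mean() - truth)

print(f"one large crowd, pooled mean: error ~ {big_crowd_error:.1f}")
print(f"debate in groups of five:     error ~ {small_group_error:.1f}")
```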

At MIT, Sandy Pentland and colleagues have shown that an improvement in collective intelligence can be achieved by designing social networks programmed to change dynamically over time. Other researchers have achieved similar results by designing networks that routinely expose us to viewpoints contradicting our own, or which periodically isolate users so they can make decisions and form opinions without influence from others.

Insights such as these will help us design a new topology for the networks we use to share information, and that in turn will strengthen democracy, which is ultimately an exercise in collective intelligence. In 2019 this endeavour will continue: while we wait for AI to become smart enough to tell truth from fiction, we will set about enhancing our collective intelligence by designing systems that allow for better human-to-human interactions online.

Sean Gourley is founder and CEO of Primer

This article was originally published by WIRED UK