The UK has been to the polls three times in the past three years, and all produced surprise results – in part because the dominant media narrative was shaped by polling that turned out to be off-beam.

In the last few months opinion polls have suggested that the Tories and Labour are neck and neck, with much media chatter about why Labour is failing to capitalise on the torrid time their opponents are experiencing at Westminster.

Given recent history, it’s reasonable to ask whether this narrative also fails to reflect what is going on out there on the ground.

Recall that in 2015 the Tories won an unexpected majority in the general election – polls had forecast a hung Parliament for most of the six-week campaign.

The following year Leave won a narrow victory in the EU referendum, with polls suggesting a Remain win for most of the race, before narrowing late on.

Last year’s election polls showed a clear lead for the Conservatives which led to widespread media anticipation of a landslide. Instead it was a humiliation for Theresa May and her party, which lost its majority.

Polling reliability is a pressing question because the UK could face another election this year. The Conservatives are propped up in office by the DUP, while Northern Ireland's politics remain mired in uncertainty with the Assembly still suspended.

Meanwhile, the next six months are set to be the crunch period for Brexit, with lead EU negotiator Michel Barnier needing a deal he can take to the EU27 capitals by October. That process could quite possibly tear apart the Conservative Party’s fragile truce, which has seen its Remain and Leave supporters by and large helping their government to limp on.

The prospect of an early election is being considered at senior levels within the party; the Tories recently began to canvass their members for funding to hire campaign managers, and we hear Conservative Campaign Headquarters is ‘on a war footing’.

So, is there reason to be wary about what we think the polls are telling us? Probably, yes.

Here’s a previous piece looking at why political polls in general should be treated with caution. Now let’s look at what went wrong in the last three years in particular.

In simple terms, polling involves a two-step methodology – first asking a representative sample of the population how they intend to vote, then using modelling to reflect each respondent's varying likelihood of actually voting on the day. Both of these steps require pollsters to make subjective judgments.
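The two steps above can be sketched in code. This is a minimal, illustrative toy – the parties, respondents and turnout probabilities are invented, and real pollsters use far richer demographic weighting – but it shows how turnout modelling changes the headline figure:

```python
# Toy sketch of turnout-weighted polling, assuming each respondent
# comes with an estimated probability of actually voting.
# All names and numbers are invented for illustration.

def weighted_vote_share(responses):
    """responses: list of (party, turnout_probability) tuples.

    Returns each party's share of the turnout-weighted vote.
    """
    totals = {}
    for party, turnout_prob in responses:
        # Each response counts in proportion to how likely
        # that respondent is to turn out on polling day.
        totals[party] = totals.get(party, 0.0) + turnout_prob
    weight_sum = sum(totals.values())
    return {party: w / weight_sum for party, w in totals.items()}

sample = [
    ("Labour", 0.9), ("Labour", 0.4),          # keen and reluctant supporters
    ("Conservative", 0.8), ("Conservative", 0.85),
]
shares = weighted_vote_share(sample)
```

With these invented numbers, the raw sample is a 50-50 tie, but the turnout weighting tips the estimate towards the Conservatives – exactly the kind of subjective judgment (who will really vote?) that tripped pollsters up in 2016.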

In 2015 pollsters spoke to too many Labour supporters and did not find enough Tories.

In 2016, pollsters found plenty of Leave supporters but discounted their likelihood of going to the polls, because they rarely voted in general elections. It turned out they were far more motivated by having a say on EU membership than by the usual party politics.

Last year is perhaps the most interesting case because the explanation of what went wrong is more complicated. At the time I cautioned the situation might not be as clear-cut as it appeared. It seems as though the earthquake of Brexit, combined with all parties facing shifting dynamics in their traditional heartlands, made it very difficult to predict how voters would behave in any one individual constituency.

“In the absence of a uniform national swing, traditional polling models become quite unstable and don’t capture local divergences,” says Will Jennings, professor of political science at the University of Southampton.

This locally-fluctuating environment has been rare in previous decades’ elections, though perhaps it is set to become more common given the shifts underway in UK politics.

Another factor was that this time pollsters did a pretty good job at counting Tories, but did not find enough Labour voters.

The combination resulted in a dramatic night for TV viewers – and a shock for Westminster.

After three misses, should traditional pollsters do things differently?

Here’s one thing they should contemplate: not everyone got the 2017 election wrong. An experimental methodology run by YouGov, separately from its main polling operation, was startlingly effective. The model designed by Ben Lauderdale, a professor at the London School of Economics, and YouGov’s data scientists worked in the opposite way to traditional political polls.

Polls seek a representative sample at national level; election forecasters can then use that to make predictions about the number of seats each party will win.

YouGov turned that approach on its head – it drew up socio-demographic profiles of different types of voter and constituency, polled on the basis of those types, produced estimates for individual seats and then aggregated that up into a national total for each party.

As YouGov founder Stephan Shakespeare wrote shortly after the model was launched last May:

“Our research can tell us the voting intention of a voter who is aged A, has an income of B, education level of C and has attitudes to various issues of D, E, F and G. Our national sample will have lots of these people. We can project likely constituency level outcomes based on the profile of each place.”
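The aggregation Shakespeare describes can be sketched roughly as follows. Everything here is invented for illustration – the constituency profiles, the linear relationship and its coefficients are not YouGov's, whose actual approach (multilevel regression with post-stratification over far more variables) is much more sophisticated:

```python
# Toy sketch of seat-level forecasting: estimate each constituency's
# result from its demographic profile, then aggregate the winners into
# a national seat count. Profiles and coefficients are invented.

CONSTITUENCIES = {
    # name: (student share of population, Remain vote share) -- invented
    "Seat A": (0.25, 0.55),
    "Seat B": (0.05, 0.40),
    "Seat C": (0.15, 0.60),
}

def predict_labour_share(students, remain):
    # Invented linear relationship: younger, Remain-leaning seats
    # swing left. A real model would be fitted to polling data.
    return 0.25 + 0.5 * students + 0.3 * remain

def national_seat_count(constituencies):
    seats = {"Labour": 0, "Conservative": 0}
    for name, (students, remain) in constituencies.items():
        labour = predict_labour_share(students, remain)
        winner = "Labour" if labour > 0.5 else "Conservative"
        seats[winner] += 1
    return seats
```

The key design point is the direction of inference: rather than scaling one national number down to 650 seats via a uniform swing, the model builds 650 local estimates up into a national total – which is why it could flag outliers such as Canterbury that a uniform swing would miss.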

First published a month out from the election, its forecast of a hung parliament was met with incredulity by much of Westminster.

YouGov and Prof Lauderdale had the last laugh, however. Their model worked astoundingly well, accurately anticipating upsets in extremely unlikely seats such as Canterbury, which has been Tory-held since its creation in 1918 but was taken in 2017 by Labour. The city’s characteristics as relatively urban, Remain-supporting in the EU referendum, and with a large student population all made it ripe for a leftward swing, according to the YouGov researchers.

In particular YouGov’s model was better at capturing the upswing of support for Labour than traditional polls were. Prof Lauderdale attributes this to the model’s ability to identify the type of seats in which voters would swing towards Labour – and away from them. In a very heterogeneous polling environment, its focus on local socio-demographic profiles enabled it to pinpoint where the local political earthquakes would happen.

It is not clear whether YouGov’s success was a fluke – we will only know that if they run it again in future elections – but Prof Lauderdale did not think so. The day after the election, he wrote:

“It is possible for a pollster to get lucky on a national vote share or seat prediction, even if the underlying sample data and analysis are not sound. Similarly, even sound methodologies will be unlucky sometimes. But we are confident that the model’s success was not just luck: it is practically impossible to get lucky on 632 constituency predictions and the model correctly identified the winner in 93 percent of the seats. The model did not just get the winner correct, in most cases the estimated party vote shares were accurate to within three or four per cent.”

Unfortunately YouGov is not currently running that model – it was separate from the firm's regular polling operation – so we cannot compare its findings to what the polls are currently telling us. We have to hope that they boot it up next time UK voters are asked to go to the polls.

In its absence, we asked a leading pollster whether it was feasible that support for Labour is being systematically underestimated.

He said yes, it was theoretically possible, by up to five or six percentage points. This could be because pollsters are not contacting enough Labour supporters, or because Labour supporters who were previously reluctant to turn out may prove more motivated in future.

The only way we’re likely to be able to test that thesis, though, is if the UK holds yet another general election.

Copyright The Financial Times Limited 2024. All rights reserved.