
On 18th April 2017 Theresa May announced a snap general election, to take place on 8th June. The announcement came as a surprise and was widely believed to be motivated by the large lead in the polls (approximately twenty points) that Ms May held over her main rival, Labour Party leader Jeremy Corbyn. In calling the snap election at this point, Theresa May has placed considerable confidence in her projected lead in the polls. This is interesting because British election polls have previously been met with a large degree of scepticism and distrust. In this blog post, I briefly explore the British polling experience and highlight the various explanations that have been offered for the UK’s poor track record in predicting election outcomes. I also discuss why the British experience may differ from that of other countries where polling has been more successful.

Polling Misses in the UK

A recent discussion of British polling history is offered by American polling guru Nate Silver on his forecasting website. Silver shows that since 1979, the average polling miss in UK general elections has been six percentage points. That is much larger, as Silver happily points out, than the average polling miss in the US, which is about two percentage points. The most notable polling errors in Britain occurred in the 1992, 2010 and 2015 general elections.

Interestingly, however, the nature of these errors varied considerably from election to election. In 1992, the polls predicted a Labour lead, only for the Conservatives to win a majority on election day. In 2010, the polls severely over-estimated the rise of the Liberal Democrats, whereas in 2015 they under-estimated Conservative support and over-estimated support for the Labour Party.

As polling has become more prominent, and in response to these misses, an academic literature has developed that seeks to explain the errors. The 1992 polling miss has generally been explained by the so-called “Shy Tory” phenomenon. Put simply, voters in 1992 felt embarrassed, or shy, about admitting that they would vote for the Conservative Party. When asked by pollsters how they would vote, these voters answered ‘Don’t Know’ rather than admit to supporting the Conservatives. This phenomenon is also referred to as “social desirability bias” and is a well-known feature of survey research: respondents are reluctant to give answers they think the interviewer, or society at large, might disapprove of. As a result, pre-election polls in 1992 included a large number of ‘Don’t Know’ respondents, most of whom ended up voting for the Conservative Party.
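
To get a feel for how a pool of ‘Don’t Know’ responses can flip a headline lead, consider a toy calculation (the numbers below are invented for illustration and are not the actual 1992 figures):

```python
# Toy illustration of the "Shy Tory" effect; all numbers are invented.
# A poll of 1,000 respondents, with 'Don't Know' answers excluded from
# the headline figures, as was common practice at the time.
poll = {"Conservative": 370, "Labour": 390, "Don't Know": 240}

decided = poll["Conservative"] + poll["Labour"]
print(f"Headline: Con {100 * poll['Conservative'] / decided:.1f}%, "
      f"Lab {100 * poll['Labour'] / decided:.1f}%")   # Con 48.7%, Lab 51.3%

# Now suppose 80% of the 'Don't Knows' are shy Tories.
con = poll["Conservative"] + 0.8 * poll["Don't Know"]
lab = poll["Labour"] + 0.2 * poll["Don't Know"]
print(f"Election:  Con {100 * con / (con + lab):.1f}%, "
      f"Lab {100 * lab / (con + lab):.1f}%")          # Con 56.2%, Lab 43.8%
```

A Labour lead in the headline figures becomes a comfortable Conservative win once the ‘Don’t Knows’ break heavily one way.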

In 2010, pollsters accurately predicted the Conservative seat share but falsely predicted a large increase in seats for the Liberal Democrats. Furthermore, in contrast to 1992, the 2010 polls under-estimated Labour support (Boon and Curtice 2010). A study by Pickup et al. (2011) explores potential explanations for the polling miss and contrasts the results of various polling companies to see whether telephone or internet polls fared better, and whether specific polling methodologies (such as adjusting for likelihood to vote) resulted in better predictions. The authors find this was not the case. Rather, there appears to have been an ‘industry-wide bias’ in which Liberal Democrat support was over-estimated.

Other authors have suggested the cause of this bias might have been “Shy Labour” voters. In contrast to 1992, in 2010 it was Labour voters who were embarrassed to admit their political preferences. This may have resulted in the industry-wide under-estimation of the Labour vote share and the corresponding over-estimation of the Liberal Democrats (Boon and Curtice 2010).

The 2015 general election again surprised the British electorate: polls predicted a neck-and-neck race between Labour and the Conservative Party, only for the latter to win an outright majority on election day. The so-called “polling miss” of 2015 prompted a large-scale public debate and led to an official polling inquiry tasked with identifying the causes of the erroneous predictions. The inquiry’s report is extensive and explores a range of potential causes for the 2015 polling miss. Its main conclusion is that ‘the primary cause of the polling miss in 2015 was unrepresentative samples’ (Polling Inquiry Report – Executive Summary).

Put simply, most polls in 2015 included too many Labour respondents and too few Conservative respondents. The report further notes that there may have been a “late swing” to the Conservative Party, but that this swing cannot account for the size of the polling error.
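
The standard remedy for an unrepresentative sample is weighting: each group of respondents is scaled up or down until it matches that group’s known share of the electorate. A minimal sketch of the idea, with invented numbers (this is not the inquiry’s analysis):

```python
# Post-stratification weighting in miniature; all numbers are invented.
# The sample over-represents the politically engaged relative to the
# electorate, and engagement correlates with vote choice.
sample = [
    # (group, share_of_sample, Conservative_support_within_group)
    ("high engagement", 0.70, 0.35),
    ("low engagement",  0.30, 0.50),
]
electorate_share = {"high engagement": 0.50, "low engagement": 0.50}

raw = sum(share * con for _, share, con in sample)
weighted = sum(electorate_share[group] * con for group, _, con in sample)
print(f"Raw estimate:      Con {100 * raw:.1f}%")       # 39.5%
print(f"Weighted estimate: Con {100 * weighted:.1f}%")  # 42.5%
```

Weighting only works, however, if the relevant skew is measured and corrected for; the inquiry’s point was that the 2015 samples were unrepresentative in ways the standard weighting schemes did not fix.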

Pollsters were unable to vindicate themselves with the Brexit Referendum in 2016. Almost all polls predicted that “Remain” would win, and few, if any, ever showed a lead for the “Leave” campaign. It must be noted, however, that the majority of polls indicated that the final result would be very close.

Why is Polling so Difficult?

Polling is a complex business, and pollsters across the world face several challenges. One is declining response rates. Pollsters used to rely on random-digit dialling, which ensured approximately random samples of respondents. Yet today few people use landline phones, and few respond to anonymous calls from pollsters on their mobile phones. Consequently, much polling has moved to the internet, which presents both challenges and opportunities. An important concern with internet polls is that internet users are not representative of the population as a whole (they tend to be young and male).

The Brexit polls presented their own set of challenges because of the unprecedented character of the vote. In general election polls, pollsters can use information about respondents’ past behaviour to predict, for instance, the likelihood that a voter will turn out. No such information was available for Brexit. Furthermore, because the two main parties were internally split on the issue, even party preferences or key demographics could not be used to gauge whether a particular respondent would vote Leave or Remain.

Finally, part of the problem may lie with the interpretation of the polls, both by the wider public and by professional journalists and political pundits. This issue has been raised in the wake of the Brexit Referendum as well as in response to the 2016 US Presidential Election. The surprised reaction to both events may not do the polls justice: many polls in the lead-up to the British referendum predicted a close race, and likewise many US polls predicted a tight race between Trump and Clinton. [1] It is well known, however, that most people, even experienced political journalists, have trouble accurately interpreting probabilities, and that humans tend to interpret information in the way that best suits their desired outcome.
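
A rough sketch shows why even a correctly polled “close race” leaves plenty of room for an upset. Treat the polling error as normally distributed; the spread used here is an assumption picked for illustration, not an estimate from any forecasting model:

```python
# How a modest polling lead translates into a win probability,
# under an assumed error distribution (illustrative only).
from statistics import NormalDist

lead = 2.0      # leader's polled margin, in percentage points
error_sd = 2.5  # assumed std. dev. of the polling error (illustrative)

# Probability that the true margin is positive, i.e. the leader wins:
p_win = 1 - NormalDist(mu=lead, sigma=error_sd).cdf(0)
print(f"Leader's win probability: {100 * p_win:.0f}%")  # roughly 79%
```

A 79 per cent favourite still loses about one time in five; reading such a number as a certainty is precisely the interpretive error described above.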

British Exceptionalism?

The British history of polling misses does seem exceptional, however, when compared with the track record of pollsters in some other countries. French polls accurately predicted the order of the top four candidates in the first round of the 2017 Presidential Election, and Dutch polls accurately predicted the outcome of the general election of March 2017. Polling in US Presidential Elections is also generally more accurate than polling in the UK. Although the 2016 Presidential Election results were a surprise to some, Nate Silver argues that:

‘final polling averages in the US have missed the presidential popular vote by only 2 percentage points on average. (…) contrary to the conventional wisdom Trump’s performance wasn’t an exception. He beat his national polls by only 1 to 2 percentage points in losing the popular vote to Hillary Clinton.’

Little comparative research exists on why polling in Britain might be more challenging than in other countries, but it is clear that important cross-country differences exist. Compared with the US, one important difference may lie in the resources available to polling organisations. Furthermore, in the US voters’ demographic characteristics correlate more strongly with vote choice than is the case in many European contexts. American pollsters thus have more information to work with than their European counterparts when predicting people’s vote choices.

A final important difference that is often overlooked is the effect of electoral systems on polling. In the UK, voters vote in constituencies, and votes are translated into seats on a first-past-the-post basis. As such, the popular vote share a party receives carries little information about the final division of seats. This is in stark contrast to a system like that of the Netherlands, where voting is entirely proportional and takes place, in effect, in one single nationwide constituency. To improve polling accuracy under these conditions, polling should really be conducted at the constituency level. So far, Lord Michael Ashcroft is the only one to conduct polls in this manner; polling at constituency level requires gathering data across all 650 electoral constituencies, a very resource-intensive task.
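
A toy comparison with invented vote shares makes the point. The same national shares produce very different seat totals depending on whether seats are allocated constituency by constituency or proportionally:

```python
# First-past-the-post vs. proportional allocation; invented numbers.
# Ten equal-sized constituencies: party A wins six of them narrowly
# and loses four heavily, so its support is efficiently distributed.
constituencies = [(0.52, 0.48)] * 6 + [(0.30, 0.70)] * 4  # (A, B) shares

a_national = sum(a for a, _ in constituencies) / len(constituencies)
a_seats_fptp = sum(1 for a, b in constituencies if a > b)

print(f"National vote share: A {100 * a_national:.1f}%, "
      f"B {100 * (1 - a_national):.1f}%")                  # A 43.2%, B 56.8%
print(f"FPTP seats:          A {a_seats_fptp}, B {10 - a_seats_fptp}")  # A 6, B 4
print(f"Proportional seats:  A {round(10 * a_national)}, "
      f"B {round(10 * (1 - a_national))}")                 # A 4, B 6
```

Party A takes a majority of the seats under first-past-the-post while losing the nationwide vote by more than thirteen points, which is why a national vote-share poll alone says so little about the eventual seat count in the UK.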

Snap Decision

A twenty-point lead in the polls may thus have seemed like an irresistible opportunity for the Conservative Party to call an election. But given the history of British polling, it remains to be seen whether PM Theresa May has placed her bet wisely.

 

References

Boon, M. and J. Curtice (2010) ‘General Election 2010: Did the Opinion Polls Flatter to Deceive?’. Available at: https://www.research-live.com/article/opinion/general-election-2010-did-the-opinion-polls-flatter-to-deceive/id/4003088.

Mellon, J. and C. Prosser (2015) ‘Missing Non-Voters and Misweighted Samples: Explaining the 2015 Great British Polling Miss’. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2631165.

Pickup, M., J.S. Matthews, W. Jennings, R. Ford and S.D. Fisher (2011) ‘Why did the polls overestimate Liberal Democrat support? Sources of polling error in the 2010 British general election’, Journal of Elections, Public Opinion and Parties, 21(2), 179-209.

Polling Inquiry Report. Available at: http://eprints.ncrm.ac.uk/3789/1/Report_final_revised.pdf.

Silver, Nate (2017) ‘The U.K. Snap Election Is Riskier Than It Seems’. Available at: https://fivethirtyeight.com/features/the-u-k-snap-election-is-riskier-than-it-seems/.

Notes

[1] This point is also raised by Nate Silver (2017) in defence of the polls against the surprised reaction to Trump’s win, pointing out that: ‘Far from having committed an extraordinary error, the polls had correctly pointed toward a competitive race in the Electoral College, although the pundits had mostly ignored this data.’

This article was first published on the website of the Oxford Q-Step Centre.
