04 Aug

How Did the Pollsters Get Things So Wrong at the 2015 General Election?

Posted By Politybooks

Although the 2015 General Election will be remembered for ushering into office the first majority Conservative government since New Labour swept John Major from power back in 1997, it also heralded a distinct sense of déjà vu for those whose memories stretched back just a tad further.

This author’s first opportunity to cast a ballot at a general election came in 1992; a failure in registration having denied him the vote in 1987. Anyone who lived through the 1992 General Election – or studied it since – will recall that it was heralded as a watershed. The contest was, it was said, ‘Labour’s to lose’; and they duly obliged. The party’s fourth consecutive general election defeat resulted in a period of reflection and internal debate that ultimately coalesced into the New Labour project and all that went with it – a period of soul-searching not so very far removed from that underway at the time of writing, with Jeremy Corbyn challenging the Blairite orthodoxy and looking to re-focus the party in the wake of its catastrophic day in May.

There is, however, another way in which coverage of the outcome of the recent general election could be said to jog memories of 1992: namely, the failure of the major polling organisations to accurately predict the outcome of the contest – and the shock prompted both by that failure and by the result itself (see Chapter 11, pp. 309–10).

Labour had been ahead in the polls throughout the 1992 campaign. The last few polls ahead of Election Day suggested that the party would be returned to office with a majority of around 20 seats. Whilst the delayed analysis of the exit polls conducted on the day of the election appeared to predict a hung parliament, there was no suggestion that the Conservatives would win the 21-seat majority that they had, in fact, secured. The average predicted lead for Labour of 1.3% had somehow morphed into a 7.6% lead for the Tories. The polling error (at 8.9%) was far in excess of the 3.3% average seen across the three previous general elections.
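
For readers who want to see where the 8.9% figure comes from, the short sketch below (a simple illustration rather than the pollsters’ own methodology) treats the polling error as the gap between the predicted and the actual Conservative lead over Labour, using the 1992 figures quoted above.

```python
# Simple illustration: polling error measured on the Conservative-Labour lead.
# Inputs are the 1992 figures quoted above (final-poll average vs. actual result).

predicted_lead = -1.3   # final polls on average: Labour ahead by 1.3 points
actual_lead = 7.6       # actual result: Conservatives ahead by 7.6 points

# The error on the lead is the distance between prediction and outcome.
polling_error = actual_lead - predicted_lead
print(f"Polling error on the lead: {polling_error:.1f} points")  # 8.9
```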

Commentators offered a number of plausible explanations as to why the pollsters got things so wrong in 1992:

• Deficiencies in the sampling methods employed by the main polling organisations

• The tendency for certain respondents to refuse to answer – or lie when indicating a preference

• The fact that some Labour supporters polled may not in fact have been registered to vote

• Deficiencies in Labour’s campaign (not least the Sheffield Rally) or nagging doubts over the party’s then leader, Neil Kinnock

• The numbers of ‘floating voters’ and the concept of a ‘late swing’

• The antics of the ‘Tory press’ – most notably The Sun (see below)

Regardless of the relative influence of such factors, the failure on the part of the pollsters to foresee the eventual outcome resulted in a review of polling methods and a return to predictions that were well within the normal statistical margins of error in the general elections that followed in 1997, 2001, 2005 and 2010.
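
As a rough guide to what ‘normal statistical margins of error’ means in practice, the sketch below computes the textbook 95% margin of error for a simple random sample. The sample size of 1,000 is an assumption chosen because it is typical of published voting-intention polls, not a figure taken from the elections discussed here.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Textbook 95% margin of error for a simple random sample.

    proportion=0.5 is the worst case; real polls carry further error
    from weighting, non-response and 'house effects'.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A typical published poll of around 1,000 respondents (assumed figure):
print(f"+/- {100 * margin_of_error(1000):.1f} points")  # roughly +/- 3 points
```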

[Image: The Sun]

Explaining polling errors at the 2015 General Election

Perhaps not surprisingly, the failure of pollsters to predict the outcome of the 2015 General Election led commentators to revisit some of the explanations that had been offered in the wake of that earlier polling debacle in 1992. Whilst methods have undoubtedly improved significantly since the early 1990s, the thing that pollsters can never properly factor into their calculations is the possibility that respondents might lie – or simply change their minds. This latter truth results, of course, from the fact that most polls take place days, often weeks, before voters have actually cast a ballot.

Though initial analysis of voting behaviour at the 2015 General Election appeared to suggest that ‘shy Tories’ or a late swing towards the Conservatives might go some way towards explaining the unexpected outcome, subsequent research by the British Election Study (britishelectionstudy.com) pointed instead to the phenomenon of differential turnout – as reported in The Independent (see below).

Lazy, lying Labour supporters to blame for surprise Tory election win, study finds 

‘”Lazy”, lying Labour supporters were the reason why pollsters predicted the 2015 general election result inaccurately, according to research by the British Election Study (BES).

It found that a high proportion of Labour supporters told pollsters they would vote for Ed Miliband but did not turn out on polling day, compared to a much smaller proportion of Conservative backers who did the same.

This helps to explain why polls consistently showed the two parties neck and neck right up until polling day, when the actual result gave a 6.5 per cent lead to David Cameron’s party.’

Matt Dathan, Online Political Reporter, The Independent, Friday 17 July 2015.

Those wishing to explore the BES findings in more detail should read the relevant paper, ‘Why did the polls go wrong?’ by Jon Mellon and Chris Prosser.
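
To see how differential turnout of the kind the BES describes can turn a neck-and-neck poll into a clear lead, the sketch below applies illustrative turnout rates to a notional level poll. The stated intentions and turnout rates are invented for the purposes of the example and are not drawn from the BES data.

```python
# Illustrative only: how differential turnout turns a level poll into a lead.
# The stated intentions and turnout rates below are invented, not BES figures.

stated_intention = {"Conservative": 34.0, "Labour": 34.0, "Other": 32.0}
turnout_rate = {"Conservative": 0.85, "Labour": 0.70, "Other": 0.75}

# Weight each party's stated support by the share of its supporters who
# actually turn out, then re-percentage among those who voted.
voted = {party: share * turnout_rate[party] for party, share in stated_intention.items()}
total = sum(voted.values())
result = {party: 100 * votes / total for party, votes in voted.items()}

for party, share in result.items():
    print(f"{party}: {share:.1f}%")
# A neck-and-neck poll becomes a Conservative lead of several points.
```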

How did the exit poll produced by John Curtice and his team buck the trend in 2015?

Exit polls differ from the regular polls conducted in advance of polling day in that they ask people how they have voted, thus removing at least one of the variables from the equation. Even with that knowledge, however, those charged with commenting live on Curtice’s predictions as part of the rolling TV election-night coverage appeared to have little confidence in his suggestion that the Conservatives would be 78 seats clear of Labour and close to an overall majority, with the LibDems decimated. (The actual result of the election was Conservatives 331, Labour 232, SNP 56, LibDems 8, Others 23.) ‘If this exit poll is right, I will publicly eat my hat on your programme’, was the reaction on the BBC of the Liberal Democrat Paddy Ashdown on hearing the exit poll result. (The final outcome for his party was actually worse than the 10 seats predicted.)

Such scepticism was hardly surprising. After all, there had been more than 700 regular polls in the year leading up to polling day, with only a tiny fraction predicting anything other than a ‘very’ hung parliament – or a marginal Labour victory. This article by Harry Lambert, of the New Statesman’s May2015.com site, looks at Curtice’s approach – specifically, the way in which the election-night exit poll data is analysed and finessed into headline predictions by his six-man team. Those looking for a broader perspective on media coverage of the 2015 General Election are likely to find the Media Standards Trust’s Election Unspun project of interest.
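
For those curious about the mechanics Lambert describes, the sketch below gives a much-simplified, hypothetical version of the general idea: take the change in vote share suggested by exit-poll interviews and project it onto each constituency’s previous result, then count the winners. Curtice’s team in fact models change locally from sampled polling stations and works with probabilities of each party winning each seat; the constituencies and figures below are invented purely for illustration.

```python
# Much-simplified sketch of an exit-poll seat projection: apply an estimated
# change in vote share to each seat's previous result and count the winners.
# (The real exit-poll model estimates change locally and produces win
# probabilities per seat rather than hard calls.)

# Invented previous-election vote shares for three notional constituencies.
previous_result = {
    "Seat A": {"Con": 42.0, "Lab": 38.0, "LD": 20.0},
    "Seat B": {"Con": 35.0, "Lab": 45.0, "LD": 20.0},
    "Seat C": {"Con": 33.0, "Lab": 30.0, "LD": 37.0},
}

# Invented change in vote share estimated from exit-poll interviews.
estimated_change = {"Con": 1.0, "Lab": 1.5, "LD": -15.0}

seat_totals = {}
for seat, shares in previous_result.items():
    projected = {party: share + estimated_change[party] for party, share in shares.items()}
    winner = max(projected, key=projected.get)
    seat_totals[winner] = seat_totals.get(winner, 0) + 1

print(seat_totals)  # e.g. {'Con': 2, 'Lab': 1}
```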

Sound and fury signifying just what? 

However, despite the egg on their clipboards, the regular pollsters need not sign on for jobseeker’s allowance just yet. Notwithstanding the ringing triumph of the exit pollsters in 2015, exit polls can never assist voters in their decision-making, or politicians in their campaign strategies (which may be a good thing!). Such polls are not predictions at all; they aim to tell us what has happened, not what will happen. Perhaps their main use is to give the TV pundits something to talk about over the long late-night and early-morning hours of coverage before the actual results begin to materialise and the elaborate computer wizardry generates pictures of our new political landscape.