Polling post mortem and why understanding research matters
About the author
Paul Noble FCIPR is co-author of Evaluating Public Relations and contributed the research and evaluation chapter to Exploring Public Relations.
The opinion polling industry blames plenty of people for last year's polling debacle at the general election: voters have been accused of being too old, too young or too busy, and the media of being too miserly.
However, this week's preliminary post mortem, commissioned by the industry and chaired by Patrick Sturgis, Professor of Research Methodology, comes to a surprisingly simple conclusion: the polling organisations' samples were unrepresentative.
To the uninitiated among us, that seems a particularly obvious, basic error. So what's going on – or rather, what went on?
For the polling companies, election polling is high profile but commercially it is small beer. Media sponsors insist on both low budgets and a quick (24-hour) turnaround. This combination of low cost and high speed was an accident waiting to happen. Quota sampling is traditionally used for polling, as it is relatively quick and cheap. In contrast, the Office for National Statistics, for example, uses random sampling – but that can require knocking on the same door up to 11 times to get a response from those in the sample.
Three specific problems with the methodology employed by the election polls have emerged. Older voters, who are more likely to vote Tory – and more likely to vote at all – were under-represented in online panels. Younger voters, who tend to prefer Labour, are less likely to vote. In addition, Labour voters tend to be easier to reach than 'busy' Tory voters. The overall result is that Labour voters were over-represented in the samples.
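To make that mechanism concrete, here is a minimal sketch in Python. The vote shares and response rates are invented purely for illustration – they are assumptions, not figures from the inquiry – but the sketch shows how a group that is harder to reach ends up under-represented and the headline figures shift accordingly:

```python
# Illustrative sketch only: invented numbers, not figures from the Sturgis inquiry.
# It shows how differential response rates across groups skew a poll estimate
# even when the underlying population is fixed.
import random

random.seed(1)

# Hypothetical population: 40% intend to vote Tory, 34% Labour, 26% other.
population = (["Tory"] * 40 + ["Labour"] * 34 + ["Other"] * 26) * 1000

# Assumed response probabilities: 'busy' Tory voters are harder to reach.
response_prob = {"Tory": 0.4, "Labour": 0.6, "Other": 0.5}

# The achieved sample keeps only those who respond.
sample = [v for v in population if random.random() < response_prob[v]]

for party in ("Tory", "Labour"):
    true_share = population.count(party) / len(population)
    polled_share = sample.count(party) / len(sample)
    print(f"{party}: true {true_share:.1%}, polled {polled_share:.1%}")
```

Run as written, the polled Labour share comes out well above its true value and the Tory share well below it – essentially the pattern the post mortem describes.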
The more cynical among us might smell the whiff of conspiracy in another of Professor Sturgis's findings. He commented on 'the lack of variability across the polls': virtually all the polling companies forecast that the result was too close to call. To be fair, he does stress that there was no evidence of anything inappropriate going on. However, suggestions of 'herding', where different companies employ similar methodologies, have not been ruled out.
Undoubtedly, we will all learn lessons for the future. For me, it is that we should not automatically ignore 'outliers': the Labour Party in particular undertook some limited private polling that contradicted the public polls, but the party was seduced by the sheer volume of contradictory public polling. All of us should reflect that, however well funded and professionally managed primary research may be, there is always the possibility of a rogue result, or even results, if the polling companies are hunting in packs.
Sturgis's final report will be published in March, and it is anticipated that he will call for less quantity and more quality. I might simply repeat the old adage: 'You get what you pay for'.