The polls were correct, we just didn't understand the question.
To me that looks pretty accurate – certainly well within the expected margins of error.
The problem is, of course, that in our First Past the Post system, seats aren’t allocated according to the total percentage, but by the winner in different electoral regions. The relationship between the percentage vote and the number of seats a party actually wins is both complex and messy. As such, the polls are (still) a good predictor of a party’s overall vote, but a poor indicator of how many seats it will win.
This seems pretty obvious, but given that this has always been the case, why has it suddenly become an issue in this election? Not least because we’ve moved away from a two-party system: with various smaller parties becoming more significant, the discrepancy between the overall percentage vote and seats is much greater this time, as many commentators have observed:
(source: http://i100.independent.co.uk/)
Some will argue that this is necessary to ensure we have definitive winners and strong parliaments, whilst others will argue that it is unfair, that our electoral system needs reform, and that coalition parliaments are more democratic rather than weak.
However, the real problem lies in a misunderstanding of what the polls tell us when there is no clear leader in the overall percentage vote.
If a party has a 20% lead in the polls, it doesn’t mean it will get 20% more seats – it is instead a good indicator that the party will win a majority of the seats.
A 40% lead, rather than indicating a larger majority, really indicates that you can be more confident that the party will win a majority of seats; on the other hand, a 10% lead means the prediction is less reliable.
Hence, a poll with the two leading parties running neck and neck does not tell us that we will get a hung parliament (as the media constantly reported), but rather that the poll’s ability to make a prediction is extremely weak (admittedly the media also talked about the uncertainty of the outcome, but not quite for the right reasons).
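The relationship between lead size and confidence can be sketched with a toy Monte Carlo simulation. Everything here – 100 seats, the noise levels, the two-party simplification – is an illustrative assumption of mine, not real polling data; the point is only the shape of the relationship, not the numbers.

```python
import random

random.seed(0)

def majority_probability(lead, n_seats=100, national_sd=0.03,
                         local_sd=0.08, trials=2000):
    """Estimate P(party A wins a majority of seats) given a national lead.

    lead        -- party A's national vote-share lead over B (e.g. 0.10 = 10%)
    national_sd -- assumed shared polling error on the national figure
    local_sd    -- assumed seat-to-seat variation around the national picture
    """
    wins = 0
    for _ in range(trials):
        # One shared error term: all seats swing together with the nation.
        true_lead = random.gauss(lead, national_sd)
        # A seat goes to party A if its local lead is positive.
        seats_a = sum(1 for _ in range(n_seats)
                      if random.gauss(true_lead, local_sd) > 0)
        if seats_a > n_seats // 2:
            wins += 1
    return wins / trials

for lead in (0.00, 0.02, 0.10, 0.20):
    print(f"national lead {lead:4.0%} -> "
          f"P(seat majority) ~ {majority_probability(lead):.2f}")
```

Under these assumptions, a tied poll gives a majority probability near a coin flip (i.e. the seat outcome is essentially unpredictable), while a large lead pushes the probability towards certainty without the seat *count* growing proportionally – which is the sense in which a bigger lead buys confidence rather than a bigger majority.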
It remains to be seen what influence the polls (or rather the misinterpretation of the polls) had on how people voted, and whether this had a significant impact – for example, how many people voted tactically on the assumption that the outcome would be a hung parliament.
One notable feature of all the polls, however, is that the SNP was lumped together in the “Others” category, which may explain why its performance came as such a surprise to the media and politicians.
The pollsters take this factor into account: they pronounce on the number of seats (as that is what matters), not just the share of the vote. The exit poll is still a poll, and it was very close.
The exit polls work because they poll the “right” thing. The polls leading up to the election poll quite a different thing, and the pollsters then extrapolate a guess in terms of seats. My main point is that when there is a large difference between the leads in the (non-exit) polls, the confidence is higher than when there is a small lead. An insignificant lead indicates not that the parliament will be hung, but that the number of seats cannot be predicted from that type of poll.
The fact that the pollsters failed to predict the number of seats supports my point.
The parties poll on a seat by seat basis too, and (apparently) didn’t see the result either, so I’m not sure your main point carries?
Do the parties poll all the seats or just key marginals to decide where to focus their campaign efforts? If the latter, then they too would be reliant on the % polls for the overall picture.
If the parties poll all the seats (or a large majority), do they do so in one go, or gradually over the campaign? If the latter, then as it is a flawed method (even if it might have given a correct answer in this case), any discrepancies with the % polls and the general consensus would probably be discounted as anomalous.
They focus on the marginals – there are usually only around 100 (more like 150 this time) – and do so on a rolling basis. There were (I hear) still lots of surprise results, even in well-polled seats like Cardiff North (I live there): Labour, Tory and public polls all gave it to Labour, but it went to the Tories (it was a slim Tory majority last time). The same thing happened in Ed Balls’ former seat (although obviously this changed hands). Pollsters already adjust for shy Tories etc., so it is still (to me) surprising they were quite a bit out in quite a number of seats. CCHQ thought they would get 290, and they are normally seen as being overly optimistic. It is true that this was a much harder election to call than most since 1992, but even that poll (which predicted a small Labour majority) was closer to getting the number of Tory seats right than this one (and the pollsters thought they had learnt their lesson then!)
How did the BBC Poll of Polls give a higher percentage to UKIP than any of the others? Or was it based on a longer time period?
Nope, just a transcription error on my part – now fixed.
Having worked with market research data for nearly 20 years, I have a pretty good eye for spotting those kinds of typos! I personally don’t think the polls were that great – generally understating the Tories by 3 points and overstating Labour by 3–4. The Ipsos MORI poll wasn’t too bad, apart from the Greens.