Note that this post was written before the final count, which concluded on May 29, 2013. The final count changed the results slightly: the Liberals won 49 seats and 44.1% of the vote, while the New Democrats took 34 seats and 39.7% of the vote. The riding that flipped to the New Democrats was Coquitlam-Maillardville, which the projection had originally forecast for the NDP. That increased the accuracy to 82.4%, or 70 out of 85 ridings. The Electoral Track Record has been updated to reflect the final counts, but the post below has not been.
Now that the dust has settled a little and those in the polling industry (along with myself) have had some time to reflect on Tuesday's results in British Columbia, it is time to take a look at how the projection model performed. But I'd also like to discuss the methodological debate in Canadian polling, how this site has approached it, and the future of this site within the context of a plummeting faith in polling.
The model did about as well as it could, considering how different the election's results were from the final polls of the campaign. The model is not capable of second-guessing the polls to the extent that it could have predicted an eight-point NDP lead turning into a five-point Liberal win.
The forecast ranges were included to try to estimate how badly the polls could do if another Alberta-like scenario played out, and aside from the NDP falling two points below the forecasted low, the ranges captured all of the vote and seat results at the provincial level. They were not, however, able to capture the performance of the Liberals and New Democrats in metropolitan Vancouver and in the Interior/North, demonstrating just how unbelievably well the Liberals did in these two parts of the province. Their vote came out in huge numbers there (and/or the NDP's stayed home), and the Liberals won the election.
Of course, the forecast ranges are absurdly wide. But that is more a reflection of how unpredictable elections have become in Canada: the ranges are absurdly wide, and yet still needed.
The parties did about as well as expected on Vancouver Island, however. If turnout was one of the factors explaining why the polls missed the call, the Liberal ground game did its work in the rest of the province, while Vancouver Island was left to the NDP.
In all, the seat projection made the right call in 69 of 85 ridings for an accuracy rating of 81.2%, while the potential winner was correctly identified (by way of the projection ranges) in 73 of 85 ridings, for a rating of 85.9%. This shows how the election was really won in just 12 ridings, as the projection ranges (which did not consider a Liberal victory likely) only missed those 12.
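For those who want to check the arithmetic, the accuracy ratings are simple proportions of the 85 ridings. A minimal sketch in Python (the helper function is illustrative, not part of the projection model itself):

```python
# Accuracy ratings as reported above: correct calls as a share of
# the 85 ridings. This helper is illustrative, not part of the model.

def accuracy_rating(correct: int, total: int = 85) -> float:
    """Return the share of correct calls as a percentage."""
    return 100.0 * correct / total

print(f"Right call: {accuracy_rating(69):.1f}%")           # 81.2%
print(f"Winner within range: {accuracy_rating(73):.1f}%")  # 85.9%
```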
Metropolitan Vancouver was where the election was primarily won. The projection gave the NDP between 45% and 51% of the vote and the Liberals between 36% and 41%. Instead, the Liberals took 46.1% of the vote in the region (as this site defines it) to only 40.4% for the NDP. The Liberals won 24 of the 40 ridings, instead of the 14 to 16 they were expected to win.
The Interior/North was also a major factor in the Liberals' victory. They were expected to win the region with between 38% and 45% of the vote, narrowly beating out the NDP at between 37% and 45%. This gave the Liberals between 12 and 22 seats and the NDP between 9 and 16. Instead, the Liberals won 24 seats with 48.2% of the vote, while the NDP won only 7 seats with 35.4% of the vote.
On Vancouver Island, the NDP won 11 seats, the Liberals two, and the Greens one. The projection did not give the Greens any seats, but expected 11 to 14 for the NDP and 0-3 for the Liberals. The NDP was expected to take between 44% and 53% of the vote, the Liberals between 27% and 35%, and the Greens between 10% and 17%. The NDP actually took 43.9% to 34.2% for the Liberals and 17.2% for the Greens. It would seem that some of the Conservative vote (they took 4%) went to the Liberals and some of the NDP vote went to the Greens, but overall the island played out mostly as forecast.
As usual, the seat projection model was not at fault. If the polls had been accurate, the model would have projected 49 seats for the B.C. Liberals and 36 for the B.C. New Democrats, mirroring the result closely. The ranges would have been 37 to 57 seats for the Liberals and 27 to 46 for the NDP, while up to one Green and two independents would have been projected.
The right call would have been made in 76 of 85 ridings, for an accuracy rating of 89.4%, while the potential winner would have been correctly identified in 81 of 85 ridings, for a rating of 95.3%. The challenge remains getting the vote totals closer to the mark. Frustratingly, that is the one thing I have the least control over.
How the projection model would still have been wrong in a few individual ridings is interesting, and reflects just how important local campaigning can be. Three of the nine ridings that would have been called incorrectly (even with the actual regional vote results) were Delta South, Vancouver-Point Grey, and Oak Bay-Gordon Head. The model would never have been able to boost Andrew Weaver's support enough to give him the win without some improper fiddling with it on my end. In Delta South, Vicki Huntington's support was stronger than would have been expected. And most significantly, Christy Clark's rejection in her own riding stands out all the more starkly. She did not lose it because the overall race was close - the overall swing should have kept the riding in her hands.
Polling methodology and what went wrong
All eyes have turned to how the pollsters are doing their work. Some of the pollsters are looking at their methods and trying to figure out what went wrong and what can be done to avoid these issues in the future. Others are crowing that this or that poll they did a week before the election turned out to be prescient, and it appears that some lessons will not be learned.
A hypothesis does seem to be forming as to what happened. I'd identify a few factors:
Turnout - Turnout was only about 52% in this election, and that can throw off a pollster's numbers to a large degree. However, turnout was also very low in the 2009 election and the polls did a decent job that time. Turnout is not a silver bullet, then, but the effect turnout had in 2009 may not have been the same as in 2013.
Motivation - According to Ipsos-Reid's exit poll (which I will return to in the future), very few British Columbians thought the Liberals would win a majority government (only about one-in-ten), while one-half thought the New Democrats would win. This might have depressed turnout even more, with some New Democrats not bothering to vote since they felt they would win, and some Liberals turning out in greater numbers to ensure their local MLA would get re-elected, even if the party itself would be booted out of government. Conceivably, though, Liberals not bothering to vote for a lost cause should have cancelled things out. And in most cases, people tend to vote in greater numbers for a perceived winner.
Election Day Shift - Yes, it is unbelievable that the polls were right all along and a dramatic change of heart occurred in the final hours. But Ipsos-Reid's poll showed that 9% of Liberal voters made up their minds in the voting booth. If all of those voters had instead voted for a different party, the Liberals would have been reduced to about 40% (a quick back-of-the-envelope check follows after this list). That would have been closer to most polls, but still much higher than even the margin of error would have considered possible. And, of course, some of those 9% might have just been wavering Liberals who did not make up their minds until the last minute, but had told pollsters they were still intending to vote Liberal. While certainly part of the equation, it cannot be all of it.
Bad polling - This is probably the main reason why the polls missed out on the call. The other three factors may have been worth a few points each, but there does seem to have been a problem in building a representative sample. Pollsters will need to figure out why that is.
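As promised above, here is the back-of-the-envelope check on the election day shift arithmetic. A minimal sketch: the 44.4% starting point is the approximate Liberal share on the preliminary count, and the extreme assumption is that every booth-decider would have voted for another party:

```python
# Back-of-the-envelope check on the "Election Day Shift" factor.
# The Liberals finished near 44.4% on the preliminary count, and
# Ipsos-Reid found 9% of Liberal voters decided in the voting booth.

liberal_share = 44.4    # approximate Liberal vote share (%)
booth_deciders = 0.09   # share of Liberal voters deciding in the booth

# Extreme case: every booth-decider votes for another party instead.
adjusted_share = liberal_share * (1 - booth_deciders)
print(f"Liberals without booth-deciders: {adjusted_share:.1f}%")  # ~40.4%
```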
One of the problems that has been identified most often (especially by those pollsters who use other methods) is that most of the polls used online panels. These have had success in the past, including the 2009 B.C. election, but perhaps online panels are less able to consistently give good results - particularly in provincial campaigns, where the panel may be smaller. But this cannot be the only reason, as Angus-Reid's online polling in Manitoba - a province with a quarter of the population of British Columbia - was stellar in its 2011 provincial election.
Nevertheless, the track record of online firms has taken a hit. Telephone polls using live-callers still seem to have the most success. Reaching people by telephone - including mobile phones - probably remains the best way to do quality polling. It is also a good way to do expensive polling.
Is the extra accuracy worth the extra cost? That might not be the case when it comes to market research. Whether it is 36% or 44% of people who say they have trust in your company's brand is not vital information, as long as it is in the ballpark. Even at their worst, online polls have been in the ballpark (the Liberals and NDP were not polled to win the election in Alberta, nor were the Greens or Conservatives ever pegged to have more than marginal support). But in an election, the quality of a poll, and not its cost, should be the deciding factor in whether or not to report it.
The chart below reveals some information that I have up to now kept to myself. Pollsters are rated in my weighting system by their track record. That track record extends back over 10 years, with more recent elections being more heavily weighted. The difference between one firm and the next is usually not very large, and some of the difference is due to the elections in which these firms have decided to take part. Those that stayed out of Alberta and B.C. are inevitably going to have better ratings than those who didn't. I have considered overhauling the rating system to take into account these sorts of considerations, but I have not yet done so. Because I haven't, I am reluctant to actually rank the polling firms publicly by name.
But I am willing to rank them by methodology. These are the 10 firms in Canada that I consider to be major firms - those that release national, regional, or provincial polls on a regular basis - along with the method each used in its most recent election campaign. The chart shows each firm's average error per party in any election in which it was active, going back ten years.
As you can immediately see, the polling firms that conduct their surveys using live-callers occupy the top three ranks. The online and IVR polling firms have had less success. The difference is not huge, however - on average, the third-best firm's error per party is less than 0.5 percentage points smaller than the seventh-best firm's.
However, it is clear that polls conducted over the telephone with live-callers have had a better track record. That does not mean that they will always have a better result: in the 2011 federal election, Angus-Reid's online panel had the lowest per-party error. But it does suggest that the online panels still have some work to do.
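While the chart itself stays anonymized, the metric behind it can be sketched out. In the hypothetical illustration below, the per-party error calculation follows the description above; the recency weighting (an exponential decay per year of age) and all of the figures are assumptions of mine, not the actual system:

```python
# A hypothetical sketch of the rating metric: each election contributes
# a firm's average absolute error per party (final poll vs. result),
# and more recent elections carry more weight. The decay factor and
# the sample figures below are illustrative assumptions only.

from datetime import date

def per_party_error(poll: dict, result: dict) -> float:
    """Average absolute error per party, in percentage points."""
    return sum(abs(poll[p] - result[p]) for p in result) / len(result)

def track_record(elections: list, today: date, decay: float = 0.9) -> float:
    """Recency-weighted average of per-election errors.

    Each election's weight shrinks by `decay` per year of age
    (an illustrative choice, not the site's actual weights).
    """
    weighted_sum, weight_total = 0.0, 0.0
    for e in elections:
        age_years = (today - e["date"]).days / 365.25
        w = decay ** age_years
        weighted_sum += w * per_party_error(e["poll"], e["result"])
        weight_total += w
    return weighted_sum / weight_total

# Illustrative example: one firm's final polls in two elections.
elections = [
    {"date": date(2013, 5, 14),
     "poll":   {"LIB": 37, "NDP": 45, "GRN": 9, "CON": 7},
     "result": {"LIB": 44, "NDP": 40, "GRN": 8, "CON": 5}},
    {"date": date(2009, 5, 12),
     "poll":   {"LIB": 45, "NDP": 39, "GRN": 11, "CON": 2},
     "result": {"LIB": 46, "NDP": 42, "GRN": 8, "CON": 2}},
]
print(f"{track_record(elections, date(2013, 5, 16)):.2f} points per party")
```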
Where to go from here
There were moments yesterday when I contemplated the end of ThreeHundredEight. Why run a site about polling when polling in Canada is so horrid?
But the polling is not always horrid, and even when it seems to be on the bad side there are some indications of something else at play. Alberta is an obvious example, but maybe British Columbia's errors have some mitigating factors as well.
Even if that is not the case - and I am not convinced that it is - polls are not going away, and I still believe that they are a useful tool. The electorate deserves to know what the collective wisdom of the people is on various issues, including the question of who should govern them. But the electorate deserves good, reliable information: bad information is much worse than none at all.
Though I could never claim to be impartial on the question of whether polls should be paid any attention at all (if they are ignored, I would need to find a new line of work), I can continue to be an impartial observer, analyst, and (when need be) critic of the industry. In its own tiny little way, ThreeHundredEight can be part of the solution.
That means more of a focus on methodological and transparency issues, sweeping trends, uncertainties in the polling data, and wider questions about what the numbers mean, if anything at all. It means less focus on the horserace, more caution in reporting numbers, a forecasting model that emphasizes what we don't know, and more reserve in giving attention to questionable polls. And when a poll is questionable, drawing attention to the reasons why.
It might mean a drop in traffic and it will certainly require more work and effort on my end. And like all junkies, I might relapse. But I think it will be a worthwhile endeavour. I welcome your thoughts in the comments section.