At an event put on by the Marketing Research and Intelligence Association (MRIA) last week, four pollsters discussed what they saw happen during the last federal election campaign.
According to this report by iPolitics, Darrell Bricker of Ipsos Reid was quite critical of the work done by polling aggregators, such as me and Barry Kay of the Laurier Institute for the Study of Public Opinion and Policy (who partnered with Global News, Ipsos Reid's media partner). He said:
“I would argue, quite frankly, that these models are as likely to maximize error by putting polls together as they are to minimize it…There’s work to do here and there’s value associated with doing this. But it’s not happening well right now…Sorry, CBC.”
Bricker may very well have a point. Aggregation can certainly be done better, and I, like others in the field, am always working to improve it.
But for the record, the error in Ipsos Reid's final poll totaled 1.2 points per party, compared to 1.3 points per party for ThreeHundredEight.com and the CBC Poll Tracker. Hardly a case of maximizing error. And the aggregation outperformed Ipsos Reid at the regional level in British Columbia, the Prairies, Ontario, Quebec, and Atlantic Canada. Only in Alberta did Ipsos Reid do better than ThreeHundredEight.com.
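For readers curious how a figure like "1.3 points per party" is computed, here is a minimal sketch, assuming the metric is simply the mean absolute difference between projected and actual vote shares across all parties. The vote shares below are illustrative placeholders, not the actual final 2015 numbers.

```python
# A minimal sketch of the "error per party" metric, assuming it is the mean
# absolute difference between projected and actual vote shares, in points.
# The figures below are illustrative placeholders, not the final 2015 numbers.

def average_error_per_party(projection: dict, result: dict) -> float:
    """Mean absolute projection error, in points, across all parties."""
    return sum(abs(projection[p] - result[p]) for p in result) / len(result)

projection = {"LPC": 37.0, "CPC": 31.0, "NDP": 22.0, "GPC": 4.5, "BQ": 4.5}
result = {"LPC": 39.5, "CPC": 32.0, "NDP": 19.5, "GPC": 3.5, "BQ": 4.5}

print(f"{average_error_per_party(projection, result):.1f} points per party")
```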
Ipsos Reid did outperform ThreeHundredEight.com in the 2011 federal election campaign. But since then, after changes were made to the aggregation model, ThreeHundredEight.com has outperformed Ipsos Reid in the provincial campaigns in Ontario in 2011 and 2014, in British Columbia in 2013, in Quebec in 2014, and in Alberta in 2015, as well as in the 2014 municipal election in Toronto.
More broadly, ThreeHundredEight.com's aggregation has outperformed the polls in most elections. This site's average per-party error has been smaller than the average error of all the pollsters in the field for 15 consecutive federal, provincial, and municipal campaigns, and in 16 of 18 campaigns overall.
Aggregation is a way to minimize error in the vast majority of cases.
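The statistical case for that claim is easy to demonstrate. Below is a quick simulation, a sketch under the simplifying assumption that every poll is an unbiased random sample of 1,000 voters; the true vote share and poll counts are hypothetical.

```python
# A quick simulation of the statistical case for aggregation, assuming each
# poll is an unbiased random sample of 1,000 voters. Averaging k independent
# polls shrinks random sampling error by roughly sqrt(k).

import random

def simulate_poll(true_share: float, n: int = 1000) -> float:
    """One poll's estimate of a party's vote share, in points."""
    return sum(random.random() < true_share for _ in range(n)) / n * 100

TRUE_SHARE = 0.35  # hypothetical true vote share of 35%
TRIALS = 500

single, aggregate = 0.0, 0.0
for _ in range(TRIALS):
    polls = [simulate_poll(TRUE_SHARE) for _ in range(10)]
    single += abs(polls[0] - TRUE_SHARE * 100)
    aggregate += abs(sum(polls) / len(polls) - TRUE_SHARE * 100)

print(f"average error of a single poll:     {single / TRIALS:.2f} points")
print(f"average error of a 10-poll average: {aggregate / TRIALS:.2f} points")
```

The caveat is that averaging only cancels random sampling error. It cannot remove a bias shared by every pollster, and that is where Bricker's criticism has some force: when every poll misses in the same direction, an aggregate will miss with them.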
In terms of seat projections, there is certainly more work to be done, and I have already taken some steps to address some of the issues with the model. But I should also take this opportunity to point out that the model itself, divorced from how the polls do, has identified the potential winners in 90% of ridings, and its error within the narrower likely ranges has been four seats or fewer, all parties combined, in 12 of the 14 elections where ranges were given.
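To illustrate how a claim like that could be scored, here is a small sketch. It assumes a "likely range" is a low-high pair of seats per party and that the error is the total number of seats by which the actual results fall outside those ranges; the parties and numbers are hypothetical.

```python
# A hedged sketch of scoring seat projections against "likely ranges",
# assuming the error is the total seats, all parties combined, by which
# the results fall outside the ranges. All numbers are hypothetical.

def total_range_error(ranges: dict, results: dict) -> int:
    """Sum over parties of how far each result falls outside its range."""
    error = 0
    for party, (low, high) in ranges.items():
        seats = results[party]
        if seats < low:
            error += low - seats
        elif seats > high:
            error += seats - high
    return error

ranges = {"A": (120, 140), "B": (95, 115), "C": (60, 80)}
results = {"A": 138, "B": 112, "C": 57}  # party C misses its range by 3

print(total_range_error(ranges, results), "seats outside the likely ranges")
```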
Bricker is right to point out the amount of polling data that Nate Silver has to work with. Having that much data would certainly make things a lot easier. But our first-past-the-post system and multi-party democracy are also much more complicated. This was shown in 2010, when Silver had just as much trouble as everyone else projecting the outcome of the election in the U.K. It happened again this year, when FiveThirtyEight affiliated itself with a British outfit.
American elections, with their two parties and, at the presidential and Senate levels, very large jurisdictions, are much more predictable. In this past election, the Liberals jumped 20 points and the Conservatives dropped eight from the previous vote; the last time there was a change of government in the United States, the swing between the Republicans and the Democrats was just five points. Almost half of the seats on offer changed hands in the 2015 Canadian federal election, while in the 2012 U.S. presidential election just two states out of 50 changed hands.
There will always be elections that, for one reason or another, are very difficult to predict without making unreasonable assumptions or leaps, and the 2015 federal election campaign was one of them. Rather than seeing that as a reason to abandon everything, we should treat this unpredictability as something fascinating that tells us a lot about what happened. That's part of the process: experiments that go wrong can be just as informative as those that go right.