At the riding level, the projection model was certainly not as close as I would have liked. Overall, the model called 269 of 338 ridings correctly and identified the potential winner (defined as any party considered capable of winning the riding between the high and low projections) in 291 ridings. That adds up to an overall accuracy of 79.6% on the calls, and 86.1% for identifying the potential winners.
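For the record, those percentages are just the call counts over Canada's 338 ridings. A trivial sketch of the arithmetic in Python:

```python
# Riding-level accuracy figures, over Canada's 338 ridings.
TOTAL_RIDINGS = 338
correct_calls = 269       # ridings where the projected winner actually won
potential_winners = 291   # ridings where the winner fell within the high/low range

print(f"calls: {correct_calls / TOTAL_RIDINGS:.1%}")                  # 79.6%
print(f"potential winners: {potential_winners / TOTAL_RIDINGS:.1%}")  # 86.1%
```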
Where did the model do better? It identified the potential winners in 94% of ridings in Alberta, 91% of ridings in Ontario, and 89% in the Prairies. It performed worse in Atlantic Canada (84% of winners identified), British Columbia (83%), and Quebec (76%).
Not surprisingly, the biggest misses involved the seats that the Liberals ended up winning. The largest group of misses was ridings in which the New Democrats were projected to win, only for the Liberals to pick them up. There were 24 of these ridings, located primarily in Quebec, urban and northern Ontario, and Atlantic Canada.
The next largest group was the 19 ridings in which the Conservatives were favoured but the Liberals actually won. These were largely in the Greater Toronto Area, in Vancouver and the Lower Mainland, and in New Brunswick.
There were seven ridings projected to go NDP that actually went to the Bloc Québécois (mostly north of Montreal), and seven ridings projected to go Conservative that actually went NDP (in the Prairies and the B.C. Interior).
On average, the misses were called with just 65% confidence and the average margin of actual victory in these ridings was about 6.7 points. So, they were modestly close races.
One aspect of the seat projection model worked very well. The assigned probabilities of victory turned out about as expected, though at the lower levels of confidence the model was somewhat more confident than it should have been. This is likely due to the many three-way races, whereas the model is designed for two-way contests.
As you can see, the calls were generally as correct as they were expected to be.
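A minimal sketch of how such a calibration check can be run, assuming each call is stored as a (stated confidence, was the call correct) pair; the bucket boundaries here are illustrative, not necessarily the model's actual ones:

```python
from collections import defaultdict

def calibration_table(calls, buckets=((0.50, 0.65), (0.65, 0.80),
                                      (0.80, 0.90), (0.90, 1.01))):
    """Bucket calls by stated confidence, then compare the average stated
    confidence in each bucket to the share of calls that were correct."""
    grouped = defaultdict(list)
    for confidence, was_correct in calls:
        for low, high in buckets:
            if low <= confidence < high:
                grouped[(low, high)].append((confidence, was_correct))
                break
    for (low, high), group in sorted(grouped.items()):
        stated = sum(c for c, _ in group) / len(group)
        observed = sum(ok for _, ok in group) / len(group)
        print(f"{low:.0%}-{high - 0.01:.0%}: "
              f"stated {stated:.0%}, observed {observed:.0%}, n={len(group)}")

# Made-up example calls: (stated confidence, call was correct?)
calibration_table([(0.55, False), (0.58, True), (0.62, False),
                   (0.72, True), (0.76, True), (0.95, True)])
```

A well-calibrated model shows stated and observed percentages roughly matching in every bucket; a bucket where the observed hit rate falls well below the stated confidence is over-confident, which is what happened at the low end here.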
At the 50% to 64% level, where the calls performed significantly below the expected level of confidence, Liberal victories were missed in 58% of those calls, or in 22 ridings. That alone gives an indication of how the Liberals were winning close ridings they were not expected to win. Add those 22 ridings to the final projection of 146 for the Liberals, and you have them at 168 seats, knocking on the door of the 170 needed for a majority government, rather than coming up well short.
With the actual results plugged into the model, the accuracy of the riding-level projection increases to 81.4% (or 275 of 338 ridings), and to 87.3% (295 ridings) for identifying the potential winners.
As discussed in my analyses of the projection model's performance, we're looking primarily at the Liberals picking up new voters in unexpected places, with strategic voting apparently helping the New Democrats outperform expectations in Western Canada. The Liberals' vote efficiency in Ontario and elsewhere was also above expectations, though well in line with what the Conservatives were capable of with a similar amount of support in 2011.
Riding polls
Now that we've dissected my performance at the riding level, how about the pollsters?
In the charts below, I've included only the polls done within the last two weeks of the election campaign, and compared the riding-level polling only for the parties that finished in the top three slots on election night. The actual results are in the gray areas, and the date refers to the last day the poll was in the field. Let's start in B.C.
The results in Vancouver Granville were particularly interesting, as Mainstreet's final poll came very close, whereas Environics' poll for LeadNow did not. There had been a lot of controversy in the riding due to LeadNow's endorsement of the NDP, despite the edge given to the Liberals in that final poll.
A few riding polls, such as those in Courtenay–Alberni, Nanaimo–Ladysmith, and South Okanagan–West Kootenay, came quite close to the mark, considering the margins of error. Not coincidentally, these were NDP-Conservative races in which strategic voting might have kept the Liberal surge at bay.
Now to Alberta.
Ontario was slightly different.
In Ontario, the Liberals were underestimated in most riding polls, but not all of them. In the ridings of Brampton North, Bruce–Grey–Owen Sound, Hamilton West–Ancaster–Dundas, Kanata–Carleton, Kitchener Centre, Perth–Wellington, and Peterborough–Kawartha, the Liberals' results were within the margin of error of the final riding polls.
This should not come as a surprise. Unlike in Alberta, where the NDP did worse than expected in the popular vote, the NDP's support in Ontario had largely collapsed well before election day. There was no surge that sank the NDP's chances at the last moment in Ontario (as there might have been in Quebec). The Liberals were already riding high in the province in the week before the vote.
And in the ridings the polls did miss, it wasn't always the same party that took the hit at the expense of the Liberals. In Timmins–James Bay and Nickel Belt it was the NDP, but in Flamborough–Glanbrook, Nepean, Kenora, and Sault Ste. Marie it was the Conservatives.
There were fewer riding polls done in Quebec in the final days, but they did moderately well.
But here again we're looking at the Liberals being underestimated, and significantly so in Chicoutimi–Le Fjord, one of the most surprising Liberal wins of the night. That vote came primarily from the Conservatives, but also from the NDP. In Jonquière and Lac-Saint-Jean, the polls did quite well.
There were only a few polls done in Atlantic Canada as well, and they were generally poor. The Liberals outperformed these polls by nine to 15 points, with the NDP taking the hit where it was most competitive, and the Tories where they were.
More lessons to be drawn from the discrepancies, then.
Who did best? Among the riding-level pollsters in the field in the last two weeks, measured by average error per party (top three parties only), Segma Recherche led with an average error of 3.4 points per party. Next was Environics at 4.6 points per party, followed by Insights West and Mainstreet at 5.5 points per party each. MQO Research had an average error of 7.0 points per party, while Oraclepoll had an average error of 7.8 points per party. Of note is the performance of ThinkHQ in Edmonton Centre, off by just 1.7 points per party among the top three.
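For reference, a minimal sketch of that per-party error metric, using hypothetical numbers rather than any actual poll:

```python
def avg_error_per_party(polled, actual):
    """Mean absolute error, in points, across the parties' vote shares."""
    assert polled.keys() == actual.keys()
    return sum(abs(polled[p] - actual[p]) for p in polled) / len(polled)

# Hypothetical poll vs. result for the top three parties in one riding.
poll = {"LPC": 38.0, "CPC": 33.0, "NDP": 24.0}
result = {"LPC": 43.5, "CPC": 30.0, "NDP": 21.5}
print(f"{avg_error_per_party(poll, result):.1f} points per party")  # 3.7
```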