In the end, the results were a little more interesting than people expected. The Liberals still won a big majority in Newfoundland and Labrador (roughly as big as the one the PCs won in 2011), but the opposition was slightly more robust than some had feared it would be.
The Liberals won 57.2% of the vote, and captured 31 of the 40 seats that were on offer. The Tories took 30.1% of the vote and seven seats, while the NDP took 12.1% of the vote and two seats. I went over the regional breakdown of the results in my CBC piece here.
But how did the projection do? Broadly speaking, it did a decent job.
The Liberals were projected to within one percentage point and one seat, and the Tories to within two percentage points and one seat. The NDP was over-estimated by a little more than three percentage points and two seats.
But overall, with the emphasis having been placed on the minimum/maximum projections, the forecast was good.
And the new three-election model proved its worth. Had the old model been used, the Liberals would have been projected to win 27 seats, with nine going to the Tories.
The district-level projection was not as good as I would have liked, making the right call in 33 of 40 ridings for an accuracy rate of 82.5%. The potential winner was identified by the likely ranges in one more district, bumping that accuracy up to 85%.
In the maximum ranges, the potential winner was identified in all but one district — the result in Fortune Bay–Cape La Hune was the one that bucked all the trends.
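The accuracy rates quoted above are straightforward to reproduce. A minimal sketch, using only the riding counts from the text:

```python
# Riding-level accuracy figures from the projection, reproduced as a check.
called_correctly = 33       # ridings where the projected winner won
within_likely_ranges = 34   # one more district covered by the likely ranges
total_ridings = 40

accuracy = called_correctly / total_ridings
range_accuracy = within_likely_ranges / total_ridings

print(f"{accuracy:.1%}")        # 82.5%
print(f"{range_accuracy:.1%}")  # 85.0%
```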
Of the seven errors, one was projected with 50% confidence and four with 67% confidence or less. The errors were not consistently on one side or the other, with the PCs winning three seats the projection model awarded to the Liberals and the Liberals winning four seats awarded to either the NDP or the PCs.
With the actual results plugged into the model, the district-level accuracy did not change: 33 of 40 would have been called correctly, though the potential winner would have been identified in 35 of 40 ridings. But the errors would have been more consistent in that the model would have systematically over-estimated the PCs, getting more NDP seats right but more PC seats wrong. The projection with the actual results would have been 27 Liberals, 12 Tories, and one NDP. The old model would have done even worse, with 25 Liberals, 13 Tories, and two New Democrats.
How did the polls do? It might be better to ask how the poll did. Only Forum was in the field in the last five days of the campaign. In the table below I've included all of the pollsters that released data during the campaign, but I think it is worth considering the amount of time between the polls conducted by Abacus and CRA and voting day. This is particularly the case for CRA, which was in the field for more than two weeks.
Forum's election-eve poll was the closest, with an average error of 2.33 points per party.
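For readers unfamiliar with the metric, "average error per party" is just the mean absolute difference between a poll's vote shares and the final result. A minimal sketch using the actual results from above; the sample poll numbers are hypothetical placeholders, not Forum's real figures:

```python
# Actual results, from the text.
actual = {"Liberal": 57.2, "PC": 30.1, "NDP": 12.1}

# Hypothetical poll numbers, for illustration only.
sample_poll = {"Liberal": 55.0, "PC": 31.0, "NDP": 14.0}

def average_error(poll, result):
    """Mean absolute difference in vote share, per party."""
    return sum(abs(poll[p] - result[p]) for p in result) / len(result)

print(round(average_error(sample_poll, actual), 2))  # 1.67
```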
The performance of the other pollsters appears to have been directly related to the gap between their final poll and election night. I'm not sure there is much to read into this — though Forum did have a poll done on Nov. 24 that was much closer to its final estimate than it was to the Abacus and CRA polls released during the same week. But there just wasn't enough data to make much of a comparison between the different pollsters.
Nevertheless, Forum's last poll of the campaign was close to the mark, following on their successful final poll of the federal campaign.
Next up, Manitoba and Saskatchewan in April.