Statisticians React to the News

Uncertainty and prediction

03 November 2020

A Danish proverb asserts that “prediction is hard – especially about the future”. Prediction is especially hard when it comes to describing uncertainty. Today, the US votes (or finishes voting) on the next President, and there is a lot of interest in forecasting, formal or informal.

On the simple question of the most likely outcome, the majority of predictions seem to be that Joe Biden and Kamala Harris will defeat Donald Trump and Mike Pence. Statisticians, and at least some of their readers, want more than that – would it be surprising if Biden and Harris lost? How surprising? What would it take to say that a prediction was wrong?

There are at least three sets of formal statistical predictions for the election, from groups headlined by Nate Silver at fivethirtyeight.com, Andrew Gelman at The Economist, and Sam Wang at the Princeton Election Consortium. The 538 model is proprietary; the other two have published code and statistical methods. There are other poll aggregators that either attempt to present transparent summaries of the available information (eg, the Washington Post) or aim to estimate the election result without explicit uncertainty (eg, RealClearPolitics). Finally, there are betting markets such as PredictIt.

At the moment, the Economist model summarises the probability of Biden and Harris winning the Electoral College as “very likely”, with an explicit probability of “around 19 in 20, or 96%”. At 538, Biden and Harris are “favored to win” and the probability is summarised as “89 in 100”. The Princeton Election Consortium doesn’t give a probability – they are interested in estimating the margin – but Trump and Pence getting enough Electoral College votes to win is outside the 95% prediction interval, at about 2.5 standard deviations below the mean prediction. Andrew Gelman, in a series of blog posts, argues that the discrepancy between the Economist and 538 forecasts comes from how the correlations between states are modelled, and that the 538 model sets these correlations too low.
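For comparison with the other two sites, the Princeton figure can be converted into an explicit probability under a normal approximation – an assumption of mine here, not necessarily what their model does. A minimal sketch, where the 2.5 is the only number taken from their forecast:

```python
from scipy.stats import norm

# Trump reaching 270 electoral votes is about 2.5 standard deviations
# below the mean prediction; under a normal approximation that tail is:
p_trump = norm.cdf(-2.5)
print(f"P(Trump win) ~ {p_trump:.4f}")       # ~0.0062, about 1 in 160
print(f"P(Biden win) ~ {1 - p_trump:.4f}")   # ~0.9938, roughly 99 in 100
```

Under this approximation, the implicit Princeton probability would sit above both the Economist’s 96% and 538’s 89%.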

Both the Economist and 538 are explicitly presenting the probability in terms of simulated elections, with the probabilities being the fractions of simulated elections with each possible outcome. This makes sense in terms of research on health risk communication, where phrases like “80 out of 100 patients like you” appear to be better understood than “80%”. Both sites also have graphics showing each simulated result.
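“Fraction of simulated elections” is just Monte Carlo, and a toy simulation also shows why the between-state correlations Gelman points to matter. The sketch below uses made-up battleground margins, error sizes, and electoral-vote thresholds – nothing here comes from either site’s actual model – and splits each state’s forecast error into a national swing shared by all states and an independent state-specific part:

```python
import numpy as np

rng = np.random.default_rng(2020)

# Hypothetical battlegrounds: electoral votes and Biden's expected
# margin in percentage points (illustrative numbers only).
states = {"PA": (20, 5.0), "FL": (29, 2.0), "MI": (16, 8.0),
          "WI": (10, 8.0), "AZ": (11, 3.0)}
NEEDED = 40        # EVs Biden needs from these states (also hypothetical)
TOTAL_SD = 4.0     # total forecast error per state, in points
N_SIMS = 100_000

def win_fraction(correlation):
    """Fraction of simulated elections Biden wins, with between-state
    correlation induced by a shared national polling error."""
    national_sd = TOTAL_SD * np.sqrt(correlation)
    state_sd = TOTAL_SD * np.sqrt(1 - correlation)
    swing = rng.normal(0, national_sd, N_SIMS)      # shared by all states
    evs = np.zeros(N_SIMS)
    for ev, margin in states.values():
        error = rng.normal(0, state_sd, N_SIMS)     # state-specific error
        evs += ev * (margin + swing + error > 0)
    return (evs >= NEEDED).mean()

for rho in (0.2, 0.8):
    print(f"between-state correlation {rho}: Biden wins "
          f"{win_fraction(rho):.0%} of simulated elections")
```

Holding the state-level forecasts fixed, raising the shared component makes it easier for every state to miss in the same direction at once, so the overall simulated win fraction changes even though no individual state forecast has.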

One of the risks of communicating probabilities of winning (especially as % or out of 100) is that they will be interpreted as estimated percentages of the vote. Biden having an 89% chance of winning is a much closer election than Biden getting 89% of the vote would be, but the latter is a more concrete percentage than the former. To mitigate this risk, the page at 538 is headed by a set of about 20 simulated Electoral College maps, two with a Trump win and the rest with a Biden win. At the Economist, the page for each state gives both the probability of winning and the expected vote share.
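The gap between the two kinds of percentage is easy to make concrete. With made-up numbers of roughly the right size (mine, not the forecasters’): a candidate expected to get 53% of the two-party vote, with a forecast standard deviation of 2.5 points, already has nearly a 9-in-10 chance of winning the popular vote:

```python
from scipy.stats import norm

# Hypothetical: expected two-party vote share 53%, forecast SD 2.5 points.
mean_share, sd = 0.53, 0.025
p_win = 1 - norm.cdf(0.50, loc=mean_share, scale=sd)
print(f"expected vote share {mean_share:.0%}, win probability {p_win:.0%}")
# expected vote share 53%, win probability 88%
```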

In addition to the numerical probabilities, both sites have chosen a set of probability vocabulary (favored, likely, unclear, all but certain). This is obviously helpful for people who don’t like numbers, but there is a lot of ambiguity in uncertainty terms. An article in Harvard Business Review in 2018 reported on what probabilities people assigned to words, finding that “more often than not” ranged from a bit less than 50% up to about 75%, and that “with high probability” could be anywhere above 50%. There’s a potential for circularity here, though: the survey measured what the words mean by asking for numbers, so if people are better at describing uncertainty in words than in numbers, the probabilities they assigned could be poorly calibrated, and there could be a lot more consensus on the ordinary words than initially appears.

It’s interesting to compare the statistical predictions with the betting markets. Under a moderately optimistic view of gambling behaviour, the markets can find an optimal price, and therefore a rational predicted probability for a simple event such as “Biden is elected”. They can do this both by providing an incentive for people with well-calibrated opinions to win money from those with poorly-calibrated opinions and, at a more sophisticated level, by allowing people to learn from the betting behaviour of others. The betting markets could well be pretty good at one of the major social purposes of election forecasting: preparing people for the likely news. It’s less clear that they would be expected to beat formal forecasting models in this specific context. There isn’t much non-public information about the election results, so good predictions come primarily from modelling the public information well. And while gamblers have a financial incentive to get things right, Nate Silver and 538 are a clear example of the potential financial incentive to get forecasting right, too.

With that caveat, PredictIt is currently pricing a Biden win at $0.66 and a Trump win at $0.40 (for contracts paying $1), implying only about a two-thirds probability for Biden, substantially lower than the statistical forecasts. It could be that PredictIt is wrong – that too many people are betting on what they want to happen rather than on their expectations, or even just that they are overstating their uncertainty. Or it could be that the statisticians are wrong – I don’t think so, but I would say that, wouldn’t I.
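The two-thirds figure comes from the contract prices. The Biden and Trump prices sum to $1.06 rather than $1, so they can’t quite be read directly as probabilities; one common convention (a choice, not the only one) is to normalise away that margin:

```python
def implied_probabilities(prices):
    """Turn the prices of $1-payoff contracts into probabilities by
    normalising away the market's margin (prices sum to more than 1)."""
    total = sum(prices.values())
    return {name: price / total for name, price in prices.items()}

# Prices quoted in the post.
for name, p in implied_probabilities({"Biden": 0.66, "Trump": 0.40}).items():
    print(f"{name}: {p:.1%}")
# Biden: 62.3%, Trump: 37.7% -- about two-thirds, well below the models
```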

More interestingly, and more pessimistically, the discrepancy could be an example of a familiar problem in applied statistics: the need to agree on precisely what is being predicted. The statisticians are trying to predict what would happen if votes are cast and counted under basically the same rules as in previous elections. That is, they are predicting voter intent. PredictIt is betting on who ends up as President. One concern increasingly being voiced in the US is that these will not be the same: voters will be discouraged or prevented from voting, or courts will rule that large numbers of votes are invalid or should not be counted.

Read more from Thomas Lumley here.

----

* Photo by Airam Vargas / Pexels

Thomas Lumley
New Zealand