Saturday, 11 February 2017

The Wisdom of Crowds: A Census of EPL Forecasts

Introduction


We're nearly two-thirds of the way through the 2016/17 EPL season, which seems a good time to try to predict what might happen. Chelsea’s nine-point cushion and relentless form make them clear favourites for the title; not since Newcastle in 1996 has a team blown such a lead. Just five points separate second from sixth as the remaining superpowers battle for Champions League places: who will miss out? Perhaps the mantra ‘most competitive EPL season ever’ is best reserved for the relegation fight, though. Six teams, two points and an ever-changing landscape. Amongst them: last season’s heroes, Leicester. Too good to go down?

Most TV pundits are definitive in their predictions, indeed they are typically paid to be so. Others prefer to let the numbers do the talking. Football analysts around the world build mathematical models to measure team strength and calculate the probability of match outcomes. Rather than saying “team A are likely to beat team B”, they'll say “I estimate that there is an 85% probability that team A will win”.

There is no agreed method for designing a forecast model for football. Consequently, predictions vary from one model to another. However, there is also strength in diversity. Rather than comparing and contrasting predictions, we can also collect and combine them to form a consensus opinion.

Last January, Constantinos Chappas did just that. Following gameweek 20, he collected 15 sets of predictions, averaging them to produce a ‘consensus forecast’ for the outcome of the 2015/16 EPL season. His article was published on StatsBomb here; we’ll return to the success of last year’s predictions at the end. First, I’m going to repeat the exercise for the 2016/17 EPL season. What do the combined predictions say this time around?

Participants


In total there were 15 participants this year, many of whom offered up their predictions in response to my Twitter appeal. A big thank you goes out to (in no particular order):

@8Yards8Feet, @petermckeever, @goalprojection, @11tegen11, @cchappas, @SteMc74, @fussbALEXperte, @SquawkaGaming, @EuroClubIndex, @opisthokonta and Sky Sports (via @harrydcarr)

To these, I added forecasts from the FT and FiveThirtyEight; I haven’t been in contact with them personally, but their forecasts are publicly available. I also added a bookmaker’s average, calculated by collecting the odds published on oddschecker.com and averaging the implied probabilities. That’s 14 - the final participant was myself (@eightyfivepoint).
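For anyone wanting to reproduce the bookmaker average: decimal odds are converted to implied probabilities and the overround (the bookmaker's margin) is normalised away before averaging across firms. A minimal sketch of the conversion step, with made-up odds for illustration:

```python
def implied_probabilities(decimal_odds):
    """Convert one bookmaker's decimal odds into probabilities that sum to 1,
    stripping out the overround (the bookmaker's margin)."""
    raw = [1.0 / o for o in decimal_odds]
    margin = sum(raw)            # > 1 for a real bookmaker
    return [p / margin for p in raw]

# Hypothetical decimal title odds from one bookmaker: Chelsea, Spurs, Man City
probs = implied_probabilities([1.12, 15.0, 21.0])
```

Repeating this for each bookmaker listed on oddschecker.com and averaging the resulting probabilities gives the bookmakers' consensus.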

The Predictions


Before we get into the results, a little bit about how they’ll be presented. I’ve followed last year’s article and presented forecasts as box-plots. These are a simple graphical representation of the distribution of forecasts for a particular outcome. The height of the shaded area represents the interquartile range: the 25th to 75th percentiles. By definition, half the forecasts lie within this range -- it provides a decent estimate of the variability of the predictions. The black horizontal line in the middle is the median (50th percentile); I’ll sometimes refer to this as the consensus forecast. The ‘whiskers’ extending out vertically from each box show the 5th to 95th percentiles. All but the highest and lowest forecasts for a given outcome will lie within this range.
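The summary statistics behind each box-plot can be computed directly from the collected forecasts. A sketch using only the standard library (the forecast values below are hypothetical, not the actual submissions):

```python
import statistics

def percentile(data, q):
    """Percentile (q in [0, 100]) of a sample, with linear interpolation."""
    xs = sorted(data)
    k = (len(xs) - 1) * q / 100.0
    f, c = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[f] + (k - f) * (xs[c] - xs[f])

# Hypothetical forecasts from 12 models for one outcome,
# e.g. P(Chelsea win the title)
forecasts = [0.80, 0.83, 0.85, 0.86, 0.87, 0.88,
             0.88, 0.89, 0.90, 0.91, 0.92, 0.93]

consensus = statistics.median(forecasts)                      # black line
iqr = (percentile(forecasts, 25), percentile(forecasts, 75))  # shaded box
whiskers = (percentile(forecasts, 5), percentile(forecasts, 95))
```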

On each plot I've also plotted the individual predictions as coloured points. They are identified by the legend on the right.

So, without further ado, here are the forecasts for the 2016/17 EPL season.

The Champions



Not surprisingly, Chelsea are the clear favourites: the median forecast gives them an 88% chance of winning the league, as do the bookmakers. There’s not a huge amount of variability either, with the forecasts ranging from 80% to 93%. If Chelsea do suffer some kind of meltdown then it’s probably Spurs or City that would catch them, with median predictions of 5% and 4%, respectively. Liverpool and Arsenal are rank outsiders and any of the other teams finishing top would be an enormous surprise.

The Top Four



Now this is where things get a bit more interesting. Chelsea seem almost guaranteed to finish in the Champions League places, which leaves five teams fighting it out for the remaining three. Tottenham and Man City are heavily favoured: both have a median probability of at least 80% and the whiskers on their box-plots do not overlap with those of the next team, Liverpool.

The real fight is between Klopp and Wenger. Statistically they are almost neck-and-neck, with their box-plots indicating that the individual predictions are broadly distributed. Look closely and you see an interesting negative correlation between them: those that are above average for Liverpool tend to be below average for Arsenal (and vice-versa). You can see this more clearly in the scatter plot below. The reason must be methodological; to understand it we’d have to delve into how the individual models assess the teams' relative strength. Note that the bookies are sitting on the fence - they've assigned both Arsenal and Liverpool a 53% chance of finishing in the top four.


Man United are outsiders, but the consensus forecast still gives them about a 1 in 3 chance of sneaking in. Interestingly, the bookmakers' odds – which imply a 44% chance of United finishing in the Champions League positions – are way above the other predictions. Perhaps their odds are being moved by heavy betting?

The Relegation Candidates



Two weeks ago it looked like Sunderland and Hull were very likely to go down. Since then, the relegation battle has been blown wide open. The bottom six teams seem set for a nervous run-in and neither Bournemouth nor Burnley will feel safe.

The principal candidates for the drop are Sunderland, Hull and Palace, all of whom have a median prediction greater than a 50% chance of relegation. There is clearly a lot of variability in the predictions though, with the Eagles in particular ranging from 38% to 74%. You can certainly envisage any one of them managing to escape.

The next three clubs - Middlesbrough, Swansea and Leicester - are all currently level on 21 points, yet the median predictions imply that Middlesbrough (42%) are nearly twice as likely to go down as Leicester (22%). I suspect that this is because some models are still being influenced by last season’s results (for instance, Leicester's forecasts appear to bunch around either 15% or 30%). The amount of weight, or importance, placed on recent results by each model is likely to be a key driver of variation between the predictions.

What about <insert team’s name here>?


The grid below shows the average probability of every EPL team finishing in each league position. Note that some of the models (such as FiveThirtyEight, Sky Sports and the bookmakers) are excluded from the plot as I wasn’t able to obtain a full probability grid for them. Blank cells indicate that the probability of the team finishing in that position is well below 1%.

An obvious feature is that Everton seem likely to finish in 7th place. The distribution gets very broad for the mid-table teams: Southampton could conceivably finish anywhere between 7th and 18th.


Last Year’s Predictions


So how did last year’s predictions pan out? Leicester won the league, but the median forecast predicted only a 4% chance of this happening (compared, for example, to a 40% chance that they would finish outside the Champions League places). However, the top four teams were correctly predicted, with a high probability of finishing there having been assigned to each of Leicester, Arsenal, City and Spurs.

Down at the bottom, both Newcastle and Villa were strongly expected to go down and they did. Sunderland were predicted to have only a 15% chance of staying up, yet the Black Cats escaped again. Instead, Norwich went down in their place, having been given a 91% chance of staying up. Other surprises were Southampton (7 places higher than expected), Swansea (5 higher) and Crystal Palace (down 7).

How good were last year’s forecasts, overall? This is a tricky question and requires a technical answer. The specific question we should ask is: how likely was the final outcome (the league table) given the predictions that were made? If it was improbable, you could argue that it happened to be just that – an outlier. However, it could also be evidence that the predictions, and the models underlying them, were not particularly consistent with the final table.

We can attempt to answer this question using last season’s prediction grid to calculate something called the log-likelihood function: the sum of the logarithms of the probabilities of each team finishing in their final position. The result you obtain is quite low: simulations indicate that only about 10% of the various outcomes (final rankings) allowed by the predictions would have a lower likelihood. It is certainly not low enough to say that they were bad, it just implies that the final league table was somewhat unlikely given the forecasts. A similar result this time round would provide more evidence that something is missing from the predictions (or perhaps that they are too precise).
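On a toy three-team grid the calculation looks like this (the probabilities are invented for illustration; with 20 teams you would estimate the comparison by Monte Carlo sampling rather than enumerating all 20! possible tables):

```python
import math
from itertools import permutations

def log_likelihood(grid, positions):
    """Sum of log-probabilities of each team finishing where it actually did."""
    return sum(math.log(grid[team][pos]) for team, pos in positions.items())

# Toy 3-team prediction grid: grid[team][position] = forecast probability.
grid = {
    "A": {1: 0.7, 2: 0.2, 3: 0.1},
    "B": {1: 0.2, 2: 0.5, 3: 0.3},
    "C": {1: 0.1, 2: 0.3, 3: 0.6},
}
actual = {"A": 1, "B": 3, "C": 2}   # the 'final table'
ll_actual = log_likelihood(grid, actual)

# How unlikely was that table? Compare against every possible final ranking.
teams = list(grid)
lls = [log_likelihood(grid, dict(zip(teams, perm)))
       for perm in permutations(range(1, len(teams) + 1))]
frac_lower = sum(ll < ll_actual for ll in lls) / len(lls)
```

Here `frac_lower` plays the role of the "about 10%" figure quoted above: the fraction of possible final tables that the predictions considered even less likely than the one that actually occurred.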

A Final Caveat


Having said that – models are only aware of what you tell them. There are plenty of events – injuries, suspensions, and managerial changes – of which they are blissfully unaware but could play a decisive role in determining the outcome of the season. Identifying what information is relevant – and what is just noise – is probably the biggest challenge in making such predictions.

I will continue to collect, compare, combine and publicize forecasts as the season progresses: follow me on Twitter (@eightyfivepoint) if you'd like to see how they evolve.


(This is a piece that I wrote for StatsBomb; I've copied it here.)