Better prediction of Premier League matches using data from other competitions

In most of my football posts on this blog I have used data from the English Premier League to fit statistical models and make predictions. Only occasionally have I looked at other leagues, and then always in isolation; I have never combined data from different leagues and competitions in the same model. Using a single league by itself works mostly fine, but I have run into some issues. Model fitting and prediction often simply do not work at the beginning of the season, and the reason mostly has to do with newly promoted teams.

If only data from the Premier League is used to fit a model, then no data on the newly promoted teams is available at the beginning of the season. This makes predicting the outcome of their first matches impossible. In subsequent matches the information available on them is still very limited compared to the other teams, for which we can rely on data from previous seasons. The uncertainty in the estimates for the new teams also propagates into the estimates and predictions for the other teams.

This problem can be remedied by using data from outside the Premier League to help estimate the parameters for the promoted teams. The most obvious place to look for data related to the promoted teams is the Championship, where the teams played before they were promoted. The FA Cup, for which teams from the Championship and Premier League qualify automatically, should also be a good source of data.

To test how much the extra data helps with predictions in the Premier League, I did something similar to what I did in my post on the Dixon-Coles time weighting scheme. I used the independent Poisson model to make predictions for all Premier League matches from 1 January 2007 to 15 January 2015. The predictions were made using a model fitted only on data from earlier matches (going back to August 2005), thus emulating a realistic real-time prediction scenario. I weighted the data using the Dixon-Coles approach, with \(\xi=0.0018\). This makes the scenario a bit unrealistic, since I estimated this parameter using the same Premier League matches I am going to predict here. I also experimented with using a different home field advantage for each of the competitions.
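The Dixon-Coles scheme down-weights past matches exponentially by age. As a minimal sketch (in Python, although the analyses on this blog are done in R), the weight of a match played \(t\) days before the prediction date is \(\phi(t) = \exp(-\xi t)\):

```python
import math

def dc_weight(days_ago, xi=0.0018):
    """Dixon-Coles time weight: recent matches count more in the likelihood."""
    return math.exp(-xi * days_ago)

# With xi = 0.0018, a match from yesterday keeps nearly full weight,
# while one from three years ago contributes only about 14% as much.
print(dc_weight(1))        # ~0.998
print(dc_weight(3 * 365))  # ~0.139
```

Larger values of \(\xi\) discount old matches faster; \(\xi = 0\) weights all matches equally.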

To measure prediction quality I used the Ranked Probability Score (RPS), which goes from 0 to 1, with 0 being a perfect prediction. The RPS is calculated for each match, and the RPS I report here is the average over all predictions made. Since this average is taken over more than 3600 matches, I report it with quite a few decimal places.
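For a match with three ordered outcomes (home win, draw, away win), the RPS compares the cumulative predicted probabilities with the cumulative observed outcome. A small Python sketch of how a single match is scored:

```python
def rps(probs, outcome):
    """Ranked Probability Score for one match.

    probs: predicted probabilities in outcome order (home, draw, away).
    outcome: index of the observed result (0 = home win, 1 = draw, 2 = away win).
    """
    total, cum_p = 0.0, 0.0
    for k in range(len(probs) - 1):
        cum_p += probs[k]                       # cumulative forecast
        cum_o = 1.0 if outcome <= k else 0.0    # cumulative observed outcome
        total += (cum_p - cum_o) ** 2
    return total / (len(probs) - 1)

print(rps((1.0, 0.0, 0.0), 0))  # 0.0: a certain, correct forecast is perfect
print(rps((0.5, 0.3, 0.2), 0))  # 0.145
print(rps((0.5, 0.3, 0.2), 2))  # 0.445: the same forecast is punished more by an away win
```

Because the outcomes are ordered, a home-win forecast that ends in a draw scores better than one that ends in an away win, which is the point of using the RPS rather than, say, the Brier score.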

Although the RPS goes from 0 to 1, an RPS of 1 is an unrealistic benchmark for worst possible prediction ability. To get a more realistic baseline to compare against, I calculated the RPS using the raw proportions of home wins, draws and away wins in my data as the predicted probabilities for every match. In statistical jargon this is often called the null model. The proportions were 0.47, 0.25 and 0.28, respectively, and gave an RPS of 0.2249.
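As a rough check of this baseline, the null model's average RPS is the proportion-weighted average of the RPS for each possible outcome. A Python sketch using the rounded proportions quoted above (the result therefore differs slightly from the figure computed on the actual match data):

```python
def rps(probs, outcome):
    """Ranked Probability Score for one match (0 = home, 1 = draw, 2 = away)."""
    total, cum_p = 0.0, 0.0
    for k in range(len(probs) - 1):
        cum_p += probs[k]
        cum_o = 1.0 if outcome <= k else 0.0
        total += (cum_p - cum_o) ** 2
    return total / (len(probs) - 1)

# Null model: always predict the raw outcome proportions.
props = (0.47, 0.25, 0.28)
avg_rps = sum(p * rps(props, k) for k, p in enumerate(props))
print(avg_rps)  # ~0.2254, close to the 0.2249 computed on the actual data
```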

Using only Premier League data, and skipping predictions for the first matches of each season involving newly promoted teams, gave an RPS of 0.19558.

Including data from the Championship in the model fitting, and assuming the home field advantage in the two divisions was the same, gave an RPS of 0.19298. Adding a separate parameter for the home field advantage in the Championship gave an even better RPS of 0.19292.
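To make the "separate home field advantage" idea concrete, here is a minimal Python sketch of how the expected goals in the independent Poisson model change when the home advantage parameter depends on the competition. All parameter values here are made up purely for illustration; in the actual model they are estimated from the weighted match data:

```python
import math

# Hypothetical attack/defence strengths on the log scale (not estimated from data).
attack = {"Arsenal": 0.35, "Leeds": 0.10}
defence = {"Arsenal": 0.25, "Leeds": 0.05}

# One home field advantage parameter per competition.
hfa = {"premier_league": 0.30, "championship": 0.25}

def expected_goals(home, away, competition):
    """Independent Poisson model: expected goal counts for the two teams."""
    lam_home = math.exp(attack[home] - defence[away] + hfa[competition])
    lam_away = math.exp(attack[away] - defence[home])
    return lam_home, lam_away

# Only the home team's scoring rate depends on which competition the match is in.
print(expected_goals("Arsenal", "Leeds", "premier_league"))
print(expected_goals("Arsenal", "Leeds", "championship"))
```

Sharing a single `hfa` value across both divisions recovers the first variant described above; giving each competition its own value adds one free parameter per extra competition.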

Including data from the FA Cup (in addition to data from the Championship) was challenging. When data from the earliest rounds were included, the model fitting sometimes failed. I am not 100% sure of this, but I believe the reason is that some teams, or groups of teams, are mostly isolated from the rest. By that I mean that some groups of teams have only played each other, and not any other team in the data. While this is not literally the case (it cannot be), I think the time weights make it approximately true: matches played a few years before the matches being predicted have weights that are almost 0. It seems reasonable that this, coupled with the incomplete design of the knockout format, is where the trouble comes from.
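One way to detect this kind of isolation before fitting is to build a graph with teams as nodes and matches as edges, and check that it has a single connected component. A small stdlib-only Python sketch with made-up team names:

```python
from collections import defaultdict

def connected_components(matches):
    """Group teams into components connected by at least one match."""
    graph = defaultdict(set)
    for home, away in matches:
        graph[home].add(away)
        graph[away].add(home)
    seen, components = set(), []
    for team in graph:
        if team in seen:
            continue
        stack, comp = [team], set()
        while stack:
            t = stack.pop()
            if t in comp:
                continue
            comp.add(t)
            stack.extend(graph[t] - comp)
        seen |= comp
        components.append(comp)
    return components

matches = [("A", "B"), ("B", "C"), ("D", "E")]  # D and E only ever play each other
print(len(connected_components(matches)))  # 2 components: estimation may fail
```

If the match list splits into more than one component, the relative strengths of teams in different components are not identified, and the likelihood maximization can fail. Dropping matches whose time weight is effectively zero before running this check would mimic the near-isolation described above.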

Anyway, I got it to work by excluding matches played by teams not in the Championship or Premier League in the respective season. An additional parameter for the home field advantage in the Cup was included in the model as well. Interestingly, this gave somewhat poorer prediction ability than using additional data from the Championship only, with an RPS of 0.192972, but it was still better than using Premier League data alone. With the same overall home field advantage for all competitions, the predictions were unsurprisingly poorer, with an RPS of 0.1931.

I originally wanted to include data from the Champions League and the Europa League, as well as from other European leagues, but the problems and results with the FA Cup made me dismiss the idea.

I am not sure why including the FA Cup didn’t give better predictions, but I have some theories. One is that a separate FA Cup home field advantage is unrealistic. Perhaps it would be better to assume that the home field advantage is the same as in the division the two opponents play in, if they play in the same division. If they played in different divisions, perhaps an overall average home field advantage could be used instead.

Another theory has to do with the time weighting scheme. The weighting parameter I used was estimated using data from the Premier League only. Since that data gives uncertain estimates for the newly promoted teams, the estimated parameter will perhaps give more recent matches more weight to compensate. With more informative data on those teams from the previous season, older matches should probably be allowed to be more influential. Perhaps the time weighting could be further refined with different weighting parameters for each division.

6 thoughts on “Better prediction of Premier League matches using data from other competitions”

  1. I have been following your blog for some time now, and I would like to develop a model using linear regression, so I need some clarification. How do I compute the home effect, and with what parameters? Also, how do I incorporate the time weighting scheme, and with what parameter, so that newer matches carry more weight? I have historical data for the past seven seasons. Thanks for your response.

    • If you follow the links in this post to some of my earlier posts you will find R code for computing the weights that you can give to the glm() function. To properly fit the model you will usually need to reformat your data a bit, which I have explained in this post. You can also find some code for doing this on the excellent pena.lt/y blog.

  2. Hi again, I’m afraid I do not understand how/why information from the 2nd league (e.g. the Championship) will improve the model.
    Consider the following (very unrealistic) example: assume that the attack/defence parameter distributions are equal for both leagues and that the winner is promoted to the PL. In that case the model will assume that the quality of the promoted team is equal to that of last year’s PL champion. Can you elaborate on why the model accuracy improves when we add games from another sub-league, where the level is significantly lower?

    • I am not 100% sure, but I think it is because you have more data about the promoted teams. When you use data from two divisions, and some teams have played in both (promoted and relegated teams), the model will adjust the parameters to reflect this.

  3. Hi, I’ve read lots of your posts on this website and I found most of them very inspiring and interesting. I don’t know R that well, so I implemented the Dixon-Coles model in Mathematica. Somehow I found the calculation time to be very slow.

    Let’s say that I want to predict the results for the Premier League in 2017, using all the match results from the Premier League, FA Cup and the Championship from 2006 to 2016. How long would a full prediction for the 2017 season take in R on your computer?

    • Thank you. There are many things that can slow down the estimation. I know from the optim function in R that different optimization algorithms often have different speeds, so it is worth checking this out. You could also try different starting values. Another thing to consider is the data: including data from the FA Cup can be especially problematic if you include the earlier rounds, with teams from the lower divisions. If a team has very few matches this can create problems. I would also recommend creating a graph (http://opisthokonta.net/?p=1490) to make sure all teams are connected.
