Least squares rating of football teams

The Wikipedia article Statistical association football predictions mentions a method for least squares rating of football teams. The article does not give any source for this, but I found what I think may be the origin of the method: an undergraduate thesis titled Statistical Models Applied to the Rating of Sports Teams by Kenneth Massey. It is not about football in particular, but about sports in general where two teams compete for points. A link to the thesis can be found here.

The basic method, as described in Massey's paper and the Wikipedia article, is to use an n x k design matrix A, where each of the k columns represents one team and each of the n rows represents a match. In each match (or row) the home team is indicated by 1 and the away team by -1. We also have a vector y containing the goal difference in each match with respect to the home team (i.e. positive values for home wins, negative for away wins). The least squares solution to the system Ax = y is then found, with the vector x containing the rating value for each team.
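To make the construction concrete, here is a minimal R sketch. The toy matches data frame and its column names are my own invention, not something taken from Massey's paper or the Wikipedia article.

library(MASS)  # for ginv()

# Toy data: each row is a match (home team, away team, goals scored by each).
matches <- data.frame(
  home   = c("A", "B", "C", "A", "C", "B"),
  away   = c("B", "C", "A", "C", "B", "A"),
  hgoals = c(2, 1, 0, 3, 1, 2),
  agoals = c(0, 1, 2, 1, 1, 2),
  stringsAsFactors = FALSE
)

teams <- sort(unique(c(matches$home, matches$away)))
n <- nrow(matches)
k <- length(teams)

# n x k design matrix: +1 for the home team and -1 for the away team in each row.
A <- matrix(0, nrow = n, ncol = k, dimnames = list(NULL, teams))
A[cbind(1:n, match(matches$home, teams))] <- 1
A[cbind(1:n, match(matches$away, teams))] <- -1

# Goal differences seen from the home team.
y <- matches$hgoals - matches$agoals

# The columns of A sum to zero, so the ratings are only identified up to an
# additive constant. The pseudoinverse gives the minimum-norm least squares
# solution, which (for a connected schedule) has ratings summing to zero,
# matching the interpretation of a rating as goals relative to the average team.
ratings <- drop(ginv(A) %*% y)
names(ratings) <- teams
round(sort(ratings, decreasing = TRUE), 3)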

When it comes to interpretation, the difference between the least squares ratings of two teams can be seen as the expected goal difference in a match between them. For example, if one team is rated 1.0 and another 0.4, the first is expected to win a meeting between them by 0.6 goals. The individual rating can be seen as how many goals a team scores compared to the overall average.

Massey's paper also discusses some extensions to this simple model that are not mentioned in the Wikipedia article. The most obvious is the incorporation of home field advantage, but there is also a section on splitting the teams' performances into offensive and defensive components. I am not going to go into these extensions here; you can read more about them in Massey's paper, along with some other rating systems that are also discussed. What I will do is take a closer look at the simple least squares rating and compare it to the ordinary three-points-for-a-win system used to determine the league winner.

I used the function I made earlier to compute the points for the 2011-2012 Premier League season, and then I computed the least squares rating (a code sketch of how this can be done follows after the table). Here you can see the result:

Team         PTS     LSR  LSRrank  RankDiff
Man City      89   1.600        1         0
Man United    89   1.400        2         0
Arsenal       70   0.625        3         0
Tottenham     69   0.625        4         0
Newcastle     65   0.125        8         3
Chelsea       64   0.475        5        -1
Everton       56   0.250        6        -1
Liverpool     52   0.175        7        -1
Fulham        52  -0.075       10         1
West Brom     47  -0.175       12         2
Swansea       47  -0.175       11         0
Norwich       47  -0.350       13         1
Sunderland    45  -0.025        9        -4
Stoke         45  -0.425       15         1
Wigan         43  -0.500       16         1
Aston Villa   38  -0.400       14        -2
QPR           37  -0.575       17         0
Bolton        36  -0.775       19         1
Blackburn     31  -0.750       18        -1
Wolves        25  -1.050       20         0

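For reference, here is roughly how a table like the one above can be put together, building on the sketch from earlier. The objects results (a data frame of the season's matches with columns home, away, hgoals and agoals) and pts (a named vector of league points per team, from the points function mentioned above) are placeholders of my own; this is not the exact code I used.

library(MASS)

# Least squares ratings from a data frame of match results.
lsq_rating <- function(results) {
  teams <- sort(unique(c(results$home, results$away)))
  n <- nrow(results)
  A <- matrix(0, nrow = n, ncol = length(teams), dimnames = list(NULL, teams))
  A[cbind(1:n, match(results$home, teams))] <- 1
  A[cbind(1:n, match(results$away, teams))] <- -1
  setNames(drop(ginv(A) %*% (results$hgoals - results$agoals)), teams)
}

lsr <- lsq_rating(results)
tab <- data.frame(Team = names(lsr),
                  PTS  = as.numeric(pts[names(lsr)]),
                  LSR  = round(lsr, 3))
tab <- tab[order(-tab$PTS, -tab$LSR), ]            # league table order
tab$LSRrank  <- rank(-tab$LSR, ties.method = "first")
tab$RankDiff <- tab$LSRrank - seq_len(nrow(tab))   # LSR rank minus points rank
tab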
It looks like the least squares approach gives results similar to the standard points system. It does differentiate between the two top teams, Manchester City and Manchester United, even though they have the same number of points. This is perhaps not so surprising, since City won the league on goal difference ahead of United, and goal difference is exactly what the least squares rating is based on. Another, perhaps more surprising, observation is how low Newcastle's least squares rating is compared to the other teams with approximately the same number of points. If ranked according to the least squares rating, Newcastle would have been below Liverpool; instead they finished three places above. This hints at Newcastle winning often but by small margins, while Liverpool won less often but by larger margins when they did. We can also see that Sunderland comes out poorly in the least squares rating, dropping four places.

If we now plot the number of points against the least squares rating, we see that the two methods generally give similar results. This is perhaps not so surprising, and despite some disparities like the ones I pointed out, there are no obvious outliers. I also calculated the correlation coefficient, 0.978, and I was actually a bit surprised at how high it was.
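Continuing from the tab data frame sketched above, the plot and the correlation can be reproduced along these lines:

# Scatter plot of league points against least squares rating.
plot(tab$PTS, tab$LSR, xlab = "Points", ylab = "Least squares rating")

# Pearson correlation between the two rating schemes.
cor(tab$PTS, tab$LSR)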

Very accurate music reviews are perhaps not so useful

Back in August I downloaded all album reviews from pitchfork.com, a hip music website mainly dealing with genres such as rock, electronica, experimental music and jazz. In addition to a written review, each reviewed album is given a score by the reviewer, from 0.0 to 10.0 with one decimal of accuracy. In other words, a reviewed album is graded on a 101-point scale. But does it make sense to have such a fine-grained scale? Is there really any substantial difference between two records with a 0.1 difference in score? Listening to music is a qualitative experience, and no matter how professional the reviewer is, a record review is always a subjective analysis influenced by the reviewer's taste, mood and preconceptions. To quantify musical quality on a single scale is therefore a hard, if not impossible, feat.

Still, new music releases are routinely reviewed and graded in the media, but I don't know of anyone else grading with the accuracy that Pitchfork does. Usually there is a 0 to 5 or 0 to 10 scale, perhaps with half-point steps. There are sites like Metacritic and Rotten Tomatoes (for film reviews) that have a similar accuracy in their scores, but they are both based on reviews aggregated from many sources. In the case of Pitchfork, there is usually just one reviewer (with a few reviews credited to two or more people). As far as I know, Pitchfork has no guidelines on how to interpret the score or what criteria to use when setting it, so it may just be up to the reviewer to decide what to put into the score.

Anyway, I extracted the information from the reviews I downloaded and put it into a .csv file. This gave me data on 13,330 reviews, which I then loaded into R for some plotting with ggplot2. Let's look at some graphs to see how the scores are distributed and try to find something interesting. First we have a regular histogram:
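The plots below were made with something along these lines; the file name and the score column name are just my own naming for the scraped data, not anything official.

library(ggplot2)

# Load the scraped review data (file and column names are placeholders).
reviews <- read.csv("pitchfork_reviews.csv")

# Histogram of all review scores; the binwidth of 0.5 is a judgment call.
ggplot(reviews, aes(x = score)) +
  geom_histogram(binwidth = 0.5) +
  xlab("Score") + ylab("Number of reviews")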

When I first saw it, I did not expect the distribution to be shifted so far to the right; I expected the peak to be around 5 or 6. I calculated the mean and median, which are 6.96 and 7.2, respectively. Let's look at a bar plot, where each bar corresponds to a specific score.
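The summary statistics and the bar plot follow directly from the same data frame:

# Summary statistics reported above.
mean(reviews$score)    # 6.96
median(reviews$score)  # 7.2

# Bar plot with one bar per distinct score (up to 101 possible values).
ggplot(reviews, aes(x = factor(score))) +
  geom_bar() +
  xlab("Score") + ylab("Number of reviews")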

Now this is interesting. We can clearly see four spikes around the top; some scores are clearly more popular than others. ggplot2 clutters the ticks on the x-axis, so it is difficult to see exactly which scores they are (this seems to be a regular problem with ggplot2; even the examples in the official documentation suffer from it). Anyway, I found out that the most popular scores are 7.5 (620 records), 7.0 (614 records), 7.8 (611 records) and 8.0 (594 records). Together, 18.3% of the reviewed records have been given one of these four scores. From this there seems to be some sort of bias towards round or 'half round' numbers. I guess we humans have some sort of subconscious preference for these kinds of numbers. If we now look closer at the right end of the plot, we see the same phenomenon:
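The counts behind these numbers come from simply tabulating the scores, roughly like this:

# Tabulate how often each score occurs, most frequent first.
counts <- sort(table(reviews$score), decreasing = TRUE)
head(counts, 4)                        # the four most popular scores
sum(head(counts, 4)) / nrow(reviews)   # their share of all reviews (about 0.18)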

The 10.0 'perfect' score is used far more often than the scores just below it, so it appears to be harder to make a 'near perfect' album than a perfect one, which is kind of strange. If I were to draw a conclusion after looking at these charts, it would be that a 101-point scale is too fine-grained to be useful for distinguishing between albums that differ little in their numeric scores. I also wonder whether this phenomenon can be found in other situations where people are asked to grade something on a scale with similar accuracy.