Luck plays a significant part in a football match. Because of this we cannot be absolutely sure that the winning team in a match is the best one. Some researchers have taken a closer look at this by viewing a football match as an experiment used to determine which team is the best (Soccer matches as experiments: how often does the ‘best’ team win? by G. K. Skinner & G. H. Freeman, link). They found that in matches where the goal difference was less than about 3 or 4 goals, we could in general not be more than 90% sure that the best team won. This led the scientists to call a football match “a badly designed experiment”.
While a single match can at best hope to determine which of two teams is the best, we need a lot more matches to determine the best team among several. We need to hold a competition. There are several ways in which the different teams can play against each other in a competition. Perhaps the most common format is the all versus all format we find in most national leagues, where every team plays against every other team in the league. Another common type of competition is the knockout tournament. This is the format used in the final stage of many international competitions, like the FIFA World Cup.
If we suppose the goal of a competition is to determine the best team, we can see the competition as an experimental setup. Both competition formats have pros and cons. In an all versus all league (hereafter just referred to as a league) the teams often play each other twice during the season, once at each team’s home field. We thus get a repetition of each pairing of teams, and we also get to control for home field advantage. This may or may not be the case in knockout tournaments (hereafter just referred to as tournaments). In the knockout stage of the FIFA World Cup the teams facing each other play only a single match, and no team except the one representing the host nation has a home field advantage (if it reaches the knockout stage, that is). The UEFA Champions League knockout stage, on the other hand, operates with 2-leg matches, where the two teams play each other twice, once at each home ground.
The question of whether different types of competitions are better or worse at correctly identifying the best team relates to the statistical concept of power. In short, the power of an experimental procedure is the probability of rejecting the null hypothesis when the alternative hypothesis is true. In terms of identifying the best team, the hypotheses can be stated as
H0: Team X is not the best team
H1: Team X is the best team
So what we want to figure out is the probability of team X winning the competition if it truly is the best team. The power of an experiment depends on a couple of factors: the number of observations, the size of the effect and the experimental procedure itself. In a football competition the number of observations and the procedure are greatly confounded, since the number of matches is central to the competition format. A lot more matches are played in a league than in a tournament. In a double round-robin league with N teams, N(N-1) matches have to be played. A single-elimination (1-leg) tournament, by contrast, needs only N-1 matches, played over log2(N) rounds. The effect size in this context is how much better the best team is compared to the other teams.
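To make the difference in match counts concrete, here is a small sketch (the function names are my own, for illustration only):

```python
def league_matches(n):
    """Double round-robin: every pair of teams meets twice (home and away)."""
    return n * (n - 1)

def knockout_matches(n):
    """Single-elimination with n teams (a power of two): every match
    eliminates exactly one team, so n - 1 matches decide a winner."""
    return n - 1

for n in (8, 16, 32):
    print(n, league_matches(n), knockout_matches(n))
```

For 16 teams this means 240 league matches against just 15 tournament matches, so the league "experiment" has far more observations to work with.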
Power analysis can be rather difficult to do analytically except for the simplest models. One way to do a power analysis is therefore by simulation. For the simulations I did here I decided to use Elo-ratings (which I have written about here) to generate some ratings and then simulate a competition. By doing this we know which team is the best. By simulating the competition many times over we can get an estimate of the probability that the best team wins. The Elo-ratings can be used directly to calculate the chances of winning and losing a match and are therefore a simple way to do this. Elo-ratings have some drawbacks, however. The most obvious one that comes to my mind is that it is impossible to calculate the probability of a draw. This may be a problem for the simulations of the league competitions. Hopefully, the results do not suffer too much because of this in the long run, since the probability of winning, as calculated from the ratings, includes half of the probability of drawing.
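For reference, the win probability used here is the standard Elo expected score (this is the textbook formula, not code from the original analysis):

```python
def elo_win_prob(rating_a, rating_b):
    """Expected score of team A against team B under the Elo model.
    As noted above, this folds half the draw probability into a win."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Two equally rated teams each win with probability 0.5;
# a 200-point rating advantage gives roughly a 76% win probability.
print(elo_win_prob(1500, 1500))
print(elo_win_prob(1700, 1500))
```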
For the simulations I generated uniformly distributed ratings for 16 teams. By changing the upper and lower bounds of the uniform distribution we can change the competitiveness of the league. I used two sets of bounds: one where the ‘win percentage’ between a team at the upper bound and a team at the lower bound was 90%, and one where it was 75%. We can think of this as varying the effect size. For each simulation new ratings were generated, and for each of the results 100000 competitions were simulated. For the tournament simulations I also looked at two different initial seedings. One was completely random. The other was better informed: each team from the top half was initially matched up against a team from the bottom half, but otherwise the seeding was random.
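The tournament part of the setup can be sketched roughly as follows. This is a minimal illustration with random seeding only, using the standard Elo expected-score formula; the function names and the rating bounds are my own assumptions, not the exact code behind the results below:

```python
import random

def elo_win_prob(ra, rb):
    """Standard Elo expected score of the team rated ra against rb."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

def simulate_knockout(ratings):
    """Play one single-elimination tournament; return the winner's index."""
    teams = list(range(len(ratings)))
    random.shuffle(teams)  # completely random seeding
    while len(teams) > 1:
        next_round = []
        for a, b in zip(teams[::2], teams[1::2]):
            p = elo_win_prob(ratings[a], ratings[b])
            next_round.append(a if random.random() < p else b)
        teams = next_round
    return teams[0]

def power_estimate(n_teams=16, n_sims=10000, lo=1500, hi=1700):
    """Estimate the probability that the best team wins the tournament,
    drawing fresh uniform ratings for every simulated competition."""
    wins = 0
    for _ in range(n_sims):
        ratings = [random.uniform(lo, hi) for _ in range(n_teams)]
        best = max(range(n_teams), key=lambda i: ratings[i])
        if simulate_knockout(ratings) == best:
            wins += 1
    return wins / n_sims
```

The bounds 1500–1700 correspond to roughly a 76% win probability between the extremes, close to the 75% setting; widening the gap to about 380 rating points would give the 90% setting. The league simulation works the same way, except every pair of teams plays twice and the winner is the team with the most match wins.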
Here are the results:
The league format is, unsurprisingly, much better at determining the best team than the tournament format. What I found most surprising was how little effect the seeding in a tournament has. For both the more and the less competitive tournaments, the chance of correctly identifying the best team increases by less than one percentage point with informed seeding.