Thanks for the reply.
> Hehe, your point b) is funny. One judges a simulation by its
> outcome; if a simulation produces a result that is the same as what it
> simulates, then it is, by definition, good. :)
OK. Question: if, at the beginning of the season, you ask me
to predict the final table and I just copy the previous season's final
table (replacing the relegated teams), is it a simulation? (More
questions to come if you answer "yes"...)
> - the real match outcomes and table are a sampling of the teams' real
> strengths. They are, of course, subject to randomness; individual
> matches can be influenced by chance. However, repeated sampling will
> converge to the true value, because I don't think the influence of
> chance is /that/ big that it dominates; the variance is not that big.
> Remember that a sequence is a product of chances; you can't expect the
> unexpected to happen too often in a row.
Of course. I specified it to highlight two things:
- You can't judge a simulation using just one or a few real matches as
a benchmark.
- If a simulation has Chelsea at 70 points and Chelsea really has 69
points, you can't just say that the simulation is wrong. (This was
functional to my idea of deducing teams' strengths from the real
table, an idea that, thanks to you, I now know is wrong.)
> - the simulation has the advantage that it can be repeated and its
> statistics checked out. If and when you have time
I just have to develop the habit of starting a new simulation before I
go to sleep, and the next morning I'll find the final table (indeed I
guess it doesn't take more than a couple of hours to complete a
season, probably less). Write down the number of points for each team,
and it's done. (tl;dr: it takes no time.)
> please make several
How much is "several"?
> and check the following things:
> 1. How similar to each other the simulated tables are at the end of
> the season. This will tell us how consistently the FM engine applies the
> parameters of the players and teams. If the tables turn out to be more
> or less similar (with each team moving by no more than, let's say, 2 to
> 3 positions around the table), we can conclude that it is a consistent
What's the metric? The variance of the number of points (or of the
position) for each team?
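For instance, the consistency check could be sketched like this (a
minimal Python sketch; the simulated points are made-up placeholder
numbers, since FM has no export):

```python
import statistics

# Final points per team across several simulated seasons
# (hypothetical values, just to illustrate the metric).
simulated_points = {
    "ManU":       [88, 85, 90, 87],
    "Chelsea":    [70, 74, 69, 72],
    "Portsmouth": [41, 38, 45, 40],
}

for team, points in simulated_points.items():
    mean = statistics.mean(points)
    sd = statistics.stdev(points)  # sample standard deviation
    print(f"{team}: mean {mean:.1f} points, sd {sd:.1f}")
```

A small spread (sd of a few points) across runs would support the
"consistent engine" conclusion; the same could be done on league
positions instead of points.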
> 3. And of course we can see how it is influenced by randomness, by
> looking at the outcomes of one match played several times. This is
> allowed to have a bigger variance, but still within limits. One wouldn't
> expect ManU to beat Portsmouth less than 8 times out of 10.
Hmm... If it's just *one* match each season (i.e. if every morning I
just have to check ManU vs. Portsmouth), it's OK. Otherwise it's not
practical, because there isn't (AFAIK) a way to export results from FM
to a spreadsheet.
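The "8 times out of 10" expectation is really a binomial question.
A sketch, with an invented true win probability (not an FM value):

```python
from math import comb

# If ManU's true win probability vs Portsmouth were p, how likely is
# it that they win fewer than 8 of 10 simulated replays?
p = 0.85  # hypothetical strength parameter, for illustration only
p_fewer_than_8 = sum(
    comb(10, k) * p**k * (1 - p) ** (10 - k) for k in range(8)
)
print(f"P(fewer than 8 wins out of 10) = {p_fewer_than_8:.3f}")
```

With p = 0.85 that probability is about 0.18, so seeing fewer than 8
wins once in a while would not by itself indict the engine; only a
systematic shortfall would.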
> - what you say below, is backwards.
Thanks for the explanation!
> We can, however,
> calculate the statistics of the simulation, and then see if the real
> results fall within the predicted confidence intervals. And that will
> tell us how true to life the simulation really is - if the predicted
> points per game for ManU is x points, and the actual one is y, we can
> check to see within which confidence interval of the simulation that
> value falls, and deduce the quality of the simulation.
I.e. after several runs of the simulation we can say that we know each
team's strength (as measured by points per match) according to FM (the
parameter), and we check how many standard errors separate the real
EPL table (the sample) from it. I.e. we ask: how likely is it that the
real table was generated by FM?
Am I right?
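If I understood correctly, the check would look roughly like this
(Python sketch with invented numbers; the standard error of the mean
is sd / sqrt(n)):

```python
from math import sqrt

# Simulated final points for Chelsea over n seasons (made-up values),
# and a made-up real final-table figure to compare against.
sim = [70, 74, 69, 72]
real = 69

n = len(sim)
mean = sum(sim) / n
sd = sqrt(sum((x - mean) ** 2 for x in sim) / (n - 1))
se = sd / sqrt(n)  # standard error of the simulated mean

z = (real - mean) / se
print(f"real value is {abs(z):.2f} standard errors from the FM mean")
```

A |z| under about 2 would mean the real result falls inside the usual
95% confidence band of the simulation; doing this for every team gives
the overall "how likely is the real table under FM" picture.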
Sorry for the questionable lexicon: all my statistical knowledge comes
from my current reading of Statistics for Dummies*... and I'm
understanding just half of what I'm reading!
* Really! I can't keep annoying Daniele for every little doubt, after