There is no substitute for timely and trustworthy information on performance and perceptive analysis of that information. Too often we go from minute to minute and game to game, with little more than a random or fragmentary collection of observations and recollections to guide us. For the very experienced coaches, that often works. But sooner or later, something important is lost for lack of good analysis.
Where we can gather data and analyse it in support of football or futsal performance, we should do so. But you need three things: the ability and means to identify and collect the appropriate football data; the ability to analyse it and draw sound conclusions; and finally, the good sense and determination to implement change if the conclusions point that way and it is possible to do so.
Now ask yourself this - how many women's or men's Premier League clubs in the ACT devote time / resources / effort to game analysis? Not many and not often is my observation. When something happens that makes a difference, it's usually because the club coaching / support staff have been doing their homework. The irony is that if we watch football (of any code) on television, we are now provided with a useful short list of game statistics at half and full time. Commentary often revolves around the data. The Canberra United coach will do the video game analysis on the Monday following the game and debrief the players that afternoon. The data collected can then be subject to further scrutiny from game to game, to search out patterns for the home team and the opposition teams.
This is a subject that deserves and will get a lot more attention at the NPL.
But for now, I offer an original piece of research and analysis by someone with a clearly well above average mathematical ability, practised in the statistical methods that underpin the paper provided in this post.
OK, we are not all as comfortable with the heavy-duty quantitative methods that underpin this piece of work, but no problem - the author has presented the findings in a way that we can read and understand. Now this is what a coach or Technical Director wants - something they can read, understand and make use of if they agree.
The starting point for this work was an event I witnessed - the 2010 Under 13 Boys and Girls National Youth Championships. Several of us discussed the method used at the time (and still used today) to select the "All Stars" team to play in the final game of the NYC. The discussion was not about the relative merits of one player above another; it was about the "method" the FFA had settled on to come to an All Stars team and what followed from it. Did this method stand scrutiny? Was it fit for its purpose?
There was plenty of good football, and the FFA had introduced what was referred to at the time as a "Technical Assessment Score", which was added to the normal win / draw game results points. A terrific innovation - but that's another story in itself.
Importantly, at the conclusion of the NYC, an exhibition match was held between the winning team and a team composed of the best of the rest of the players. Or was it? It was to be called the All Stars match. The general concept was first trialed in the previous National Training Centre Challenge.
The purpose of the All Stars game at this NYC was to provide the "Technical Assessment Group" a further and final opportunity to decide which players would be called forward to join an Australian age train-on squad, in this case for subsequent selection to an inaugural Australian age representative team (boys and girls). Clearly, if you were not in either the Winning Team or the All Stars team, you were out of consideration at that time for selection to the Australian train-on squad. As it happens, the ACT girls were well represented in the All Stars team and one made it all the way to the Australian team. The ACT coach at the time was the coach of the NYC. And there is a really good story around this cohort that again demonstrates how it can / should be done for young players. But, alas, again I digress. It was fascinating stuff. Still is. A lot at stake.
There were other outcomes from the "Technical Assessment Group" in relation to team and coach performance, but that is not the subject of this specific research and analysis.
Since the 2010 NYC, the selection of an Australian age team for the Under 13 boys and girls has been scrapped, but the All Stars game process is still in place and an important outcome from the Technical Assessment Group. What does it mean for the ACT players going forward? Good reason to reflect on the process.
The All Stars game would appear to provide the Technical Assessment Group, the National Women's Coaching staff and attending State NTC Coaches with vital information on those young players who need to be streamed into NTC programs (scholarship or training agreements), as part of the talented player identification process. This impacts differently (in terms of time of entry / duration) on boys' and girls' programs. But the talent spotting is fully focused on this Under 13 NYC. So it should be!
So, does this All Stars game selection process work for the ACT talented players? Now that is a very important question. It's dumb to assume that it works well or equally for all teams and players until you can prove it so. This bit of research has a couple of surprises for you. What does the analysis of the data reveal? Read the paper; it's fascinating.
I will say this (separate from the paper): at the conclusion of the 2011 NYC, our U13 girls remain in Group A, while our U13 boys remain in Group B. Will things change in 2012? Observations of the present programs seem to indicate that the cautious prediction would be: the girls may be sufficiently competitive to retain their position in Group A, but not constitute the winning team; the boys will remain in Group B (and not be the winning team - which comes from Group A of course). So the means of selection for the All Stars team is really important to prospective opportunities for our young players. Simple as that!
And how closely does the Technical Assessment Group really look at Group B games, and how hard is it for a player to shine and be noticed and selected from a modestly performing Group B team? Not much, I would think. However, someone is sure to point out that the really, really, really good players do get noticed, which is probably true. But how good do you have to be to get noticed above a player of a little less ability in the winning team? The winning team enjoys the halo effect of simply "winning". More than that, winning-team players who are not on the winning team's match card for the All Stars game get a preferential place on the All Stars team's match card. Now that did surprise some of us, and I think you will find the research outcomes interesting on this subject. Did the FFA think this through, and if so, what did they base their decision upon? Probably no more than "gee, we had better give all the winning team a run in the final match". Makes you think.
There is one way of dramatically advancing the prospects of our young players (boys and girls) - get into Group A and win. Anyone who says that winning the NYC is not the objective is not seeing this clearly, and anyway, why are we there if not to win? You can bet the players are there to win. Now what sort of Centre of Excellence program will it take to achieve these outcomes, and how long will it take? And what sort of coaches do we need to take us forward?
So, on with this terrific piece of work by one of our thinking parents, who seems to have put the time spent on the sidelines of Football and Futsal games to good effect. Should be more of it - in fact, I know there is!
It's well worth a read. I wonder if anyone in Capital Football or the FFA can produce this sort of work. They should be doing this sort of work and publishing it.
..........................................................................................................................................................
Choosing a national sports team: the 2010 FFA National Junior Championships
Choosing a
national sporting team is complex and challenging. It has two key dimensions —
the pool of players that are considered in the selection, and the selection
process. In the 2010 FFA National Junior Championships, in both the girls’ and boys’ competitions (A and B Groups), the
championships ended with an AllStars match between the top performing team in
that group and an AllStars team chosen from the remaining five teams. The
AllStars matches provided recognition to the best performing team in the group,
but also provided a pool of players for selection of national girls’ and boys’
training squads from which a national under-13 girls’ team and boys’ team would
be chosen to play in the Asian Football Confederation’s Festival of Football in
mid 2010.
For the purpose of
selecting a national squad, an AllStars match would ideally be between two
teams that broadly include the most talented players in that group and that are
evenly matched.
The current
selection process raises a number of related questions. Does a match between
the top scoring team and an AllStars team bring together the best talent
amongst the players in the championship? If this is generally the case, does it
remain so when the final scores in the group are close? Additionally, where
some of the better performing players from the top scoring team are placed in
the AllStars team, as occurred in the boys’ competition, is this likely to
significantly affect the balance between the teams in the final match?
Current
arrangements potentially have a negative impact on selection in at least three
ways. Firstly, where there is a wide gap between the standard of play of the
AllStars team and the opposing team, this is likely to lower the overall
standard of players in the AllStars match and therefore the quality of the pool
for national squad selection. Secondly, if there is a significant imbalance in
the standard of each team in the AllStars match, this will reduce the scope for
talented players to show their best performance under pressure. Arguably the
process adopted for the boys of placing the best performing players of each
team (including the top scoring team) onto the AllStars team has the potential
to create this imbalance. The arrangements for the boys also have a third
consequence — that the top scoring team playing the AllStars match is smaller
than the usual 16 players, which reduces the pool of players actually playing
the AllStars match and at the same time gives relatively greater game time to
the players in the potentially weaker team.
Considering these issues, what were the outcomes in the 2010 National Junior Championships?
Selection for
the girls’ national training squad
For Group A, NSW
Metro 1 finished in top place, as did NSW Metro 2 in Group B (Table 1). Scores
were close in Group A and the overall ranking on game scores was altered by the
allocation of technical points (of five, three and one) to those teams that
most closely played according to the principles of the National Football
Curriculum.
Table 1: Girls: results for each Group

| Team             | Initial score | Final score adjusted for technical points | AllStars match selection | Girls Squad selection |
|------------------|---------------|--------------------------------------------|--------------------------|-----------------------|
| Group A          |               |                                            |                          |                       |
| NSW Metro 1      | 10            | 15                                         | 16                       | 7                     |
| (not named)      | 11            | 12                                         | 2                        | 1                     |
| ACT              | 6             | 9                                          | 5                        | 1                     |
| SA               | 7             | 7                                          | 3                        | 2                     |
| Victoria Metro   | 5             | 5                                          | 1                        | 1                     |
| Queensland       | 1             | 1                                          | 5                        | 5                     |
| Group B          |               |                                            |                          |                       |
| NSW Metro 2      | 15            | 20                                         | 16                       | 6                     |
| WA               | 10            | 11                                         | 4                        | 1                     |
| NSW Country      | 7             | 10                                         | 6                        | 3                     |
| Victoria Country | 6             | 6                                          | 3                        | 1                     |
| (not named)      | 6             | 6                                          | 3                        | 2                     |
| (not named)      | 0             | 0                                          | 0                        | 0                     |
Source: Football Federation Australia and Capital
Football websites.
From two AllStars
matches involving 64 players, 30 players were chosen for the Girls National
Training Squad — that is, 17 from Group A, and 13 from Group B. Of all girls
selected, 13 played on NSW Metro teams, or 41% of NSW Metro team players, while
17 of the 32 players on AllStars teams were selected, a selection rate of 53%.
In Group A, NSW
Metro 1 had 7 of their players selected to the national squad, a selection rate
of 44%: in contrast, the AllStars players had 10 of their 16 players selected,
a selection rate of 63%.
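For readers who want to check the arithmetic, the selection rates quoted here (and later in Table 3) are simple proportions of players selected to players who took part in the AllStars match. A minimal sketch in Python, using only the Group A counts already given above:

```python
# Recomputes the Girls Group A selection rates from the counts quoted above
# (7 of the 16 NSW Metro 1 players, 10 of the 16 AllStars players).
rates = {
    "NSW Metro 1 (top scoring team)": (7, 16),
    "AllStars team": (10, 16),
}
for side, (selected, played) in rates.items():
    print(f"{side}: {selected}/{played} = {100 * selected / played:.1f}%")
# NSW Metro 1 (top scoring team): 7/16 = 43.8%  (44% in the text)
# AllStars team: 10/16 = 62.5%                  (63% in the text)
```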
In Group B, NSW
Metro 2 had 6 of their 16 players selected, a selection rate of 38%, while 44%
of the AllStars players were selected. The gap in selection rates was less than
in Group A, and final team scores were more spread than for Group A.
If the top scoring
team was broadly representative of the best talent in the championship, one
would expect the national squad selection rate for the top scoring team to
be roughly the same as for the AllStars team. However, for both groups it was
less than for the AllStars and, for Group A, considerably less. This suggests
that the top scoring team may not have been representative of the top talent in
Group A, and as such the AllStars match may not have provided the best pool for
national squad selection. The apparent imbalance in Group A between the NSW
Metro team and the AllStars team may also have reduced the quality of play in
the final match.
Selection for
the boys’ national training squad
NSW Metro 1 finished
Group A in top place in the boys’ championships also, while NSW Metro 2
finished Group B in top place (Table 2). Scores were close in Group A and the
overall ranking on game scores was altered by the allocation of the technical
points. As noted above, in both groups the AllStars teams also included players
from the top scoring team.
From a pool of 33
AllStar players plus 24 top team players (13 from NSW Metro 1 and 11 from NSW
Metro 2), 26 were selected to the national squad — 18 from the 30 who played in
Group A and 8 from the 27 who played in Group B.[1]
Of all boys selected, 6 played on NSW Metro teams in the AllStars matches, or
25% of NSW Metro team players in those matches, compared to the 61% of AllStars
team players that were selected.
Table 2: Boys: results for each Group

| Team                 | Initial score | Final score adjusted for technical points | AllStars match selection | Boys Squad selection (a) |
|----------------------|---------------|--------------------------------------------|--------------------------|--------------------------|
| Group A              |               |                                            |                          |                          |
| NSW Metro 1 team     | 9             | 14                                         | 13                       | 3                        |
| NSW Metro 1 AllStars |               |                                            | 3                        | 3                        |
| Victoria Metro       | 13            | 13                                         | 2                        | 2                        |
| (not named)          | 7             | 10                                         | 4                        | 3                        |
| (not named)          | 7             | 7                                          | 4                        | 4                        |
| (not named)          | 4             | 5                                          | 3                        | 3                        |
| South Australia      | 2             | 2                                          | 1                        | 0                        |
| Group B              |               |                                            |                          |                          |
| NSW Metro 2          | 15            | 20                                         | 11                       | 3                        |
| NSW Metro 2 AllStars |               |                                            | 5                        | 4                        |
| NSW Country          | 10            | 13                                         | 4                        | 0                        |
| (not named)          | 9             | 10                                         | 4                        | 1                        |
| ACT                  | 7             | 7                                          | 3                        | 0                        |
| Victoria Country     | 3             | 3                                          | 0                        | 0                        |
| (not named)          | 0             | 0                                          | 0                        | 0                        |
Source: Football Federation Australia and Capital
Football websites.
a) Excludes one player who did not play the
championships but was chosen for the national squad, and three other players
chosen for the squad who did not play the AllStars matches.
From the Group A
top team, NSW Metro 1, 3 players were chosen to play as AllStars. All three NSW
Metro AllStars players plus 3 from the remaining 13 of NSW Metro 1 were chosen
for the boys’ national training squad — that is, 3 of the 13 players who played
the match as NSW Metro 1, a selection rate of 23%: in contrast, 15 of the 17
players on AllStars teams were selected, a selection rate of 88%.
In Group B, NSW
Metro 2 was top scoring, and five of its players were chosen for the AllStars
team. Four of these were selected for the national squad, along with three from
the remainder of NSW Metro 2 — that is, 3 of the 11 players who played the
match as NSW Metro 2, a selection rate of 27%: in comparison, 5 of the 16
AllStars players were selected, a selection rate of 31%.
As for the results
of the girls’ competition, the results for the boys’ competition for Group A
suggest that the top scoring team may not have been representative of the top
talent in the Group, and that the AllStars match for this group was not between
balanced teams. All results are presented in Table 3.
Table 3: Proportion of players selected to the national squad (%)

| Group    | Top scoring team | AllStars team |
|----------|------------------|---------------|
| Girls    |                  |               |
| A        | 44               | 63            |
| B        | 38               | 44            |
| Combined | 41               | 53            |
| Boys     |                  |               |
| A        | 23               | 88            |
| B        | 27               | 31            |
| Combined | 25               | 61            |
The results in Table
3 show some considerable differences between the selection outcomes for those
playing the AllStars match as AllStars team members rather than NSW Metro team
members. Probit regression has identified that several of these differences are
statistically significant: for the Boys Group A, there was a negative relationship
between being selected and playing on the NSW Metro 1 team in the AllStars
match; and for both groups of boys together, there was a negative relationship between
selection and playing on NSW Metro teams in the AllStars matches. Other
significant results were obtained, with implications for national squad
selection (see Attachment A).
Conclusion
The current
AllStars matches aim to reward the best scoring teams in the National
Championships by giving them the chance to play the AllStars teams. However,
where the group scores are close, it appears questionable whether this arrangement also
provides a good pool for national squad selection: in 2010, for both Girls
Group A and Boys Group A, where team scores were close, players from the top
scoring teams had lower selection rates by some margin. This was especially so
for Boys Group A where selection rates were also affected by the allocation of
some players onto the AllStars team.
These differences
suggest a lower overall standard of individual player in the AllStars match,
and the potential for significant imbalance in team performance and resulting
quality of the match.
Placing top
scoring team members on the AllStars team in the boys’ competition also reduced
the overall number of players in the AllStars match — for Group A, 13 rather
than 16 players, and for Group B, only 11 players — and so gave greater game
time to those players whom national selectors subsequently assessed as performing
less highly. This is likely to have further reduced the usefulness of the
AllStars match for selection purposes in the boys’ competition.
This assessment
suggests two potential improvements in national squad selection:
- in cases where final team scores are reasonably spread out, the AllStars team play the top scoring team, with all members of the top scoring team included
- in cases where final scores are close — say when the top two teams score within 5 points of each other, as occurred for both A Groups in 2010 — the AllStars match be between two AllStars teams.
ATTACHMENT A
Selection to the national
squad and relationship to final team score and team allocation for the AllStars
match
Table 3 shows
differences in selection outcomes for players on each of the teams in the
AllStars match, for both groups in the girls’ and boys’ championships. Probit
regression can be used to establish whether these kinds of differences are
significant in a statistical sense.
Probit regression
provides an estimate of the probability of an outcome — in this case selection
to the national training squad — given other characteristics. Relevant
characteristics here are the player’s final team score or whether the player
played the AllStars match as a member of the top scoring team rather than as an
AllStar. Current selection arrangements imply that the top scoring team is a
reasonable source of talent for national selection. As such, it would be
reasonable to expect that being chosen for the national training squad would be
significantly related to the player’s final team score, in a positive and
non-trivial way. Current arrangements also assume that the AllStars match brings the best
players to the final match in balanced teams for selection to the national
squad, and so it would be reasonable to expect that there would not be a significant relationship
between selection and which team the player played on in the AllStars match.
For each group,
player selection to the national training squad was regressed on final team
scores, and also on whether they played the AllStars match as a member of the
top scoring team. A final regression was also run for all AllStars match
players, of selection to the squad regressed on whether they played the
AllStars match as a member of the top scoring team.
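As a concrete illustration of the method, the sketch below shows how regressions of this kind could be run in Python with statsmodels. The player records are invented placeholders structured as described above (a selection indicator, the player's final team score, and an indicator for playing the match as a member of the top scoring team); they are not the actual championship data, and the paper does not say which software was used.

```python
# A minimal sketch of the probit regressions described above, using statsmodels.
# The rows are illustrative placeholders, not the actual championship data:
#   selected   - 1 if chosen for the national training squad, 0 otherwise
#   team_score - the player's final (technically adjusted) team score
#   top_team   - 1 if the player played the match as a member of the top scoring team
import pandas as pd
import statsmodels.api as sm

players = pd.DataFrame({
    "selected":   [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "team_score": [15, 15, 12, 9, 9, 7, 7, 5, 5, 1, 1, 1],
    "top_team":   [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
})

# Selection regressed on final team score
score_model = sm.Probit(players["selected"], sm.add_constant(players["team_score"])).fit()
print(score_model.summary())

# Selection regressed on whether the player played as a member of the top scoring team
team_model = sm.Probit(players["selected"], sm.add_constant(players["top_team"])).fit()
print(team_model.summary())
```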
For Group A, the
regression of selection to the national training squad on previous team scores
(that is, over the pool of 32 Group A players who played the AllStars match)
showed evidence of a negative relationship between the probability of being
selected and previous team score.[2]
For this group of girls, a player from a team with an average score (which is
around 8) had a probability of 0.647 of being selected to the national squad.
In comparison, a player with a score of 15 (as achieved by NSW Metro 1) had a
probability of being selected of 0.367, while a player with a team score of 1
(as achieved by Queensland) had a probability of 0.871. That is, being from the team with the
lowest score was associated with an increase
of 0.504 in the probability of being selected compared to being from the team
with the highest score.
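The probabilities above are predicted values from the fitted probit, obtained by evaluating the standard normal CDF at the fitted linear index for a given team score. The sketch below shows the mechanics only; the intercept and slope are hypothetical values chosen so the output lands near the figures quoted above, since the paper does not report its estimated coefficients.

```python
# P(selected) = Phi(b0 + b1 * team_score), where Phi is the standard normal CDF.
# b0 and b1 are hypothetical, chosen to roughly reproduce the quoted figures;
# they are not the paper's estimates.
from scipy.stats import norm

b0, b1 = 1.24, -0.105

for score in (8, 15, 1):  # average score, NSW Metro 1's score, Queensland's score
    print(f"team score {score:>2}: P(selected) ~= {norm.cdf(b0 + b1 * score):.3f}")
# team score  8: P(selected) ~= 0.655
# team score 15: P(selected) ~= 0.369
# team score  1: P(selected) ~= 0.872
```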
Other regressions using
data from the girls’ championships did not find a significant relationship,
including:
- For Group A, between selection to the national squad and playing the AllStars match as a NSW Metro 1 player
- For Group B, between selection to the national squad and either previous team score or playing the AllStars match as a NSW Metro 2 player
- For all girls in Groups A and B, between selection to the national squad and playing the AllStars matches as NSW Metro players.
For the girls’
competition, the only significant relationship was for Group A players, between
selection to the national squad and team score. However, it was a negative
relationship, which calls into question the use of the top scoring team to play
the AllStars match where final team scores in the group are close.
Considering the
selection of the boys’ squad, for each group player selection to the national
training squad was regressed on final team scores and whether or not the player
played on the NSW Metro team (the top scoring team) in the AllStars match. A
final regression was also run in relation to all boys, of selection to the
squad regressed on whether or not the player played on the NSW Metro team in
the AllStars match.[3]
For Group A, the
data are consistent with there being a negative
relationship between the probability of being selected to the national
training squad and final team score. The
probability of selection for a player from the team with the average score (of
8.5) was 0.761. However, the probability of selection for a player with the NSW
Metro 1 score was 0.486, while the probability of selection for a player with the South
Australia score was 0.945. That is, a player from the team with the lowest score
(South Australia) had a .459 higher chance of being
selected than players from the team with the highest score.
There was also a negative relationship between selection
to the national squad and whether the player played the AllStars match on the
NSW Metro 1 team. [4] Being on
the NSW Metro 1 team in the AllStars match was associated with a probability of
.231 of being selected to the national squad. Not being on the NSW Metro 1 team
was associated with a probability of .882. That is, being a NSW Metro 1 team
player reduced the probability of
being selected by .882 - .231 = .651.
For Group B,
selection to the national squad had no significant relationship to final team
score.[5] Nor was there a significant relationship
between selection to the national squad and whether the player was on the NSW
Metro 2 team in the AllStars match.
For all boys
playing in the AllStars matches (both groups), the relationship between
selection to the national training squad and being on the NSW Metro team in the
AllStars matches was negative. Being
on the NSW Metro team was associated with a probability of 0.250 of being
selected to the national squad. Not being on the NSW Metro team was associated
with a probability of 0.606, that is, 0.356 lower
than being on the AllStars team.
As for the girls,
these results call into question the use of the top scoring team to play the
AllStars match where final team scores in the group are close, and also the
practice of drawing some AllStars players from the top scoring team.[6]
[1] A further 4 boys
were added to the national training squad — one who had not played the national
championships, and three who did not play the AllStars matches.
[2] Where results are significant, this is at the 5% level unless
otherwise stated.
[3] Data exclude the
one player who did not play the championships but was chosen for the national
squad. The three chosen for the squad who did not play the AllStars matches are
included in the regression on team scores only.
[4] This relationship was significant at the 1% level.
[5] The relationship was not significantly different to zero at the 5%
level of significance. However, at the 10% level there was a significant and
negative relationship.
[6] The relative effects of each practice on the probability of
selection cannot be separately identified.