In my last post, I gave a generally positive review of the programming at the CrossFit Games finals. But the Games cannot be viewed in isolation, because the athletes competing were only there because of their performances in the Open and at Regionals. To be sure, athletes could not have any glaring weaknesses, or else they would not have made the Games at all. But let's look at the programming across all three levels of competition and see where HQ put the most emphasis.
The following table shows every movement used in competition this season. As you can see, more than 30 distinct movements were tested, and very few, if any, CrossFit staples were left out. However, the extent to which each movement was tested varied widely. In adding up the total value assigned to each movement, I assumed that each workout was worth a total of 1.00 (Games workouts scored on a 50-point scale were worth only 0.50). Within each workout, I assumed that each "station" was worth equal value, so the box jumps in Open WOD 3 were worth 0.33 points, whereas the burpees in Open WOD 1 were worth 1.00 points*.
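For readers who want to reproduce the tallying, the scheme above is simple to sketch in code. The workouts and weights below are an illustrative subset, not the full season's data:

```python
# Minimal sketch of the valuation scheme: each workout is worth 1.00
# (half-scored Games events are worth 0.50), split evenly across its
# stations, then summed per movement. Workout layouts are illustrative.
from collections import defaultdict

# (workout weight, [movement at each station]) -- hypothetical subset
workouts = [
    (1.00, ["burpee"]),                                  # single-station, e.g. Open WOD 1
    (1.00, ["box jump", "push press", "toes-to-bar"]),   # three stations, e.g. Open WOD 3
    (0.50, ["clean", "snatch"]),                         # a half-weight Games event
]

totals = defaultdict(float)
for weight, stations in workouts:
    per_station = weight / len(stations)   # equal value per station
    for movement in stations:
        totals[movement] += per_station

for movement, value in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{movement}: {value:.2f}")
```

With the full season's workouts plugged in, the printed totals are exactly the per-movement values discussed below.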
What is clear from this is that HQ places a large value on the Olympic lifts. The clean and snatch alone were worth a total of 5.35 events! Add in shoulder-to-overhead (0.67) and that's more than 6 events' worth of points based on the Olympic lifts. Although I am a big fan of the Olympic lifts myself, I do think the snatch in particular was over-valued. It was worth nearly 14% of all available points, including 20% of the Open and 17% of Regionals. The pull-up, a CrossFit staple for years, accounted for only 40% of the value of the snatch (maybe slightly more if you count the pull-up-like elements of the obstacle course).
In total, however, the lifting bias was not as great as some people believe. Purely bodyweight movements (excluding running, but including the obstacle course and double-unders) accounted for 45% of all available points; barbell- or dumbbell-based movements accounted for about 38%; running and rowing accounted for 6%; all others (including medball lifts) accounted for 14%. I think there was good balance here, with the exception of the running and rowing.
I think the lack of running in the Open and Regionals showed in the Games. For both men and women, neither of the run-focused events (the shuttle sprint and Pendleton 2) was highly correlated with success across all other events in the season. In fact, for the men, the sprint had essentially zero correlation with success in all other events. For comparison, two charts are below: one shows the weak correlation between the men's shuttle sprint and all other events, and one shows the strong correlation between women's Open WOD 3 and all other events (the concept of correlation with other events is detailed in my post "Are certain events 'better' than others?").
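The correlation-with-other-events idea can be sketched concretely: for each athlete, take their placing in the event of interest and their average placing across all other events, then compute the correlation between the two series. The placings below are made up for illustration:

```python
# Sketch: correlate one event's placings with the average placing across
# all other events. A high positive correlation means the event was won
# by broadly strong athletes; near-zero means it was a "crapshoot".
import statistics

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# rows: one athlete's placings across events; column 0 is the event of interest
placings = [
    [1, 2, 1, 3],
    [5, 4, 6, 5],
    [2, 3, 2, 1],
    [8, 7, 9, 8],
]

event = [row[0] for row in placings]
others = [statistics.mean(row[1:]) for row in placings]
r = pearson(event, others)
print(f"correlation with other events: {r:.2f}")
```

In this toy data the event tracks overall quality closely, so the correlation is near 1; the men's shuttle sprint would come out near 0 under the same calculation.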
In other words, the shuttle sprint was something of a crapshoot, because the top finishers overall didn't necessarily do well in it, whereas Open WOD 3 was dominated by athletes who did well across the board. My feeling is that because running was not tested earlier in the season, the Games field may have been missing some athletes who would have excelled in the running events.
Let's look a bit more into the qualification structure on the road to the Games. The Open, Regionals and Games should all be testing similar things, and in my mind, there are two over-arching goals when programming and carrying out the Open and Regional rounds: 1) In the Open, find the athletes with the best shot of reaching the Games, and 2) at Regionals, find the athletes with the best shot of winning the Games. Put another way: 1) The Open should not eliminate any athletes who would have had a legitimate shot at reaching the Games if they had competed at Regionals, and 2) Regionals should not eliminate any athletes who would have had a legitimate shot at winning the Games if they had qualified. It is certainly possible to disagree with that sentiment, but my feeling is that we want to pick the best athletes for the Games, not send athletes who will not do well there.
So, let's take a look to see whether those goals were accomplished. It is impossible to say for sure how the eliminated athletes would have done, but there are ways to get a good sense. First, let's look at the lowest Open finishers to make the Games. On the men's side, Patrick Burke took 35th in his region (Southwest) and Brian Quinlan took 27th (Mid-Atlantic). For the women, Caroline Fryklund took 25th (Europe) and Shana Alverson took 22nd (South East). Given that no one below 35th (and hardly anyone below 20th) wound up reaching the Games, I highly doubt any athletes placing below 60th in the Open would have reached the Games. In this respect, I think the Open did its job. That being said, with the competition pool growing so rapidly, expanding the Regionals field beyond 60 athletes (to perhaps 100?) might make sense, although logistically this could be challenging.
At the Regional level, it was well-documented on the Games site just how challenging it was for even the elite athletes to qualify for the Games. Notable former Games athletes like Blair Morrison (5th in 2011) and Zach Forrest (12th in 2011) were unable to qualify this season. Could these athletes, or others who narrowly missed out, have contended for the title? Again, it is impossible to know for sure, but we can use the cross-regional comparison to look at the odds.
Because of the points-per-place scoring system, the cross-regional comparison can vary slightly based on how large a field we use, but I have used a scoring system that includes all athletes who completed all 6 events. I also adjusted for the week of competition (as detailed in my first two posts, a couple of months back). Using this system, let's look at the highest finishers not to make the Games. On the men's side, we had Gerald Sasser (21st - Central East), Joseph Weigel (22nd - Central East), David Charbonneau (26th - North East), Nick Urankar (29th - Central East) and Ryan Fischer (30th - Southern California). On the women's side, we had Andrea Ager (19th - Southern California), Sarah Hopping (32nd - Northern California), Chyna Cho (33rd - Northern California) and Amanda Schwarz (38th - South Central).
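The mechanics of a points-per-place cross-regional comparison are easy to sketch: pool the athletes, rank them within each event, award points equal to placing, and sum. The athletes and results below are invented, and the week-of-competition adjustment mentioned above is omitted for brevity:

```python
# Sketch of points-per-place scoring across a pooled field.
# Hypothetical per-event results (higher reps = better) for athletes
# drawn from several regions; lowest point total wins.
results = {
    "Athlete A": [250, 310, 12],
    "Athlete B": [240, 330, 11],
    "Athlete C": [260, 300, 13],
}

n_events = 3
totals = {name: 0 for name in results}
for e in range(n_events):
    # rank descending: the best result in each event earns 1 point
    ranked = sorted(results, key=lambda a: -results[a][e])
    for place, name in enumerate(ranked, start=1):
        totals[name] += place

leaderboard = sorted(totals, key=totals.get)
print(leaderboard)
```

Note that adding or removing athletes from the pool shifts everyone's per-event points, which is exactly why the comparison "can vary slightly based on how large a field we use."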
Now, let's see how well athletes with similar Regional ranks did at the Games. For men, the highest Games finisher who placed worse than 21st in the regional comparison (i.e., worse than Sasser) was Chad Mackay, who took 9th at the Games despite ranking 32nd. The next-highest was Patrick Burke, who was 16th at the Games and 24th in the regional comparison. So it is probably fair to assume that none of the non-qualifying athletes would have been able to challenge Froning for the title, but they certainly could have made a run at the top 10. For women, however, several top finishers ranked lower than Ager in the regional comparison, including Jenny Davis (8th at Games, 28th at Regionals), Christy Phillips (11th at Games, 20th at Regionals), Deborah Cordner-Carson (13th at Games, 34th at Regionals) and Cheryl Brost (15th at Games, 21st at Regionals). Could Ager have challenged Annie Thorisdottir for the title? I doubt it, but given her Regional performance and her Open result (6th in the world), it is not out of the question that she could have challenged for a spot in the top 5.
I think the women's results do indicate that some top athletes might have missed the Games. Now, was this a result of poor programming at Regionals, or perhaps do we simply need more qualifying spots? In Ager's case, if we look at the athletes from her region who did make the Games, we see that all four (Kristan Clever, Rebecca Voight, Valerie Voboril and Lindsey Valenzuela) finished in the top 10, so this leads me to believe that the programming was not the issue. The bigger issue is that certain regions are simply too competitive. Consider the men's Central East: all five qualifying men finished in the top 10 (including the champion), and five other men were in the top 35 in this cross-regional comparison (the three mentioned above, plus Elijah Muhammad and Nick Fory). Other regions, such as the North West, had no athletes in the top 20 at the Games. I don't think it's unfair to suggest that HQ consider re-allocating the Games spots or adding more spots across the board.
Overall, I think we have to consider the 2012 Games season a successful one - the increased participation and interest in the Games speaks for itself. With that in mind, I believe there are clearly some adjustments that need to be made moving forward. Hopefully we see HQ continue to refine the system in 2013.
*Notes on valuation of movements: I broke down burpee-box jumps and burpee-muscle-ups into two movements, each worth half of that station's total value. For instance, the Games Chipper had 11 total stations, one of which was burpee-muscle-ups, so burpees and muscle-ups were each given 0.5/11 (~0.04) points. Also, I ignored the run portion of Regional WOD 3 (DB snatch/run) because it was virtually inconsequential to the results.