
Wednesday, August 14, 2013

A Closer Look at the 2013 Games Season Programming

I struggled for the last few days on how to present this analysis. Last year, I wrote two lengthy posts assessing the programming for the 2012 Games season. I titled the posts "Were the Games Well-Programmed." While I thought those posts turned out well, I hesitated to simply follow the same template as last year, for a couple reasons:
  • Plenty of people have an opinion on the Games programming, many of whom are much more known in the CrossFit community than me (for instance, I've already read analysis from Rudy Nielsen and Ben Bergeron). Do we need more opinions out there?
  • Assigning grades or giving a thumbs-up/thumbs-down to the Games programming gives off the impression that I have it all figured out. I think HQ has made it clear that they work very hard not to be influenced by the outside world in their decision-making. Am I really going to accomplish anything by telling them they were wrong?
However, balancing those concerns was my feeling that I do have something unique to provide to the discussions. And, most importantly, I think the discussion is important. While I respect HQ's stance to do things their own way, I'd like to think that they are always looking for ways to improve the Games. Although I don't work for HQ, I don't feel as though I'm an outsider. Those of us in the community, and especially those who've been following and competing in the sport for years, are all working toward the same goal: to keep this sport progressing in the right direction. I know that HQ is at least marginally aware of this site, considering Tony Budding took the time to comment on my scoring system post last year. Here's to hoping they're still keeping up with me (and I promise I'll leave the scoring system out of the debate for now, Tony).

With that in mind, this post will be broken down in much the same way as last year's discussion. There are five goals I think that should be driving the programming of the Games, in order of importance:
  1. Ensure that the fittest athletes win the overall championship
  2. Make the competition as fair as possible for all athletes involved
  3. Test events across broad time and modal domains (i.e., stay in keeping with CrossFit's general definition of fitness)
  4. Balance the time and modal domains so that no elements are weighted too heavily
  5. Make the event enjoyable for the spectators
What I'd like to do is assess how well those five goals were accomplished this season. Unlike last year, however, I'm making a couple changes.
  • This year, I'm going to take the entire season into account in this post (last year I separated the Games programming specifically from the Games season as a whole). I've already covered the 2013 Open and Regional programming to some degree in previous posts, so I'll be incorporating some of that here. I think it's better to try to view the Games in the context of the whole season.
  • I won't be giving grades for each goal this year. Instead, I'll be pointing out suggestions for improvement, because simply identifying the problems only gets us halfway there. Additionally, I'll point out things that I felt worked out particularly well. Every year, HQ does a few things that bug me, but they also do a handful of things that make me say, "Hey, that was a great idea. I wouldn't have thought of that." I think it's worth acknowledging both sides.
So with that as our background, let's get started.

1. Ensure that the fittest athletes win the overall championship

I think it's hard to argue this wasn't accomplished this year. Rich Froning was challenged, but he still came out of the weekend looking pretty unbeatable. Sam Briggs, although she did show a few weaknesses, appeared to be the most well-rounded athlete across the board by the end of the weekend, while many of the women who were expected to be her top competition had major hiccups. Both Froning and Briggs won the Open and finished at or near the top in the cross-Regional comparison.

Additionally, as I pointed out in my last post, the athletes that we expected to be at the top generally finished that way. That doesn't absolutely mean that the Games are a perfect test, but it does provide some validation when the top athletes keep showing up near the top across a variety of tests in successive years.

How We Can Do Better: I don't really have anything here. The right athletes won, so mission accomplished.
Credit Where Credit is Due: The fact that almost all the athletes competed in every event really helped keep things interesting until the end. In the past, we've seen athletes build an early lead and hang on simply because the field gets so small that there aren't enough points to be lost in the late events. Allowing 30 athletes to finish the weekend allowed some big swings at the end, including Lindsey Valenzuela's move from 5th to 2nd in the final two events.

2. Make the competition as fair as possible for all athletes involved

Because I promised Tony Budding I wouldn't bring up the scoring system in general, I won't touch on that here. Let's just say I think the scoring system is fair enough. However, the way the scoring system was applied in Cinco 1 and 2 didn't make a whole lot of sense. Any athlete who didn't finish the handstand walk (Cinco 1) or the lunges (Cinco 2) was locked in a tie, despite the fact that the lunges took 2-4 minutes and the separation was very clear between many athletes who were tied. Because of the massive logjam (21 male athletes tied for 7th, 13 female athletes tied for 4th), the few athletes who did finish didn't get that big of a point spread on many other athletes who were on pace to be several minutes behind.

The other issue here is judging, which does tie in with programming to some extent. I think the judging continues to improve each year. Anyone who's been to a local competition has seen the judges who just don't have the stones to call a no-rep. That simply doesn't happen at the Games. You cannot get away with cheating reps, and that's definitely a good thing for the sport.

I won't dwell on it here, but everyone knows the judging in the Open is still a concern (see 13.2 Josh Golden/Danielle Sidell fiasco this year). Hopefully some careful programming will alleviate that next year.

How We Can Do Better: Improve tiebreakers for movements such as walking lunges, handstand walks, running, or anything where a distance is involved instead of a number of reps. Also, I'd prefer that Games athletes not perform chin-to-bar pull-ups. They are really tricky to judge and aren't as impressive to spectators. In fact, the whole "2007" event just didn't really work for me; for athletes at this level, it was basically a pull-up contest.
Credit Where Credit is Due: Chip timing helped identify the winners really nicely in some of the shorter events. Also, judging keeps improving each year.

3. Test events across broad time and modal domains (i.e., stay in keeping with CrossFit's general definition of fitness)

Right off the bat, let's look at a list of all the movements used this season, along with the movement subcategory I've placed each one into. I realize the subcategories are subjective, and an argument could be made to shift a few movements around or create a new subcategory. In general, I think this is a decent organizational scheme (and I've used it in the past), but I'm open to suggestions.

It's pretty clear that the CrossFit Games season is testing a very wide variety of movements, and the majority of those were used in the Games. Even some that were left out of the Games, like ordinary burpees* and unweighted pistols, were used in other forms (wall burpees*, weighted pistol). No major movements that we've seen in the past were left out of this entire season, with the exception of back squats. I've seen some suggestions online about testing a max back or front squat in the future, as opposed to the Olympic lifts that we have been seeing a lot.

Another key goal is to hit a wide variety of time domains and weight loads. Below are charts showing the distribution of the times and the relative weight loads (for men) this season. The explanation behind the relative weight loads can be found in my post "What to Expect From the 2013 Open and Beyond." Two notes: 1) some of the Regional and Games movements had to be estimated because I don't have any data on them (such as weighted overhead lunge and pig flips); 2) the time domains for workouts that weren't AMRAP were rough estimates of the average finishing times.

Although most of the times were under 30 minutes, we did see a couple beyond that, including one over an hour (the half-marathon row). As for the weight loads, we saw quite a range as well. The two heaviest loads were from the max effort lifts (3RM OHS and the C&J Ladder), but there were also some very heavy lifts used in metcons, mainly in the Games (405-lb. deadlifts for crying out loud). Still, lighter loads were tested frequently in early stages of competition (Jackie, 13.2, 13.3).

How We Can Do Better: I like the idea of testing a max effort on something other than an Olympic lift.
Credit Where Credit is Due: Nice distribution of time domains, and no areas of fitness were left neglected entirely. CrossFit haters can't point to many things and say 'But I bet those guys can't do X.' Yeah, they probably can.

4. Balance the time and modal domains so that no elements are weighted too heavily

Based on the subcategories of movements I've defined above, let's look at the breakdown of movements in each segment of the 2013 Games Season. These percentages are based on the weight each movement was given in each workout, not simply the number of times the movement occurred (for example, the chest-to-bar pull-ups were worth 0.50 events in Open 13.5, but only 0.25 events in Regional Event 4).
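To make that weighting scheme concrete, here's a minimal Python sketch of how the percentages can be tallied: total each movement's weighted share of event value across all workouts, then normalize. The workout names match events discussed above, but the per-workout weights (other than the chest-to-bar figures from the text) are made-up placeholders for illustration, not the actual 2013 breakdown.

```python
from collections import defaultdict

# Hypothetical per-workout movement weights (each workout's values sum to 1.0).
# The chest-to-bar weights follow the text; the other movements and weights
# are placeholders for illustration, not the actual 2013 breakdown.
workouts = {
    "Open 13.5": {"thruster": 0.50, "chest-to-bar pull-up": 0.50},
    "Regional Event 4": {"handstand push-up": 0.50,
                         "chest-to-bar pull-up": 0.25,
                         "front squat": 0.25},
}

def movement_shares(workouts):
    """Total each movement's weighted event value across all workouts,
    then normalize to percentages of the whole competition segment."""
    totals = defaultdict(float)
    for weights in workouts.values():
        for movement, w in weights.items():
            totals[movement] += w
    grand = sum(totals.values())
    return {m: 100 * v / grand for m, v in totals.items()}

shares = movement_shares(workouts)
print(round(shares["chest-to-bar pull-up"], 1))  # prints 37.5
```

The point of weighting by event value rather than raw movement counts is that a movement appearing in a couplet counts for more than the same movement buried in a five-movement chipper.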

One thing that surprised me was how little focus there was at the Games on basic gymnastics (pull-ups, push-ups, toes-to-bar, etc.). However, there was quite a bit of bodyweight emphasis (high-skill gymnastics like muscle-ups and HSPU), as well as some twists on other bodyweight movements (wall burpee, weighted GHD sit-up). Overall, bodyweight movements (including rowing) were worth 60% of the points and lifts were worth 40%.

Another surprising thing was how much emphasis there was on the pure conditioning movements like rowing and running. Now, one of the "running" events was the zig-zag sprint, which wasn't actually about conditioning but rather explosive speed and agility. Still, the burden run and the two rowing events really put a big focus on metabolic engine and stamina. I have no problem with this, but what I would like to see is these areas tested more early on. Running in the Open is almost impossible, but at the Regional level, it would make sense to test some sort of middle- or long-distance runs so that athletes who struggle there would have those weaknesses exposed.

As far as loading is concerned, what seems to be happening at the Games in recent years is that things are either super-heavy or super-light. Only two of 12 events tested what I would consider medium loads (somewhere around a 1.0 relative weight for men, like 135-lb. cleans or 95-lb. thrusters), and none tested light loads. Also, as noted above, the bodyweight movements that were required were generally extremely challenging. I personally wouldn't mind seeing some more "classic" CrossFit workouts involved, like we saw with "The Girls" at the end of last year's Games.

Whereas last year's Games seemed to be lacking in the moderately long time frame (12:00-25:00), I think they did a better job of spreading things out this season. In the Games, we had 1 event over 40:00, 3 between 12:00 and 40:00, 4 between 1:00 and 15:00 and 2 that were essentially 0 time.

One other way to see if we're not weighting one area too much is to look at the rank correlations between the events. If the rankings for two separate events are highly correlated, it indicates that we may be over-emphasizing one particular area. For this analysis, I focused only on the Games, because it's not really such a bad thing if we test the same thing in two different competitions since the scoring resets each time, but within the same competition, it's more of a problem. 

I looked at the 10 Games events in which all athletes competed, which gave me a total of 45 unique combinations for men and 45 combinations for women. Of those combinations, only 8 had correlations greater than 50% and only 3 had correlations greater than 70%. Not surprisingly, the 2K row and the half-marathon row were highly correlated for both men and women (54% for men and 81% for women). Also, the Sprint Chipper and the C&J Ladder were strongly correlated (70% for men and 54% for women), likely because they both had a major emphasis on heavy Olympic lifting. One surprise was that the burden run and the 2K row were 79% correlated for women, but I think that may have been somewhat of a fluke, considering the correlation was just 31% for men.
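For anyone who wants to reproduce this kind of check, here's a minimal sketch of the method, assuming finish placements with no ties (in which case Spearman's rank correlation is just Pearson's formula applied directly to the placements). The placement lists below are invented for illustration; they are not actual Games results.

```python
from math import comb

# 10 events taken pairwise gives the 45 unique combinations mentioned above.
assert comb(10, 2) == 45

# Hypothetical finish placements for six athletes in two events
# (invented data for illustration, not actual Games results).
event_a = [1, 2, 3, 4, 5, 6]
event_b = [2, 1, 4, 3, 6, 5]

def rank_correlation(x, y):
    """Pearson correlation applied to placements. With no ties, the
    placements are already ranks, so this equals Spearman's rho."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(rank_correlation(event_a, event_b), 3))  # prints 0.829
```

A correlation near 1.0 suggests two events reward the same attributes; values closer to 0 suggest distinct tests, which is what we'd hope to see within a single competition.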

In the end, most events appeared to test pretty distinct aspects of fitness, which is a good sign.

How We Can Do Better: Fans love the heavy movements, but I'd suggest supplementing those with some more moderate weights as well. CrossFitters can relate to someone crushing a workout even if the weight is not enormous (those Open WOD demos weren't bad to watch, were they?). Also, let's test running earlier in the season.
Credit Where Credit is Due: We saw events where even Rich Froning and Sam Briggs found themselves near the bottom, which tells me we are really testing a wide range of skills. And actually, I liked limiting the Games to 12 events (instead of 15 last year), because in my opinion that was sufficient and we didn't wind up double-counting too many areas.

5. Make the event enjoyable for the spectators

Unfortunately, I don't have any data to back this up, but in my opinion, this is the area that has improved the most in recent years. A nice touch at the Games is that in multi-round workouts, each round is performed at a different point on the stage. This really helps the audience follow the action and builds the drama as you see athletes progress through the workout.

Making all the events watchable was also nice after Pendleton 1, Pendleton 2 and the Obstacle Course were unavailable last season. The burden run had many of the same qualities as an off-road event, but it was all done on site and finished up in the soccer stadium.

However, as nice as it is to use the soccer stadium to allow more spectators, the vibe at those events is considerably more subdued. Perhaps HQ will be able to find a way to improve this in the future, but it seems that this sport isn't quite as conducive to viewing from such a distance. By contrast, the intensity in the night events in the tennis stadium is fantastic.

How We Can Do Better: Figure out a way to make things a bit more exciting in the stadium. It won't be easy, but there's no denying that things weren't quite as intense when the workouts were held there.
Credit Where Credit is Due: The Games are truly becoming more of a spectator sport. Even the uninitiated can see the action unfold and understand and appreciate what's going on. And although I mentioned it above, the improvements in judging have helped the spectator experience.

*I decided to break up "wall burpees" into burpees and wall climb-overs. Each was worth 1/6 of the value of that workout (the snatch was 1/3 and the weighted GHD sit-up was 1/3). This was updated on 8/22/2013.

Thursday, August 1, 2013

Quick Hits: Initial Games Reaction and Upcoming Schedule

Does anyone else go through a weird sense of withdrawal after the Games ends each year? After spending all spring analyzing the Open and Regionals, making predictions and finally attending the Games in person, it's bizarre to consider that we won't really start up another season for six more months. Sure, there will be follow-up videos posted on the Games site for the next few weeks, but eventually the coverage will dry up and we'll all be back in the grind of preparing for next season.

Hopefully, I can fill that void to a certain extent. My goal over the next few months is to break down the 2013 Games and Games season in depth, take a look back at the history and evolution of the Games from a statistical perspective, as well as delve into a few new topics related to training, programming and competition. First on the slate is a critical look at this year's Games, similar to what I did last year in my post "Were the Games Well-Programmed? (Part 1)." My goal is to put this together in the next week or two.

For today, I just wanted to get some quick reaction to the Games out there:

  • The thing that stuck out to me attending the Games in person the past two years is how well-organized and professional the whole event is. Considering this thing is just four years removed from being held on a ranch, it's amazing to see how efficiently things run today. Virtually every event got off on time, the judging was solid, there were no equipment problems, and from what I could tell, the televised product looked good as well. The ESPN2 broadcast certainly seemed to go over well.
  • It's also a blast being out there in person, and I'd recommend it to anyone who hasn't been. Sure, it can be a little draining to sit outside for 10-12 hours a day, but there is plenty to do outside of just watching the events, such as the vendor area, the workout demos, a wide food selection and of course some general people-watching. Many of the CrossFit "celebrities" we see on videos online all the time (plus more mainstream fitness celebrities like Bob Harper) are just hanging out in the crowd like everyone else.
  • As for the competition itself, I think we crowned the two deserving champions. 
    • Rich Froning proved again that he's simply the most well-rounded CrossFitter out there, and as usual, he seems to get better as the stage gets bigger. I'm starting to get the sense that he really looks at the big picture and maybe, just maybe, holds a little bit back early on to keep his body intact until the end. Remember, he didn't win any events until Sunday, where he won all three.
    • Sam Briggs was also the most well-rounded athlete, but she did have a few holes exposed. The zig-zag sprint and the clean and jerk ladder both made her look vulnerable, but she was so solid on the metcons that it didn't matter. I think if Annie Thorisdottir can return at full strength next year, it will be a real battle between those two. Annie clearly has a big strength edge, but I don't think she is at quite the same level as far as conditioning.
  • In my opinion, which I'll expand on in my next post, the test was probably the best all-around that we've had to date. It wasn't too grueling to the point where athletes were falling apart by the end of the weekend, but it was a legitimately challenging weekend. The events were nicely varied, and there were only one or two duds from a spectator perspective.
  • Although things got shaken up at first, the cream really rose to the top by the end of the weekend, particularly for the men. 
    • For the men, I had Froning at a 59% chance to win coming in, and all the men on the podium had at least a 34% chance of doing so according to my predictions. Of the top 10 finishers, 7 were in my top 10 coming in. Garrett Fisher (5th) was probably the biggest surprise on the men's side.
  • For the women, I had Sam Briggs as the favorite at 32% coming in, and I had Lindsey Valenzuela with a 15% chance of reaching the podium. Valerie Voboril was a bit more of a surprise, but I still had her with an 8% chance of reaching the podium. Of the top 10 finishers, 4 were in my top 10 and 9 were in the top 21. The only big surprise near the top, based on the Open and Regionals, was Anna Tunnicliffe. I was, however, surprised that Camille Leblanc-Bazinet (16th) and Elizabeth Akinwale (10th) didn't finish higher.
That's it for now. I'll be back in a week or two with a more in-depth breakdown of this year's Games. Until then, good luck with your training!