What does the evidence show?
The real fun at scientific conferences is in the poster sessions. While the big keynote talks tend to focus on well-established research trends, the posters offer an unfiltered glimpse of hunches, works in progress, and wild ideas. As a science journalist, my pre-conference ritual involves poring over the list of poster titles looking for interesting possibilities, then racing around the poster hall to check out the actual posters, where researchers have laid out their latest findings and analyses, and finding out which crazy hunches appear to have paid off.
It’s fun, but I’m increasingly realizing that it’s also problematic. If the only studies you hear about are the ones that produce seemingly positive results, you end up with a distorted impression of how reliable those results are. At a huge conference like the American College of Sports Medicine annual meeting, there are literally hundreds of posters investigating possible performance boosters. Simple probability dictates that you’re going to end up with some false positives among them—and those results will seem more impressive if you ignore all the negative results.
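The "simple probability" argument is easy to make concrete with a back-of-envelope sketch. The numbers below are illustrative assumptions, not counts from any actual conference: suppose some number of posters are testing interventions with no real effect, each at the conventional significance threshold.

```python
# Hedged back-of-envelope sketch: if n_null studies of interventions with no
# real effect are each tested at significance level alpha, how many spurious
# "positive" results should we expect? (Numbers are illustrative assumptions.)

def expected_false_positives(n_null: int, alpha: float = 0.05) -> float:
    """Expected number of null studies that come up 'significant' by chance."""
    return n_null * alpha

def prob_at_least_one(n_null: int, alpha: float = 0.05) -> float:
    """Probability that at least one null study comes up 'significant'."""
    return 1 - (1 - alpha) ** n_null

# Suppose 300 of the hundreds of posters are testing null interventions:
print(expected_false_positives(300))   # expect about 15 false positives
print(prob_at_least_one(300))          # virtually certain to see at least one
```

With those (made-up) numbers, you'd expect roughly fifteen "positive" results that are pure noise, which is exactly why a poster hall tour that skips the negative results gives a distorted picture.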
So, in that spirit, I dug through my notebooks to pull out five studies I’ve seen at conferences this fall that I thought were really cool—but didn’t produce the “right” result. I should make absolutely clear that calling them “failed” studies in the title is completely tongue in cheek: These studies were designed to test various hypotheses, and they’re equally successful whether they confirm or reject those hypotheses.
It’s also important to note that, as is typical for poster presentations, these are mostly small experiments, in some cases intended as pilot or exploratory studies. The results may change as more subjects are tested or as the study design is refined based on the pilot results. What’s interesting to me is the opportunity to get a sense of the ideas researchers are pursuing and the theories they’re considering. It’s not about the answers (at this point, anyway); it’s about the questions.
This was an abstract presented by Michael Rogers of Simon Fraser University at the Sport Innovation Summit in Vancouver in October. He and his colleagues performed a series of strength tests, including deadlift, grip strength, and vertical jump, on 12 competitors in a 50K mountain race. More so than in flat races, climbing up and down mountain trails requires a fair amount of strength. So, after controlling for aerobic fitness (as measured in a VO2max test), would the stronger athletes run faster in the race?
The short answer, in this particular cohort, was no. To be honest, though, the study is too small to draw any real conclusions at this point. This group has been studying participants from the same trail race for a number of years now, so it will be interesting to see what patterns emerge as they accumulate more data. I don’t think anyone doubts that aerobic fitness is by far the most important factor in ultras, and that you also need to have some reasonable minimum amount of strength. But trying to quantify the relative importance of strength and endurance is an interesting project.
This was an intriguing presentation by Liam Fitzgerald from the University of Massachusetts Amherst at the New England American College of Sports Medicine conference in Providence in October. In recent years, there’s been widespread recognition that sarcopenia—the loss of muscle with age—can have a major effect on quality (and perhaps quantity) of life. But it has also become clear that there’s more to sarcopenia than simply losing muscle. With age, we also see changes in the connections between brain and muscle and in the function of whatever muscle you’ve got left.
Fitzgerald investigated the potential role of “muscle architecture,” which encompasses three main elements: the thickness of the muscle, the length of the fascicles (bundles of muscle fibers), and the “pennation angle,” which is the angle of the muscle fibers relative to the direction they pull in. There’s plenty of evidence that muscle architecture, which you can assess with ultrasound, is a key determinant of how much force you get from a given amount of muscle. But in Fitzgerald’s study of young and older women, there was no link between muscle architecture and fatigue in a four-minute strength test, suggesting that architecture is not a hidden key to age-related decline.
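To see why those three architectural elements are thought to matter, here's a sketch using a common textbook simplification (not the analysis from Fitzgerald's study, and all numbers below are made up): force capacity scales with physiological cross-sectional area, roughly muscle volume divided by fascicle length, and the force transmitted along the line of pull is reduced by the cosine of the pennation angle.

```python
import math

# Illustrative simplification, not the study's method: force along the line of
# pull ~ specific tension * PCSA * cos(pennation angle), where PCSA is
# approximated as muscle volume / fascicle length. All values are assumptions.

SPECIFIC_TENSION = 22.5  # N per cm^2; a typical literature value (assumption)

def effective_force(volume_cm3: float, fascicle_len_cm: float,
                    pennation_deg: float) -> float:
    """Rough force capacity in newtons for a given muscle architecture."""
    pcsa = volume_cm3 / fascicle_len_cm  # cm^2
    return SPECIFIC_TENSION * pcsa * math.cos(math.radians(pennation_deg))

# Same muscle volume, different architecture: shorter fascicles at a larger
# pennation angle pack more fibers in parallel, which (in this simplified
# model) outweighs the cosine penalty and raises force capacity.
print(effective_force(200, 7.0, 15))
print(effective_force(200, 10.0, 5))
```

In this toy model, the first configuration produces more force from the same volume of muscle, which is why architecture seemed like a plausible hidden variable in age-related decline, even though it didn't pan out in this particular fatigue test.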
The biggest conference I went to this fall was the Canadian Society for Exercise Physiology (CSEP) annual meeting in Winnipeg. One of the posters I rushed to check out was a University of Guelph study, presented by Rachel Aubry, comparing a Stryd power meter to standard metabolic measurements in 13 recreational and 11 elite runners. The concept of a power meter for running remains new, and it’s something I’m still having trouble getting my head around. Power is a relatively straightforward concept in cycling, but I’m not entirely sure what it means in running—and Stryd’s secret algorithm doesn’t make it easy to figure out.
Aubry’s study involved having the runners perform a series of tests to measure running economy at various speeds on both a treadmill and an outdoor track. There was a “significant albeit weak” overall relationship between measured running economy and power. Interestingly, the researchers found a significant difference in running economy between treadmill and track running—but the power meter didn’t pick up any differences between the two surfaces. There are some caveats to the study, such as the fact that they used the old chest-mounted Stryd model rather than the new foot pod. Still, the overall take is that power, as measured by this device, doesn’t necessarily pick up subtle changes in the metabolic demands of running.
Endurance athletes run on carbs and fat; protein, in contrast, provides building blocks for muscles rather than being burned as fuel. That’s the general pattern, but it’s not the full picture. Studies suggest that between 5 and 10 percent of the energy you burn during exercise can come from protein, particularly during long sessions when your muscles are running low on carbohydrate. But does it matter how hard you’re running? That’s what another CSEP poster, presented by Jenna Gillen of the University of Toronto, sought to determine.
In the study, eight trained runners ran 10K at either 70 or 90 percent of maximum heart rate while the presence of labeled amino acids was tracked to estimate overall protein burning rates. The results: no detectable difference in protein use between the easy and hard runs. It’s possible that a bigger study, or perhaps a longer run that induced more carbohydrate depletion, would have picked up some differences. But the initial conclusion is that you don’t need to make big adjustments in protein consumption based on your exercise intensity.
Should you push hard in today’s workout, or should you back off to recover from previous training? One of the holy grails of training science is figuring out some way of making that decision objectively rather than relying on athletes’ gut feelings. Jared Fletcher and Brian MacIntosh of the University of Calgary tried a new approach to this problem, using electrically triggered muscle contractions to measure the neuromuscular fatigue present in the legs of 14 trained distance runners during a ten-week training cycle. Then they compared this data to reported training and subjective feelings of fatigue.
The results, presented at CSEP, didn’t reveal any obvious patterns or connections. In fact, neither neuromuscular fatigue nor subjective fatigue changed much over the ten-week study. That may be because they measured fatigue just once a week, at the beginning of the week, rather than immediately before or after key training sessions. Or it may be because neuromuscular fatigue simply isn’t an important factor during normal training for distance runners. I don’t think we can draw any firm conclusions about this yet, but it’s an interesting question to consider.
Anyway, that’s a sample of the kinds of presentations that usually stay buried in my notebook. We’ll probably hear about some of them again in another year or two, when more complete results are published somewhere. Others will disappear without a trace. That’s all part of the scientific process, and it’s worth keeping that in mind whenever you hear about new and exciting results.
Discuss this post on Twitter or Facebook, sign up for the Sweat Science email newsletter, and check out my forthcoming book, Endure: Mind, Body, and the Curiously Elastic Limits of Human Performance.