POC and Volvo Are Teaming Up to Test Helmet Safety
The collaboration could answer some crucial questions about what happens to helmets during a crash and point the way forward for better testing and certification
For all the importance we place on bicycle helmets to protect our heads in crashes, we still don’t know much about how they function in real-world situations, like when a driver hits a cyclist with a car. A new collaboration between carmaker Volvo and action-sports brand POC aims to shed light on those moments, and the two Swedish companies are starting with the simplest of approaches: smashing helmets into hoods in a lab.
The testing—and what POC plans to do with the results—could provide benefits beyond a better understanding of the physics of these crashes. Namely, it might enable helmet makers to finally offer cyclists quantitative safety information, instead of selling helmets mostly on ancillary benefits like weight and ventilation.
Every bicycle helmet sold in the U.S. must meet impact standards created by the Consumer Product Safety Commission, meant to simulate the force of a rider’s head hitting the ground. (Similar standards exist in Europe, Australia, and New Zealand.) But those certification tests, which involve a weighted helmet dropped onto flat (and sometimes also rounded) anvils, have significant limitations. First, they look only at linear impact forces, where helmets must meet a minimum requirement, and ignore rotational energy, which is thought to be a significant factor in concussions and other closed-head traumatic brain injuries. They also don’t account for the fact that, in bike-vehicle crashes, cyclists’ heads don’t just hit the pavement. “It’s a combination of both impacts to the vehicle and to the ground” that causes rider injury, says Magdalena Lindman, a traffic-data safety analyst for Volvo. And as Oscar Huss, POC’s head of product, explains, the impact forces on a helmeted head colliding with a car vary based on what part of the car the helmet hits (more on that in a bit), something conventional certification testing isn’t designed to explore.
Second, the test protocols simulate relatively low-speed crashes, below 15 miles per hour, and they don’t offer any information about what happens at higher velocities. Finally, they are administered on a pass-fail basis and so don’t give consumers any real information about which helmet designs are actually safer than others.
POC wanted to create other test methods to learn more about how its helmets perform in the real world. “We have a pretty long history of trying to look beyond normal test standards,” says Huss. The brand has collaborated with Volvo on safety projects before: POC and Volvo’s last collaboration focused on a system to communicate a cyclist’s location to a driver, but it never resulted in a physical product, much less wider adoption. This is the first to involve crash testing.
The team didn’t have to look far to create a protocol. Since 1997, the Euro NCAP (New Car Assessment Programme) safety protocols have included pedestrian-impact testing for cars sold in Europe. Unlike pavement, a car isn’t a uniform, consistent surface: parts like the hood, made of large, flat pieces of sheet metal, deflect under impact, which helps absorb some of the energy of a crash. But elements like the A-pillar, which is essentially part of the car’s unibody structure and supports the roof, windshield, and front windows, can be much harder; there’s almost no deflection there. The Euro NCAP testing is, in part, an effort to make those very hard areas of a car a bit more forgiving, or to spur related technologies like external pedestrian airbags. (If you’re interested in how your car fares, you can enter a make, model, and year on Euro NCAP’s website and get a detailed report with an overall pedestrian-safety rating and a grid showing which zones of the car offer particularly good or bad impact absorption.) In the U.S., the National Highway Traffic Safety Administration has studied the issue but never implemented testing, though design changes driven by the European testing carry over into car models sold here and in the rest of the world.
Here’s how the pedestrian-impact testing works: testers equip a weighted model head with accelerometers and other sensors and then launch it against different parts of a car from different angles, meant to simulate pedestrians of different heights.
POC and Volvo decided to essentially co-opt these tests for use with bike helmets: they’ll be slamming weighted helmets into various parts of a car to see how their products do in different crash scenarios. Huss points out that this approach is “a little bit more violent than normal [helmet-certification] testing, both in Europe and the U.S.” While the Euro NCAP vehicle tests and the CPSC helmet test use a similar-size headform, for instance, the NCAP protocol fires the helmets at an impact velocity of 25 miles per hour, almost double that of the CPSC test, thus resulting in much higher impact forces.
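To put that speed difference in perspective, kinetic energy scales with the square of velocity, so nearly doubling the test speed roughly triples the energy a helmet has to manage. Here’s a back-of-envelope sketch; the 5-kilogram headform mass and the 14-mile-per-hour figure for the CPSC drop are illustrative assumptions, not published protocol values:

```python
# Rough comparison of impact energies at the two test speeds.
# The mass and the CPSC-style speed below are assumptions for illustration,
# not official values from either protocol.

def kinetic_energy_joules(mass_kg: float, speed_mph: float) -> float:
    """KE = 1/2 * m * v^2, converting mph to m/s first."""
    speed_ms = speed_mph * 0.44704  # 1 mph = 0.44704 m/s
    return 0.5 * mass_kg * speed_ms ** 2

HEADFORM_MASS_KG = 5.0  # assumed combined headform-and-helmet mass

cpsc_ke = kinetic_energy_joules(HEADFORM_MASS_KG, 14.0)  # assumed drop speed
ncap_ke = kinetic_energy_joules(HEADFORM_MASS_KG, 25.0)  # NCAP launch speed

print(f"CPSC-style impact: {cpsc_ke:.0f} J")
print(f"NCAP-style impact: {ncap_ke:.0f} J")
print(f"Energy ratio: {ncap_ke / cpsc_ke:.1f}x")  # (25/14)^2, about 3.2x
```

The ratio depends only on the speeds, not the mass, which is why the velocity difference between the two protocols matters so much.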
POC does other testing in its own lab, and it isn’t the only company testing outside the normal certification protocols. Giro, for instance, operates a lab called the Dome. One of its test protocols, the B-RAD, involves attaching a headform and helmet to a torso and swinging the whole apparatus into an angled surface to measure rotational energy. Then there’s Virginia Tech’s STAR rating program, which offers a quantitative, graded assessment of helmet safety and, crucially, a transparently stated scoring system and test methodology, one that accounts for rotational energy.
POC and Volvo will also be measuring rotational energy in their tests. And in addition to using accelerometers, they’ll be recording the studies with high-speed cameras to gather even more data. All of that will offer far more information than they can get from standard helmet-certification testing.
Huss says it’s too soon to know what POC and Volvo will learn from the testing and how those findings might make their way into product design. According to Lindman, once the test protocol is finalized, it will take about six months to collect data. From there, Huss predicts POC could incorporate what it learns into helmet technology in as little as one to two years. So the soonest we might expect new helmets that reflect the testing is roughly 18 months from now.
Real-world testing is tricky; it’s unethical to enlist human participants in a study that requires them to crash, and getting naturalistic data is tough because every crash is different. A lab approximation of that is the best that helmet makers can realistically offer. So I’m glad POC is doing it. I’m also glad they’re publishing the data, which provides something of a path forward to improve helmet standards. For Volvo’s part, Lindman doesn’t expect major changes to car design based on the tests, partly because it’s already made many of those changes in the course of the mandatory Euro NCAP testing. Besides, she says, the company is more focused on crash-prevention systems, like pedestrian and cyclist detection and active emergency braking.
For all the reasons noted above, the current certification-testing standards are pretty stale. And while various helmet companies do perform additional testing, consumers almost never see the results, because helmet makers are (mostly) famously averse to making comparative safety claims for fear of liability.
Only recently has that started to change, with third-party programs like Virginia Tech’s. Similarly, because POC plans to publish its results, the company can neatly sidestep the safety-claims issue by simply providing data, something I hope more companies start to do. It remains to be seen when and in what form the data will be published: whether in a buyer-friendly layout like the STAR system’s or in a more modest, technical form that at least advances testing standards within the industry and among regulators. The pluses of either approach are obvious. The downside is that because companies often use different test methods and have a stake in the results, clean comparisons between data sets could be difficult. That’s the attraction of the STAR system (or, for that matter, certification standards): it’s independent, and every helmet is tested the same way.
We also don’t know whether the test results will match what we see in conventional lab testing or deviate in important ways that raise new questions. But the initiative to try something new in testing—especially an attempt to look at how helmets work in the real world—is a step in the right direction. Publishing the results is another. It’s like any other scientific effort: only through transparency and collaboration do we get meaningful progress. Perhaps, if more of the data helmet makers gather is out in the open, the bike industry and regulators can finally put together more complete certification testing that better reflects the real world. And that testing protocol could offer a graded good-better-best approach, like the STAR system and, well, every car-safety rating system in the world, rather than the opaque, binary pass-fail approach we have now. We’re a long way from that point. But at least we’re on the right path.