At last, a Bulletproof Study on Wellness Programs: A Randomized Clinical Trial Finds Very Few Positive Results

Linda Riddell & Fred Goldstein
Encouraged by the Affordable Care Act and well-funded marketing campaigns, employers have bought wellness programs hoping for healthier employees and lower medical costs.  Published studies showed good results, though those results rarely survived close examination.
Even so, the jury was still out. Until now…
Now, an excellent study published in JAMA randomly assigned 20 BJ's Wholesale Club worksites to a wellness program and compared them to 20 similar BJ's worksites. The results? Out of 78 measures, researchers found slim gains on two and no cost savings.
The two measures favoring the wellness worksites were the rates of employees reporting regular exercise and actively managing their weight.
The study’s strengths are many:

  • Funding came from disinterested parties (the National Institute on Aging, the Robert Wood Johnson Foundation, and the Abdul Latif Jameel Poverty Action Lab)
  • It was registered as a clinical trial, which means that the analysis was pre-specified and could not be adjusted later to “discover” results
  • It included all employees who were employed at any time during the study, not just those who had longer tenure or those who participated
  • The statistical analysis was thorough from start to finish

These strengths stand in contrast to the bulk of published wellness studies.  As the JAMA researchers note, “Given that most prior studies were based on observational designs with methodological shortcomings such as potential selection bias, results based on random assignment of the intervention are likely more reliable.”  In short, comparing program joiners to non-joiners produces fatally flawed results. Ironically, in three instances, wellness promoters themselves have inadvertently proved the point that non-joiners are an invalid control group.
Comparing joiners to non-joiners is one way to fabricate outcomes, but there are many others, which partly explains why employers have allowed the $8 billion wellness industry to sustain itself.  The core issue is opaque promises that are not, or cannot be, measured. For example, a program could promise to boost productivity by 14%. Without a valid measurement of productivity, the employer-purchaser has only the program’s promise; likely, that is all the employer will ever have, besides the bill for the program’s fees.
Short of having population health scientists on staff, employers have few tools to screen out weak programs or to monitor the results of strong ones.  Alternatively, they can turn to independent entities that validate wellness program results.
The Validation Institute is one such independent entity.  Its independent population health scientists review outcome measures and validate those that pass muster.  Validated programs hold a credibility advantage over vendors whose results are not validated. However significant that advantage was last week, it just became more significant with the publication of this article.
Why? Because over the course of five years, the Institute has never validated savings claimed by a wellness vendor. And in terms of “moving the needle” on health outcomes significantly, only one vendor (US Preventive Medicine) has ever been validated.
That’s one wellness vendor out of roughly 1,000, which suggests the odds that any given wellness vendor will achieve a measurable outcome are about one in a thousand.