How to Tell If Your Vendor's Claims Are Valid: A Nine-Part Series

Vendor Evaluation: Overview of this Multi-Part Series

The last ten years have witnessed an explosion of vendors “showing savings” that don’t exist…and yet are readily accepted by buyers and their advisors. The mission of the Validation Institute (VI) is to reverse that trend, to eventually eliminate the gap between reported outcomes and actual outcomes.

VI’s validations highlight organizations committed to measurement of actual outcomes. VI’s courses in Critical Outcomes Report Analysis (CORA and the advanced version, CORA Pro) teach buyers, advisors and even vendors how to determine whether outcomes are legitimate or fabricated.

This series is part of CORA, but we’re making it available gratis in order to draw attention to this ever-widening gap between reported and actual outcomes. Our purpose is not wholly altruistic. Reading the full series will encourage more buyers to insist on VI validation and more people to take the CORA and CORA Pro courses. Further, you can retain VI to write or evaluate RFPs, measure outcomes, and even provide forensic litigation consulting.

Part One covers regression to the mean (RTM). RTM may sound like an obscure biostatistical phenomenon of interest only to academics. However, reading this section will likely cause you to say: “Yikes! This is exactly what our vendor does.” Do they promise to reduce the cost, risk or glucose levels of people who are high to begin with? Part One uses many case study examples to show you how incredibly pervasive this phenomenon is, and how to identify and avoid it.
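To see RTM in miniature, here is a toy simulation (our illustration, not a vendor study or VI’s methodology; the numbers are invented). It “enrolls” everyone whose baseline glucose reading is high, does nothing at all, and the group still “improves” on remeasurement:

```python
# Illustration only: a toy simulation of regression to the mean. We select the
# "high-risk" cohort by a noisy baseline reading, apply no intervention, and
# remeasure. The cohort average still falls -- pure statistical noise, no program.
import random

random.seed(1)

N = 10_000
true_a1c = [random.gauss(6.0, 0.8) for _ in range(N)]    # stable underlying levels
baseline = [t + random.gauss(0, 0.5) for t in true_a1c]  # year-1 reading with noise
followup = [t + random.gauss(0, 0.5) for t in true_a1c]  # year-2 reading, no intervention

# "Enroll" everyone whose baseline reading is high -- exactly what many programs do.
cohort = [i for i in range(N) if baseline[i] >= 7.0]

avg_base = sum(baseline[i] for i in cohort) / len(cohort)
avg_follow = sum(followup[i] for i in cohort) / len(cohort)

print(f"Cohort size: {len(cohort)}")
print(f"Average baseline A1c:  {avg_base:.2f}")
print(f"Average follow-up A1c: {avg_follow:.2f}  (no program, just noise)")
```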

Part Two addresses participation bias. It can be stated unequivocally that any study comparing participants to non-participants will dramatically overstate savings. This is true even if the vendor claims that the non-participants who declined the program are “matched controls” or “propensity-matched”. Participant-vs-non-participant study design may explain 100% of the outcome that the vendor would like to attribute to the program intervention. This is not us talking – these are admissions from vendors themselves.
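Here is an equally contrived sketch (invented numbers, not drawn from any vendor’s data) of why the comparison fails: if motivated members are the ones who enroll, a program with zero effect still “beats” the non-participants:

```python
# Illustration only: motivated members self-select into a program that has zero
# effect, yet the participant-vs-non-participant comparison still shows "savings."
import random

random.seed(2)

members = []
for _ in range(20_000):
    motivated = random.random() < 0.3                      # 30% are motivated to improve on their own
    participates = motivated and random.random() < 0.6     # mostly motivated people enroll
    cost_change = random.gauss(-800 if motivated else 200, 300)  # motivation, not the program, drives cost
    members.append((participates, cost_change))

part = [c for p, c in members if p]
nonpart = [c for p, c in members if not p]

print(f"Participants' average cost change:     {sum(part)/len(part):+.0f}")
print(f"Non-participants' average cost change: {sum(nonpart)/len(nonpart):+.0f}")
print("The gap becomes 'savings' attributed to a program that did nothing.")
```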

Part Three turns to “trend inflation.” Whenever you see the words “compared to trend,” you can almost automatically assume the savings are overstated. One consultant even wrote, with unintended irony, that savings can be increased by “choosing” an inflated trend. This part will show you not just how to identify trend inflation (not hard) but also how to adjust for it.
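A back-of-the-envelope example (the spend figures and trend rates below are made up) shows why the chosen trend does all the work: the same actual spend yields whatever “savings” the assumed trend implies:

```python
# Illustration with made-up numbers: identical actual spend produces wildly
# different "savings vs. trend" depending on the trend the vendor chooses.
baseline_spend = 10_000_000   # last year's spend
actual_spend = 10_300_000     # this year's spend (3% real growth)

for assumed_trend in (0.03, 0.06, 0.09):
    projected = baseline_spend * (1 + assumed_trend)
    savings = projected - actual_spend
    print(f"Assumed trend {assumed_trend:.0%}: projected {projected:,.0f} -> 'savings' {savings:,.0f}")
```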

Part Four introduces the concept of plausibility testing. (Future installments will delve further into plausibility tests, which are not only useful and easy, but also often quite hilarious.) The first plausibility test shows you how to compare a cross-sectional change in a non-financial outcome, such as a reduction in HbA1c, cholesterol, or weight, with the claimed savings. Savings are only possible to the extent that risk is actually reduced, and yet the claimed savings often dwarf the risk reduction. Dividing the claimed savings by the risk reduction yields what is known as the Wishful Thinking Factor. After you’ve calculated a couple and realized how unrealistic the savings must be, you are likely to abbreviate that phrase and add a few exclamation points.
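As a quick worked example (the dollar figures are hypothetical, and expressing the risk reduction in savings terms this way is one reasonable reading of the paragraph above, not VI’s exact worksheet), the arithmetic looks like this:

```python
# Illustration with hypothetical numbers: compare the claimed savings with the
# savings the observed risk reduction could plausibly support.
claimed_savings = 1_500_000      # what the vendor reports, in dollars
risk_reduction_pct = 0.02        # observed 2% reduction in the measured risk factor
spend_tied_to_risk = 4_000_000   # plan spend that the risk factor could plausibly affect

plausible_savings = risk_reduction_pct * spend_tied_to_risk
wishful_thinking_factor = claimed_savings / plausible_savings

print(f"Plausible savings:        {plausible_savings:,.0f}")
print(f"Claimed savings:          {claimed_savings:,.0f}")
print(f"Wishful Thinking Factor:  {wishful_thinking_factor:.1f}x")
```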

Part Five helps expose the “validation” techniques that vendors increasingly use to convince prospects that their savings are legitimate. For instance, they will say that actuaries have reviewed their work. They leave out the part about paying those actuaries themselves, and the fact that no actuary would stay in business long without “finding savings.” More recently, vendors have realized that prospects consider the phrase “peer-reviewed” to settle all debates about legitimacy. This Part will take you inside the thriving peer-reviewed journal industry to show you how peer reviews are bought and sold.

Part Six tackles engagement. It opens with the three things vendors conflate with engagement in order to overstate it. For example, employees who participate in a program to earn a large incentive are engaged in the incentive, not the program. No doubt the incentive is high because the program itself lacks appeal. For reasons like this, engagement is usually measured wrong. Fortunately, the Validation Institute offers a unique tool to measure it correctly. Part Six will describe this tool in detail and show you how to download and use it.

Part Seven puts it all together. You can apply what you’ve learned in the previous six installments to create an RFP that will ensure that responses are valid, comparable and useful. This is in sharp contrast to typical RFPs from consulting firms that are none of the above. And we will reveal the single key question that will draw a bright line between legitimate vendors that achieve results and the posers and pretenders who pay consultants, brokers and carriers to recommend them to employers, usually without disclosing their markup.

Part Eight lists and explains the valid measurement methods that would earn the highest level of validation, Program Validation. This level carries a $50,000 Credibility Guarantee. The valid methods include several forms of parallel assignment, comparison of event rate trends against our database or another valid database of event or procedure rates, and natural experiments, such as Oregon Medicaid’s lottery. An engagement program must be measured using the Validation Institute’s Benefits Engagement Survey Tool (BEST).

Part Nine covers the Consolidated Appropriations Act (“CAA”), under which it is now actual law that you can only use honest vendors. The penalty is that named fiduciaries (the CFO and others) must repay the money lost from the health benefit. Part Nine provides an example of how a vendor can lie transparently, but it’s your responsibility to either hold them to account or fire them for cause. Fortunately, the Validation Institute has a solution to protect you in case your vendor is lying: the CAA Compliance Certification. A vendor that holds that certification indemnifies you for $1,000,000 in support of its representation that it is CAA-compliant.

Get ValidPoints

Sign up for ValidPoints, the complimentary monthly newsletter that offers the latest updates on:

The move toward high-performance and high-value healthcare

In-depth analysis of the latest trends and solutions that improve health outcomes, strengthen accountability, and cut costs

Actionable insights on how to drive better health outcomes at a far lower cost for your organization.

Profiles of innovative solutions and organizations that are “walking the walk” when it comes to delivering better savings, outcomes, and more