Reimbursement News

Why Claims Accuracy Testing, QA Isn’t Working for Healthcare

Healthcare organizations need to get back to basics when it comes to developing and executing their claims accuracy testing and quality assurance programs.

By Mark Benedict

Let’s face it. Testing isn’t working.  That’s the hard truth about the healthcare industry and its track record on claims accuracy testing and quality assurance.

The harder truth is that this problem is expensive – with a price tag of about $275 billion in the United States each year, according to the Centers for Medicare & Medicaid Services – and with the new ICD-10 medical coding and other industry changes, waste and costly errors are potentially even more problematic.

To begin to understand the underlying causes of this multi-billion-dollar problem, we performed a meta-analysis of 78 QA and testing assessments conducted within the healthcare industry from 2009 through 2015.

These 78 separate assessments, drawn from both the payer and provider sides, covered every phase of life-cycle testing of commercial healthcare information and claims processing applications and platforms, their modifications and customizations, and their interfaces to other systems (both commercial and custom-built) used along the monetary path linking healthcare providers to insurance payers.

Testing issues that contribute to, or even enable, this multi-billion-dollar quality assurance problem occur in every phase of testing, from functional testing through systems integration and on to pre-production testing of fixes, patches, and upgrades, and they include every type of testing from “black box” to “glass box.”

The meta-analysis revealed 40 percent wasted effort in test activities, spanning from strategy through execution. This waste is attributable to:

1) Missing/non-existent test coverage

2) Test data failures

3) Unclear and error-laden test scenarios, cases, and scripts

4) Duplication of test coverage and testing activities

5) Test environment issues

6) Applying the wrong approach to testing  

The 2014-2015 World Quality Report projected that by 2017, 29 percent of IT spend would be devoted to testing.

Combine that with the 40 percent effort waste observed during testing, and nearly one-eighth of total healthcare IT spend stands to be lost on insufficient, incomplete, and error-prone testing.
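The arithmetic behind that “nearly one-eighth” figure is straightforward. The short sketch below works it through; the $100 million IT budget used at the end is purely hypothetical and is included only to show the scale of the dollar impact.

```python
# Figures from the article: 29 percent of IT spend going to testing
# (World Quality Report projection) and 40 percent effort waste in test
# activities (this meta-analysis). The budget below is hypothetical.
testing_share_of_it_spend = 0.29
wasted_share_of_testing = 0.40

lost_share = testing_share_of_it_spend * wasted_share_of_testing
print(f"{lost_share:.1%} of total IT spend")  # 11.6%, roughly one-eighth

hypothetical_it_budget = 100_000_000
print(f"${hypothetical_it_budget * lost_share:,.0f} potentially lost to poor testing")
# -> $11,600,000
```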

Why is healthcare testing so bad?

It’s not about the ever-changing dynamics of healthcare reform or trying to keep up with the rapid pace of new technologies. It hasn’t been caused by disjointed approaches to adapting standards, or by the lost knowledge of a retiring workforce. It’s not even about having poor environments or weak tools. Those things are just excuses, not causes.

The underlying issue is far more fundamental. Bad testing comes from creating bad test coverage while believing that it’s right. The meta-analysis revealed another interesting thing: no one believed their testing was bad. Certainly it had its limitations, but everyone truly believed they were doing the best they could with what they had.

So let’s take a deeper look at what causes these problems.

The lack of a plan

When it comes to creating test coverage, there is the classic top-down approach: start with well-formulated test objectives, which then require various forms and levels of proof in order to be fulfilled.

Alternatively, there is the functional-analysis approach, in which testers dissect requirements, use cases, functional specs, or some other lifecycle source and then create proofs (tests) that these requirements are being fulfilled under various conditions, applying additional techniques like the conditional test model to flesh out the coverage.

There are also bottom-up coverage planning approaches. These start with the superset of all involved data variables and their ranges of values, which are then programmatically combined using orthogonal, forced-pairs, or combinatorial algorithms to generate test coverage patterns that may or may not be optimized, depending on the data analytics involved.

From any of these approaches, or any combination of them, some finite number of test cases is derived with known descriptive parameters, even though the cases themselves are yet to be built.
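To make the bottom-up approach concrete, here is a minimal sketch of programmatically combining data variables into candidate test cases. The variable names and values are purely illustrative, not drawn from the meta-analysis; a real coverage plan would pull them from the claims systems under test and would typically apply a pairwise or orthogonal reduction rather than keeping the full combinatorial set.

```python
from itertools import product

# Illustrative claim-processing variables and value ranges (hypothetical;
# a real coverage plan would derive these from the systems under test).
variables = {
    "plan_type":        ["HMO", "PPO", "Medicare", "Medicaid"],
    "claim_type":       ["professional", "institutional"],
    "place_of_service": ["office", "inpatient", "telehealth"],
    "member_status":    ["active", "termed"],
}

# Full combinatorial coverage: every combination of every value.
full_coverage = [dict(zip(variables, combo))
                 for combo in product(*variables.values())]

print(len(full_coverage))  # 4 x 2 x 3 x 2 = 48 candidate test cases
print(full_coverage[0])    # {'plan_type': 'HMO', 'claim_type': 'professional', ...}

# A forced-pairs (pairwise) algorithm would trim this to the much smaller
# subset that still exercises every pair of values at least once; that
# reduction step is omitted here for brevity.
```

Even this naive version yields the finite set of test cases with known descriptive parameters described above, and it makes gaps and duplicates in coverage visible before any test scripting begins.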

But at least there is a test coverage plan that includes some ideal of what good coverage would be. Unfortunately, this level of thinking is often skipped in the interest of time. In fact, organizations skip the coverage plan more than 75 percent of the time, which leaves them without guidance as the work continues.

Problems with test creation

Test coverage build-out begins with the testers writing test cases.  Without a plan, coverage becomes a “whatever we build is what you get” endeavor.

Experienced testers are capable of creating good test cases, but creating good test coverage is another story. Without a solid plan, 44 percent of the needed test coverage will be skipped entirely, and nearly a third of the coverage that does get built will be error-prone and may require troubleshooting or rewrites to prevent false positives or false negatives.

Most of the false positives will go unnoticed, creating a path for defect leakage into production. Of the coverage that does get built, test data prep and data conditioning in the healthcare industry score the worst, with failure rates ranging from 38 percent to 62 percent.

Living with low expectations

At a recent conference, a hospital administrator was bragging about running “a pretty tight ship” because they recovered over $18,000,000 in charges last year.

If I could have interrupted his speech, I would have said, “You do realize that means your IT systems created more than eighteen million dollars’ worth of mistakes last year, don’t you? And you know your recovery auditors let the small stuff go because they work on commission, so they only hunt big game. And there were things they missed, too. Each lost dollar is just a ripple effect of a root cause.”

“You probably spent over five million dollars to recover from those root causes – but you didn’t actually ‘recover.’ You just corrected a few occurrences of the effects of the mistakes; the root causes are still out there, causing more problems. Wouldn’t you rather prevent all those problems than have to recover from them at additional cost? Every year? All the time? It is possible.”

But that’s the state of the industry today: everyone accepts the fact that recovery is necessary, and everyone compares themselves on how much they recover – not how much they prevent.

When no one expects testing to actually work, they accept mediocrity as the outcome. They accept the cost of recovery as a routine cost of doing business – and they actually hope to see their recovery figures go up year-over-year!

It’s a backwards metric of success. It’s a way of saying that we’ve gotten good at recovering from the mistakes we make, and our mistakes are getting bigger every year. And peers at these conferences applaud these figures. It’s wrong. And it’s bad for business.

So what should we do?

For the good of the industry, we need a “back-to-basics” movement, a return to common sense and a focus on fundamentals such as clearly defined test objectives, functional analysis of the objects and systems under test, solid test coverage planning, deliberate creation of valid test coverage, and the use of well-prepared test data.
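As one hedged illustration of what solid test coverage planning can mean in practice, the sketch below traces every test case back to a declared objective and flags objectives that have no coverage yet. The objective and test case names are hypothetical and are not taken from any particular methodology.

```python
from dataclasses import dataclass, field

# A minimal, illustrative coverage-plan tracker. The point is only that
# every test case traces back to a declared objective, so coverage gaps
# become visible before test execution begins.

@dataclass
class Objective:
    id: str
    description: str

@dataclass
class TestCase:
    id: str
    objective_id: str
    description: str

@dataclass
class CoveragePlan:
    objectives: list = field(default_factory=list)
    test_cases: list = field(default_factory=list)

    def uncovered_objectives(self):
        covered = {tc.objective_id for tc in self.test_cases}
        return [o for o in self.objectives if o.id not in covered]

plan = CoveragePlan(
    objectives=[
        Objective("OBJ-1", "In-network professional claims price correctly"),
        Objective("OBJ-2", "Denied claims carry the correct remark codes"),
    ],
    test_cases=[
        TestCase("TC-1", "OBJ-1", "Office visit, active member, in-network provider"),
    ],
)

for objective in plan.uncovered_objectives():
    print(f"No coverage yet for {objective.id}: {objective.description}")
# -> No coverage yet for OBJ-2: Denied claims carry the correct remark codes
```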

Testing and quality assurance are both well-defined disciplines. But as disciplines, they require an active understanding of their basic tenets in order to consistently succeed and avoid falling into bad patterns of reusing out-of-date, pre-determined strategies. A deliberate and purposeful effort to get back to the basics of doing good testing has a chance of turning things around, industry-wide.


Mark Benedict is a Technology Services Director for Top Tier Consulting. As a Solutions Architect, he has built several Testing and Automation Centers of Excellence (both TCoEs and ACoEs) for various clients and large-scale consulting firms. A multi-disciplinary leader, Mark has helped transform numerous organizations seeking to optimize their business drivers, application integrations, and lifecycle speeds; identify and nurture innovations; and embrace quality and excellence as a way of doing business.