Reduce Provider Burden by Rethinking the eCQM Development Process

I’ve looked at the feds,
I’ve looked at the vendors,
I’ve tried to find the key
to 50 million burdens.
         
(with apologies to The Who)

The electronic health record promised a transformation in the quality and accessibility of health information. It also promised relief from tedious paper-based documentation chores. The results, so far, are sobering. The transformative benefits of digital record-keeping have been only partially realized, at best. And while paper charts and clipboards may be on their way to extinction, time spent documenting patient care has only increased. The EHR’s appetite for patient data and the demands of structured data entry have disrupted traditional workflows and siphoned attention from direct patient care. Doctors and nurses are kept busy with drop-down menus, checkboxes, forms, and distracting alerts designed to ensure that quality measurement criteria are being met. Increasingly, providers are pushing back against documentation chores that feel irrelevant to immediate patient needs. Patients, too, have become resentful, and at any provider’s office you’ll hear the same complaint: the EHR means less time treating you, and more time treating the computer.

Much of this documentation-linked “provider burden” comes from the nature of the data entry required for reporting electronic clinical quality measures (eCQMs) under Meaningful Use (MU) rules and programs. Quality measures themselves are criticized for providing questionable return on investment. The president of the American Medical Group Association argues that “measuring and reporting quality has become a barrier to actually improving it, diverting resources from the provision of care itself.”[1] Anecdotally, one of Lantana’s nurse informaticists witnessed delays in the treatment of emergent patients because the sole institutional MRI scanner was in use to satisfy a ‘stroke’ measure requiring MRI scans for patients with headaches.

To reduce provider burden, the industry has tried several strategies: reducing the number of measures providers are required to report, harmonizing the measures used across quality reporting programs, and making the measures more meaningful to the providers reporting them. But what if we’ve overlooked a key area of burden reduction: the processes used to develop the measures?

Getting reliable results for eCQMs depends on capturing the data elements necessary to the logic of the measure, and yet capturing that data is a burden. When designing eCQMs, are we asking these questions: 1) Are the required data elements routinely noted in a clinician’s workflow? 2) Does the clinician capture the data with the fidelity expected by the measure? 3) Is the required data meaningful to the clinician? 4) Is it meaningful to the patient?

A “no” answer to any of these questions is likely to increase the burden on clinicians trying to participate in quality reporting. If we are not asking these questions, we should be. Clinical workflow is often not adequately evaluated in eCQM design, with the result that providers are asked to spend time on data capture required by the measure but not directly necessary for care provision. A good example concerns documentation of a patient’s current medications: there are many ways a provider may document this in a patient’s record. The eCQM, however, determines whether the documentation occurred by searching the patient’s chart for a ‘documentation of current medications’ procedure recorded with a specific terminology code; if the code is missing, the eCQM doesn’t credit the provider for performance. Many EHRs therefore default to having providers tick a checkbox that records the ‘documentation of current medications’ procedure in the patient’s chart.
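To make the mismatch concrete, here is a minimal sketch of how this style of coded-procedure check behaves. The record layout, field names, and the SNOMED CT-style code value are illustrative assumptions for demonstration, not the published measure specification.

```python
# Minimal sketch of an eCQM-style numerator check for "documentation of
# current medications". The record layout and the code value below are
# illustrative assumptions, not the published measure specification.

DOC_CURRENT_MEDS = "428191000124101"  # SNOMED CT-style code (assumed here)

def meets_numerator(encounter: dict) -> bool:
    """Credit the provider only if the exact coded procedure is present."""
    return any(
        procedure.get("code") == DOC_CURRENT_MEDS
        for procedure in encounter.get("procedures", [])
    )

# The provider documented the medication review in a free-text note, but
# no coded procedure was recorded, so the measure gives no credit.
visit = {
    "notes": ["Reviewed and reconciled current medications with patient."],
    "procedures": [],
}
print(meets_numerator(visit))  # False
```

The checkbox exists precisely to emit that coded procedure; without it, the clinical work and the measured work are invisible to each other.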

Changes to the way measures are designed and tested can have a significant impact on burden reduction. Changes to the way data element feasibility is determined affect the measure, its reportability, and the speed with which robust measures can be implemented. Field testing of measures and trial implementations should be required steps in the measure development process. We suggest two improvements to measure design, testing, and deployment in national programs.

Develop a data element “maturity model” for feasibility testing

A data maturity model is a rating system for assessing the availability and variability of data in live EHR environments. A high maturity rating indicates high availability and low variability (of structure and format) in the data captured, while a low maturity rating indicates a lack of availability, high variability in format and structure, or both. Feasibility testing of measures can be expedited and strengthened by use of a maturity model for individual data elements.

Current methods to assess data during feasibility testing survey clinicians to understand what data elements are captured, including their format and structure. The maturity model, in contrast, is developed by analyzing live patient data collected from several production systems, and can provide a realistic picture of the structure, format, fidelity, and consistency of capture. Measures based on mature data elements would likely require few workflow changes and can be tailored to the data structure as it exists. Measures based on data elements with low maturity can be candidates for trial use (see the “Measure for Trial Use” program, below) or be deferred until the data maturity can support them.
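As a rough illustration, the sketch below rates a single data element from samples of production data. The three-level scale and the thresholds are assumptions chosen for demonstration, not an established standard.

```python
# Rough sketch of a data element maturity rating computed from samples of
# production EHR data. The three-level scale and the thresholds are
# illustrative assumptions, not an established standard.
from collections import Counter

def maturity_rating(observations: list) -> str:
    """Rate one data element from sampled records.

    Each observation is the format the element was captured in
    (e.g. "rxnorm", "free-text"), or None if it was not captured.
    """
    captured = [fmt for fmt in observations if fmt is not None]
    if not captured:
        return "low"
    availability = len(captured) / len(observations)
    # Variability: share of captures not in the single dominant format.
    dominant_count = Counter(captured).most_common(1)[0][1]
    variability = 1 - dominant_count / len(captured)

    if availability >= 0.9 and variability <= 0.1:
        return "high"    # widely captured in one consistent structure
    if availability >= 0.5:
        return "medium"  # usually captured, but formats vary
    return "low"         # rarely or inconsistently captured

# Hypothetical samples pooled from several production systems:
blood_pressure = ["structured"] * 97 + [None] * 3
med_list = ["rxnorm"] * 50 + ["free-text"] * 30 + [None] * 20

print(maturity_rating(blood_pressure))  # high
print(maturity_rating(med_list))        # medium: available, two formats
```

A measure developer could then require, say, a “high” rating for elements in a measure’s core logic and treat “medium” elements as trial-use candidates.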

“Measure for Trial Use” program

Often, measures are developed and adopted into reporting programs with limited field testing. Lessons about implementation and usage are then collected only while the measure is already in production use, when opportunities to alter its design or workflow are limited and costly, since changes may require reworking key elements of a live program.

Measures subjected to a trial implementation early in the development phase, across multiple sites, would yield real-world data on workflows, provider burden, and data governance that can inform measure evaluation prior to finalization. The results can alter measure design to better suit provider workflow, revise parts of the logic to use data elements that are more readily available, or defer the measure altogether until the required data elements become more commonplace. The trial implementation also gives providers a channel to report on their experience using the measure and on whether it offers the right insight into care quality.

Conclusion

It’s important to acknowledge where we have stumbled, over-promised, and under-delivered. Clinical quality measurement is here to stay, no matter how often we stumble trying to get it right. But the cost to support this system cannot go on the tab of clinical end-users. Their complaints are justified: provider burden is real, and we need to be listening carefully. I hope that the proposals described here can help improve how we design, develop, select, and implement quality measures. And if we can do that, we can begin lifting some of the unnecessary burden from our healthcare providers, give them reason to embrace this technology, and let them get back to treating patients rather than their EHRs.