Findings from health informatics researchers at Weill Cornell Medical College offer reassurance that even though electronic health records (EHRs) capture data in a technical rather than intuitive form, they almost always give health professionals the same insights about patients and the public as information gleaned solely from manually kept health records.
That’s good news for hospitals and other health care professionals defined as “eligible providers” under the 2009 Health Information Technology for Economic and Clinical Health Act. Eligible providers receive government funding for proper installation of EHRs, and face stiff penalties if the systems are not in place by 2015. The Weill Cornell researchers published their findings last month in Annals of Internal Medicine.
Even as EHRs have gained in popularity, health care providers across the U.S. implementing them have faced numerous challenges to achieving “proper installation.” These have ranged from the technical (integrating different systems and interfaces) to the cultural (overcoming the reluctance of private practitioners to adopt electronic records). There is also the question of data quality, since manual records leave more room for a clinician’s subjective assessment of a patient’s wellness. Given these and other variations, how can health care professionals relying on EHRs know when patients are truly well, and whether methods of care are actually working?
A Data Science Project in Health Care Quality Research
To answer such questions, the Centers for Medicare and Medicaid Services established a set of quality measures an EHR must satisfy for its data to be considered of “meaningful use.”
“Meaningful use is designed to move us incrementally closer to the goals we have: a really functional health care system that is interoperable, provides maximum patient health care, better quality, and better safety,” says Dr. Kevin Larsen, a fellow of the American College of Physicians who is medical director for meaningful use at the Office of the National Coordinator of Health IT, part of the U.S. Health and Human Services Department.
While the concept of meaningful use did not exist when the researchers began their study, correlating data quality with patient outcomes was central to it, says Dr. Rainu Kaushal, the study’s principal investigator. “Getting electronic quality measurement right is critically important to ensure that we are accurately measuring and incentivizing high performance by physicians so that we ultimately deliver the highest possible quality of care. Many efforts to do this are underway across the country,” says Kaushal, who directs Weill’s Center for Healthcare Informatics Policy and is a pediatrician and instructor of pediatrics, medicine, and public health at Weill.
To construct the study, funded by the federal Agency for Healthcare Research and Quality, a national panel of experts (including physicians and health care IT managers) helped validate and refine the measures the Weill Cornell team used, winnowing an initial pool of 600 possible measures down to a final 12 in 2008. Reviewers then collated two types of data: administrative billing data captured via EHRs, and more specific data culled through manual review.
Reviewers began by profiling all patients at the participating center in New York State, the Institute for Family Health Center, six sites serving a cross-section of patients. Ultimately, 1,154 eligible patients were selected.
The analysts at Weill Cornell chose several measures traditionally used to gauge whether recommended care was delivered. (For example: did a doctor go the extra mile and order a cholesterol test for a patient with diabetes?) Each measure was then rated using metrics standard in health care, with the manual chart review serving as the reference. Sensitivity captured the proportion of patients whose recommended care, documented in the manual record, was also reflected in the EHR; specificity captured the proportion of patients whose manual records showed no such care and whose EHRs agreed.
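To make the two metrics concrete, here is a minimal sketch of how sensitivity and specificity are computed when manual chart review is treated as the reference standard. The patient records and numbers below are illustrative only, not data from the study.

```python
# Illustrative sketch: sensitivity/specificity of EHR-based quality
# measurement, with manual chart review as the reference standard.
# The patient data below are hypothetical, not study data.

def sensitivity_specificity(pairs):
    """pairs: list of (manual, ehr) booleans, where True means that
    record shows the patient received the recommended care."""
    tp = sum(1 for manual, ehr in pairs if manual and ehr)
    fn = sum(1 for manual, ehr in pairs if manual and not ehr)
    tn = sum(1 for manual, ehr in pairs if not manual and not ehr)
    fp = sum(1 for manual, ehr in pairs if not manual and ehr)
    # Sensitivity: care documented on paper that the EHR also captured.
    sensitivity = tp / (tp + fn)
    # Specificity: care absent on paper that the EHR also showed as absent.
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Ten hypothetical patients, each as (manual review, EHR):
records = [(True, True), (True, True), (True, False), (True, True),
           (False, False), (False, False), (False, True), (False, False),
           (True, True), (False, False)]
sens, spec = sensitivity_specificity(records)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# → sensitivity=0.80, specificity=0.80
```

A low sensitivity on this definition means the EHR is missing care that actually happened, which is exactly the pattern the study found for asthma medication and pneumococcal vaccination.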
Among the results: the sensitivity of EHR reporting ranged from 46 percent to 98 percent, depending on the measure, while specificity ranged from 62 percent to 97 percent. Overall, nine of the 12 measures showed comparable accuracy whether based on electronic or manual data; the remaining three showed statistically significant discrepancies between EHRs and manual records.
For example, EHRs underestimated the quality of care received in two areas. Among patients in the study population who should have received asthma medication, EHR data showed only 38 percent getting it, versus 77 percent when a health professional manually reviewed their charts. Pneumococcal vaccination was similarly underestimated: EHR data showed 27 percent of patients receiving the vaccine, versus 48 percent according to the paper records.
And EHRs overestimated care in one area: cholesterol control in patients with diabetes. When health professionals checked paper records alone, fewer than 40 percent of patients with diabetes appeared to be treated for cholesterol problems; based on EHR data, more than half (57 percent) had cholesterol medicines as part of their routine.
Limitations in Efforts to Evaluate the Data
The study also showed how record quality can differ depending on the means of data collection, or on the design of the records themselves. Administrative data can show how many women received a mammogram; manual review can show the mammogram’s actual results. “Sometimes EHR documents aren’t consistent. One has drop-down menus, one has check boxes. And then there’s the information that’s not captured by either. If a PDF’s attached to the electronic record noting, ‘The patient was vaccinated and here’s the report,’ the EHR won’t pick it up,” says Dr. Linda Kern, a co-leader of the study, who is a general internist and public health expert. This inconsistency reveals one of the limitations of evaluating this kind of data, she says.
Standardizing health records offers a chance to improve this situation, Kern says.
“The reason people are so excited about EHRs—and appropriately so—is that EHRs offer the potential to address such limitations. They could generate a measurement of quality with many clinical details, covering a large number of patients. In the past maybe you couldn’t as easily include the results of lab tests; you’d have to take into account whether or not a patient had normal kidney function in the calculation of your measure,” says Kern. “They also provide feedback to clinicians regarding quality performance in real time, thereby improving clinical practice.”
Kaushal says that the federal meaningful use program will enable the deployment of EHRs across the country “thereby enabling health care to enter the digital age.”
“I think EHRs have the potential to transform and advance the measures of quality. We’ve learned that getting better insights is not so much about changing the underlying technology,” says Kern. “We need to refine quality measurement in an electronic era. How to do that for now though, is beyond the scope of this study.”
Wendy Meyeroff has been a freelance medical and science writer since 1987 for clients that range from clinical to consumer, including Life Science Leader, Nurseweek, Good Housekeeping, Merck, Sears, the American Medical Association and the NIH. She is based in Maryland and can be reached at email@example.com.
Home page photo of Weill Cornell Medical College via Wikipedia.