
Hospitals that track their performance don’t improve healthcare

Tracking problems with no incentive to do better doesn't have much effect.


In recent years, perhaps in response to growing interest in hospital performance and its effect on patient outcomes, a number of programs have been developed to help hospitals track how the patients they care for fare. The most prominent of these is the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP). This system lets hospitals compare their performance to that of other participating hospitals and provides them with detailed descriptions of patient outcomes, adjusted for the patients' risks.

Since 1994, the ACS NSQIP has been tracking data on 135 patient-related variables. As its name implies (Quality Improvement Program), the hope is that this tracking will lead to better patient care: having the information should motivate hospitals to improve their outcomes and reduce the costs they bill to Medicare. Until now, however, no study had examined whether that expectation has been met. A new study published in JAMA indicates that a hospital's participation in this outcomes- and cost-tracking program does not, on its own, lead to improved patient care or reduced Medicare costs.

The study was performed by faculty from the Center for Healthcare Outcomes and Policy at the University of Michigan. It examined the association between participation or non-participation in the ACS NSQIP and patient outcomes and costs among Medicare patients.

This study examined matched data from 263 hospitals that participate in the ACS NSQIP and 526 hospitals that do not. The inclusion of this non-participant control group makes the study notable, because no controlled comparison of the ACS NSQIP had previously been published. Past studies of ACS NSQIP-participating hospitals have shown improvements in patient outcomes over time, but until now there has been no way to determine whether that improvement was due to the program or would have occurred regardless of participation.

The data for this study were pulled from Medicare Provider Analysis and Review (MEDPAR) files covering 2003-2012. The investigators focused on patients ages 65-99 who underwent one of 11 high-risk surgical procedures. These procedures are a high priority for quality improvement since they account for a large proportion of the morbidity and mortality tracked by the ACS NSQIP. The authors assessed whether a hospital's enrollment in ACS NSQIP was associated with improved outcomes or lowered costs, as measured by mortality, serious complications, reoperation, readmission, and Medicare payments.

For the analysis, the researchers used a difference-in-differences approach, an econometric method for evaluating changes in outcomes that follow the implementation of a policy. Here, it allowed them to separate any improvement associated with ACS NSQIP enrollment from background trends that affected all hospitals over the same period.
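To make the logic of that comparison concrete, here is a minimal sketch in Python. The complication rates below are made up for illustration and are not figures from the study; only the arithmetic of the difference-in-differences estimate is the point.

```python
# Hypothetical average complication rates (%) before and after the enrollment
# period. "nsqip" = participating hospitals, "control" = matched
# non-participants. These numbers are illustrative, not from the study.
rates = {
    "nsqip":   {"before": 12.0, "after": 10.5},
    "control": {"before": 12.3, "after": 10.9},
}

# Change over time within each group.
change_nsqip = rates["nsqip"]["after"] - rates["nsqip"]["before"]
change_control = rates["control"]["after"] - rates["control"]["before"]

# The difference-in-differences estimate: how much more (or less) the
# participating hospitals improved relative to the background trend seen
# in the control hospitals.
did_estimate = change_nsqip - change_control

print(f"Change at NSQIP hospitals:   {change_nsqip:+.1f} percentage points")
print(f"Change at control hospitals: {change_control:+.1f} percentage points")
print(f"Difference-in-differences:   {did_estimate:+.1f} percentage points")
```

An estimate near zero, which is essentially what the study found, means the participating hospitals improved at roughly the same rate as everyone else.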

The study found that, although there were some statistically significant differences between the patient populations at ACS NSQIP hospitals and at control hospitals, none of these differences were clinically relevant. (For example, the patients in ACS NSQIP hospitals were a bit older and more likely to be male.) And when it came to clinically relevant improvements, the study found no significant differences between the non-participating hospitals and the ACS NSQIP hospitals.

Hospitals in both groups showed a slight trend toward progressively better outcomes, but enrollment in the tracking program did not significantly influence the rate at which outcomes improved. In other words, tracking outcomes did not make a difference in terms of improved patient care or cost savings.

The authors say that this finding does not mean that hospital-tracking programs should be abandoned. It may just indicate that we need to put the data they provide to better use. For example, if the tracking data were publicly reported, that could increase hospitals’ motivation to improve. The same may be true for pay-for-performance programs, as well as value-based purchasing, in which hospitals are paid more for higher quality healthcare instead of being paid more for a higher volume of services.

Additionally, the authors note that hospitals participating in ACS NSQIP may have attempted to drive improvements by implementing measures such as care coordination and increased adherence to clinical guidelines but that such measures are challenging because they involve changing individual physicians’ practices.

From a public health perspective, tracking hospital outcomes and costs helps ensure the accountability of Medicare-supported institutions; however, simply tracking the data does not make it meaningful. If the ACS NSQIP were to pair its data tracking with an incentive structure, in which either the physician or the hospital received a financial benefit for improved patient outcomes, this tracked information could begin to have real clinical relevance.

JAMA, 2015. DOI: 10.1001/jama.2015.25  (About DOIs).
