Hospital Speaks Up, Helps Tweak Ranking Methods

Analysis  |  By Tinker Ready  
   October 13, 2016

Instead of complaining about the quality ratings it received from U.S. News & World Report, Rush University Medical Center did something about them.

When Rush University Medical Center launched a major push to ensure patient safety, it was happy with the results. The Chicago system was ranked No. 2 by the University HealthSystem Consortium and did equally well on other measures.

Then U.S. News & World Report (USN&WR) released its 2015 hospital rankings, reporting that Rush, a not-for-profit, 664-bed academic medical center in Chicago, had earned the lowest marks possible for patient safety.

"We didn't think much of it," said Omar Lateef, DO, the chief medical officer at Rush. "For us, it's a consumer publication and it isn't validated."

Then, the hospital's orthopedic doctors showed up at his door. They were upset that the rankings were hurting their program's reputation, he said. That launched a deep dive into Rush's clinical data, a collaboration with USN&WR, and the subsequent correction of a glitch linked to the limitations of Medicare data.


Instead of complaining about external quality ratings, Rush did something about them.

And instead of brushing the hospital off, USN&WR welcomed the feedback and used it to correct a flaw in its analyses.

The case suggests that there may be a middle ground where academics and consumer-level quality analysts can meet, even though the recent uproar in the medical community over the Surgeon Scorecard produced by the investigative journalists at ProPublica suggests otherwise.

But those who rely on peer-reviewed findings need to recognize that data journalists have moved beyond sorting spreadsheets. For better or worse, what was once called "computer-assisted reporting" has evolved into advanced statistical analysis.

USN&WR has been crunching hospital data since 1995. It uses the same Medicare numbers that academics and other analysts use. And it has engaged in a constant effort to improve the process, says Ben Harder, the chief of health analysis at USN&WR.

'Known Limitations'
"All data and all quality measure have limitations and that is something that we are eyes-wide-open about," he said. "We have engaged for many years with hospitals and various other quality measure stake holders around how to address the known limitations in the data sets we use and how to identify any previously unknown limitations we need to be aware of."

So, when Rush University Medical Center came calling, USN&WR listened, Harder said.

Rush had flagged a limitation of the data that had not been identified by the Centers for Medicare & Medicaid Services or its partner, the Agency for Healthcare Research and Quality, Harder said.

In short, Rush was being held responsible for problems patients already had when they arrived at the hospital, conditions known in Medicare coding as "present on admission," or POA.
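
To illustrate the concept, consider the hypothetical sketch below of how a POA flag changes a naive complication count. This is not USN&WR's or CMS's actual methodology, and the record layout and field names are invented for illustration; real Medicare claims and the AHRQ patient-safety indicators are far more elaborate.

    # Hypothetical sketch: how a "present on admission" (POA) flag changes
    # a naive complication count. Field names are invented for illustration.
    records = [
        {"patient": "A", "complication": "pressure ulcer", "poa": "Y"},         # arrived with it
        {"patient": "B", "complication": "pressure ulcer", "poa": "N"},         # acquired in hospital
        {"patient": "C", "complication": "postoperative sepsis", "poa": "N"},   # acquired in hospital
    ]

    # Naive count: attributes every coded complication to the hospital,
    # including conditions the patient already had on arrival.
    naive_count = len(records)

    # POA-adjusted count: only complications that were not present on
    # admission are attributed to the hospital's care.
    adjusted_count = sum(1 for r in records if r["poa"] != "Y")

    print(naive_count, adjusted_count)  # 3 2

Without that adjustment, a hospital that treats sicker incoming patients can look less safe than it actually is.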

"We decided these were newly identified limitations and there were some ways we could address them. So we made a number of methodology changes," Harder said. "(Rush) felt appreciative that we had understood and addressed their concern, not just for Rush but for the entire analysis we do for more than 4,000 hospitals."

Bala Hota, MD, the chief research information officer at Rush, put it this way: "US News was great. They really wanted to get to the bottom of what was going on and solve this."

A Call for an 'Audit Requirement'
Still, he and Lateef offered words of caution in a paper outlining their analysis. "Consumer groups and lay publications that seek to measure and rank hospitals should be commended for the ambition to bring order to the confusing business space of health care, but the enormity of the task being undertaken by these entities should be acknowledged and the potential pitfalls of nontransparent data analysis recognized," they wrote in the October issue of The Joint Commission Journal on Quality and Patient Safety.

USN&WR's is only one of a number of hospital ranking programs. Others range from CMS to the private Leapfrog Group to the consumer-driven website Yelp. Data sources and methodologies vary, and their findings don't always correspond.

An editorial in the same journal as the Rush paper calls for standardized measures and an "audit requirement that would apply to any entities that grade providers." It's not likely to happen soon. In the meantime, there are advantages to having competing quality measurement programs.

"I think there would be a major risk of harm if we simply said CMS is doing it, they have a process, and we are going to abdicate our responsibly as journalists or as the public and let them do it, because if they were doing it wrong, we would never know," Harder said.

Tinker Ready is a contributing writer at HealthLeaders Media.

