Researchers led by the Duke Clinical Research Institute looked at whether paying hospitals extra for following specific treatment guidelines would improve patient outcomes and reduce costs. "They found no evidence that financial incentives were associated with improved outcomes, nor that hospitals had shifted their focus from other areas in order to concentrate on the areas being evaluated for possible increased payments."

A study recently conducted by Premier, Inc., a group that represents hospitals participating in a large Centers for Medicare & Medicaid Services (CMS) pilot project on pay for performance, found that paying hospitals extra money for following specific guidelines led to better patient care and outcomes. However, that study did not include a comparison group of hospitals not receiving incentives. So the Duke team compared the CMS data with data from a registry of 105,383 patients treated for a heart attack at 500 hospitals involved in a national quality improvement effort.

Did the Premier study fall victim to the Hawthorne effect? The Duke researchers also made three important points about their study relative to the Premier study.

  1. "On one hand, the data showed that care is improving overall in the
    United States, which is obviously good. However, we did not find that
    pay for performance alone will be the sole means of improving care. In
    fact, it all comes down to hard work by individual caregivers and
    institutions." [Imagine that.]
  2. It appears that a voluntary effort to 'do good and improve care' [was] as powerful as the incentive for additional payment.
  3. Heart attack mortality declined significantly over time in both
    pay-for-performance and non-pay-for-performance hospitals as care
    processes improved.

Two things strike me. First, there is no silver bullet for improving patient safety and outcomes. Pay for performance is a novel incentive of some value, a carrot if you will, and CMS' proposal to stop reimbursing hospitals for the cost of treating patients that the hospital itself has injured (initially, the 13 hospital-acquired conditions) adds a stick to the government's armamentarium for improving outcomes. A carrot and a stick together are always better than just one or the other.

I'm also struck by the admirable scientific bent physicians apply to, well, everything. According to one researcher, "...what we really need [is] a robust research base to inform the design of the program and clearly we need to continuously monitor performance to ensure that we are achieving our clinical goals through these efforts." To paraphrase: if your tool for studying change is the scientific method, then everything requires a randomized, controlled, double-blind trial. What we need to apply to our persistent patient safety and outcomes problem is some engineering - specifically, safety engineering. This is not rocket science, and it has been done much more successfully in aerospace for decades.