I have previously discussed Meaningful Use (MU) criteria for EHRs (here and here) and Clinical Decision Support (CDS) (here). These topics are closely linked, since the MU requirements mandate the inclusion of CDS.

On February 22, 2012, the Centers for Medicare and Medicaid Services (CMS) released (in a mere 455 manuscript pages) a proposed rule for the Stage 2 criteria for qualification of an EHR under the Medicare incentive program. Among many other topics, the proposed rule elaborates on the mandatory use of CDSs, as well as on issues related to their design and utilization.

This mandate might be viewed in the context of AHRQ's statement (here) that "Despite thoughtful efforts over the last three decades to translate clinical guidelines into CDS rules, there has not been widespread and successful use of such rules to improve patient care." Of course, limited success to date does not mean that CDS cannot be beneficial in the future. In addition, there is a wide range of sophistication among systems that might be called CDS and that would in part satisfy the MU requirements.

In general, the CDS Stage 2 objective is to "Use clinical decision support to improve performance on high-priority health conditions." The associated Stage 2 measure, as proposed, is to "Implement 5 clinical decision support interventions related to 5 or more clinical quality measures at a relevant point in patient care," including enabling and implementing drug-drug and drug-allergy interaction checks. The drug-related functionality is intended to provide information to "advise the provider's decisions" in prescribing drugs to a patient.
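To make the interaction-check idea concrete, here is a minimal sketch of how such a check might work. The drug names, lookup tables, and function are hypothetical; real systems draw on curated, regularly updated commercial knowledge bases rather than a toy table.

```python
# Hypothetical sketch of a drug-drug / drug-allergy interaction check.
# The tables and names here are illustrative only; production CDS relies
# on curated, regularly updated knowledge bases.

INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "increased myopathy risk",
}

ALLERGY_CLASSES = {
    "amoxicillin": "penicillins",
    "cephalexin": "cephalosporins",
}

def check_new_prescription(new_drug, current_drugs, allergies):
    """Return advisory alerts; the provider still exercises clinical judgment."""
    alerts = []
    for drug in current_drugs:
        risk = INTERACTIONS.get(frozenset({new_drug, drug}))
        if risk:
            alerts.append(f"Drug-drug: {new_drug} + {drug}: {risk}")
    if ALLERGY_CLASSES.get(new_drug) in allergies:
        alerts.append(f"Drug-allergy: {new_drug} is in class {ALLERGY_CLASSES[new_drug]}")
    return alerts

print(check_new_prescription("aspirin", ["warfarin"], {"penicillins"}))
```

Note that the function returns advisories rather than decisions, consistent with the rule's framing of the output as information to "advise" the provider.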

The "advise" component of CDS is a challenging issue since the quality of the advice is a critical element in any CDS system. It is here that the degree to which the provider can and should rely on that advice becomes an important  part of how CDS should be used, and equally how it will actually be used. In this regard it is common for advice systems to have a variety of disclaimers and associated assertions that the professional must in effect second guess the advice given and make their own judgement.

While such disclaimers are logical, the value of a CDS that cannot be relied upon is at least questionable, as is the assumption that users will always mentally challenge the results. In this regard there are at least three ways in which CDSs can go astray. One is when the underlying information upon which the advice is based is not strong. A second is that the underlying data is strong but the software is defective. Third, a patient's individual condition may fall outside the boundaries within which the information is reasonably correct, even though the software is a reliable representation of that data. Moreover, the boundaries of this accuracy domain are often ill defined, or not defined at all.
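The third failure mode suggests one partial safeguard: when the validated boundaries are known, the software can check them explicitly and decline to advise outside them. A minimal sketch, assuming hypothetical boundary values and a made-up dosing rule:

```python
# Hypothetical sketch: a CDS rule that checks whether the patient falls
# within the population for which its underlying evidence was validated.
# The boundary values and dosing rule below are illustrative, not clinical.

VALIDATED_RANGE = {"age": (18, 75), "creatinine_mg_dl": (0.5, 1.5)}

def dose_recommendation(patient):
    for field, (lo, hi) in VALIDATED_RANGE.items():
        value = patient[field]
        if not lo <= value <= hi:
            # Outside the accuracy domain: say so rather than guess.
            return None, f"No advice: {field}={value} outside validated range [{lo}, {hi}]"
    return "standard dose", "Within validated population"

print(dose_recommendation({"age": 82, "creatinine_mg_dl": 1.1}))
```

Of course, this only works when the boundaries have actually been defined, which, as noted above, is often not the case.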

The Stage 2 proposed rule takes an interesting pass at these issues by requiring that each clinical decision support intervention enable the provider to review all of the following attributes of the intervention (sketched as a simple record after the list):

  • Developer of the intervention,
  • Bibliographic citation,
  • Funding source of the intervention, and
  • Release/revision date of the intervention.
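In software terms, these attributes amount to provenance metadata attached to each intervention. A minimal sketch of such a record, with field names of my own invention (the proposed rule specifies the attributes, not a data format):

```python
# Sketch of the reviewable attributes as a simple record. Field names are
# my own; the proposed rule names the attributes but mandates no format.

from dataclasses import dataclass
from datetime import date

@dataclass
class CDSIntervention:
    developer: str        # Developer of the intervention
    citation: str         # Bibliographic citation
    funding_source: str   # Funding source of the intervention
    revision_date: date   # Release/revision date of the intervention

    def provenance(self):
        """Text a provider could review alongside any alert this rule fires."""
        return (f"Developed by {self.developer}; source: {self.citation}; "
                f"funded by {self.funding_source}; revised {self.revision_date}.")

rule = CDSIntervention("Example Clinical Group", "J. Doe et al., 2011",
                       "Example Foundation", date(2011, 6, 1))
print(rule.provenance())
```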

Thus, instead of just a result (or perhaps ranked results) popping up, the user must be provided with the people and data behind the result, and with how old that data is. The proposed rule states that "such information may be valuable so that providers can understand whether the clinical evidence that the intervention represents is current, and whether the development of that intervention was sponsored by an organization that may have conflicting business interests including, but not limited to, a pharmaceutical company, pharmacy benefits management company, or device manufacturer."

This kind of disclosure is good, but it does not fully account for the degree of validation of the actual results that are generated. It is also proposed that the CDS's suggested/advised/proposed interventions must be presented to a licensed healthcare professional through Certified EHR Technology at a time relevant to direct patient care. That professional is then expected to "exercise clinical judgment." It is therefore clear that a CDS cannot be fully relied on, nor can it be a substitute for an appropriate healthcare professional.

The "presented through" requirement suggests that the CDS must be well integrated with the EHR, although this is short of requiring that the CDS actually be part of the EHR. This suggests that EHRs would have to have high flexibility to integrate with the required CDS's, which conceivably come from different vendors, adding to the challenges of--dare I say it--connectivity itself.

Not all CDS systems will need to be particularly sophisticated. One example given in the proposed rule is a CDS that triggers a point-of-care alert from the EHR, prompting a licensed healthcare professional to ask about influenza immunization when engaged with a patient 50 years old or older. This kind of small reminder is not fraught with the deeper issues of CDS reliability. More generally, it is noted that family health history can be used to inform CDS, patient reminders, and patient education. Another suggested CDS application is generic drug and insurance formulary information.
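That reminder example is simple enough to sketch in a few lines. The patient-record fields here are my assumptions, not an EHR standard:

```python
# Hypothetical sketch of the proposed rule's example: a point-of-care
# reminder to discuss influenza immunization with patients 50 or older.

from datetime import date

def flu_reminder(patient):
    """Return an advisory prompt, or None; this is a reminder, not an order."""
    # Approximate age in years; a real system would compute this exactly.
    age = (date.today() - patient["birth_date"]).days // 365
    if age >= 50 and not patient.get("flu_vaccine_this_season", False):
        return "Consider discussing influenza immunization with this patient."
    return None

print(flu_reminder({"birth_date": date(1950, 3, 1)}))
```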

Those who believe that activity should not be confused with results will find no comfort in the observation that, for Stage 2, CMS does "not propose to require the provider to demonstrate actual improvement in performance on clinical quality measures" through the use of a CDS, although improvement should be the provider's goal.

This proposed rule has a 60-day comment period, after which CMS will cogitate on the comments received and publish a final rule with any revisions deemed appropriate. Affected professionals and hospitals will have to watch for this final rule.

Unrelated to MU and CMS, an interesting example of the intended use of CDS is the FDA's approval of a software-driven mammography pattern-recognition system that analyzes mammography images and marks suspicious areas consistent with breast cancer for review by a radiologist. Of particular interest to me is that the approved method of use is for the radiologist to first read each image in the conventional manner, and then re-examine each region marked by the software analysis before making a decision about the image. The instructions for use, also posted by the FDA, include important warnings in this regard, including that the marked images do not have the level of detail of the original mammogram and that their only purpose is to provide a reference for the location of the auto-generated marks. Further, it is stated that the algorithm will not mark all regions that contain cancer, and will mark regions that do not contain cancer. As a result, the presence of a mark only indicates that a radiologist should review the marked region again to avoid a potential oversight, and the absence of a mark should not dissuade a radiologist from investigating their own suspicious findings.
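The approved workflow can be summarized in one line of logic: software marks can only add regions to the radiologist's review list, never remove any. A sketch, with illustrative names of my own:

```python
# Sketch of the approved review workflow: the radiologist reads first,
# then revisits each software-marked region. Marks only add reviews; they
# never subtract the radiologist's own findings. Names are illustrative.

def regions_to_review(radiologist_findings, software_marks):
    # Union that preserves the radiologist's findings: an unmarked region
    # the radiologist flagged still gets pursued, and a marked region the
    # radiologist missed gets a second look.
    return set(radiologist_findings) | set(software_marks)

print(regions_to_review({"upper left"}, {"upper left", "lower right"}))
```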

Thus the intended use clearly states that the automated read is not sufficiently reliable to be used as a primary analysis, or perhaps that the manufacturer doesn't want to take on the responsibility of primary analysis, or that the FDA would not accept a claim of primary analysis. It is also possible that radiologists are able to protect their image-reading turf by discouraging reliance on automated systems. In any case, the result is a CDS system that is not primary and cannot be totally relied on.

The next questions, then, are how it will actually be used and, to the degree that it may be relied on, how good it is. An associated question of interest is whether radiologists would choose not to follow up on a flagged region that they didn't think was suspicious, or would always pursue the next steps, which may include biopsy. Since those next steps involve the potential for both physical and psychological adverse outcomes, pursuing that which doesn't need to be pursued is not without consequence.

The experiences of radiologists using the system would also be of interest. Even when following the instructions, the radiologist may find that the system only marks what they have already found, and/or misses what they have already found, and/or marks spurious regions they had not found, or really does find important things that they missed. Each possibility, and their combinations, will inform how such systems actually come to be used, or not used.

William Hyman is Professor Emeritus of the Department of Biomedical Engineering at Texas A&M University. He recently retired and has moved to New York City where he continues his professional activities.