I have previously discussed Meaningful Use (MU) criteria for EHRs (here and here), and Clinical Decision Support (CDS) (here). These topics are closely linked since the MU requirements mandate the inclusion of CDS.
On February 22, 2012, the Centers for Medicare and Medicaid Services (CMS) released (in a mere 455 pages in manuscript form) a proposed rule for Stage 2 criteria for qualification of an EHR for the Medicare incentive program. Among many topics, the proposed rule includes some elaboration on the mandatory use of CDS as well as issues related to its design and utilization.
This mandate might be viewed in the context of AHRQ's statement (here) that "Despite thoughtful efforts over the last three decades to translate clinical guidelines into CDS rules, there has not been widespread and successful use of such rules to improve patient care." Of course, limited success to date does not mean that CDS cannot be beneficial in the future. In addition, there is a wide range of sophistication among systems that might be called CDS and that would in part satisfy the MU requirements.
In general the CDS Stage 2 objective is to "Use clinical decision support to improve performance on high-priority health conditions." The associated Stage 2 measure as proposed is to "Implement 5 clinical decision support interventions related to 5 or more clinical quality measures at a relevant point in patient care," including enabling and implementing drug-drug and drug-allergy interaction checks. The drug-related functionality is intended to provide information to "advise the provider's decisions" in prescribing drugs to a patient.
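To make the drug-checking piece concrete, here is a minimal, hypothetical sketch of what a drug-drug and drug-allergy interaction check might look like inside a prescribing workflow. The interaction table, function name, and example drugs are illustrative assumptions of mine, not drawn from the proposed rule or any actual product.

```python
# Hypothetical sketch of a drug-drug / drug-allergy interaction check.
# The interaction data here is illustrative only; a real system would rely
# on a maintained knowledge base (and would disclose its source and date).

KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "Increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "Increased risk of myopathy",
}

def check_new_prescription(new_drug, current_drugs, allergies):
    """Return a list of advisory messages for the prescriber to review."""
    alerts = []
    for existing in current_drugs:
        pair = frozenset({new_drug.lower(), existing.lower()})
        if pair in KNOWN_INTERACTIONS:
            alerts.append(f"Drug-drug: {new_drug} + {existing}: "
                          f"{KNOWN_INTERACTIONS[pair]}")
    if new_drug.lower() in {a.lower() for a in allergies}:
        alerts.append(f"Drug-allergy: patient has a recorded allergy to {new_drug}")
    return alerts

# Example: the prescriber still exercises clinical judgment over any alert.
print(check_new_prescription("aspirin", ["warfarin"], ["penicillin"]))
```

Note that even this trivial sketch only advises; it is the "advise" part that raises the reliability questions discussed next.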
The "advise" component of CDS is a challenging issue since the quality of the advice is a critical element in any CDS system. It is here that the degree to which the provider can and should rely on that advice becomes an important part of how CDS should be used, and equally how it will actually be used. In this regard it is common for advice systems to have a variety of disclaimers and associated assertions that the professional must in effect second guess the advice given and make their own judgement.
While such caution is logical, the value of a CDS that you can't rely on is at least questionable, as is whether users will always mentally challenge the results. In this regard there are at least three ways in which CDSs can go astray. One is when the underlying information upon which the advice is based is not strong. A second is that although the underlying data is strong, the software is defective. Third, a patient's individual condition may fall outside the boundaries within which the information is reasonably correct, even though the software is a reliable representation of that data. Moreover, the boundaries of the accuracy domain are often ill defined or not defined at all.
The Stage 2 proposed rule takes an interesting pass at these issues by requiring that each clinical decision support intervention enable the provider to review all of the following attributes of the intervention:
- Developer of the intervention,
- Bibliographic citation,
- Funding source of the intervention, and
- Release/revision date of the intervention.
Thus instead of just a result (or ranked results perhaps) popping up, the user must be provided with the people and data behind the result, and how old that data is. The proposed rule states that "such information may be valuable so that providers can understand whether the clinical evidence that the intervention represents is current, and whether the development of that intervention was sponsored by an organization that may have conflicting business interests including, but not limited to, a pharmaceutical company, pharmacy benefits management company, or device manufacturer."
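As a minimal sketch of how an EHR might carry this provenance alongside each intervention so it can be surfaced to the provider on demand, the structure below holds exactly the four attributes listed above; the class name, field names, and example values are my own assumptions, not from the rule or any certified product.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CDSInterventionAttributes:
    """Provenance attributes the proposed rule would require a provider to be able to review."""
    developer: str                # who built the intervention
    bibliographic_citation: str   # the clinical evidence it represents
    funding_source: str           # potential conflicting business interests
    release_revision_date: date   # how current the underlying evidence is

# Illustrative values only.
flu_reminder_attrs = CDSInterventionAttributes(
    developer="Example Health IT Vendor",
    bibliographic_citation="ACIP influenza vaccination recommendations",
    funding_source="Internal development",
    release_revision_date=date(2012, 1, 15),
)
```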
This kind of disclosure is good, but it does not fully account for the degree of validation of the actual results that are generated. It is also proposed that the CDS suggested/advised/proposed interventions must be presented through Certified EHR Technology to a licensed healthcare professional at a time relevant to direct patient care. That professional is then expected to "exercise clinical judgment." Therefore it is clear that a CDS cannot be fully relied on, nor can it be a substitute for an appropriate healthcare professional.
The "presented through" requirement suggests that the CDS must be well integrated with the EHR, although this is short of requiring that the CDS actually be part of the EHR. This suggests that EHRs would have to have high flexibility to integrate with the required CDS's, which conceivably come from different vendors, adding to the challenges of--dare I say it--connectivity itself.
Not all CDS systems will need to be particularly sophisticated. One example given in the proposed rule is a CDS that triggers a point-of-care alert from the EHR prompting a licensed healthcare professional to ask about influenza immunization when engaged with a patient 50 years old or older. This kind of little reminder is not fraught with the deeper issues of CDS reliability. More generally, it is noted that family health history can be used to inform CDS as well as patient reminders and patient education. Another suggested CDS application is generic drug and insurance formulary information.
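To show how simple such a reminder can be, here is a hypothetical sketch of the age-based trigger described above; the function name, field handling, and threshold parameter are assumptions for illustration, not taken from any certified product.

```python
from datetime import date

def influenza_reminder_due(birth_date, today=None, min_age_years=50):
    """Return True if a point-of-care influenza immunization prompt should fire."""
    today = today or date.today()
    # Compute completed years of age, adjusting if the birthday hasn't occurred yet this year.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    return age >= min_age_years

# Example: a 57-year-old patient at an encounter in May 2012 would trigger the prompt.
print(influenza_reminder_due(date(1955, 3, 2), today=date(2012, 5, 1)))
```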
Those who believe that activity should not be confused with results will not find comfort in the observation that for Stage 2 CMS does "not propose to require the provider to demonstrate actual improvement in performance on clinical quality measures" through using a CDS, although improvement should be the provider's goal.
This proposed rule has a 60-day comment period, after which CMS will cogitate on the comments received and publish a final rule with any revisions deemed appropriate. Affected professionals and hospitals will have to watch for this final rule.
Unrelated to MU and CMS, but an interesting example of the intended use of CDS, is the FDA's approval of a software-driven mammography pattern recognition system that analyzes mammography images and marks suspicious areas consistent with breast cancer for review by a radiologist. Of particular interest to me is that the approved method of use has the radiologist first read each image in the conventional manner, and then re-examine each region marked by the software analysis before making a decision about the image. The instructions for use, also posted by the FDA, include important warnings in this regard, including that the marked images do not have the level of detail of the original mammogram and that their only purpose is to provide a reference for the location of the auto-generated marks. Further, it is stated that the algorithm will not mark all regions that contain cancer and will mark regions that do not contain cancer. As a result, the presence of a mark only indicates that a radiologist should review the marked region again to avoid a potential oversight, and the absence of a mark should not dissuade a radiologist from investigating their own suspicious findings.
Thus the intended use clearly states that the automated read is not sufficiently reliable to be used as a primary analysis, or perhaps that the manufacturer doesn't want to take on the responsibility of primary analysis, or that the FDA would not accept a claim of primary analysis. It is also possible that radiologists are able to protect their image reading turf by discouraging reliance on automated systems. In any case, the result is a CDS system that is not primary, and cannot be totally relied on.
The next questions then are how it will actually be used and, to the degree that it may be relied on, how good is it? An associated question of interest is whether radiologists would choose not to follow up on a flagged region that they didn't think was suspicious, or would they always pursue the next steps, which may include biopsy. Since those next steps involve the potential for both physical and psychological adverse outcomes, pursuing that which doesn't need to be pursued is not without consequence.
The experiences of radiologists using the system would also be of interest. Even following the instructions, the radiologist may find that the system only finds what they have already found, doesn't find what they have already found, flags spurious things they hadn't marked, or really does find important things that they missed. Each possibility, and combinations of them, will inform how such systems actually come to be used, or not used.
A slide set from the April 3, 2012 webinar on “The Next Steps: An Overview of Meaningful Use Stage 2” from the National eHealth Collaborative has been made available at:
http://www.nationalehealth.org/ckfinder/userfiles/files/MU2%20PowerPoint%204_3.pdf
A recording of the webinar is said to be forthcoming.
Here is a somewhat more favorable assessment of CDSs (or CDSSs), also from AHRQ, with the overly positive headline: “AHRQ study shows clinical decision support systems are effective but research is needed to promote widespread use.” The actual conclusion from the Abstract is: “Conclusion: Both commercially and locally developed CDSSs are effective at improving health care process measures across diverse settings, but evidence for clinical, economic, workload, and efficiency outcomes remains sparse.”
http://www.ncbi.nlm.nih.gov/pubmed/22529043
While I believe some Clinical Decision Support rules are helpful, like drug-drug interaction checks, not all of them are simply time savers. I don’t want to see medicine turn into a machine where the patient is just put into a cycle because the technology says they should be doing xyz. An interaction checker just saves time and cuts down on possible oversights. That’s where the real value comes in: when the computer can do something automatically and instantly that used to be done by hand, or that a physician would otherwise have to memorize and keep up to date on.
EMRs Can Reduce ER CT Scans
May 14, 2012
“A new electronic medical record tool that tallies patients’ previous radiation exposure from CT scans helps reduce potentially unnecessary use of the tests among emergency room patients with abdominal pain, according to a study from researchers at the Perelman School of Medicine at the University of Pennsylvania, which was presented at the annual meeting of the Society for Academic Emergency Medicine. The new study shows that when the tool is in use, patients are 10 percent less likely to undergo a CT scan, without increasing the number of patients who are admitted to the hospital.”
http://www.surgicalproductsmag.com/scripts/ShowPR.asp?PUBCODE=0S0&ACCT=0000100&ISSUE=1205&RELTYPE=NWS&PRODCODE=0000&PRODLETT=DY&et_cid=2647415&et_rid=60850037&linkid=http%3a%2f%2fwww.surgicalproductsmag.com%2fscripts%2fShowPR.asp%3fPUBCODE%3d0S0%26ACCT%3d0000100%26ISSUE%3d1205%26RELTYPE%3dNWS%26PRODCODE%3d0000%26PRODLETT%3dDY
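A hypothetical sketch of the kind of tally such a tool might keep: sum prior CT exposures already recorded in the EMR and surface a note to the ordering physician at order entry. The dose figures and threshold below are illustrative assumptions of mine and do not reflect the Penn tool or study.

```python
# Hypothetical sketch of a cumulative CT dose tally shown at order entry.
# Prior exposures would come from the EMR; the values and threshold here
# are illustrative only.

prior_ct_doses_msv = [8.0, 10.0, 7.5]   # effective dose of prior CT scans, mSv
ADVISORY_THRESHOLD_MSV = 20.0           # arbitrary level at which to prompt review

cumulative = sum(prior_ct_doses_msv)
if cumulative >= ADVISORY_THRESHOLD_MSV:
    print(f"Prior CT exposure totals {cumulative:.1f} mSv across "
          f"{len(prior_ct_doses_msv)} scans; consider whether another CT is necessary.")
```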
The final rules for Stage 2 EHR meaningful use, including considerable discussion of CDS, were released on August 23, 2012. There are two parts. One is the CMS requirements for Meaningful Use. The other is the ONC requirements for EHR certification.
CMS - http://www.ofr.gov/OFRUpload/OFRData/2012-21050_PI.pdf
ONC - http://www.ofr.gov/OFRUpload/OFRData/2012-20982_PI.pdf
The NYTimes on 3/11 has an article entitled Computer Algorithms Rely Increasingly on Human Helpers. The thesis is that software has not (cannot?) captured and replicated human thoughtfulness and reasoning. It is noted that “the computers themselves are literal minded and context and nuance often elude them.”
http://www.nytimes.com/2013/03/11/technology/computer-algorithms-rely-increasingly-on-human-helpers.html?emc=eta1&_r=0