The AHRQ has released a report (available here) on the implementation of clinical decision support (CDS) software within the context of an EMR. This report reviews the work to date of two AHRQ demonstration grant recipients, Brigham and Women's Hospital and Yale University School of Medicine. In each of these projects the intent was, at least in part, to implement two or more existing practice guidelines as online, integrated components of the EMR.
In the context of these projects, CDS means the provision of clinical knowledge and patient-specific information to help make decisions that enhance patient care. While this kind of general statement remains somewhat vague as to what constitutes such help, the report comments further that in a CDS "the patient's information is matched to a clinical knowledge base, and patient-specific assessments or recommendations are then communicated effectively at appropriate times during patient care". As used here, then, CDS is more than just the effective presentation of integrated patient information, as might be done by a Medical Device Data System (as discussed here), for example. Instead it is knowledge based: the relevant knowledge is used to compare a patient to a predefined pattern in order to "suggest" or "advise" (or "tell") the clinician what course of treatment should be followed.
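To make that matching step concrete, here is a minimal sketch, in Python, of the kind of rule-based pattern matching the report describes. Everything in it is a hypothetical illustration: the rule functions, field names, and thresholds are invented for this example and are drawn from neither the AHRQ report nor any actual guideline.

    # Minimal sketch of knowledge-based CDS matching: a patient record is
    # compared against predefined rule patterns, and any matches yield
    # recommendations. All rules and thresholds here are hypothetical.

    def hba1c_rule(patient):
        """Fires when HbA1c exceeds an (illustrative) threshold."""
        if patient.get("hba1c") is not None and patient["hba1c"] > 8.0:
            return "Consider intensifying glycemic therapy."
        return None

    def renal_dosing_rule(patient):
        """Fires on low estimated GFR (illustrative cutoff)."""
        if patient.get("egfr") is not None and patient["egfr"] < 30:
            return "Review medication dosing for renal impairment."
        return None

    KNOWLEDGE_BASE = [hba1c_rule, renal_dosing_rule]

    def evaluate(patient):
        """Match one patient record against every rule in the knowledge base."""
        return [advice for rule in KNOWLEDGE_BASE
                if (advice := rule(patient)) is not None]

    print(evaluate({"hba1c": 9.1, "egfr": 25}))
    # ['Consider intensifying glycemic therapy.',
    #  'Review medication dosing for renal impairment.']

Even in this toy form, the knowledge base is simply expert judgment frozen into code, which is why the expert-system history below is so relevant.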
In this regard a CDS is, in older and somewhat forgotten terminology, an expert system. Introduced in the 1970s and popularized in the 1980s, the idea of an expert system was that the knowledge and expertise of one or more human experts could be captured and implemented as computer code. Once this code was written (and perhaps verified), a new situation within the domain of the expert system could be entered, and the expert system would then provide the same result as the original expert or experts. It was further believed that some expert systems could be written that could "learn," such that they actually became more expert than the original experts whose knowledge was tapped (by a knowledge engineer) in their creation. Of course such learning could only occur if the expert system was given controlled feedback along with having a coding scheme that was self-adjusting. Neural networks were one popular approach to such learning.
Many challenges were realized in the early development of expert systems, and these challenges are now being rediscovered in the attempts to develop CDSs. The first challenges are whose expertise it is that we want to capture, how expert they are, and how wide and deep the agreement is among several such experts. In this regard a student of mine once suggested that we title a paper "Expert System or One Guy's Opinion." Even given an expert or group of experts, it was then discovered that in many cases they couldn't say exactly how they made decisions, because the basis of their expertise was, at least in part, vague and uncertain. It was further discovered that, perhaps somewhat embarrassed by this lack of certainty, the expert would fabricate a methodology that wasn't what they actually did.
In the current CDS projects the challenge of the starting expertise is generally being addressed by using practice guidelines that are already established (e.g. the American Diabetes Association's Diabetes Management Standards of Care); that is, there already exists a set of knowledge (mostly in words) that can be used as the basis for the software. However, and perhaps not surprisingly, even existing practice guidelines have their challenges. One is how robust they are, especially with respect to unusual patients, and how the unusual patient is identified so that the CDS is not used for their case. This is the domain problem: who is in the domain in which the CDS works, and who is not in that domain, or perhaps not fully within it? Will such patients be clearly distinguishable as not being appropriate candidates for the conclusions reached by the CDS, and will this stop the CDS from providing its recommendations? Clinicians who understand this issue might be properly reluctant to follow the CDS recommendations. A related question is who is responsible if the CDS makes a recommendation that turns out to be wrong. It is of interest here that the CDS would be a product subject to both negligence and product liability law, whereas a human decision is subject only to negligence law.
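One way to picture the domain problem is as an explicit eligibility guard that runs before any rule is evaluated, so the CDS stays silent for patients outside the population the guideline was written for. The inclusion and exclusion criteria in this sketch are hypothetical; real guidelines rarely state their domain this cleanly, which is exactly the problem.

    # Sketch of an explicit domain guard: the CDS declines to make any
    # recommendation for patients outside the population the guideline
    # was written for. All criteria below are hypothetical illustrations.

    def in_domain(patient):
        """Return True only if the patient matches the guideline's population."""
        if patient.get("age") is None or not (18 <= patient["age"] <= 75):
            return False                 # outside the studied age range
        if patient.get("pregnant"):
            return False                 # excluded population
        if patient.get("type1_diabetes"):
            return False                 # hypothetical guideline covers type 2 only
        return True

    def example_rule(patient):
        """Hypothetical rule; fires on elevated HbA1c."""
        if patient.get("hba1c") is not None and patient["hba1c"] > 8.0:
            return "Consider intensifying glycemic therapy."
        return None

    def recommend(patient):
        """Only evaluate rules for in-domain patients; otherwise stay silent."""
        if not in_domain(patient):
            return ["Patient outside guideline domain; no CDS recommendation."]
        return [a for rule in (example_rule,) if (a := rule(patient))]

    print(recommend({"age": 45, "hba1c": 9.0}))  # rule fires
    print(recommend({"age": 16, "hba1c": 9.0}))  # guard suppresses advice

The design choice worth noticing is that the guard fails closed: an out-of-domain patient gets an explicit "no recommendation" rather than advice of unknown validity.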
Thus one barrier to CDS implementation identified in the AHRQ report is that clinicians other than the developers might be reluctant to follow the system's advice. And perhaps appropriately so. Interestingly, even clinicians participating in the development did not necessarily want to use the system that resulted.
A further problem identified is that "written guidelines do not allow for their direct translation to computable code". This occurs because code generally seeks certainty while written language may not, perhaps because certainty does not exist. The answer here is not to force people to create certainty. The imperative to code should not be used to make people agree on things that aren't necessarily correct, and with which they actually don't agree. A closely related issue is that "the technical translation of guidelines into executable code requires a high level of clinical...knowledge and experience". This is again in part because the original guideline language is not exact, probably because the underlying medical knowledge is not exact. In this regard it can be noted that the term evidence-based medicine does not mean proof-based medicine. The AHRQ report reaches a disturbing conclusion in this regard: that "guidelines should be specific, unambiguous and clear." At the least this needs the addition of "when possible."
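As an illustration of the translation problem, consider turning a deliberately hedged guideline sentence into executable logic. The guideline text and both thresholds below are invented for this example; the point is that they are choices the programmer is forced to make, not anything the original language actually specifies.

    # Hypothetical guideline text: "Consider intensifying therapy if
    # glycemic control is persistently inadequate." To execute, the code
    # must invent precision the sentence never had: a numeric cutoff, a
    # definition of "persistently", and a hard true/false boundary.

    PERSISTENCE_COUNT = 2   # arbitrary: how many readings is "persistently"?
    HBA1C_CUTOFF = 8.0      # arbitrary: where does "inadequate" begin?

    def persistently_inadequate(hba1c_history):
        """True if the last PERSISTENCE_COUNT readings all exceed the cutoff."""
        recent = hba1c_history[-PERSISTENCE_COUNT:]
        return (len(recent) == PERSISTENCE_COUNT
                and all(value > HBA1C_CUTOFF for value in recent))

    print(persistently_inadequate([7.5, 8.4, 8.9]))  # True
    print(persistently_inadequate([8.4, 7.9]))       # False: one reading below cutoff

Every constant in that sketch is a place where the coder has manufactured certainty the guideline authors deliberately withheld, which is precisely why such translation requires clinical knowledge and not just programming skill.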
Beyond these fundamental expertise questions, a number of other issues were raised by the two projects, including understanding the scope and requirements of the task, data capture, clear and effective management, plans for obtaining clinician buy-in, and workflow implications. The AHRQ report also concludes that these interventions take considerable time and effort, and that a lack of alignment with an organization's overall goals and incentives can adversely affect these projects. These observations are hardly surprising and are applicable to almost everything we do; the bigger a project's potential impact, the more this is the case.