It may be helpful to compare these new guidances with the pending MDDS rule, discussed here, in which the proposed rule defines an MDDS as Class I, the class with the lowest FDA scrutiny. Unlike MDDS, in the current case these CADe devices are not newly defined, although the FDA does acknowledge that the terminology may not be widely known or used. A CADe system is not an MDDS, because of the degree to which it analyzes medical device data, and it therefore does not share the MDDS's low classification.

The Federal Register posting defines CADe devices as “computerized systems that incorporate pattern recognition and data analysis capabilities (i.e., combine values, measurements or features extracted from the patient radiological data) intended to identify, mark, highlight, or in any other manner direct attention to portions of an image, or aspects of radiology data, that may reveal abnormalities during interpretation of patient radiology images or patient radiology device data by the intended user (i.e., a physician or other health care professional)”. As with the MDDS rule, it can be helpful to know what is excluded from the category as well as what is included. Here certain types of systems are defined not to be CADe. These include:

  • CADx devices (which) are computerized systems intended to provide information beyond identifying, marking, highlighting, or in any other manner directing attention to portions of an image, or aspects of radiology device data, that may reveal abnormalities during interpretation of patient radiology images or patient radiology device data by the clinician. CADx devices include those devices that are intended to provide an assessment of disease or other conditions in terms of the likelihood of the presence or absence of disease, or are intended to specify disease type (i.e., specific diagnosis or differential diagnosis), severity, stage, or intervention recommended. An example of such a device would be a computer algorithm designed both to identify and prompt lung nodules on CT exams and also to provide a probability score to the clinician for each potential lesion as additional information.
  •  Computer-triage devices (which) are computerized systems intended to in any way reduce or eliminate any aspect of clinical care currently provided by a clinician, such as a device for which the output indicates that a subset of patients (i.e., one or more patients in the target population) are normal and therefore do not require interpretation of their radiological data by a clinician. An example of this device is a prescreening computer scheme that identifies patients with normal MRI scans that do not require any review or diagnostic interpretation by a clinician.

A key issue in a 510(k) submission for such a system is therefore its clinical performance. The suggested elements of the 510(k) are algorithm design and function, algorithm training, image processing, features and feature selection, models and classifiers, databases, reference standard, scoring, and information on software-controlled devices as described in yet another guidance. In this regard the FDA notes that it considers the “level of concern” for CADe software to be moderate or major.

The FDA reminds us in the first guidance that “determinations of substantial equivalence will rest on whether the information submitted, including appropriate clinical or scientific data, demonstrate that the new or changed device is as safe and effective as the legally marketed predicate device and does not raise different questions of safety and effectiveness than the predicate device”. Yet in what might be a stretch of the 510(k) concept, the FDA further notes that “because each new CADe device represents a new implementation of software, FDA expects that each new CADe device (as well as software and other design, technology, or performance changes to an already cleared CADe device) will have different technological characteristics from the legally marketed predicate device even while sharing the same intended use”. The FDA recommends that “you measure and report the performance of your CADe device by itself, in the absence of any interaction with a clinician (i.e., standalone performance assessment)”. It is suggested that this testing include an appropriate study population, detection and localization accuracy, reproducibility testing, algorithm stability testing and training performance, and the inevitable “other”. Clinical performance testing including a live reader is also recommended, and the FDA states that it believes a standalone performance assessment without a clinical performance assessment (i.e., a reader study) will usually not be enough to demonstrate adequate performance. User training needs are also to be addressed.
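The standalone assessment the FDA describes amounts to scoring the algorithm's marks against a reference standard with no reader in the loop. As a purely illustrative sketch (the function name, the distance-based localization criterion, and the choice of metrics here are our own assumptions, not anything specified in the guidance), lesion-level sensitivity and false positives per image might be tallied like this:

```python
from math import hypot

def standalone_metrics(cases, hit_radius=10.0):
    """Score CADe marks against reference-standard lesion locations.

    cases: list of (truth_points, mark_points) per image, each a list of
    (x, y) coordinates. A mark "hits" a lesion if it falls within
    hit_radius pixels -- a simplified localization criterion; a real study
    would define hits per its stated reference standard and scoring rules.
    Returns (lesion-level sensitivity, false positives per image).
    """
    hits = total_lesions = false_positives = 0
    for truths, marks in cases:
        total_lesions += len(truths)
        matched = set()  # indices of lesions already credited with a hit
        for mx, my in marks:
            hit = None
            for i, (tx, ty) in enumerate(truths):
                if i not in matched and hypot(mx - tx, my - ty) <= hit_radius:
                    hit = i
                    break
            if hit is None:
                false_positives += 1
            else:
                matched.add(hit)
        hits += len(matched)
    sensitivity = hits / total_lesions if total_lesions else 0.0
    fps_per_image = false_positives / len(cases) if cases else 0.0
    return sensitivity, fps_per_image
```

Sweeping an operating point (e.g., the device's internal threshold) and re-running such a tally is what produces the FROC-style curves commonly reported for detection devices.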

A general question for all “decision support” software is the degree to which the clinical practitioner is to rely on the system to flag what needs to be flagged, such that a full independent reading is either not necessary or is not carried out even when it actually is necessary. While designers of such systems like to say the software is “just an aid”, and thereby in part try to avoid responsibility for the quality of the output, the likely reality is that it will be relied upon, and therefore a high level of performance should be assured. The FDA recognized this as-it-will-be-used perspective in the proposed MDDS rule, but here it seems to be suggesting that cautionary labeling will suffice. For example, the suggested standard Indications for Use language is “The device is intended to assist [target users] in their review of [patient/data characteristics] in the detection of [target disease/condition/abnormality] using [image type/technique and conditions of imaging]”, and a suggested Warning is “[target user] should not rely solely on the output identified by [device trade name], but should perform a full systematic review and interpretation of the entire patient dataset”.

Remember that these are currently draft guidance documents, and that even if and when they become official guidance documents they still are not regulations but only FDA recommendations and expectations. This leaves open the opportunity to submit a 510(k) that is contrary to the guidance and to assert that the guidance is just recommendations. Whether this approach would be fruitful, or whether what the FDA “expects” to see is what it will actually require, remains a judgment call. If you take this ignore-the-guidance approach, let us know if it works.