Legislation Seeks to Deregulate Medical Software
Introduced in the House back in October was the wittily named Sensible Oversight for Technology which Advances Regulatory Efficiency Act of 2013, which yields the acronym SOFTWARE. Not to be outdone in the creation of legislative acronyms, the Senate has now produced its own version, a bill entitled Preventing Regulatory Overreach To Enhance Care Technology, which of course gives us PROTECT. Both bills seek to define and sub-define medically related software, and then to take part of what they have defined away from the FDA and do something else with it that has not yet been clearly identified.
The premise of these bills is that the FDA inhibits entrepreneurs by peskily requiring, at least in some cases, that developers meet regulations intended to provide some measure of safety and efficacy before their products are used for or by the public. These issues arise in part because the definition of a medical device does not explicitly include or exclude software, which has allowed occasional debate about whether, and what kind of, software is or is not a medical device. The FDA’s position is simply to look at the function of the software and at the definition, and to say that if what the software does meets the definition, then it is a medical device. Debating this with the FDA is typically not a fruitful endeavor. Some other countries have explicitly included software, presumably to try to end the discussion. For example, the UK explicitly includes “software” in its list of the multiple categories of things that may be a medical device.
We have of course seen this theme played out before in the FDA’s regulation of medical apps, now manifested in the Mobile Medical Applications Guidance for Industry and FDA Staff and the associated mobile apps web presence. In that guidance the FDA has sought to define which kinds of apps are or are not medical devices and, among those that are, which ones it intends to leave alone under its enforcement discretion. It is worth remembering here that guidance documents are not regulations but instead reflect the FDA’s “current thinking”, and that the FDA can adjust its thinking on these matters relatively easily. Codifying such distinctions in legislation, on the other hand, is a far more rigid and potentially slower enterprise, and if bills such as SOFTWARE and PROTECT are ever passed it would be relatively hard in today’s environment to modify or undo what they do, should that prove necessary or desirable.
Both SOFTWARE and PROTECT offer definitions of “clinical software” and “health software” that seek to distinguish them from other types, notably software that directly drives what is generally understood to be a medical device. The SOFTWARE bill also defines “medical software” as software that (1)(A) is intended to be marketed to directly change the structure or any function of the body of man or other animals; or (B) is intended to be marketed for use by consumers and makes recommendations for clinical action that (i) includes the use of a drug, device, or procedure to cure or treat a disease or other condition without requiring the involvement of a health care provider; and (ii) if followed, would change the structure or any function of the body of man or other animals; (2) is not software whose primary purpose is integral to the functioning of a drug or device; and (3) is not a component of a device. Having provided this definition, the act removes such medical software from the definition of a device, but then assigns it to CDRH to be regulated in the same manner as a device. Exactly what this accomplishes eludes me.
Focusing now on the more recent PROTECT bill, we find clinical software defined as “decision support software or other software (including any associated hardware and process dependencies) intended for human or animal use that ‘‘(A) captures, analyzes, changes, or presents patient or population clinical data or information and may recommend courses of clinical action, but does not directly change the structure or any function of the body of man or other animals; and ‘‘(B) is intended to be marketed for use only by a health care provider in a health care setting”. One key factor here seems to be that the software does not directly affect the body, presumably meaning that any effect is mediated through the human healthcare provider. This falls into the general category of software whose errors are not of direct consequence because the clinician is supposed to catch them before they reach the patient. In standard risk management parlance this allows the severity and frequency of a hazard to be offset by its detectability, i.e., if the hazard manifests itself the clinician will prevent it from causing harm. Of course the theoretical ability of the clinician to second-guess the explicit or implied advice provided by the software is not the same as this actually occurring.
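The trade-off just described can be illustrated with a minimal sketch of FMEA-style risk scoring, in which a detectability rating offsets severity and occurrence. The 1–10 scales, the specific ratings, and the example numbers below are hypothetical illustrations for this post, not values drawn from the bills or from any FDA guidance.

```python
# Sketch of FMEA-style risk scoring: Risk Priority Number (RPN) =
# severity x occurrence x detection. A higher detection rating means the
# failure is HARDER to detect, so a clinician who reliably catches a
# software error lowers the detection rating and hence the overall RPN.
# All ratings and values here are hypothetical illustrations.

def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
    """Each rating is on a 1-10 scale; 10 is worst (most severe, most
    frequent, least detectable)."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be between 1 and 10")
    return severity * occurrence * detection

# A hazardous recommendation the clinician almost always catches ...
rpn_mediated = risk_priority_number(severity=8, occurrence=3, detection=2)
# ... versus the same hazard reaching the patient with no review.
rpn_direct = risk_priority_number(severity=8, occurrence=3, detection=9)

print(rpn_mediated)  # 48
print(rpn_direct)    # 216
```

The arithmetic makes the bills' implicit bet visible: the same underlying hazard scores far lower when a clinician is assumed to intercept it. Whether clinicians actually provide that detectability in practice is, as noted above, the open question.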
Health software is defined as “software (including any associated hardware and process dependencies) that is not clinical software and ‘‘(A) that captures, analyzes, changes, or presents patient or population clinical data or information; ‘‘(B) that supports administrative or operational aspects of health care and is not used in the direct delivery of patient care; or ‘‘(C) whose primary purpose is to act as a platform for a secondary software, to run or act as a mechanism for connectivity, or to store data.” I find this definition more cryptic than the one above. A key exclusion here appears to be not used in the direct delivery of patient care, although this raises the question of how remote from patient care software has to be in order to qualify.
Both clinical and health software are further constrained by the limitation that they do “not include software ‘‘(A) that is intended to interpret patient-specific device data and directly diagnose a patient or user without the intervention of a health care provider; ‘‘(B) that conducts analysis of radiological or imaging data in order to provide patient-specific diagnostic and treatment advice to a health care provider; ‘‘(C) whose primary purpose is integral to the function of a drug or device; or ‘‘(D) that is a component of a device.” Clause A would appear to exclude patient-used apps that provide diagnostic information based on information that the patient provides either manually or otherwise. The imaging limitation is interesting because it seems to distinguish imaging clinical decision support from other kinds of such support, implying that there is greater risk in the former than the latter. I do not know if there is any evidence for this.
Given all this, the PROTECT bill then removes clinical and health software from the purview of the FDA. One thing to be analyzed here is whether the new definitions draw clear and unequivocal lines within the medical software space. The answer is probably no. Second, even if they did, does deregulating what is defined here as to be deregulated serve the public health with respect to a reasonable assurance of well-designed and reliable software? Assuming that FDA regulation has merit for medical devices in general, these exclusions are not likely to be fully rational, or to improve upon the regulatory flexibility that the FDA currently has and exercises.
Whether anything more is heard from these bills remains to be seen, especially since the current Congress has not demonstrated much ability to consider and pass legislation.
Pictured with this post is U.S. Representative Marsha Blackburn, sponsor of the SOFTWARE Act.