Recently the FDA released a discussion paper on a Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD).

For starters, this paper captures several familiar themes. One is that standalone software can itself be a medical device (SaMD) and therefore subject to FDA regulation. In the Clinical Decision Support space of standalone software, this captures systems built by machine learning (ML), as opposed to systems that are rule-based or otherwise algorithmic. One way to appreciate the distinction is that for algorithmic systems the user can, in principle, access and therefore check the logic the system uses to reach its conclusions; whether such checking actually occurs is a separate matter. ML-based systems, on the other hand, are generally derived from the automated review of large data sets, producing a mostly "black box" process: patient data in, answer (diagnosis, advice, suggestion, etc.) out. There is essentially no underlying knowledge here (making "learning" perhaps the wrong word) and nothing that can reasonably be duplicated by the user.
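The distinction can be made concrete with a small sketch. The clinical criteria, function names, and weights below are invented for illustration only (they are not from the FDA paper or any real device): the rule-based function exposes logic a clinician could audit, while the learned model exposes only numeric weights produced by training.

```python
# Hypothetical illustration: the same clinical question answered by an
# inspectable rule versus an opaque learned model. All names and numbers
# here are made up for illustration.

def rule_based_sepsis_flag(temp_c: float, heart_rate: int, wbc: float) -> bool:
    """Rule-based CDS: the logic is explicit, so a user can check it."""
    # Simplified SIRS-style criteria, chosen only for illustration.
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,
        heart_rate > 90,
        wbc > 12.0 or wbc < 4.0,
    ]
    return sum(criteria) >= 2

class LearnedSepsisModel:
    """ML-based CDS: behavior is encoded in trained weights, not rules.

    The user sees only inputs in and an answer out; the "reasoning"
    cannot be read off the weights or independently duplicated.
    """
    def __init__(self, weights, bias):
        self.weights = weights   # produced by automated training, not by hand
        self.bias = bias

    def predict(self, features) -> bool:
        score = sum(w * x for w, x in zip(self.weights, features)) + self.bias
        return score > 0.0

# The rule's answer can be traced to its criteria; the model's cannot.
print(rule_based_sepsis_flag(38.5, 102, 13.0))   # True: 3 of 3 criteria met
model = LearnedSepsisModel(weights=[0.8, 0.02, 0.1], bias=-3.0)
print(model.predict([38.5, 102, 13.0]))          # True, but why? Ask the weights.
```

The point of the sketch is not the toy arithmetic but the asymmetry: the first function's threshold logic is its documentation, while the second's weights carry no human-readable rationale.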

Once an ML product is cleared or approved by the FDA, it may become necessary for the vendor to revise the system, either to correct errors or to incorporate newly available data. The regulatory issues associated with modifying a medical device are well known, with the appropriate degree of further regulatory scrutiny depending in part on the potential impact of the modification on device performance. Software modifications in general are the subject of a 2017 Guidance Document.

Note that the ML issue here is not one of self-learning in the field. While such local learning is theoretically possible, it requires that the system be able to receive local feedback that it produced a bad answer. At a minimum this requires users to provide such information, and the system to have a way to receive and act on it. If a system could do this it would not only drift from its design point, but drift differently in different settings. If all of the systems were interconnected this might be manageable, but such products are rare if they exist at all. Instead, ML systems are generally "locked" and can only be modified by the vendor (hacking aside).
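What "locked" means can be sketched in a few lines. This is a minimal illustration under my own assumptions, not any real device's update mechanism: the model rejects field feedback outright, and accepts new parameters only when they match a checksum the vendor would publish through its controlled change process.

```python
# Hypothetical sketch of a "locked" ML model: parameters change only via a
# vendor-issued update, never from local feedback in the field. The class
# and method names are invented for illustration.

import hashlib

class LockedModel:
    def __init__(self, weights, vendor_checksum):
        self._weights = weights
        self._vendor_checksum = vendor_checksum  # published by the vendor

    def predict(self, features):
        # Inference is allowed; the weights themselves never change locally.
        return sum(w * x for w, x in zip(self._weights, features))

    def learn_from_feedback(self, features, label):
        # A locked device has no pathway for field learning at all.
        raise PermissionError("Locked model: field updates are not permitted")

    def apply_vendor_update(self, new_weights, checksum):
        # Accept new parameters only if they match the vendor's checksum,
        # i.e. the change came through the vendor's controlled process.
        digest = hashlib.sha256(repr(new_weights).encode()).hexdigest()
        if digest != checksum:
            raise ValueError("Update rejected: not a verified vendor release")
        self._weights = new_weights
        self._vendor_checksum = checksum
```

The design choice mirrors the regulatory picture: every deployed copy stays at its cleared design point until the vendor ships a verified revision, so there is no site-to-site drift.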

When the vendor is going to change an ML product, the FDA is suggesting that the change process itself can enable improved, and perhaps faster, review. This is consistent with the FDA's long-standing Quality Systems approach and the current SaMD Precertification Pilot.

For ML the FDA defines three types of changes: those related to performance, with no change to the intended use or input type; those related to inputs, with no change to the intended use; and those related to the intended use itself. The FDA has also defined two key but somewhat cryptic elements of a pre-planned change process. One is the SaMD Pre-Specifications (SPS), in which the manufacturer draws a “region of potential changes” around the initial specifications and labeling of the original device. The second is an Algorithm Change Protocol (ACP), a step-by-step delineation of the data and procedures to be followed so that the modification achieves its goals and the device remains safe and effective after the modification.
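One way to picture how SPS and ACP would work together is as an envelope plus a gate. The sketch below is my own reading, with invented field names and thresholds (the FDA defines no such API): the SPS records the pre-approved region, and an ACP-style check passes a proposed modification only if it stays inside that region and leaves the intended use untouched.

```python
# Hypothetical sketch of a pre-planned change process. Field names and
# numbers are illustrative assumptions, not FDA-defined structures.

from dataclasses import dataclass

@dataclass(frozen=True)
class PreSpecifications:
    """SPS: the 'region of potential changes' drawn around the cleared device."""
    min_sensitivity: float
    min_specificity: float
    allowed_inputs: frozenset  # input types pre-specified at clearance

@dataclass(frozen=True)
class ProposedChange:
    """A candidate modification, characterized after retraining/validation."""
    sensitivity: float
    specificity: float
    inputs: frozenset
    intended_use_changed: bool

def within_sps(change: ProposedChange, sps: PreSpecifications) -> bool:
    """ACP-style gate: the modification must keep performance inside the
    pre-approved envelope, use only pre-specified input types, and leave
    the intended use untouched. Anything outside the envelope would need
    a new regulatory submission rather than the streamlined pathway."""
    return (
        not change.intended_use_changed
        and change.sensitivity >= sps.min_sensitivity
        and change.specificity >= sps.min_specificity
        and change.inputs <= sps.allowed_inputs
    )
```

In this picture the FDA's three change types map naturally onto the gate: performance changes test the thresholds, input changes test the allowed-inputs set, and intended-use changes fail the gate outright.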

Preapproved SPS and ACP would enable a somewhat streamlined regulatory pathway, paralleling the existing means by which decisions must be made about whether or not medical device modifications require new FDA scrutiny. Under the 510(k) model, some levels of change can be accomplished with internal documentation only.

In addition, the changes themselves should be managed under Good Machine Learning Practices (GMLP) where applicable, even though the FDA does not elaborate here on what constitutes GMLP.

In a nutshell, the thinking here is that effective change requires a predefined (and approved) plan, as opposed to just charging ahead.