Medical devices that contain software, or are software, are subject to the well-known medical device Classes I, II, and III, listed in order of increasing risk and correspondingly increasing degree of FDA scrutiny before marketing. In addition to classification, for Software as a Medical Device (SaMD) the FDA has suggested, via a Guidance Document (as discussed here), that there are four categories depending on the state of the healthcare situation or condition and on the significance of the information the software provides to a healthcare decision.

In addition to this ranking of the importance of the device, the FDA’s Software Pre-Certification Pilot (discussed here) is exploring the rating of companies along with the software they produce. Companies could be Level 1 (not quite excellent) or Level 2 (more-or-less excellent). Level 1 would allow no or streamlined device review only for lower risk software, while Level 2 would allow limited review for moderate risk software. Moderate risk is as high as pre-cert is going.

In addition to classification and rating of devices, and rating of companies (or business units), the FDA has now brought us two levels (here called tiers) for cybersecurity considerations, as explained in the October 18, 2018 Draft Guidance Document “Content of Premarket Submissions for Management of Cybersecurity in Medical Devices.”

These tiers are Tier 1 - Higher Cybersecurity Risk and Tier 2 - Standard Cybersecurity Risk. Tier 1 is defined by two criteria, both of which must be met:

- The device is capable of connecting (e.g., wired, wirelessly) to another medical or non-medical product, or to a network, or to the Internet

- A cybersecurity incident affecting the device could directly result in patient harm to multiple patients

Tier 2 is simply not Tier 1.
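To make the two-part test concrete, the following is a minimal sketch of the tier determination as a simple decision rule. The function name and its boolean inputs are hypothetical, not anything the draft prescribes; the sketch simply encodes the reading that a device is Tier 1 only when both criteria are satisfied.

```python
def cybersecurity_tier(connects_to_product_network_or_internet: bool,
                       incident_could_directly_harm_multiple_patients: bool) -> str:
    """Hypothetical illustration of the draft's two-part Tier 1 test.

    A device is Tier 1 (Higher Cybersecurity Risk) when it is capable of
    connecting (wired or wirelessly) to another product, a network, or the
    Internet AND a cybersecurity incident affecting it could directly result
    in patient harm to multiple patients; everything else is Tier 2.
    """
    if (connects_to_product_network_or_internet
            and incident_could_directly_harm_multiple_patients):
        return "Tier 1 - Higher Cybersecurity Risk"
    return "Tier 2 - Standard Cybersecurity Risk"


# A connected device whose compromise could directly harm many patients:
print(cybersecurity_tier(True, True))   # Tier 1 - Higher Cybersecurity Risk

# A connected device whose compromise could not directly harm patients:
print(cybersecurity_tier(True, False))  # Tier 2 - Standard Cybersecurity Risk
```

In practice, of course, both inputs require judgment, which is where the gray areas come in.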

The first criterion of Tier 1 goes to the heart of connectivity: if there is any connectivity, the clause is met. As always in guidance and regulation, there could be some gray areas. One is a single medical device that has more than one part, where those parts communicate only with each other. In that case “another medical device” might not be triggered, but a network or internet connection might still be in play. Also not expressly captured is the risk of external tampering with a device that has no intentional connectivity.

The second criterion of Tier 1 requires some more fleshing out. “Patient harm” is defined in the draft as physical injury or damage to the health of patients, with the explanation that cybersecurity exploits may pose a risk to health and may result in patient harm. What is not further defined is “directly.” Does this mean that the exploited device itself must harm the patient, or does it include scenarios in which the exploited device triggers additional device or system failures that in turn cause patient harm? The restriction to “multiple” patients is also of interest. Does this mean that a single exploit must affect multiple patients all at once, or does it mean that if the exploit were repeated there could be harm in each case?

The draft also defines “trustworthy device,” which is interesting because one might have thought that all legally marketed medical devices ought to be trustworthy. Here a trustworthy device is one containing hardware, software, and/or programmable logic that: (1) is reasonably secure from cybersecurity intrusion and misuse; (2) provides a reasonable level of availability, reliability, and correct operation; (3) is reasonably suited to performing its intended functions; and (4) adheres to generally accepted security procedures.

A curiosity here is that this definition seems to exclude non-electronic devices that inherently have no cybersecurity issues. This reminded me of the Y2K panic, during which lists were generated of devices that did or did not have a software issue associated with the year rolling over to 2000. One item that made the list was a medical device consisting of a liquid in a jar. That device did not have a Y2K bug. While 2000 is long gone, data capture issues do arise every time we change from standard time to daylight saving time, or back.

It is also of interest that the draft defines a list of 14 types of cyber information that should be included as part of device “labeling.” In brief, these are:

  1. Recommended cybersecurity controls appropriate for the intended use environment
  2. A description of the device features that protect critical functionality even when cybersecurity has been compromised
  3. A description of backup and restore features and procedures
  4. Specific guidance to users regarding infrastructure requirements
  5. A description of how the device is or can be hardened using secure configuration
  6. A list of network port and other interface functionality
  7. A description of procedures for downloading software and firmware from the manufacturer
  8. A description of how the design enables the device to announce anomalous behavior
  9. A description of how forensic evidence is captured
  10. A description of the methods for retention and recovery of device configuration
  11. System diagrams for end-users
  12. A Cybersecurity Bill of Materials (CBOM) including a list of software and hardware components that enables users to understand and manage the potential impact of identified vulnerabilities and to deploy countermeasures (an illustrative sketch follows this list)
  13. Technical instructions to permit secure network deployment and servicing, and instructions on how to respond to cyber incidents
  14. Information, if known, concerning cybersecurity end of support dates
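The draft does not prescribe a format for the CBOM called for in item 12, so the following is only a sketch of how such a bill of materials might be represented and used. Every component name, version, and vulnerability identifier here is hypothetical, purely to illustrate how a user could map identified vulnerabilities onto a device’s components.

```python
# Hypothetical CBOM for an imaginary device; component names and versions are illustrative only.
cbom = [
    {"component": "ExampleRTOS kernel", "type": "software", "version": "4.2.1"},
    {"component": "ExampleTLS library", "type": "software", "version": "1.0.9"},
    {"component": "ExampleWiFi module", "type": "hardware", "version": "rev B"},
]

# Hypothetical list of identified vulnerabilities, keyed by component and version.
identified_vulnerabilities = [
    {"component": "ExampleTLS library", "version": "1.0.9", "id": "EXAMPLE-2018-0001"},
]


def affected_components(bill_of_materials, vulnerabilities):
    """Return CBOM entries matching an identified vulnerability, so a user can
    assess potential impact and deploy countermeasures."""
    matches = []
    for entry in bill_of_materials:
        for vuln in vulnerabilities:
            if (entry["component"] == vuln["component"]
                    and entry["version"] == vuln["version"]):
                matches.append((entry, vuln["id"]))
    return matches


for entry, vuln_id in affected_components(cbom, identified_vulnerabilities):
    print(f"{entry['component']} {entry['version']} is affected by {vuln_id}")
```

Even a simple listing like this presupposes someone on the user side who can read it and act on it, which leads to the observation below.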

We can note here that this information is extensive and highly technical, and that as a result it is aimed at IT resources, or at least IT-savvy individuals. Clearly, clinical users are unlikely to be the ones reading, understanding, or using this information. While big hospitals may have access to such resources, smaller hospitals, clinics, non-hospital practices, and patients will continue to be black-box consumers of technology that may have cyber issues.