Some of us remember when distributed medical devices such as patient monitors had their own dedicated server. That server had one job: to support that manufacturer's medical devices. It was not connected to the enterprise network, in part because there was no enterprise network. It did not use wireless communication, although there were instances in which it might be susceptible to rogue wireless input. Even the manufacturer could not talk to it remotely. Other functions were not supported by this server. For example, the hospital's vending machines were on their own.
Only a limited number of people had physical access to these early medical device systems, and updating the server required hands-on work by dedicated, trained personnel. (Were updates as common then?) Peripheral people could not cause havoc by downloading malicious software to an unrelated system. Ransomware was not yet upon us because downloading it wasn't likely. Spying and data gathering were not an issue. If there was a server problem, it was self-contained to the system being supported. This could be locally problematic but was ultimately limited.
We should note that in a recent municipal ransomware attack in Florida the police and fire systems were not affected because they were physically separate from the main city computer system. We are not told if this was a wise cybersecurity decision or just happenstance, but it turned out to be a good thing. An open question in this regard is whether virtual segmentation, which is software controlled, is itself vulnerable to attack.
Cybersecurity has often been described as an ongoing, and probably endless, battle between the citizenry and the criminals. There are no connected systems that are not hackable, nor are there likely to be. New designs at best address old threats. Personnel training to avoid more-or-less obvious threats appears to be ineffective.
There are at least three malicious reasons to hack a medical device: to capture the data it contains, to use the device to gain access to the network, and to manipulate or disrupt the device's function. Vulnerability reports have largely focused on the last of these, although actual instances of adverse events have not been reported.
We are missing a key set of questions in pursuing universal connectivity. These include: Why should this device be connected to the network, and in turn, to the world? If connected, does it have to be continuous, or can I connect only when necessary and physically disconnect otherwise? If connected, what can the vendor tell me about their cybersecurity efforts and maintenance, and is what they've told me convincing and useful? In this regard, FDA is increasingly asking about cybersecurity efforts before devices are marketed, but this screen is inherently limited in scope.
There is a current FDA Draft Guidance Document on Content of Premarket Submissions for Management of Cybersecurity in Medical Devices that sets out recommendations, if not requirements, for device manufacturers. This document separates devices into two tiers. A device is Tier 1 if it is capable of connecting (e.g., wired, wirelessly) to another medical or non-medical product, or to a network, or to the Internet, and a cybersecurity incident affecting the device could directly result in harm to multiple patients. This definition has at least two flaws. One is that it appears that connectivity must be intentional, ignoring the possibility of a wireless vulnerability that is not part of the intended functionality of the device, or a device that allows offline connectivity such as via a USB port. Also, the multiple-patients proviso is curious. Does this mean all at once, or one at a time? In any case, if the device isn't Tier 1, then it is Tier 2, which allows a risk-based justification for omitting most of the provisions.
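As a rough illustration, the Draft Guidance's two-part test reduces to a simple conjunction: a device is Tier 1 only if it is both connectable and capable of multi-patient harm. A minimal sketch, with hypothetical names that are not part of the guidance:

```python
# Illustrative sketch of the Tier 1 test described in the FDA Draft Guidance.
# The class and field names here are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Device:
    # Can the device connect (wired or wirelessly) to another product,
    # a network, or the Internet?
    can_connect: bool
    # Could a cybersecurity incident directly result in harm to
    # multiple patients?
    multi_patient_harm: bool

def tier(device: Device) -> int:
    """Return 1 if the device meets both Tier 1 criteria, else 2."""
    return 1 if (device.can_connect and device.multi_patient_harm) else 2

print(tier(Device(can_connect=True, multi_patient_harm=True)))   # 1
print(tier(Device(can_connect=True, multi_patient_harm=False)))  # 2
```

Note that, as the article observes, a purely boolean `can_connect` flag hides the hard cases: unintended wireless susceptibility or a USB port would make this field ambiguous in practice.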
The Draft Guidance calls for using the NIST framework for creating a “trustworthy” device. The definition of trustworthy is “containing hardware, software, and/or programmable logic that: (1) is reasonably secure from cybersecurity intrusion and misuse; (2) provides a reasonable level of availability, reliability, and correct operation; (3) is reasonably suited to performing its intended functions; and (4) adheres to generally accepted security procedures.” Of course, the term “reasonable” suggests that full assurance cannot be achieved.
In addition to good design principles, a device must be properly (and extensively) labeled. The labeling should include: recommended cybersecurity controls appropriate for the intended use environment; a description of the device features that protect critical functionality, even when the device's cybersecurity has been compromised; a description of backup and restore features and procedures to regain configurations; specific guidance to users regarding supporting infrastructure requirements; a description of how the device is or can be hardened using secure configuration; and a list of network ports and other interfaces that are expected to receive and/or send data.
The labeling should also include: a description of systematic procedures for authorized users to download version-identifiable software and firmware from the manufacturer; a description of how the design enables the device to announce when anomalous conditions are detected; a description of how forensic evidence is captured, including any log files kept for a security event; a description of the methods for retention and recovery of device configuration by an authenticated privileged user; a list of commercial, open source, and off-the-shelf software and hardware components, so that device users (including patients, providers, and healthcare delivery organizations) can effectively manage their assets; instructions to permit secure network deployment and servicing; and instructions for users on how to respond upon detection of a cybersecurity vulnerability or incident. End-of-support information may also be included.
This set of informational requirements is potentially daunting, and it raises the questions of whether it will really be useful and whether users will really understand and incorporate it. And even if the answers are yes, we are still left with “reasonable” and an evolving threat environment.
One further note on these cybersecurity requirements: for Class II devices, they upend the idea that a device has to be only as safe and effective as its predicate. Under this Draft Guidance, a device has to be better than its predicate if the predicate was not designed and cleared under these recommendations.
There is also a Final Guidance Document on Postmarket Management of Cybersecurity in Medical Devices. This focuses on the need for ongoing attention to threats, potential patient harm and risk mitigation. This guidance applies to any marketed and distributed medical device including: 1) medical devices that contain software (including firmware) or programmable logic; and 2) software that is a medical device (SaMD), including mobile medical applications. Responsibility for ongoing assessment and appropriate responses is based on the Quality System Regulation, including complaint handling, quality audit, corrective and preventive action, software validation and risk analysis, and servicing. FDA also cites the shared responsibility of manufacturers and customers.
The Internet of Things (IoT), or Medical IoT, or Internet of Everything is upon us for better or worse, although some have bought in more than others. Our ability to deal with this level of connectivity safely and securely is behind the development and deployment curve.
Manufacturers offer weakly protected devices, and institutions haven't met the challenges. This may not mean turning everything off, but it at least means we should be more rational about what is connected and why, what must be done to keep it reasonably safe, and how to respond when attacked. This may require the use of external resources that have the skill and experience to undertake these challenges.