In 1987, James Gleick published Chaos, a layman's account of systems that display chaotic behavior. In one of the last chapters, titled Inner Rhythms, he described the then-latest research into physiological systems and the seemingly oxymoronic idea that variability in a system's response indicates health rather than illness, particularly with regard to heart rate variability. These systems exhibited characteristics of nonlinear dynamics, and physiologists 'began to see chaos as health.' That book, which I read in 1988, and in particular that chapter, re-ignited my desire to become a biomedical engineer, and I subsequently left the active Air Force, where I worked in space systems, to start a new career in clinical engineering. While my second career did not involve research like that discussed in the book, I remained interested in how the analysis of data measured by medical devices could provide better indications of the underlying system's 'health and wellness', a term we had used for satellite system status derived from telemetry.

Fast forward 30 years, and I am at the Association for the Advancement of Medical Instrumentation (AAMI) conference in Austin. One of the featured keynote speakers was Dr J. Randall Moorman, a practicing cardiologist and professor of medicine, biomedical engineering, and molecular physiology and biological physics at the University of Virginia, who spoke about the impact of 'big data' on medical devices and gave an overview of the CoMET product. First, I had yet to meet someone with my last name outside of my direct relatives. Second, he presented the operational implementation of the ideas promulgated in Gleick's book. I was fortunate enough to speak briefly with Dr Moorman and his chief scientist, Matthew Clark, PhD, of Advanced Medical Predictive Devices, Diagnoses and Displays (AMP3D), and they agreed to be interviewed. Over the last ten years, Dr Moorman and Dr Clark have been working on the research and development of data-driven predictive medical devices that assist clinicians in administering care. This blog post can be considered a continuation of the interview series, showing the manifestation of a 'virtual medical device' as described in my interview with Tracy Rausch of DocBox.

Bridget: Can you briefly tell me how you got involved in this area of data-driven virtual medical device development and production?

Dr Moorman: In 1985 there was an influential paper published that reviewed adults who had suffered heart attacks and sorted them based on heart rate variability. The result was that clinicians could tell, from the standard deviation of the heart rate, who was more likely to die after a heart attack. At that time, I became interested, from a signal processing perspective, in how to analyze time series and random processes. The underlying idea is that a dynamical system has an invariant measure called entropy, which can be thought of as the unpredictability of a system state, or the average information content of a system state. By defining physiological 'phenotypes' composed of a specific set of physiological variables (for example: heart rate, beat-to-beat variability, certain lab values) and system states for the underlying physiological systems, we can measure the entropy of that system and in effect build a probabilistic model of how the dynamical system will behave.
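A common way to quantify this unpredictability in physiological time series is sample entropy. Purely as an illustration of the concept (not the actual CoMET computation), here is a minimal Python sketch using conventional defaults (template length m = 2, tolerance r = 0.2 × SD) on synthetic series; a regular rhythm should score lower than an irregular one:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts pairs of length-m templates
    within tolerance r (Chebyshev distance) and A counts the same for
    length m + 1. Self-matches are excluded. Lower = more predictable."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()  # tolerance scaled to the series' own variability
    def count_pairs(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        total = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            total += int(np.sum(dist < tol))
        return total
    B = count_pairs(m)
    A = count_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 300))  # predictable oscillation
irregular = rng.normal(size=300)                   # white noise
print(sample_entropy(regular) < sample_entropy(irregular))  # True
```

Lower entropy means the next state is more predictable from the recent past; the counterintuitive finding in the heart rate variability literature is that abnormally low variability and complexity are the warning sign.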

We noticed, in an analogous sense, that the symptoms of a decrease in heart rate variability together with decelerations, such as those seen in fetal strips taken during labor as an indication of hypoxemia, could also be 'phenotyped' to indicate inflammation that is not caused by global hypoxemia. Premature infants are vulnerable to sepsis, a bacterial infection in which an infant can seem physiologically normal and then decline so rapidly that it is too late to intervene for a positive outcome. We thought it would be beneficial to have a model or monitoring system that could provide an earlier leading indication that sepsis is highly probable, with enough time for proper intervention, so we studied the onset of sepsis in premature infants. We determined that the inflammation 'phenotype' described above is a leading indicator of sepsis and that it could be modeled and used to detect sepsis earlier. We have since added other variables to our models.

We have worked for over 15 years, starting in the NICU, on the idea of converting data into knowledge that can assist in clinical matters. We have built the HeRO and CoMET products to demonstrate that idea. They are based upon the use of time series models and the measurement of entropy in those models.

Bridget: What do you believe your role is in this area?

Dr Moorman: I have seen two distinct groups of people involved in this area of informatics. Clinicians have good questions and understand the nuances and possible applications of informatics in the delivery of healthcare, but they lack the skills and tools that mathematicians, physicists and computer scientists have. The mathematicians, physicists and computer scientists, in turn, have the skills and understand the underlying tools, but don't fully understand the kinds of questions to ask or the operational and implementation aspects of using informatics in the delivery of healthcare. Through my educational path and experience, I have managed to be literate in both medicine and informatics. I see myself as providing a bridge between the science of informatics and its application in the delivery of healthcare.

Bridget: What do you see in the future?

Dr Moorman: I have always believed in supervised 'learning' for models and/or hypothesis-driven models. However, we just finished, and will be publishing results from, an unsupervised learning approach for classifying heart rate time series for atrial fibrillation, i.e., development of the coefficients for our entropy models. We were able to achieve similar accuracy with the unsupervised learning approach using 35 beats. However, we still don't know the method by which that model was derived, as the learning was unsupervised, so we are cautious about an outright endorsement of that approach. Fortunately, we have baseline models built with the supervised approach and can make comparisons to further study the differences between the learning approaches for model development.
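The interview does not detail the unsupervised method, so the following is only a generic sketch of the idea: compute simple variability features over 35-beat RR-interval windows and let a clustering step (here, a hand-rolled 2-means on synthetic data) separate regular from irregular rhythms without using any labels. All data and parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 35-beat RR-interval windows (seconds): sinus-like (low variability)
# versus AF-like (highly irregular). Illustrative data only, not clinical.
sinus = 0.8 + 0.02 * rng.normal(size=(50, 35))
afib = 0.8 + 0.20 * rng.normal(size=(50, 35))
windows = np.vstack([sinus, afib])
labels_true = np.array([0] * 50 + [1] * 50)

# Two simple variability features per 35-beat window
feats = np.column_stack([
    windows.std(axis=1),                            # overall RR variability
    np.abs(np.diff(windows, axis=1)).mean(axis=1),  # mean beat-to-beat change
])

# Minimal 2-means clustering; no labels are used (unsupervised)
centers = feats[[feats[:, 0].argmin(), feats[:, 0].argmax()]]
for _ in range(20):
    assign = np.argmin(((feats[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([feats[assign == k].mean(axis=0) for k in range(2)])

# Check, after the fact, how well the clusters recover the two rhythm types
agreement = max((assign == labels_true).mean(), (assign != labels_true).mean())
print(f"cluster/rhythm agreement: {agreement:.2f}")
```

The caution Dr Moorman describes shows up here too: the clustering recovers the structure, but nothing in the procedure explains *why* a given window landed in a given cluster.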

The other challenge for the future is implementing or injecting a new kind of information source or technology into clinical practice. Technology has changed nearly everything in the practice of medicine, and especially cardiology (except the use of direct current for cardioversion), since I graduated from medical school nearly 40 years ago. It is necessary for clinicians to understand the information and technology, to use it, and to have it presented so that it is useful. We started our journey in the NICU with similar information and technology; however, it has been more difficult to gain traction in the adult units. There is a need to own the workflow in the way the EHR currently does. Implementation of our product can be done in support of the learning healthcare system, as CoMET and its derivatives demonstrate the ability to take new technology and information sources, tune them, and then optimize them.

Bridget: Can you briefly describe the CoMET (Continuous Monitoring of Event Trajectories) product?

Dr Clark: CoMET and its predecessor HeRO (Heart Rate Observation) are monitors that depict a measure of a patient's well-being, identifying high-risk patients and giving an indication of a physiological function that is going awry. Essentially, it is a temporal indicator of a set of complex variables that have been shown to represent the trend of a patient's status with regard to specific high-impact, low-probability critical clinical events. Currently our display has a respiratory-based indication on the horizontal axis and a cardiovascular-based indication on the vertical axis (see the display screen sample in the figure above). The points on the display are based on the predictive model's indication of that particular physiological function. It takes into account all of the relevant sensor data points (heart rate, respiration rate, temperature, etc.) and depicts a physiological function status and trend. It is a virtual medical device, derived from the raw sensor data, whose display status depends on the disease state and how it is affecting the different physiological systems and functions.

There are no alarms; the display design is intended to nudge a clinician's attention toward starting rounds on a specific patient, or to highlight where to focus attention when administering care to a patient. We analyze the time series of several fairly easily obtainable signals (EKG, chest impedance (CI) and pulse oximeter waveforms, together with vital signs such as heart rate, oxygen saturation (SpO2) and respiration rate), build a predictive model based on variability calculations, for example the beat-to-beat variability of heart rate, and display that prediction and its trajectory on a monitor for clinicians to use in their work.
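Two textbook beat-to-beat variability measures are SDNN (standard deviation of the RR intervals) and RMSSD (root mean square of successive differences). The snippet below is a generic illustration with made-up RR intervals, not CoMET's actual feature set:

```python
import numpy as np

# Hypothetical RR intervals in milliseconds (time between successive beats)
rr = np.array([812, 798, 840, 775, 830, 805, 790, 845, 800, 820], dtype=float)

sdnn = rr.std(ddof=1)                       # overall variability (SDNN)
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # beat-to-beat variability (RMSSD)
mean_hr = 60_000.0 / rr.mean()              # mean heart rate in beats/min

print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, HR = {mean_hr:.1f} bpm")
```

Note that both measures need the individual beat times: two patients with the same mean heart rate can have very different SDNN and RMSSD.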

Bridget: One of the things that struck me about your ‘monitor’ was the user interface design. It is not ‘engineering-centric.’ In addition, it looks to me like it functions as a ‘soft’ form of clinical decision support.

Dr Clark: Yes, we worked with user interface experts to design the interface to depict a projection of the current patient state over time in a three-dimensional space. This led to the comet-tail-like indicators for the direction and extent of patient state movement, the axes depicting physiological system indicators, and the colors indicating severity. As above, we did not want to be an alarming monitor: we want to inform the clinician of the patient state and then rely on the fact that they know how to take care of the patient.

Bridget: Tell me more about the premise behind your products.

Dr Clark: We’ve built ‘phenotypes’ for different patient physiological states which describe the characteristics of that physiology. A simple ‘phenotype’ example would be a bleeding patient exhibiting an increasing heart rate and decreasing blood pressure. We started in the NICU by trying to predict the onset of sepsis in neonates. We did a retrospective review of the patient records to determine at what points intervention could have been done earlier, and then built a predictive algorithm to detect those points. An example would be looking at emergent intubation: what were the vital signs leading up to that intervention as a time series, what was common to all of the patients as a possible predictor of deterioration, and was there something the clinicians could have done earlier for a better outcome. In ICUs and ERs, a benefit to us is the proliferation of continuous patient data available for analysis: EKGs, pneumograms, plethysmograms, RN-gathered data such as temperature, and lab values for biochemistry. All of these data types can be used to build the predictive algorithm/model, and that algorithm can then be used with current and past patient data to predict a possible future patient status.

Bridget: What equipment have you used to gather and aggregate the medical device data?

Dr Clark: We began with GE and Philips monitors, using the GE Carescape Gateway and Philips Data Warehouse Connect products. Now we use third-party data aggregators: Bernoulli, and BedMaster by Excel Medical. We did have direct integration with vendor integration products, but ran into vendors modifying their software, which would ‘break’ the interface. I am agnostic with regard to the medical device vendor, as the data presented to my algorithms is the same from all vendors. Currently, I see no standardized way of getting the data out of the medical device vendors’ products, so I use third parties to buffer/manage the data interface. For my purposes, the data items need to be in raw form. For example, I want the raw ECG voltages coming from the sensor, as our algorithms’ accuracy relies upon calculating beat-to-beat variability. We can’t do that with averaged data. We see many vendors digitizing and processing the data at the sensor when the analog-to-digital conversion takes place, for example with peak filtering and other smoothing mechanisms provided by medical device vendor products.
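The point about raw versus averaged data can be shown numerically: a moving average preserves the mean heart rate but collapses the beat-to-beat fluctuation that variability metrics depend on. A small synthetic illustration (the 8-beat window is arbitrary, not a known vendor setting):

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated instantaneous heart rate, beat by beat (bpm): a baseline plus
# beat-to-beat fluctuation. Illustrative values only.
hr = 120 + 6 * rng.normal(size=600)

def rmssd(x):
    """Root mean square of successive differences: a beat-to-beat measure."""
    return np.sqrt(np.mean(np.diff(x) ** 2))

# Vendor-style smoothing: a moving average over 8 beats
smoothed = np.convolve(hr, np.ones(8) / 8, mode="valid")

# The mean survives the averaging, but most of the variability signal is gone
print(f"mean: {hr.mean():.1f} -> {smoothed.mean():.1f} bpm")
print(f"RMSSD: {rmssd(hr):.2f} -> {rmssd(smoothed):.2f} bpm")
```

This is why pre-smoothed vendor outputs are unusable for these algorithms even though they look fine on a bedside display.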

Platform and System Architecture for CoMET Implementation. Image used courtesy of AMP3D

Bridget: Can you describe the system architecture and resources required to support your product?

Dr Clark: It all depends on the amount of resources the hospital is willing to expend for the different integration layers. I recommend some forethought with regard to architectures and integration to be able to build and use the virtual medical devices we develop. We rely upon one server connected to the hospital backbone. It provides a web service to view the data (algorithm output) in real time, independent of the display platform (mobile phones, pads, desktop displays, large screens). When we were designing the current product, we wanted a ‘lightweight’ native application to drive a web service; basically a repository for signal processing sitting on the hospital network. The actual model and algorithm development is done in a separate environment. Crunching the numbers is intensive: deriving the coefficients and defining the models takes time and processing power. Using the model and displaying its results is not. Currently, the display processing uses the derived model and is static. We are using a proprietary static database with fixed features (static models and coefficients), with the future goal of building a knowledge base (which is computationally intensive) that is dynamically updated.

Bridget: I’ve noticed the display implies a lead and lag time for the condition and those are different for different conditions. Can you explain that?

Dr Clark: The predictive lead time for a ‘phenotype’ is disease-specific. Neonatal sepsis is a slow-moving disease until final systemic onset, so the algorithm/model takes the raw data, processes it in the model, and averages it over 12 hours; the model can predict sepsis onset 24 hours before actual onset. In the case of bleeding in the surgical ICU, the lead time is 4 to 6 hours, not 24, so the model’s predictive lead time/lagging indicator depends on the underlying physiological condition. Another contributor to the lag time might be access to a particular piece of data, such as a lab value. Each condition then has specific latencies associated with the tail of the display, based on the last data set and clinical expertise about the efficacy of the latent reading with regard to possibly requiring action on the part of the clinician.
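As a rough sketch of how a 12-hour average of a model output can still give useful lead time before a slow-moving event: the hourly risk series, the threshold, and the event hour below are all invented for the example, not clinical values.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical hourly model output (fold increase in risk) over three days,
# drifting upward ahead of a simulated sepsis event at hour 60.
raw_risk = 1.0 + 0.2 * rng.normal(size=72)
raw_risk[36:] += np.linspace(0.0, 2.5, 36)  # gradual deterioration

def trailing_mean(x, window):
    """Trailing moving average, mirroring a 12-hour smoothing of model output."""
    return np.array([x[max(0, i - window + 1):i + 1].mean()
                     for i in range(len(x))])

smoothed = trailing_mean(raw_risk, window=12)
threshold = 2.0  # illustrative alert level
crossing = int(np.argmax(smoothed > threshold))  # first hour above threshold
print(f"smoothed risk crosses {threshold} at hour {crossing}; event at hour 60")
```

The smoothing trades responsiveness for stability: for a fast process like surgical bleeding, a 12-hour average would eat most of the 4-to-6-hour lead time, which is why the windows are condition-specific.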

Bridget: How do you go about building the algorithm/model?

Dr Clark: In the model development process there are two main tasks: picking what to predict, and determining what is possible to predict. We use event-driven models: the events to focus on and the tipping criteria are determined by clinicians reviewing previous records to decide what was interesting and possibly actionable. We only need 50 events for a statistical power of > 0.9. In other words, the MDs go through individual records, look for the severe events, and determine whether some type of intervention could have been performed before each severe event occurred. From there we build the predictive algorithm. We place all the variables in the model and then pare it down; in general we keep one feature for every 15 events. Training to develop the model/algorithm is done ‘off-line’, as it is processing-power and data intensive. That model/algorithm is then implemented on the web server, which is much less processing intensive, and its output is what is displayed. Our model knowingly has false positives because of the low rate of these critical events; however, it is intended to be used as a way-finding or highlighting function (a nudge) and not as direct advice. With this process we can also develop bespoke or custom models for our customers.
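As a generic sketch of that workflow (not AMP3D's actual pipeline): with 60 events, the one-feature-per-15-events rule of thumb caps the model at four features; rank candidates by a univariate effect size, keep the top few, and fit a logistic regression. The data, effect sizes, and screening step here are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic training set: 60 events, 600 non-events, 10 candidate features;
# only the first two features carry real signal.
n_events, n_controls, n_feats = 60, 600, 10
X = rng.normal(size=(n_events + n_controls, n_feats))
y = np.array([1] * n_events + [0] * n_controls)
X[y == 1, :2] += 1.5

max_feats = n_events // 15  # one feature per 15 events -> 4 features

# Pare down: rank candidates by a simple standardized mean difference
effect = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)) / X.std(axis=0)
keep = np.argsort(effect)[::-1][:max_feats]
Xk = X[:, keep]

# Fit logistic regression by plain gradient descent
w, b = np.zeros(max_feats), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xk @ w + b)))  # predicted event probability
    grad = p - y
    w -= 0.05 * (Xk.T @ grad) / len(y)
    b -= 0.05 * grad.mean()

print(f"kept features: {sorted(int(k) for k in keep)}")
print(f"mean p | event: {p[y == 1].mean():.2f}, non-event: {p[y == 0].mean():.2f}")
```

Limiting features relative to event counts is a standard guard against overfitting when, as here, the positive class is rare.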

Bridget: You mentioned that you require continuous data, however, you also seem to be moving into areas that don’t have as much data available. How does that affect your model and its accuracy?

Dr Clark: You are correct; as we move away from the ICU, we have a reduced set of continuous monitoring data. For example, in the cardiac acute care ward there are only EKG leads, and therefore a reduced feature set; the RN-documented vital signs, as well as the lab information, come at a lower frequency. In general medicine acute care, we don’t have continuous monitoring, just RN-recorded vital signs, lab results and sociodemographic data. Most patients coming to the hospital are not monitored continuously, but episodically, according to nursing protocols. Of course, the predictive model’s effectiveness and accuracy decrease without continuous monitoring. We have speculated about how our models might work outside of the hospital. Take the use case of a patient presenting at the ER. If we knew what was going on physiologically before the patient came to the ER, or if, when and how an EMT administered assistance, an MD would then have access to more objective context. A physiological functional baseline is altered once care is administered, i.e. the state changes. We could possibly use some of the remote, handheld or wearable data with battery storage and have access to data that ‘predates’ an episode. From there we could also build newer models for increased risk. Essentially, a patient would have a CoMET that moves with them: a proactive monitor rather than a reactive one.

Bridget: What is your ‘wish list’ for the future?

Dr Clark: I would prefer not to have so many different integration profiles, i.e. having to manage differences in the device vendor sensor outputs, software revisions and processed data. Essentially, I would like more continuous monitoring raw data to be available for any patient that enters a hospital.

Thank you for your time, Dr Moorman and Dr Clark. I look forward to seeing your ideas become more mainstream and part of the standard of care in the future.

For more information about research on heart rate variability and neonatal sepsis see the following:

http://wmpeople.wm.edu/asset/index/jbdelo/researchinmedicalphysics4

https://www.dovepress.com/hero-monitoring-to-reduce-mortality-in-nicu-patients-peer-reviewed-article-RRN

For specific clinical research articles by Dr Moorman, et al, see the following:

https://www.ncbi.nlm.nih.gov/pubmed/28771487

https://www.ncbi.nlm.nih.gov/pubmed/27452809


https://www.ncbi.nlm.nih.gov/pubmed/28296811