When I do presentations on the use of standards, I invariably have a slide which defines interoperability as "the ability of a system or a product to work with other systems or products without special effort on the part of the customer." My second slide then defines syntactic and semantic interoperability.
Syntactic interoperability exists when two or more systems are capable of communicating and exchanging data; it is usually attainable through the use of physical standards, data standards, and messaging structures. Semantic interoperability is the ability to automatically interpret the exchanged information meaningfully and accurately so as to produce useful results as defined by the end users of both systems.
Semantic interoperability is usually achieved with a common information exchange reference model in which the content of the information exchange requests is unambiguously defined, i.e. what is sent is the same as what is understood. To have the type of interoperability defined above, systems should drive their integration goals towards semantic interoperability. This idea of attempting to attain semantic interoperability was highlighted by two conversations I had this summer while working on a project.
This project entailed identifying and analyzing healthcare organizations that are integrating remote monitoring or patient-generated data into their EMRs. With remotely generated data proliferating as mobile application and wearable sensor vendors offer cloud-based solutions, it is believed that personalized healthcare will become more prevalent. In addition, it is thought that if all of that data is aggregated and analyzed (Big Data), a better understanding of the underlying symptoms, possible diagnoses, and ultimately cures for diseases will come more rapidly. Notwithstanding that utopian vision, as usual, the hard work is done in the ‘trenches’ building the infrastructures, devices, and workflows. Given the objective of incorporating patient-generated medical device data into the EMR, it is useful to compare this to how medical device data is acquired and used in the acute care setting.
In the traditional healthcare enterprise workflow for medical device data integration, a medical device is connected to the hospital network and its data is aggregated by a medical device data system (MDDS) provided by the device manufacturer or a device integration vendor. The MDDS then sends HL7 messages to the integration broker of the EMR application, generally delivering data every 30 seconds or every minute. The MDDS usually has a server with some buffering capability, so that device data can be queried, retrieved, or stored for later forwarding if there is a problem at the interface.
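To make that message flow concrete, here is a minimal sketch of the kind of HL7 v2 unsolicited result (ORU^R01) an MDDS might send to the integration broker. The segment contents, identifiers, version, and the use of a heart-rate observation are assumptions for illustration, not taken from any particular vendor's interface.

```python
# Illustrative sketch only: a minimal HL7 v2 ORU^R01 message such as an MDDS
# might send to an EMR integration broker. Field values, IDs, and the example
# observation are assumptions, not drawn from any real system.

def build_oru_message(patient_id: str, heart_rate: int, timestamp: str) -> str:
    segments = [
        # MSH: message header (sender, receiver, message type, version)
        f"MSH|^~\\&|MDDS|ICU|EMR|HOSPITAL|{timestamp}||ORU^R01|MSG0001|P|2.6",
        # PID: patient identification
        f"PID|1||{patient_id}||DOE^JANE",
        # OBR: observation request (here, a periodic vital-signs report)
        f"OBR|1|||VITALS^Vital Signs|||{timestamp}",
        # OBX: one observation -- numeric heart rate with units and final status
        f"OBX|1|NM|8867-4^Heart rate^LN||{heart_rate}|/min|||||F|||{timestamp}",
    ]
    return "\r".join(segments)  # HL7 v2 segments are separated by carriage returns

print(build_oru_message("123456", 72, "20140812103000"))
```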
From the perspective of a clinician in the hospital using an EMR charting application with integrated medical device data, the data available for annotation into the flowsheet is usually displayed in intervals, and the clinician chooses one data point within that interval as the validated parameter for the EMR application. So, if the clinical protocol for that particular patient is Q15, i.e. the required documentation frequency for the specific physiological parameters is every 15 minutes, then 14 of the 15 one-minute data points (or 29 of the 30 thirty-second data points) are discarded once the clinician chooses one for charting purposes. The device producing this data is usually located in the hospital or clinic (a controlled environment), the device is ‘trustworthy,’ and the data entered into the EMR is validated by the clinician. The data ‘provenance’ (location of generation, trustworthiness of the measurement device, and validation) is assumed based on the clinical workflow and technical infrastructure supporting integration.
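That selection step can be pictured with a small sketch. The one-reading-per-minute rate, the 15-minute window, and the "keep the latest reading in the window" rule are assumptions made for the example, not a description of any specific charting application.

```python
# Illustrative only: group device readings into 15-minute windows and keep one
# reading per window for charting, discarding the rest (as a clinician
# effectively does when validating a single value per documentation interval).
from datetime import datetime, timedelta

def chart_q15(readings, window=timedelta(minutes=15)):
    """readings: list of (datetime, value) sorted by time. Returns one reading
    per 15-minute window; every other reading is never charted."""
    charted = []
    window_start = None
    for ts, value in readings:
        if window_start is None or ts >= window_start + window:
            window_start = ts
            charted.append((ts, value))
        else:
            charted[-1] = (ts, value)  # keep only the latest reading in the window
    return charted

# One-minute readings over 30 minutes -> 30 points in, 2 charted, 28 discarded.
start = datetime(2014, 8, 12, 10, 0)
readings = [(start + timedelta(minutes=i), 70 + i % 5) for i in range(30)]
print(len(readings), "->", len(chart_q15(readings)))
```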
In the remotely monitored or patient-generated data situation, there is usually a vendor-provided service offering ‘cloud’-based access to the data. This server usually resides outside the healthcare enterprise infrastructure, and the data is not traditionally integrated into the healthcare EMR application. In this situation, the quality of the ‘data provenance’ changes greatly. Compared to the acute care setting, the frequency or volume of data available is greatly reduced, perhaps to once daily. Additionally, the clinical workflow does not have an immediate or near real-time clinician validation step. Lastly, the environment in which the physiological measurement is taken is not as controlled as a clinic or hospital, data generation may depend on patient actions or technique, and the medical device or sensor may not be as accurate as those found in the clinic or hospital setting.
EMR/EHRs are considered medicolegal documents, such that clinicians retain liability for any information in the document. This makes clinicians and healthcare institutions vet any information or data integrated into the EMR/EHR more carefully. In addition to the medicolegal reason cited above, there is the issue of trust in data veracity: if a clinician doesn’t have the correct level of trust in the data, then any action they take, or fail to take, can be inappropriate.
Very few healthcare organizations are integrating remotely monitored *and* in-hospital medical device data into their EMRs. Those that are doing so follow one or both of these approaches: integrated data viewing in a patient context without data co-mingling, and/or integrated viewing with a separate co-mingled data repository that preserves separate data provenance. Integrated patient-context viewing creates a viewing environment in which the data may appear to reside in the same place when, in fact, it may be in separate physical repositories. Data co-mingling means the data resides in the same physical data repository.
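One rough way to picture the first approach (all names and structures below are hypothetical, not any vendor's design): the patient-context view pulls from both repositories at read time, labels each result with its source, and never writes the remote data into the enterprise store.

```python
# Hypothetical sketch of "integrated viewing in a patient context without
# data co-mingling": the view layer merges results from two separate physical
# repositories at read time and tags each result with its source, but never
# copies remote-monitoring data into the enterprise repository.

def patient_context_view(patient_id, enterprise_repo, remote_repo):
    results = []
    for source, repo in (("enterprise", enterprise_repo), ("remote", remote_repo)):
        for obs in repo.get(patient_id, []):
            results.append({**obs, "source": source})  # provenance stays visible
    return sorted(results, key=lambda obs: obs["time"])

# Two separate dicts standing in for separate physical repositories.
enterprise_repo = {"p1": [{"time": "2014-08-12T10:00", "code": "HR", "value": 72}]}
remote_repo     = {"p1": [{"time": "2014-08-12T07:30", "code": "HR", "value": 68}]}
print(patient_context_view("p1", enterprise_repo, remote_repo))
```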
Another idea discussed was that use of the HL7 v3 messaging standard (object-oriented and XML-based) merely ‘kicked the work’ downstream: “XML doesn’t solve the difficult problem of identifying data, it just allows for data to be identified with tags that have no semantics. The problem is not really syntax, but semantics. The different data codes do not identify when they are to be used or the difference between instances.” (See the VA link below.) This is a bit of an extension of the idea above that data has provenance. Again, with semantic interoperability, an integrated data item would be understood not only in terms of its value and type, but also where it came from and the confidence one might have in the data value (which would help build clinician trust in data veracity).
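A toy contrast of the quoted point, with both fragments invented for illustration: the first is well-formed XML whose tag carries no agreed meaning, while the second binds the same value to a standard code, units, method, and status so that both systems can interpret it the same way.

```python
# Invented fragments to illustrate the quoted point: well-formed XML is not
# the same as shared meaning.

# Syntactically fine, semantically empty: what is "reading"? Which vital sign,
# which units, measured how, validated by whom?
bare_xml = "<reading>72</reading>"

# The same value bound to a standard vocabulary (the LOINC code shown is real;
# the surrounding structure is a made-up example, not an HL7 v3 schema).
coded_xml = (
    '<observation code="8867-4" codeSystem="LOINC" displayName="Heart rate" '
    'value="72" unit="/min" method="automatic cuff" status="final"/>'
)

print(bare_xml)
print(coded_xml)
```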
The data standards currently recommended for medical device information integration do not take provenance into account. The need to define provenance comes into play if or when EMRs start making use of data gathered outside the ‘controlled’ healthcare environment.
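If provenance traveled with each data item, it might look something like the sketch below; the field names are assumptions made for illustration and are not drawn from any existing standard.

```python
# Hypothetical provenance-annotated observation: field names are invented for
# illustration and do not correspond to any existing messaging standard.
from dataclasses import dataclass

@dataclass
class ObservationWithProvenance:
    code: str               # what was measured (e.g. a LOINC code)
    value: float
    unit: str
    timestamp: str
    setting: str            # "hospital", "clinic", or "home"
    device: str             # identifier/model of the measuring device
    clinician_validated: bool
    confidence: str         # e.g. "high" for a validated in-hospital reading

# An in-hospital, clinician-validated reading vs. a patient-generated one.
in_hospital = ObservationWithProvenance(
    "8867-4", 72, "/min", "2014-08-12T10:00", "hospital",
    "bedside-monitor-XYZ", True, "high")
at_home = ObservationWithProvenance(
    "8867-4", 68, "/min", "2014-08-12T07:30", "home",
    "wearable-ABC", False, "moderate")
print(in_hospital, at_home, sep="\n")
```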
For more examples of the issues with remote monitoring data and semantics, readers may wish to visit the Center for Connected Health blog as well as read about the VA Telehealth integration activity (a PDF file which will download). Both of these US organizations are ahead in thinking about, and implementing, the incorporation of remotely generated patient data into their EHRs. Each is using a different approach, and yet both understand the difference between syntactic and semantic interoperability at a pragmatic level.
So, as a healthcare organization starts planning to integrate medical device data generated outside the controlled healthcare enterprise, it should think about data provenance and how to move effectively towards semantic interoperability. Merely using standards at the interfaces will not guarantee semantic interoperability. Determining where the data was gathered, its quality, and its timeliness will be required to place it in the proper context for a clinician to use it appropriately. Providing this context will go a long way toward alleviating clinician fears about using patient-generated data and will allow the clinician to better judge any actions they take based on that data.