Interoperability – Barriers to Adoption

There have been some great comments on the recent post announcing MD FIRE. Those, plus some other activities I’ve been involved in, have inspired some thoughts on barriers to adoption for medical device interoperability.

For this discussion, interoperability refers to the ability of a medical device to be controlled by another medical device or third party information system. Medical device systems from a single vendor frequently include interoperability between the medical devices and applications running on general purpose computers, but since all the components are from the same vendor I’m excluding them from this discussion.

Like everything else, medical device interoperability will walk before it runs. In a recent comment, JimW suggests that MD FIRE’s focus is on driving the adoption of “tightly coupled, low latency, deterministic connection[s].” I beg to differ; I found the MD FIRE text to be very general, and it avoids prescribing any kind of design solution whatsoever. It is also interesting that what little multi-vendor interoperability is actually on the market is based on “tightly coupled, low latency, deterministic connection[s].” The example that comes to mind (actually the only one I can think of right now) is the use of CANopen to integrate radiographic equipment with contrast injectors in diagnostic imaging. Perhaps someone can provide additional examples.

Jim is right that such interoperability presents considerable product design and verification challenges, which is why I think a different kind of interoperability will gain broad market traction before designs with tightly coupled deterministic connections. If you’ve heard Julian Goldman speak on this topic, you know he provides numerous examples of interoperability that revolve around simple safety interlocks like:

  • Using a patient monitor to cease therapy delivery (and sound an alarm, of course) on a PCA pump when respiratory arrest is detected,
  • Linking a heart lung machine and ventilator in surgery to ensure that one is turned on when the other is turned off, and
  • Linking diagnostic imaging with a ventilator to ensure ventilation is restarted after it is stopped to reduce motion artifact during imaging.
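To make the first interlock concrete, here is a minimal Python sketch of the supervisory logic. Everything here is hypothetical for illustration: the class names, the pump interface, and the thresholds are assumptions, not the API of any real monitor or pump.

```python
# Hypothetical sketch: watch respiration data from a patient monitor
# and command a PCA pump to stop (and alarm) when the vitals suggest
# respiratory arrest. All names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class VitalSigns:
    respiratory_rate: float  # breaths per minute
    spo2: float              # oxygen saturation, percent

class PcaPumpStub:
    """Stand-in for a pump's control interface."""
    def __init__(self):
        self.infusing = True
        self.alarming = False

    def stop_infusion(self):
        self.infusing = False

    def sound_alarm(self):
        self.alarming = True

def check_interlock(vitals: VitalSigns, pump: PcaPumpStub,
                    min_rr: float = 6.0, min_spo2: float = 85.0) -> bool:
    """Stop the pump and sound an alarm if vitals cross the thresholds.

    Returns True if the interlock fired.
    """
    if vitals.respiratory_rate < min_rr or vitals.spo2 < min_spo2:
        pump.stop_infusion()
        pump.sound_alarm()
        return True
    return False

pump = PcaPumpStub()
fired = check_interlock(VitalSigns(respiratory_rate=4.0, spo2=92.0), pump)
print(fired, pump.infusing, pump.alarming)  # True False True
```

The logic itself is trivial; as the rest of this post argues, the hard part is everything around it: getting two vendors’ devices to expose the data and accept the commands.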

A portion of the 500 patients who die unnecessarily every day in U.S. hospitals are killed because simple safety interlocks like these don’t exist. Why? Clearly such interoperability is not rocket science.

Barriers to Adoption

A conventional approach to interoperability (not to mention a lot of connectivity) is to create one-off systems integrations. For the patient monitor/PCA pump example above, this means that each individual patient monitor vendor must approach each individual PCA pump vendor and negotiate an agreement in which both vendors undertake separate product development projects to design and test the required interoperability. Such an approach is crazy.

R&D Considerations

Let’s start with looking at potential impacts on R&D:

  • Let’s say there are 3 patient monitor vendors and 3 PCA pump vendors. That equates to 18 R&D projects – 9 vendor pairings, with two projects (one for each vendor) per pairing.
  • Given that the technical demands of such an effort are pretty straightforward, let’s peg the cost at $1 million per project, for a total R&D spend of $18 million. Each vendor’s first development effort will cost more, and costs will go down for subsequent projects based on experience. The absence of industry standards means that each integration will be custom and unique; what could be done once with industry standards has to be repeated 18 times without them.
  • Time to market would be measured in years using the conventional approach. Say each project takes 8 months, with no one vendor doing more than one project at a time (which means no more than 6 projects can occur simultaneously, yielding just 3 actual cross-vendor integrations per round). That equates to a minimum elapsed time of 24 months, but more likely 36 to 48 months. Oh, and don’t forget the 4 to 6 months vendors will need to negotiate the terms of each collaboration before a product manager or engineer lifts a finger. That’s a minimum of an additional 15 months of elapsed time.
  • Medical device vendors rarely go back and do major R&D on existing products. Because a project like this would likely fall outside what a vendor considers sustaining engineering, they would probably wait to undertake an interoperability project until they design a new product. The life cycle of medical devices is 5 to 7 years, and sometimes much longer. This reticence to make major enhancements to existing products applies to both pump and patient monitoring vendors, so for the stars to align on an interoperability project, both vendors would need some overlap in their new product development schedules. Fortunately, a vendor wouldn’t have to delay the launch of a new product by more than a year or so to include interoperability. Vendors have from time to time released new products with a basic feature set and followed up with additional features a year or two later; a two-phase release would let device vendors begin to recoup their R&D investment while delivering the interoperability feature in a second release. So the time to market estimated above is extended by additional years.
  • Now that our patient monitoring and PCA pump vendors have one-off integrations (3 for each vendor in our example), let’s talk sustaining engineering for real. Every time a vendor changes their product, a risk analysis must be done to determine whether the changes impact the interoperability feature. (Potentially both the vendor making the change and the vendor on the other side of the integration could have to do a risk analysis, depending on how their integration deal was structured, how they designed their feature, and how much they trust the other guy.) If they’re lucky, this risk analysis might result in one or both vendors having to repeat verification testing of the safety interlock. Worst case, one or both vendors would have to modify the feature and do a new release. Remember, there are 18 halves of the 9 integrations that must be maintained and supported. If this feature is implemented only on new product releases, sustaining engineering projects will be more frequent, as they always are for brand new products. If the feature is added to a more stable and established product, sustaining costs will be lower. Let’s guess another $2 to $5 million per interface half over 5 years, for a total sustaining cost of $36 to $90 million across all vendors. Gulp. This makes the initial R&D costs seem like a bargain.
  • The final fly in this buggy R&D ointment is the question of vendor ROI. What kind of return could a vendor expect for enabling such an interoperable safety interlock? Remember that the monetary value extracted from the customer has to be split between the two vendors. Can two separate vendors receive the same, more, or less value when the solution is split between two purchases? How does framing the sale as an existing-product replacement, rather than as the purchase of a new patient safety capability, affect the customer’s willingness to pay? These are difficult (and expensive) questions to answer with any certainty.
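The arithmetic in the bullets above can be captured in a quick back-of-the-envelope model. This is just a sketch of the reasoning, using the same illustrative figures ($1 million per project, $2 to $5 million per interface half over 5 years); none of these numbers are real market data.

```python
# Back-of-the-envelope model of the pairwise-integration arithmetic.
# With M monitor vendors and P pump vendors, every pairing requires
# two projects (one on each side), while a shared industry standard
# would require only M + P implementations in total.

def integration_projects(monitor_vendors: int, pump_vendors: int) -> int:
    pairings = monitor_vendors * pump_vendors
    return 2 * pairings  # one project on each side of every pairing

M, P = 3, 3
projects = integration_projects(M, P)   # 18 projects for 9 pairings

rd_cost = projects * 1_000_000          # ~$1M per project
sustain_low = projects * 2_000_000      # $2M per interface half
sustain_high = projects * 5_000_000     # $5M per interface half

print(projects, rd_cost)                # 18 18000000
print(sustain_low, sustain_high)        # 36000000 90000000
print(M + P)                            # 6 implementations with a standard
```

The punchline is the last line: the point-to-point approach scales as 2 × M × P, while a standard scales as M + P, and the gap widens quickly as more vendors join.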

Regulatory Implications

Another important consideration is how cross-vendor interoperability would be regulated. Current law dictates that a single vendor must “own” the regulatory burden for the entire regulated device. No King Solomon “cutting the baby in half” here. While each vendor may develop their half of the interoperability, and both may verify and validate the entire resulting solution, only one vendor can assume the regulatory responsibilities and receive the approvals that enable them to make the marketing claims. This means that only one of the two vendors can promote and sell the resulting solution.

Distribution using a systems integrator to create the interoperability is problematic too. Each custom system integration would be a one-off medical device requiring each customer engagement to follow the FDA’s quality system regulation (QSR) and seek regulatory approval for each installation. This doesn’t seem very practical.

Summary

Current medical device vendor distribution strategies are designed for selling proprietary end-to-end solutions. Hospitals are notorious for preferring “single vendor solutions,” so would they accept buying pieces of a solution from separate vendors – not to mention receiving fragmented service and the inevitable finger pointing?

Other industries with similar distribution challenges use indirect channels, systems integrators or resellers, who represent products from numerous vendors that come together in broad-based solutions. All this reminds me of the medical device dealers that were the predominant distribution model over 20 years ago.

An interesting problem, isn’t it? Okay, it’s really more depressing than interesting. But we need to face these issues squarely if we’re going to solve the problem of interoperability. Our only alternative is to forgo the capabilities offered by interoperability and continue to let patients die unnecessarily. No one wants that, right? Because this post is already too long, I’ve saved my thoughts on how the interoperability market might actually evolve for another post.

5 comments

  1. Pete McMillan

    Hi Pete,

    Thanks for the great summary. And thanks for using the “correct” definition of device interoperability in your article.

    In your article you have examined the regulatory issues of device-to-device interoperability. How would that change if the devices were not controlled by another device, but by an independent CIS system? E.g., the CIS receives data from device A, analyzes it using a decision algorithm, and decides to send a control command to device B.
    1. Who owns the regulatory responsibility? I would think that it is the CIS vendor. The device A vendor has to ensure that correct data is being sent out, device B vendor has to ensure that the device responds as documented to a control command. What is your take on this?
    2. If the CIS vendor has to bear the regulatory responsibility, would that mean that the decision algorithm cannot be altered?
    3. In such a CIS system, if the users define/edit the algorithm (like how they define advisories in Philips’ new CareVue system), who bears the regulatory responsibility for the new algorithms? Or will such editing be considered off-label?

    The reason I raise these questions is that there is another barrier to adoption of interoperability – the clinician. No clinician would like to have a vendor decide the best course of action. As much as they would like automated control systems, they would want to be the ones defining the behaviour of the system. Hence the concern with regulatory issues governing the decision algorithms.

  2. Pete, you’re anticipating my next post on this topic, which will get into how I think interoperability will come to market – and third party solutions will figure prominently in the discussion.

    To answer your questions:
    1. The CIS vendor’s interoperability will be regulated as a medical device. Also the CIS vendor is exclusively responsible for the safety and effectiveness of the system – the device vendors are not responsible for anything beyond the intended use of their medical devices.
    2. This depends on the labeling of the CIS vendor’s regulated medical device and the specifics of the algorithm. This is a very interesting area, and one receiving a lot of vendor focus. I see a lot of incorrect assumptions being made here.
    3. As always, the CIS vendor bears all the regulatory responsibility for their product. If the algorithms are user definable, that fact is part of the vendor’s intended use and regulatory submittal.

    The flip answer to your final issue is to focus on developing things the market wants to buy rather than things you’d rather sell – you know, like interoperability.

    Seriously though, established algorithms and protocols should be fixed (but fully transparent) – any clinician objections should be easy to overcome. There’s also a place for user defined algorithms for certain things.

  3. Pete McMillan

    Thanks for clarifying, Tim. Looking forward to your next post.

    PS. Sorry for addressing you as “Pete” in that previous comment. It’s not that I’m so full of myself that I use my name for other people – when I typed it, I was also online with the biomed chief, who is also a Pete :)

  4. Tim, thanks for a thoughtful overview of this challenging issue.

    I always compare this to the approach taken in the early days of modem development. The manufacturers didn’t see any reason for adopting standards, but the users loved them.

    Do you know of any organization that is working on an open set of standards? Meaning a set that anyone can access without paying $1000s?

  5. Bernard, the standards effort called the Integrated Clinical Environment, or ICE, is one such effort.

    I think a straightforward standard has a good chance of being adopted by startups and smaller players; established device vendors will prefer to stick with IHE and 11073.

    There’s also a terrific opportunity for one or more open source software projects. This business model has appeal to startups, end user organizations, and professional services enterprises.
