Tom Quinn led a recent discussion on the Biomed Listserv about the FDA and medical device safety. In his reading of the FDA's CDRH Annual Report for 2004,
the FDA identifies user error as the key factor in most complaints and
adverse events. (Tom, I hope I haven't put words in your mouth. Correct
me if I'm wrong.)

Having participated in adverse event investigations from the vendor
side, I've frequently seen cases where the product performed as specified and
the resulting event was put down to "user error".

These errors are all over the board. In one case that made the national
press recently, a hospital bought a telemetry system with no arrhythmia
analysis software, contributing to a number of patient deaths. I've
seen examples of failure to rescue due to the way alarms were
configured (alarm priorities and levels set in a way that created
alarm fatigue and alarm masking, resulting in sentinel events).
There have also been more egregious situations where alarms were turned
off or ignored.

The obvious problem revolves around product design and end user
training. Clearly, design influences usability, training requirements, and
susceptibility to user error. A design's impact on the actual use of a
product is qualitative and hard to judge. I think there's an unmet
need for some professional organization or nonprofit to look into
usability factors, best practices, and vendor performance in this area
(ECRI?).

Part of the problem is that many medical devices have moved beyond
standalone boxes to become systems with networks, servers, client
applications, and interfaces to other systems that impact patient care.
This ever expanding notion of a "medical device" adds considerable
variability regarding installation, deployment, configuration, and use.
Market requirements drive the development of flexible and
highly configurable systems, both to meet a broad range of end user
workflows and to broaden the market appeal of vendors' products.

Vendors make recommendations on system configuration at installation.
Sometimes customers ignore the vendor's advice, and sometimes things
like patient acuity, case mix, or staff experience change over time,
rendering the initial configuration inadequate.

Finally, we come to the actions (or inactions) of users. Like any tool,
a medical device can be used well or poorly. Some errors are honest
mistakes and others are truly negligent. I'm not sure
the FDA is the place to make those judgments and mete out punishment.
Negligence is already punished through criminal and civil means.

Rather than consider this a regulatory oversight problem, the industry
has taken a "safety improvement" approach. Hospital accreditation and
numerous patient safety initiatives are focused on proper policies and
procedures, training, and data gathering to close the loop on problems
that occur in patient care. There's also been a lot said about creating
an environment for safety improvement that encourages problem reporting
and the open consideration of ways to improve patient safety. The
industry is trying to avoid a witch hunt mentality where caregivers are
afraid to raise problems and safety issues.

It's almost as if today's more pervasive and connected devices
disappear into the overall diagnostic and patient care process,
rendering user error as much a function of workflow and how care is delivered as it
is of product design.