This year my wife and I attended the annual AACC meeting in Chicago. She is a supervisor in a large, university hospital laboratory, and they were in the market for a new instrument. As we waded through the thick pile carpets of a device manufacturer, past the latest technological marvels, I noticed a look of frustration on her face. "I don't want all these bells and whistles. I just want a reliable instrument that gives consistent results."
Diagnostics, like any technology-driven business, has adopted the more-is-better mentality that has dominated the consumer electronics industry. In an era when phones can take your picture, check your stock portfolio, and find you the closest pizza restaurant (complete with reviews and driving directions), it shouldn't be a surprise that diagnostic instruments are more complex than ever.
But is complexity what the user needs? If we go back to the basic tenet of design control, the final validated design should meet the user's requirements. Part of the issue is that users act as though they do need all this complexity. My wife will be the first to tell you that when her lab was shopping for its last instrument, they wanted all of the features: the new tests, the automated processes that free up tech time, the LIS-ready interface.
But with all this complexity comes a higher probability of instrument downtime. While each component may have a long mean time between failures, as you add more pieces together you increase the risk that one of them will force a maintenance call. This means the instrument is offline, techs are running the test manually, or they're hand-entering results into the hospital LIS.
This issue isn't just for big automated systems, either. With the dawn of personalized medicine, we now have complex algorithms that take multiple biomarkers, with intricate molecular pathways, and link them together along with patient symptoms and other clinical data. This new generation of tests is designed to help physicians decide whether it's time to change therapy, recommend surgery, or just stay the course.
But is it better? The blending of all these data does offer the promise of a more informed result. But often companies, in a rush to get the test out as quickly as possible, will truncate their clinical trials, leaving the utility of the test ambiguous to the end user. And while these tests are scientifically sound and perform well in the lab, they lack the clinical studies that help the physician decide what to do with the results.
Thus the burden of seeing the added value of the test is passed on to the physician. That may be acceptable to early adopters, but it represents a real issue for the typical high-volume clinical practice. And this is more than just a product uptake issue: There are regulatory implications as well.
Increasingly, FDA is becoming reluctant to clear tests that have a vague or general clinical utility. This is especially true for tests trying to link their result to general health trends through sophisticated algorithms. Keep in mind that FDA is always trying to balance risks with benefits, and they are hesitant to clear a test that doesn't clearly improve the overall healthcare environment.
My advice, particularly for those making their first run at a regulatory submission, is not to be too enamored of what you've made. Your test may be really cool and have state-of-the-art technology, but if you haven't spent the time designing clinical studies that clearly demonstrate its utility, it may sit on the shelf. That is, if it can get cleared at all.