More Than Enough of a Good Thing

Friday, October 30, 2009

Would taking a better picture solve the problem? Not really, because the problem is that you don't know for sure what you're seeing, and as pictures have become better we have put ourselves in a position where we see more and more things that we don't know how to interpret.
—Malcolm Gladwell, "The Picture Problem" (2004)

[S]creening may be increasing the burden of low-risk cancers without significantly reducing the burden of more aggressively growing cancers and therefore not resulting in the anticipated reduction in cancer mortality.

In response to a recent study published in JAMA, the American Cancer Society is thinking about re-tooling its message on some cancer screenings.  This study lends new credence to an observation that's been gaining traction over the past three or four years.  It appears that some number of growths we currently identify as cancer aren't necessarily harmful and just go away on their own.  This throws a pretty big wrench into the conventional wisdom about regular screenings, early detection, and aggressive treatment.  We are still in the early stages of what promises to become a big conversation... but it is starting to appear that we may be significantly over-treating some forms of cancer.

Cancer screenings are a good thing, but having more of a good thing does not necessarily yield a better outcome.  This sounds like a classic lesson from a Greek myth or a Victorian morality play.  Increasingly, however, I suspect that this sensibility will begin to drive the public discussion of health care costs.

Regardless of what happens in Washington over the next few months, one thing seems clear: we're all going to become a lot more directly involved in understanding our health care costs.  Whether we end up paying for health care through increased taxes, ever-increasing premiums, or ruinous self-pays, our direct costs will likely continue to rise until we start asking some very difficult questions.  Pretty soon, I think, we'll want to know one thing in particular: what are we actually getting for all this?

Judging by the reaction to this administration's early efforts to fund comparative effectiveness research, it would seem that we're not entirely ready to ask that question yet.  Sooner or later, though, I suspect we're going to have to pick up that rock and deal with what's underneath it.  But I am not an expert on health care financing or public policy analysis, so I should drive this a bit closer to home. 

It seems plausible that outcomes will play an ever-larger role in the regulatory process.  So what does it look like for the IVD industry when comparative effectiveness is the norm?  How does it change the development and clearance of IVDs if we come to believe that better detection does not necessarily lead to better outcomes?  There are many possible answers to these questions.  In a blog format, I can only hope to scratch the surface on a few of them at a time.

Let's start with the most obvious scenario: what might it look like if FDA were tasked with including outcomes in its regulatory decisions?

For starters, this wouldn't be an entirely new idea, as FDA already gives some serious thought to the question of outcomes.  It might be difficult, for example, to get clearance on a test that identifies patients as being at low risk for a condition if personal health choices are also associated with reduced risk.  Totally apart from the question of such a test's accuracy, FDA will not hesitate to consider whether it creates a good outcome for patients to be given information that might predictably lead them to make less-healthful personal choices.  Evaluating an IVD in this light requires a broader view of risk vs. benefit, one that goes well beyond assessing clinical accuracy and labeling.

But what if FDA took it a step further?  What if they needed proof not only that a test produced a valid diagnosis, but also that having a valid diagnosis created a better outcome for the patient?  It's a common (but mostly unstated) assumption that accurate information and proper diagnosis are worthy ends in and of themselves.  What if you begin to pick away at this assumption?

The overly-easy, off-the-cuff answer is that post-marketing studies could become a standard part of the regulatory process for IVDs.  That's not going to be a very popular measure with industry, even if it's only limited to the highest-risk devices.  Follow this to its logical extreme and it doesn't take much imagination to picture prospective outcome studies that must be conducted before a marketing application is approved.  Since outcomes might not be known for years after treatment, prospective outcome studies could be a truly onerous burden.

Let's not jump to that easy conclusion just yet, however.  I don't think that politicians or the public are likely to clamor for comparative effectiveness in the abstract.  I think it will be applied primarily as a tool for trimming costs.  Viewed through this lens, it's hard to imagine that adding years of research to an already-challenging development cycle is going to catch on.   I don't think anyone is hoping for a smaller number of more expensive tests, even if that does happen to yield better outcomes.

This is, I think, where the conversation is stuck at the moment.  On the one hand, trimming costs will always be a popular idea.  On the other hand, it is very important to preserve and promote innovation.  We're stuck thinking of comparative effectiveness as a burdensome measure that needs to strike a balance between two incompatible goals.

But what if measuring outcomes represents a market opportunity instead of just another burden?

Take, for example, the prostate-specific antigen (PSA) test.  The test works well enough, in that elevated PSA levels appear to be a good indicator of an enlarged prostate.  But it is becoming less clear that having an enlarged prostate is a good indicator of having a case that requires treatment.  It might be possible to develop a test using a different mix of biomarkers that could show greater predictive value when it comes to who might require (or respond well to) treatment.  Such a test might be tremendously valuable, but only in a marketplace and regulatory environment where outcomes are examined and weighed.

What if our goal shifted from making diagnoses to screening for response to treatment?  It's a slightly different way of thinking about clinical utility, but one that might open some new and interesting doors.  At present, prostate cancer tests are likely to be compared against PSA and are unlikely to match the predictive value of that gold standard.  Once we have an idea what constitutes a disease state, it can be difficult to develop a test that embraces different diagnostic criteria.
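To see why "detecting disease" and "identifying who needs treatment" can be very different performance claims, consider a back-of-the-envelope calculation.  The numbers below are purely hypothetical, not drawn from any study: a screening test with 90% sensitivity and specificity, a 1% disease prevalence, and an assumed 50% of detected disease being indolent.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule:
    the chance that a positive result reflects true disease."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers, purely for illustration.
p_disease = ppv(0.90, 0.90, 0.01)

# Suppose half of screen-detected disease never progresses.
indolent_fraction = 0.5
p_needs_treatment = p_disease * (1 - indolent_fraction)

print(f"P(disease | positive result):     {p_disease:.3f}")
print(f"P(needs treatment | positive):    {p_needs_treatment:.3f}")
```

Under these assumptions, only about 8% of positive results reflect real disease, and only about 4% reflect disease that actually requires treatment.  A test validated against an outcomes-based intended use would be judged on that second number, which is where the market opportunity lies.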

It's way too early to predict the full range of FDA's reactions to knowledge that is developing as we speak.  I'd like to hope (and will certainly be advocating) that new knowledge doesn't have to lead, inevitably, to higher regulatory burdens.  It could just as easily lead to new opportunities to define and assess performance. Outcomes-based intended uses could be just the thing to provide a clinical utility "hook" that would allow us to leapfrog "good enough" tests that turn out not to be that great after all.  

Tags: Outcomes, Policy
