Early this spring the Center for Medical Technology Policy (CMTP) released an under-heralded report with the daunting title “Evaluation of Clinical Validity and Clinical Utility of Actionable Molecular Diagnostic Tests in Adult Oncology”. This work, the latest in a series of publications of what CMTP calls Effectiveness Guidance Documents (EGDs), was intended to “provide specific methodological recommendations targeted to clinical researchers and test developers regarding the design of clinical studies intended to inform decisions by payers, clinicians, and patients.” The handiwork of a three-person writing team (Patricia Deverka, Donna Messner and Tania Dutta), the document was produced with the support of both a Molecular Diagnostic Technical Working Group and a Molecular Diagnostics Advisory Group composed of an all-star lineup of more than two dozen outside experts, including researchers, clinicians, payers, industry, guideline developers and patient advocates.
Central to this work are 10 recommendations addressing in varying detail the key elements of test performance: analytical validity, clinical validity, and clinical utility. Of note, seven of these focus on the important but elusive topic of clinical utility. The ideas outlined are not necessarily new. But the document is unique in its efforts to organize these ideas into an evidence-oriented hierarchy and to offer concrete advice on what choices are available to establish clinical utility and when and how these should be used.
Of particular interest and value is the final recommendation addressing decision-analytic modeling techniques. As the authors note, “decision-analytic models are useful in the common situation when there is no direct evidence of clinical utility.” And as they caution, modeling is not recommended “when there is a high degree of uncertainty about the underlying disease process, lack of a clinical intervention with known benefit, or when there is a high uncertainty about the link between test results and the effectiveness of interventions.” But given the paucity of randomized clinical trials available for the evaluation of new diagnostic tests, the ability to find alternatives such as modeling to establish the value of a new test is of paramount importance. The authors deliberately eschew a recap of modeling methodology but provide a well-referenced discussion of how and when to use modeling. They highlight one of the best examples of its successful use, the EGAPP assessment of mismatch repair mutation testing in Lynch syndrome.
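To make the idea concrete for readers unfamiliar with the technique: a decision-analytic model compares the expected outcomes of competing strategies (test-and-treat, treat everyone, treat no one) by weighting each pathway endpoint by its probability. The sketch below is purely illustrative and is not drawn from the CMTP report; the prevalence, test-performance figures, and utility values are all invented for demonstration.

```python
# Illustrative decision-analytic model comparing three strategies.
# All inputs are hypothetical, chosen only to show the arithmetic.
prevalence = 0.10     # fraction of patients with disease
sensitivity = 0.90    # P(test positive | disease)
specificity = 0.85    # P(test negative | no disease)

# Utilities on a 0 (worst) to 1 (best) scale; values are invented.
u_disease_treated = 0.80     # disease detected and treated
u_disease_untreated = 0.40   # disease missed, untreated
u_healthy_treated = 0.90     # needless treatment with side effects
u_healthy_untreated = 1.00   # correctly left alone

def eu_test_and_treat():
    """Expected utility when treatment follows a positive test."""
    tp = prevalence * sensitivity            # true positives
    fn = prevalence * (1 - sensitivity)      # false negatives
    fp = (1 - prevalence) * (1 - specificity)  # false positives
    tn = (1 - prevalence) * specificity      # true negatives
    return (tp * u_disease_treated + fn * u_disease_untreated
            + fp * u_healthy_treated + tn * u_healthy_untreated)

def eu_treat_all():
    """Expected utility when everyone is treated, no testing."""
    return (prevalence * u_disease_treated
            + (1 - prevalence) * u_healthy_treated)

def eu_treat_none():
    """Expected utility when no one is treated."""
    return (prevalence * u_disease_untreated
            + (1 - prevalence) * u_healthy_untreated)

for name, eu in [("test-and-treat", eu_test_and_treat()),
                 ("treat-all", eu_treat_all()),
                 ("treat-none", eu_treat_none())]:
    print(f"{name}: expected utility = {eu:.4f}")
```

With these particular inputs the test-guided strategy comes out ahead, which is the kind of indirect evidence of utility the report describes; real models would of course use published probabilities and utilities, with sensitivity analyses over their uncertainty.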
Perhaps the most provocative observation in this rich document is buried in the highlighted text for Recommendation 3: Appropriate Metrics for Clinical Validation. Here it is noted that “When the clinical outcome of interest is a continuous or time-to-event variable … regression methods may be used to model the relationship between the test (discrete or continuous) and the outcome of interest. To classify patients into clinically actionable risk groups, it may be necessary to apply cutoffs to the results of the test and to the clinical outcome (e.g., disease-free survival at 5 years, tumor shrinkage of 50% or more)” (emphasis mine). This is not the first observation that a time-to-event outcome may not be a perfect endpoint for evaluating a new biomarker. But it is the first to espouse such a clear and elegant solution.
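The dual-cutoff idea in that passage can be sketched in a few lines: dichotomize the continuous test score at one threshold and the time-to-event outcome at a clinical horizon (here, disease-free survival at 5 years), after which ordinary classification metrics apply. The cohort, score cutoff, and horizon below are all hypothetical, invented only to illustrate the mechanics; a real analysis would also have to handle patients censored before the horizon, which this toy example sidesteps.

```python
# Hypothetical cohort: (continuous test score, disease-free survival
# in years, whether a relapse event was observed). Data are invented.
patients = [
    (12.0, 8.1, False),
    (45.5, 2.3, True),
    (33.2, 4.0, True),
    (8.7,  6.5, False),
    (51.0, 1.1, True),
    (28.9, 7.2, False),
    (39.4, 5.5, False),
    (15.3, 3.9, True),
]

SCORE_CUTOFF = 30.0   # scores at or above this flag "high risk"
HORIZON = 5.0         # outcome cutoff: disease-free survival at 5 years

tp = fp = fn = tn = 0
for score, dfs, event in patients:
    high_risk = score >= SCORE_CUTOFF
    poor_outcome = event and dfs < HORIZON  # relapse before 5 years
    if high_risk and poor_outcome:
        tp += 1
    elif high_risk:
        fp += 1
    elif poor_outcome:
        fn += 1
    else:
        tn += 1

# With both axes dichotomized, standard clinical-validity metrics apply.
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

The choice of cutoffs is itself a clinical decision, which is precisely the report's point: the metrics only become "appropriate" once the thresholds correspond to actionable risk groups.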
For those looking for a quick fix and an easy read, this is not the document for you. But for the aficionado of evidence-based medicine, this represents a step forward in our understanding of how to evaluate laboratory tests and is a document worth savoring.