The Case of the Dangling Validation

Tuesday, March 05, 2013

It was a quiet night in the office. I could hear the rain tapping in the downspout like some Jamaican steel drum band. Just then, a colleague sends me a slide show on developing performance measurements. Says they want my review. It gets me to thinking about effective validation projects.

Statisticians are frequently asked to validate parts of statistical programs or reports. Clients may specify in great detail what they want covered, or else leave it all up to the person assigned the task. Carte blanche can become your worst nightmare: trying to check everything may make the validation more complicated and costly than the actual project. I remind myself not to get too carried away, but to focus on what is meaningful.

Then it hits me like a cross-town bus at rush hour: Why not approach the validation as a detective might? Will I be Alex Cross, Columbo, Sherlock Holmes, Jessica Fletcher, or Nancy Drew? Who cares – I’m not really interested in prime time or bestseller status. Although my investigation may lead me in unplanned directions, my ultimate goal is to get approval in a timely and efficient manner, and to resolve and understand any detected differences.

Although I am generally an optimist, my experience in programming has made me a realist.  If my coding works perfectly the first time, I might seriously want to take a second look and do some spot-checking. There’s no such thing as a free lunch.

If I don’t uncover any errors, how much sleuthing is really necessary?  It can be challenging to police your own work.  The rule of thumb in validation is to find an independent person to review not only the output, but also the thought process for getting there.  Then I remember the three guidance documents (referenced below) that every good statistical gumshoe uses to ground the fundamentals before proceeding with the task.

With the focus of a dog chasing a bone, I like to look at the following:

  1. Choose to review what matters most – Is it relevant to the project, the user, and the auditor?  Know your audience; know the intended user; know the product; know the way the information was gathered.  Wherever possible, go back to the approved protocol.  Ask several people involved with the study to get different perspectives on what is important.  Keep in mind that you are being asked to validate, not to develop or redesign.
  2. Have a strategy – Will the review be organized by type of data, program, or document, or some combination?  Do I review all adverse event data, or sample subjects across sites?  Is the correct statistical approach being applied properly?  Which software package should I use for confirmation?  (A sketch of that kind of independent cross-check follows this list.)
  3. Make a concrete plan and provide an explanation – Avoid ambiguity as to what will be checked, what will not, and why.  Can someone else follow your work?  Can you figure out what you did a year from now?
  4. Think differently and creatively – Wear an auditing hat.  Do you start at the back of the document and go forward?  Tackling it in a different or random sequence might be more illuminating.  One colleague counted paragraphs and caught significant information that had been omitted from a language translation.
  5. Know when to stop, so you realize the best “bang for the buck” – Do you want to make this task a life’s work?
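
On the software question in item 2, here is a minimal sketch of what an independent cross-check might look like, written in Python with pandas. The file name (ae.csv), the column names (USUBJID, AEBODSYS), and the reported counts are hypothetical placeholders, not taken from any actual study:

    # Independently recompute an adverse event summary and compare it
    # to the counts claimed in the draft table. All names and values
    # here are hypothetical placeholders.
    import pandas as pd

    ae = pd.read_csv("ae.csv")  # hypothetical adverse event dataset

    # Unique subjects reporting at least one event per body system.
    recount = ae.groupby("AEBODSYS")["USUBJID"].nunique()

    # Counts transcribed from the draft output (hypothetical values).
    reported = pd.Series({"Cardiac disorders": 12,
                          "Nervous system disorders": 27})

    diffs = recount.reindex(reported.index).compare(reported)
    print("Counts match the draft table." if diffs.empty else diffs)

If the original table came out of one package, recomputing it in a second is a cheap way to surface both programming and specification errors.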

As a statistician, I naturally gravitate to making tables and graphs. I recommend making two tables. The initial table contains the following headings:

  • Purpose
  • Specific Question to Be Answered
  • Measurement Information
  • Depth of Investigation

See whether this clarifies the necessary steps and leads to combining or eliminating some.  What are the most appropriate levels and types of measurement?  In my opinion, determining how and what to measure is the most difficult and critical aspect of the process.
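
To make this concrete, here is a small sketch of the initial table as it might be drafted in Python; the two rows are invented examples, not recommendations for any particular study:

    import csv, sys

    # Hypothetical planning rows; substitute the items from your own study.
    plan = [
        {"Purpose": "Confirm the primary efficacy table",
         "Specific Question to Be Answered": "Do group Ns and p-values match the SAP?",
         "Measurement Information": "Table 14.2.1 vs. an independent rerun",
         "Depth of Investigation": "Full recomputation"},
        {"Purpose": "Check adverse event listings",
         "Specific Question to Be Answered": "Are all serious AEs present and coded?",
         "Measurement Information": "Listing 16.2.7 vs. raw AE data",
         "Depth of Investigation": "Sample of subjects across sites"},
    ]

    # Write the table as CSV so it can live alongside the validation record.
    writer = csv.DictWriter(sys.stdout, fieldnames=plan[0].keys())
    writer.writeheader()
    writer.writerows(plan)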
     
The final table identifies the following:

  • Priority (coded 1 = Must, 2 = Advised, 3 = Nice)
  • Source (as a page #, document, line listing #, etc.)
  • Observation
  • Recommended Correction
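
As an illustration, here is a hedged sketch of the final table kept as a sortable findings log in Python; the entries, sources, and page numbers are hypothetical:

    # Hypothetical findings log using the headings above. The priority
    # codes (1 = Must, 2 = Advised, 3 = Nice) make the punch list sortable.
    findings = [
        {"Priority": 2, "Source": "Table 14.2.1, p. 45",
         "Observation": "Column percentages use the wrong denominator",
         "Recommended Correction": "Base percentages on treated subjects"},
        {"Priority": 1, "Source": "Listing 16.2.7, line 112",
         "Observation": "Serious AE missing from the summary table",
         "Recommended Correction": "Re-run the summary after the data correction"},
    ]

    # Review the must-fix items first.
    for row in sorted(findings, key=lambda r: r["Priority"]):
        print(f"[{row['Priority']}] {row['Source']}: {row['Observation']}"
              f" -> {row['Recommended Correction']}")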

With this method, the validation’s strategy and assessments are summarized and documented.  Applying both creative and systematic techniques throughout the process gives the statistical validation its best chance of a successful outcome.  Another case closed.

References

FDA, Guidance for Industry, Process Validation: General Principles and Practices, January 2011

FDA, Guidance for Industry, E9 Statistical Principles for Clinical Trials, September 1998 (ICH)

FDA, Statistical Guidance on Reporting Results from Studies Evaluating Diagnostic Tests, March 13, 2007

Tags: Best Practices, Biostats
