I often get the question: what does FDA mean when they say something like " . . . you can't do that as it may introduce bias into the study"? Although the concept of bias is somewhat intuitive, I decided to spend some time digging into how FDA defines and manages this issue so that I can share this information with our clients.
Just as an aside, I enjoy this sort of project, as definitions really help me put the pieces of the regulatory puzzle together and keep me squarely focused on what something is ... and what it is not. Fortunately, FDA seems to love definitions, too!
What is bias?
Let's start simple. FDA's definition is: "Bias is the introduction of systematic errors from the truth."
That helps some, but it raises a few additional questions. What does FDA mean by systematic errors, and what is the truth? Unfortunately, the guidance document does not define systematic errors, but Wikipedia certainly does: "In a statistical context, the term systematic error usually arises where the sizes and directions of possible errors are unknown. Measurement errors can be divided into two components: random error and systematic error."
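The distinction between the two error types is easy to see in a quick simulation. Here's a minimal sketch (the true value, the offset, and the noise level are all made-up numbers for illustration): random error scatters measurements around the truth and tends to average out, while a systematic error shifts every measurement in the same direction and does not.

```python
import random

random.seed(0)
true_value = 100.0

# Random error: each reading scatters around the truth with no consistent direction.
random_only = [true_value + random.gauss(0, 2) for _ in range(1000)]

# Systematic error (bias): a hypothetical miscalibration shifts every reading by +3.
offset = 3.0
with_bias = [true_value + offset + random.gauss(0, 2) for _ in range(1000)]

mean_random = sum(random_only) / len(random_only)
mean_biased = sum(with_bias) / len(with_bias)

# The random-only mean lands near the truth; the biased mean lands near truth + 3.
print(mean_random - true_value)
print(mean_biased - true_value)
```

Collecting more data shrinks the first difference toward zero but leaves the second stuck near the offset, which is exactly why bias is the error type study design has to guard against.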
OK, that helps a lot but I've got to keep digging for "the truth"...
From the diagnostic perspective, truth is defined by FDA as: "In a diagnostic clinical performance study you are characterizing the test by performance measures that quantify how well the diagnostic device output agrees with the true subject status as determined by a clinical reference standard."
So, putting this together: when FDA says something may introduce bias into a diagnostic clinical study, they mean we need to find clinical designs that avoid systematic errors, as these are the errors (rather than random errors) that could lead to a false set of data (missing the truth without realizing it). The preferred way to do that is to use clinical reference samples where the truth is already known. There are other options, such as comparing data from a new method to actual patient outcomes, but there we have to be wary of other types of bias -- a topic for another day.
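To make "how well the device output agrees with the true subject status" concrete, here is a tiny sketch of the two most common agreement measures, sensitivity and specificity, computed against a reference standard. The data below is entirely hypothetical, not from any real study:

```python
# Hypothetical results: device output vs. clinical reference standard ("truth").
# 1 = condition present, 0 = condition absent.
reference = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
device    = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

tp = sum(1 for r, d in zip(reference, device) if r == 1 and d == 1)  # true positives
fn = sum(1 for r, d in zip(reference, device) if r == 1 and d == 0)  # false negatives
tn = sum(1 for r, d in zip(reference, device) if r == 0 and d == 0)  # true negatives
fp = sum(1 for r, d in zip(reference, device) if r == 0 and d == 1)  # false positives

sensitivity = tp / (tp + fn)  # agreement on subjects who truly have the condition
specificity = tn / (tn + fp)  # agreement on subjects who truly do not

print(sensitivity, specificity)  # → 0.75 0.8333...
```

If the reference standard itself is wrong for some subjects, both numbers are systematically distorted in a way more data cannot fix, which is why FDA cares so much about where "truth" comes from.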
In my next blog I'll discuss the various types of bias (CLS-L-RTV), and I'll teach you a little song (to the tune of Supercalifragilisticexpialidocious) so you'll never forget them. More to come...