
Reflections of a Quality Manager pt.2

Mairiead MacLennan takes a look at validation, verification and variation, and the uncertainty that surrounds these terms.


Confusion, concern and misunderstanding abound in discussions on validation and verification – two very different processes (they have two separate sections in the standard for that reason). 

Evaluation can accompany either, which may be the source of confusion for some. To keep it simple, consider the following.

Before introducing a piece of equipment or new platform, the department decides, based on service needs and the task the platform must perform, which item they need/want from several options. Requirements might include that it must:

  • Interface with the LIMS
  • Fit in the space available
  • Get through the front door (or the door of the lab in which it will be used)
  • Be able to run the specific kits currently in use
  • Be validated to run that kit on that platform
  • Achieve an acceptable limit of detection
  • Come with evidence that the manufacturer’s validation meets the needs of the department
  • Demonstrate that the process has been performed and described previously in a peer-reviewed article.

Brainstorming with colleagues will identify these requirements, which now become the acceptance criteria. It’s never a good idea to set acceptance criteria after the item is installed. Prioritise the list. Evaluation is the act of performing the “test drive” of the platform and determining whether the acceptance criteria are met.

Verification

The laboratory is required to determine that, when operating in situ, the same output is achieved as would be expected from their current test, using previously tested samples and/or material tested elsewhere. However, the laboratory must also have reviewed the information on the new kit/platform as a standalone product, not just compared it with the previous kit.

While testing or evaluating, the laboratory must confirm that the kit/platform is validated and CE marked. Expected performance must also be predetermined; these expectations are the acceptance criteria for that kit/platform combination. The results of these test runs must agree to a high degree.
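As a loose illustration of the kind of comparison involved, the sketch below checks a handful of paired results from the current and new methods against a predetermined acceptance criterion. The figures and the 0.5-unit criterion are invented for the example, not taken from any kit or standard.

```python
# Hypothetical paired results (invented figures): the same five samples
# run on the established method and on the new kit/platform.
current = [5.2, 7.8, 3.1, 9.4, 6.0]
new = [5.0, 8.1, 3.3, 9.2, 6.1]

diffs = [n - c for c, n in zip(current, new)]
mean_bias = sum(diffs) / len(diffs)
max_abs_diff = max(abs(d) for d in diffs)

# The acceptance criterion is set *before* the runs; 0.5 units is an
# assumed figure for illustration only.
ACCEPTABLE_DIFF = 0.5

print(f"mean bias = {mean_bias:+.2f}, worst disagreement = {max_abs_diff:.2f}")
print("criterion met" if max_abs_diff <= ACCEPTABLE_DIFF else "criterion NOT met")
```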

Doing exactly what is described in the kit insert, and using the platform as per the manufacturer’s instructions, is a verification, so long as the manufacturer’s validation has been closely scrutinised during the review.

Validation

If, for example, the sample type the laboratory wants to use with the specified kit on the specified platform is not listed as validated on the kit insert, then a validation is required. Such novel work must, by its very nature, be more extensive and detailed, to take account of unknown variables and identify them as such. The process has usually not been performed and described previously in a peer-reviewed article. More specimens may be required from a patient than would normally be taken, for example one each of the validated and the non-validated sample types, to demonstrate that the test detects the target as well as (or even better than) the current method or other ‘gold standard’.

Validation demonstrates that an alternative, novel process gives valid results. Verification demonstrates that the laboratory can reproduce what the kit insert describes.

Uncertainty

Measurement uncertainty (MU) also causes consternation. It is concerned with determining, understanding and managing the inherent error or variation in any test that primarily involves measuring or counting something. The “something” can be anything and is called the measurand. For the cell sciences this has only become an issue since accreditation to ISO 15189.
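To make the idea concrete, here is a minimal sketch of how MU might be estimated for a simple measurand from replicate measurements. The replicate values and the coverage factor k = 2 are assumptions for illustration only, not figures from any laboratory.

```python
import statistics

# Invented replicate measurements of one measurand, e.g. an antibiotic
# zone diameter in mm, measured repeatedly under the same conditions.
replicates = [24.1, 23.8, 24.4, 24.0, 23.9, 24.2]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)   # sample standard deviation
cv_percent = 100 * sd / mean        # coefficient of variation, %

# Expanded uncertainty U = k * u, with coverage factor k = 2
# (roughly 95% coverage if the variation is approximately normal).
k = 2
U = k * sd

print(f"mean = {mean:.2f} mm, SD = {sd:.2f}, %CV = {cv_percent:.1f}")
print(f"reported as {mean:.1f} ± {U:.1f} mm (k = {k})")
```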

As recently as four years ago, well-respected experts in the field were still saying, “tell UKAS we don’t do MU in microbiology and cell pathology”. However, we most certainly do.

The most obvious examples of measurands for these disciplines are antibiotic disc measurements, cell counts, colony counts and tissue excision margin measurements, though there are more.

Variation

It becomes clear that obviously very low or very high counts or measures, well away from the cut-off, are not an issue. Around critical levels or decision-making values, however, the “error” or variation must be determined as accurately as possible. Any variation between and within operators must be established and, importantly, assessed for its impact on the results that the laboratory issues.
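A rough sketch of what “between and within operators” means in practice, using invented repeat counts from three hypothetical operators examining the same material:

```python
import statistics

# Invented repeat measurements (e.g. colony counts near a decision
# level) by three operators on the same material — illustrative only.
operators = {
    "A": [98, 102, 100, 97],
    "B": [105, 108, 103, 106],
    "C": [99, 101, 100, 102],
}

# Within-operator variation: each operator's own SD across repeats.
within = {op: statistics.stdev(vals) for op, vals in operators.items()}

# Between-operator variation: SD of the operator means.
means = {op: statistics.mean(vals) for op, vals in operators.items()}
between = statistics.stdev(means.values())

print("within-operator SD:", {op: round(sd, 1) for op, sd in within.items()})
print("operator means:", {op: round(m, 1) for op, m in means.items()})
print(f"between-operator SD: {between:.1f}")
```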

Sophisticated statistical analysis of the data is not necessary, nor, in my opinion, actually desirable. All operators must be able to understand what this MU means for their result output, so that if a result around a cut-off value is achieved, they understand the impact and the required action.

I have no doubt that some reading this will not believe that tests give results in the critical range, where a sample that could be positive on one “run” could be considered negative on the next, because the manufacturer wouldn’t have set the test up to do that. I can assure them it does happen. In a PCR test with a CT value cut-off of 38, above which the result is deemed negative, it is entirely possible that a value of 37 might be achieved and, if run again, a value of 39. So, what is the “right” answer?
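One way a laboratory might encode that understanding is sketched below, with an assumed ±1 CT uncertainty band around the cut-off. The band itself must come from the laboratory’s own verification and MU data, not from this example.

```python
# Assumed ±1 CT uncertainty band around the cut-off of 38 described in
# the text — both 37 and 39 then fall in the equivocal zone.
CUTOFF = 38.0
MU_BAND = 1.0  # must be established locally from verification/MU data

def interpret(ct: float) -> str:
    if ct < CUTOFF - MU_BAND:
        return "positive"
    if ct > CUTOFF + MU_BAND:
        return "negative"
    return "equivocal: apply the local policy (e.g. repeat or refer)"

for ct in (35.0, 37.0, 39.0, 40.0):
    print(f"CT {ct:.0f}: {interpret(ct)}")
```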

These are the areas in which values are important to reporting clinicians, to permit appropriate interpretations, so an understanding of MU is vital when reporting such results. If a platform is set to interpret a result as positive or negative at a discrete figure, the team that understands the test must, with the clinicians, perform an impact assessment and establish policies for managing such results. A record of such a discussion, based on verification data, will provide evidence that measurement uncertainty has been established and its impact assessed and considered in relation to critical values when reporting.

Competence

Providing statistical data that operators do not understand can lull a laboratory into a false sense of security. If a k factor or %CV is used to express MU, do all the operators in that laboratory, including clinical reporting staff, know how to apply that figure to the results? If not, the exercise is pointless. To demonstrate competence, a question on MU must be included in their training record, thereby closing the loop of knowledge, understanding and application. This is what a Technical Assessor means when they ask if MU has been determined, applied and impact assessed.
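By way of illustration, “applying that figure” can be as simple as turning a stored %CV into an interval around a result and checking whether the interval crosses a decision value. All numbers here are invented for the example.

```python
# Turn a stored %CV into expanded limits around a reported value, then
# check whether those limits cross a decision value (invented figures).
def limits_from_cv(result: float, cv_percent: float, k: float = 2.0):
    u = result * cv_percent / 100          # standard uncertainty from %CV
    return result - k * u, result + k * u  # expanded interval, coverage k

lo, hi = limits_from_cv(result=4.8, cv_percent=5.0)
DECISION_VALUE = 5.0
print(f"4.8 lies in [{lo:.2f}, {hi:.2f}]")
if lo <= DECISION_VALUE <= hi:
    print("interval crosses the decision value: interpret with caution")
```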
