Lessons Learned from Interlaboratory Method Validation Studies: The Good, the Bad, and the Ugly

Collaborative Efforts to Improve Environmental Monitoring
Oral Presentation

Prepared by H. McCarty, K. Roberts, Y. Chambers-Velarde
General Dynamics Information Technology, 6361 Walker Lane, Suite 300, Alexandria, VA, 22310, United States


Contact Information: harry.mccarty@gdit.com; 703-254-0093


ABSTRACT

Various U.S. laws designed to protect human health and the environment require that routine monitoring be conducted by utilities, industry, private entities, and regulatory authorities. Depending on the regulation, environmental monitoring may require the use of specific analytical methods, and various EPA programs may “approve” or recommend those methods based on data that validate their performance in the matrices of interest. Such data may be generated by EPA itself, by voluntary consensus standards bodies, or by the private companies that developed the methods. While the method validation process and its goals may differ depending on the organization involved, the most widely used approach involves multiple laboratories performing the method on the same samples (i.e., a multi-laboratory validation study).

As with most everything else in life, “Murphy’s Law” applies to validation studies, even despite careful planning and implementation. Although studies involving very complicated methods and multiple matrices offer more opportunities for problems, even studies of relatively simple procedures can go awry. We provide examples drawn from over 40 years of method validation support to the EPA Office of Water, including logistical issues such as lost shipments, vendors substituting reagents or materials, laboratories not following instructions, data processing problems, and more, in the hope that others can learn from both our successes and our failures.