Nondiagnostic clinical assays quantify drug mechanisms and responses in human clinical trials, including predictive biomarkers of safety and efficacy, as well as mechanistic markers of pharmacodynamics and toxicodynamics. These biospecimen assays support key decisions during drug development.

The integrity of these assays is critical to advancing the most promising experimental therapies and making relevant measurements that inform drug development decisions. The methods used to obtain specimens for testing are equally critical to ensuring assay integrity and suitability for the intended measurements.

Throughout the entire process, from specimen collection to data reporting, the use of validated methodologies and reference standards is essential to achieve valid assay results and guide successful drug development directions. Without these key components, evidence of genuine drug activity may be missed, clinical development of both successful and unsuccessful treatments will be prolonged, and the treatment of clinical trial patients will be suboptimal.

Teams from across the Frederick National Laboratory have contributed to standardization efforts in Metrology of Drug Development (Biospecimen) assays, including:

  • In Vitro Evaluation and Molecular Pharmacology Laboratories
  • Molecular Characterization Laboratory
  • Cancer Pharmacodynamics-Biomarkers Program

 


 

1. Key reagent collection 

Key reagents are components that are critical for the performance, robustness and reliability of an analytical method, such as reference standards, proteins, antibodies, labeled analytes, detector reagents and matrices.  

Key reagents might be used for specimen collection, shipment, processing, and the analytical assay procedure, but they must be predefined and used consistently. Key reagents, especially those used in the assay itself, must be characterized (e.g., identity, purity and stability) and well documented, including the record of receipt, use, storage, certificate of analysis, lot or batch number, manufacturer, and manufacturing and expiration dates.

Reagents must never be used beyond their expiration date. Assay revalidation is required when key reagents change, including the use of a new lot or batch.
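The documentation and expiry rules above can be sketched as a minimal record type. This is only an illustration; the field names, reagent, and supplier below are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class KeyReagent:
    """Minimal record for a key reagent (illustrative field names)."""
    name: str
    manufacturer: str
    lot_number: str
    date_received: date
    expiration_date: date
    coa_on_file: bool  # certificate of analysis received and filed

    def usable_on(self, day: date) -> bool:
        """A reagent is usable only with a CoA on file and before expiry."""
        return self.coa_on_file and day <= self.expiration_date

antibody = KeyReagent(
    name="anti-pMET capture antibody",     # hypothetical reagent
    manufacturer="Example Biologics",      # hypothetical supplier
    lot_number="LOT-2024-0117",
    date_received=date(2024, 1, 17),
    expiration_date=date(2025, 1, 17),
    coa_on_file=True,
)
print(antibody.usable_on(date(2024, 6, 1)))   # True
print(antibody.usable_on(date(2025, 2, 1)))   # False (expired)
```

In practice such records live in a laboratory inventory system; the point is that receipt, lot, CoA and expiry travel together with the reagent and gate its use.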

Because key reagents have a direct impact on assay results, it is important to consider their long-term supply and availability. Supply chain issues with commercial reagent sources should be anticipated when assays support clinical trials over extended timeframes.

Reference and guidance documents from external sources

 


2. Key reagent qualification 

To maintain consistency in assay performance over time, stringent incoming testing for key reagents is required. Because research and development reagents are produced to minimum criteria rather than manufactured to specifications, reagent characterization is critical to minimize lot-to-lot variation with research grade reagents and to identify mislabeled or degraded reagents.  

Reagent qualification is critical to distinguishing a supplier’s acceptable lots from unacceptable ones so that the latter can be rejected or discarded. Examples of reagent characteristics that can vary from lot to lot include identity, purity, stability, extent of denaturation, specific activity, and the specificity and selectivity of antibodies. Each new lot of a key reagent must be qualified for the expected assay performance prior to its use with a validated assay procedure.
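As an illustration (not a prescribed procedure), qualification of a new lot against a reference lot might be expressed as a simple recovery check. The 80-120% acceptance window and the signal values below are assumptions for the sketch:

```python
from statistics import mean

def qualify_new_lot(reference_signals, new_lot_signals,
                    low=0.80, high=1.20):
    """Accept a new reagent lot only if its mean assay signal recovers
    80-120% of the reference lot's mean (illustrative acceptance window).
    Returns (accepted, recovery_ratio)."""
    recovery = mean(new_lot_signals) / mean(reference_signals)
    return low <= recovery <= high, recovery

reference = [1.02, 0.98, 1.05, 0.95]   # hypothetical control signals, current lot
good_lot  = [0.97, 1.01, 1.04, 0.99]   # candidate lot, comparable performance
bad_lot   = [0.55, 0.60, 0.58, 0.62]   # candidate lot, e.g., degraded antibody

print(qualify_new_lot(reference, good_lot))  # accepted
print(qualify_new_lot(reference, bad_lot))   # rejected
```

Real qualification protocols typically test more characteristics than a single signal recovery (identity, purity, specificity), but the accept/reject structure is the same.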

 


3. Specimen collection 

To ensure the validity of any measurement, procedures for obtaining patient specimens must be optimized and proven fit for purpose. This includes clinical collection, handling, processing and storage methods that are compatible with clinical practices.  

Sources of clinical specimens include needle biopsies, blood draws (e.g., into Streck cell-stabilization tubes), buccal swabs, sputum, urine and feces.

It is important to note that tumor biopsies for biomarker analysis are quite different from tumor biopsies for disease diagnosis. For example, the collection of multiple timepoints after drug administration requires two to three biopsies from the same organ site at optimal collection times. Parallel processing of these specimens is required to minimize preanalytical variability.

Biopsies for disease diagnosis do not require multiple timepoints. Also, pharmacodynamic biopsies require enough viable tumor cells in multiple biopsies to adequately represent the drug-induced changes, whereas the tumor content required for standard diagnostic biopsies is typically low.  

To assess biomarkers, pharmacodynamic biopsies must enable quantitative measurements of analytes (often labile proteins) restricted to tumor cell populations across multiple biopsy timepoints, providing an accurate assessment of drug response. In contrast, positive staining of one or more stable biomarkers is often sufficient to confirm a diagnosis from a standard diagnostic biopsy.

Therefore, the suitability of pharmacodynamic biopsies for an assay is critical and recognized as a challenge for successful biomarker studies.  

Each assay will have certain requirements and not all specimen collection methods are suitable for a given assay.  

For example, rapid preservation at the point of care is essential for preserving labile biomarkers, such as the phosphorylation of tyrosine, serine and threonine. Under conditions of cold ischemia, the half-life of pY1234/1235-MET is only three minutes. Valid pMET assay results require flash freezing of core needle biopsies within two minutes of collection (Srivastava et al., 2016; PMID 27001313). In contrast, DNA sequencing has been performed successfully on specimens as old as mummies.

Therefore, understanding the stability of an analyte during specimen collection and processing is critical for the downstream success of an assay. 
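The impact of such a short half-life can be seen with simple first-order decay arithmetic; the numbers below assume the three-minute pMET half-life cited above:

```python
def fraction_remaining(minutes: float, half_life_min: float = 3.0) -> float:
    """First-order decay: fraction of the original signal left after a
    cold-ischemia delay, given the analyte's half-life."""
    return 0.5 ** (minutes / half_life_min)

# With a 3-minute half-life, even short delays cost measurable signal:
print(round(fraction_remaining(2), 2))    # ~0.63 left at the 2-minute freezing limit
print(round(fraction_remaining(10), 3))   # ~0.099 left after a 10-minute delay
```

This is why the two-minute flash-freezing window matters: a ten-minute delay would leave only about a tenth of the phosphoprotein signal, overwhelming any drug-induced change.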

Standard operating procedures

Publications regarding differences in tumor biopsy for biomarker studies and for diagnosis

Additional resources

  • The National Cancer Institute Biospecimen Evidence-Based Practices series is an expanding collection of procedural guidelines developed using and annotated with evidence from primary research publications in the field of human biospecimen science. 
  • Assay portal: The Clinical Proteomic Tumor Analysis Consortium (CPTAC) Assay Portal serves as a centralized public repository of "fit-for-purpose," multiplexed quantitative mass spectrometry-based proteomic targeted assays. 

 


4. Specimen processing 

The shipment, storage and processing of specimens is an integral part of the overall assay system. It is critical to consider all required measurements from a specimen prior to its collection and processing. Some collection and processing steps will be incompatible with certain assays.  

For example: 

  • Blood collected in heparin can affect enzymatic reactions, such as inhibition of the DNA polymerases used in polymerase chain reactions.  
  • If blood is collected in EDTA tubes and too much time passes before processing to plasma, blood cells will lyse and release unwanted cellular analytes into the plasma.  
  • The released genomic DNA dilutes the fragmented cell-free DNA (cfDNA) already present in plasma, lowering the relative concentration of circulating tumor DNA (ctDNA) for sequencing assays. 
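The dilution effect in the last bullet can be illustrated with back-of-the-envelope arithmetic; the DNA masses and tumor fraction below are hypothetical:

```python
def ctdna_fraction_after_contamination(cfdna_ng: float,
                                       tumor_fraction: float,
                                       gdna_ng: float) -> float:
    """Genomic DNA from lysed cells dilutes the tumor-derived fraction:
    the ctDNA mass is unchanged, but the total DNA pool grows."""
    ctdna_ng = cfdna_ng * tumor_fraction
    return ctdna_ng / (cfdna_ng + gdna_ng)

# Hypothetical numbers: 10 ng cfDNA at 1% tumor fraction, then 10 ng of
# contaminating genomic DNA released by delayed processing.
print(round(ctdna_fraction_after_contamination(10, 0.01, 0), 6))    # 0.01 (1%)
print(round(ctdna_fraction_after_contamination(10, 0.01, 10), 6))   # 0.005 (0.5%)
```

Halving an already-low tumor fraction in this way can push variants below an assay’s limit of detection, which is why processing delays matter for ctDNA work.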

Different extraction methods are geared toward different specimen types and sometimes different specimen sizes. For cfDNA, for example, extraction methods that preferentially recover smaller nucleic acid fragments over contaminating genomic DNA are preferred.

When possible, it is important to define quality acceptance criteria for incoming specimens and to match those criteria to the assay technology. An example is the DV200 value, the percentage of RNA fragments longer than 200 nucleotides; the higher the DV200 value, the more intact the RNA. Some assays might require longer fragment sizes than others.

There is no universal cut point for all assays, but if the fragment size is too low, the quality of results might be affected. Applying an assay intended for research might permit assessment of specimens that do not meet quality criteria; however, such specimens should, at a minimum, be flagged and the overall results interpreted accordingly.
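As a rough sketch of the metric (instruments actually derive DV200 from the electropherogram area, not from an explicit fragment list), DV200 can be approximated as the mass fraction of RNA in fragments longer than 200 nt; the fragment lists below are made up:

```python
def dv200(fragment_lengths_nt, cutoff_nt=200):
    """Approximate DV200: percentage of total RNA mass (approximated here
    as summed fragment length) in fragments longer than 200 nucleotides."""
    total = sum(fragment_lengths_nt)
    above = sum(n for n in fragment_lengths_nt if n > cutoff_nt)
    return 100.0 * above / total

intact   = [1800, 1500, 900, 400, 150]      # hypothetical sizes, well-preserved RNA
degraded = [250, 180, 120, 100, 90, 60]     # hypothetical sizes, degraded RNA

print(round(dv200(intact), 1))     # high DV200: most mass in long fragments
print(round(dv200(degraded), 1))   # low DV200: mostly short fragments
```

An assay’s acceptance criterion would then be a threshold on this value (e.g., some sequencing workflows ask for DV200 above a defined cut point), chosen to match the fragment lengths the technology needs.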

Standard operating procedures 

Additional resources 

 


5. Assay quantitation and quality check 

The National Institutes of Health Assay Guidance Manual describes an assay as an analytical measurement procedure defined by a set of reagents and reaction conditions that produces a detectable signal, allowing a biological process to be quantified using appropriate data analysis procedures.

It is critical that assays implemented at mid-to-late-preclinical and clinical stages be robust, which indicates reliability during normal usage. The U.S. Food and Drug Administration defines robustness as a measure of an analytical procedure’s capacity to remain unaffected by small, but deliberate variations in method parameters. 

Throughout the entire assay system, from specimen collection through data analysis, there are many potential sources of variation that can affect an assay’s robustness. Each assay will have specific specimen and specimen-processing requirements that must be well defined before applying the assay. If these upfront criteria are not well understood and defined, the assay risks producing irreproducible results.

Assay controls, calibrators and reference materials are critical for monitoring assay performance. Assay controls run with each batch of specimens ensure that the assay and reagents performed within the expected range. Calibrators are critical to quantitative assays and provide a known measuring stick for test specimen comparison.

Reference materials can be considered specimens with a known result that can be used to test assay performance. The validation of key reagents, including controls, calibrators and reference materials, is critical prior to their implementation in an assay.  
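One common way to express an in-range check for a batch control, shown here only as an illustration with made-up control history, is a Levey-Jennings-style mean ± 3 SD rule (the Westgard 1-3s rule):

```python
from statistics import mean, stdev

def control_in_range(observed, historical_values, n_sd=3.0):
    """Flag the run if the batch control falls outside
    mean +/- n_sd standard deviations of its historical results."""
    m, s = mean(historical_values), stdev(historical_values)
    return (m - n_sd * s) <= observed <= (m + n_sd * s)

history = [100, 98, 103, 101, 97, 102, 99, 100]  # hypothetical control results

print(control_in_range(101, history))   # True: run accepted
print(control_in_range(120, history))   # False: investigate before reporting
```

Real QC schemes usually layer several such rules (trends, consecutive shifts) on top of the single-point check, but the principle is the same: every batch carries a control whose result must fall within its expected range.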

Reagents can introduce variability into assays due to differences among lots and batches. Other sources of potential variability include supplies, operator(s), analysis methods and instrumentation. 

Routine calibration of analytical instrumentation including balances, plate readers, liquid handlers and pipettes is required to support assay operations.  

During assay development, it is critical to identify and mitigate all sources of assay artifacts and interferences, which might or might not be related to the detection technology. Finally, the type and validity of data analysis method(s) should be confirmed as the assay systems are designed and validated.

The benefits of multiplexing

The ability to make multiple independent measurements from a specimen, or multiplexing, provides a number of advantages. For example, in an immunofluorescence assay, multiplexing can preserve valuable specimens by maximizing drug mechanism information per slide. Also, it provides additional channels for tissue segmentation and/or phenotypic markers for image analysis.  

Multiplexing can also improve the likelihood of finding a significant pharmacodynamic signal in the sampling window. Moreover, the observation of corroborating biomarker responses can increase confidence in the interpretation of results. 

Validation of analytical procedures

Validation procedures are implemented at different stages to verify that an assay is suitable for the intended analytical measurement, including its sensitivity, specificity, reproducibility and dynamic range. Detailed standard operating procedures must be developed from validated assay procedures and carefully followed to ensure robust and reliable results.  

Pre-study validation confirms that an assay is acceptable for its intended purpose after the assay performance has been evaluated using appropriate controls and the finalized methods for specimen collection, specimen processing, detection, quantitation, and analysis.  

In-study validation is used to verify that an assay remains acceptable during its routine use. For example, periodic blinded testing of materials by laboratory staff, known as proficiency testing, helps ensure the assay is behaving as expected.

Cross-study validation is needed if an assay is transferred to another location or if procedural changes are made to an assay, including changes in reagents, instrumentation, or personnel. If the assay fails to validate at any of these steps, it requires re-optimization at an earlier step in the process. 

Fitness-for-purpose studies

While different types of validation procedures confirm the ability of an assay to reliably make an analytical measurement, fitness-for-purpose studies are required to prove clinical readiness.  

Fitness-for-purpose studies should be performed in model systems that simulate the first-in-human clinical trial as closely as possible. Critical considerations for these studies include the relevant preclinical model(s), therapeutic regimen, tumor sampling procedure, and specimens.  

The goal is to demonstrate a significant drug effect within the known biological and technical variability as well as to establish a timeframe of significant pharmacodynamic effect. The clinical trial logistics must allow for tumor sampling during the relevant timeframe. 

Publications showing fit-for-purpose studies of pharmacodynamic biomarkers

  • Isoform- and Phosphorylation-specific Multiplexed Quantitative Pharmacodynamics of Drugs Targeting PI3K and MAPK Signaling in Xenograft Models and Clinical Biopsies 
  • Evaluation of Pharmacodynamic Responses to Cancer Therapeutic Agents Using DNA Damage Markers 
  • Clinical Evolution of Epithelial-Mesenchymal Transition in Human Carcinomas 
  • NCI Comparative Oncology Program Testing of Non-Camptothecin Indenoisoquinoline Topoisomerase I Inhibitors in Naturally Occurring Canine Lymphoma 
  • Development of a quantitative pharmacodynamic assay for apoptosis in fixed-tumor tissue and its application in distinguishing cytotoxic drug-induced DNA double strand breaks from DNA double strand breaks associated with apoptosis 
  • Molecular Pharmacodynamics-Guided Scheduling of Biologically Effective Doses: A Drug Development Paradigm Applied to MET Tyrosine Kinase Inhibitors 
  • Pharmacodynamic Response of the MET/HGF Receptor to Small-Molecule Tyrosine Kinase Inhibitors Examined with Validated, Fit-for-Clinic Immunoassays 
  • Modeling pharmacodynamic response to the poly(ADP-Ribose) polymerase inhibitor ABT-888 in human peripheral blood mononuclear cells 
  • Development of a validated immunofluorescence assay for γH2AX as a pharmacodynamic marker of topoisomerase I inhibitor activity 

Reference and guidance documents from external sources

  • Methods, Method Verification and Validation by the U.S. Food and Drug Administration Office of Regulatory Affairs 
    • This document provides basic requirements to Office of Regulatory Science laboratories for the development, validation, and verification of method performance specifications for new methods, modified methods or procedures previously validated externally. 
  • Bioanalytical Method Validation Guidance for Industry by the U.S. Food and Drug Administration Center for Drug Evaluation and Research and Center for Veterinary Medicine. 
    • This final guidance incorporates public comments to the revised draft published in 2013 as well as the latest scientific feedback concerning bioanalytical method validation and provides the most up-to-date information needed by drug developers to ensure the bioanalytical quality of their data. 
  • Assay Guidance Manual by the National Institutes of Health 
    • This is freely available from the National Library of Medicine, NCBI Bookshelf. With more than 50 chapters, this eBook covers a range of topics critical for the development, implementation, and interpretation of in vitro and in vivo assays with an emphasis on assays for preclinical drug discovery. 

Additional resources

 


6. Data analysis, interpretation, and reporting 

Although the ultimate goal of a given assay might be to measure a biomarker of safety or drug efficacy, multiple analyses of different data types are critical for interpreting the measurement of primary interest. Examples include:  

  • quality checks on the specimen, assay reagents, and reference standards 
  • measurements of assay and instrument calibration 
  • overall assay performance.  

Information derived from quantitative “fit-for-purpose” bioassays can include concentration-response curves, half-maximal effect values (IC50, EC50, LD50, etc.), lower and upper limits of quantitation, limits of agreement, area under the curve, and statistical parameters. 
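Concentration-response curves of this kind are commonly modeled with a four-parameter logistic (4PL) function; the sketch below evaluates the model with illustrative parameters, not fitted trial data:

```python
def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic model for concentration-response data.
    At conc == ec50 the response is halfway between bottom and top."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Illustrative parameters: bottom=0, top=100, EC50=50 nM, Hill slope=1.
print(four_pl(50.0, 0.0, 100.0, 50.0, 1.0))   # 50.0: half-maximal response at EC50
print(four_pl(5.0, 0.0, 100.0, 50.0, 1.0))    # low response well below EC50
```

In practice the four parameters are fitted to the observed signal by nonlinear regression, and the quality of that fit (and whether a 4PL is even the right model) is part of the data analysis validation discussed below.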

In all cases it is important to select appropriate data analysis models to be employed as the assay systems are designed, validated, and implemented. The data analysis models and their underlying assumptions play a critical role in interpreting the raw data.  

Considerations in the selection of an appropriate data model will include characteristics of the specimen, controls and reference standards, the assay technology, and reproducibility of the measurement(s). 

Accurate recording and reporting of the implemented analytical procedures, including data analysis models, filters applied, and software versions, is essential for the reproducibility and replicability of experimental results.  

Reference and guidance documents from external sources