Evaluation of a Clinical Method & Calibration Curves

Summary

This document provides an overview of clinical chemistry, focusing on the evaluation of clinical methods and calibration curves. It covers topics like biochemical analysis, qualitative and quantitative analyses, and criteria for selecting analytical methods. The content also details the importance of accuracy, precision, and validation procedures.

Full Transcript


What is Clinical Chemistry?

Clinical chemistry is the biochemical analysis of body fluids in support of the diagnosis and treatment of disease. Testing in this specialty utilises chemical reactions to identify or quantify levels of chemical compounds in bodily fluids.

Evaluations of a Clinical Method & Calibration Curves
MLS2001: Clinical Chemistry I, L Grech PhD, 2024

Background & Objectives

- Biochemical analysis and the criteria involved in selecting an analytical method
- Accuracy and precision
- Bias and measurement error
- Validation procedures
- Calibration
- Preparing a calibration curve and types of calibration curves
- Serial dilutions
- Clinical evaluation and validation

Biochemical Analysis

Definition: the characterisation of biological components in a sample using laboratory techniques. What types of samples?

Qualitative analysis determines whether a biomolecule is present or absent in a sample, i.e. it offers a binary outcome, typically positive or negative, e.g. testing blood for a particular drug or for the presence of ...

Quantitative analysis determines the quantity/concentration of a particular biological molecule in a sample, i.e. the amount or concentration, e.g. pH, Hb, or glucose concentration in blood.

Criteria for selecting an analytical method

1. Number of samples to be analysed
2. Cost of the test and availability of equipment
3. Ease and convenience
4. Duration of analysis (turnaround time)
5. Level of accuracy and precision required
6. Expected concentration range of the analyte in the samples**
7. Sensitivity and detection limit of the technique**
8. Analytical specificity
9. Type and physical form of sample available
10. Likelihood of interfering substances (e.g. lipaemia) and cross-reactivity
11. Operator skills

** Limits of linearity

Limits of Linearity

Linearity is the ability to provide laboratory test results that are directly proportional to the concentration of the analyte (the quantity to be measured) in a test sample: a limited range of values between which results can be regarded as 'accurate'.
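The limits-of-linearity idea can be expressed as a small range check. This is only a sketch: the numeric limits, the analyte range, and the action messages below are hypothetical, chosen for illustration rather than taken from any real assay.

```python
def check_linearity(result, lower=0.1, upper=1.0):
    """Classify a result against a hypothetical linear range (ug/mL)."""
    if result > upper:
        return "above range: repeat on a diluted sample"
    if result < lower:
        return "below range: repeat, or consider a larger sample volume"
    return "within linear range: report"

print(check_linearity(1.6))   # exceeds the upper limit
print(check_linearity(0.5))   # inside the accepted range
```

The exact course of action for out-of-range results is laboratory policy; the strings above only mirror the options discussed in this document.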
If results exceed the upper limit of linearity, it may be necessary to perform the test on a diluted sample to obtain an accurate value. If results are below the accepted value, repeat the test, report the result, inform the physician, occasionally repeat on larger volumes of sample, or analyse other analytes. Thus, a course of action is taken for results out of range!

The limit of detection (LOD) is one of the most important terms used for comparing various analytical procedures, techniques, or instruments. It is defined as the lowest concentration of the analyte that can be distinguished with reasonable confidence from the blank or background.

Analytical Specificity & Sensitivity

Analytical specificity: the ability to measure only the analyte in question, e.g. immunoassays (Ag-Ab interactions). These are ideally completely specific, but cross-reactivity can occur or interfering substances might be present, so get a detailed patient history! This helps to report the result with confidence.

Analytical sensitivity:
1. The smallest amount or concentration of an analyte that can be detected
2. The limit of detection for the assay
3. Generation: 1st vs 2nd vs 3rd, etc.

Accuracy & Precision

Accuracy: the ability of a test to produce the TRUE value of an analyte in a sample. When the same analyte is measured 20 times in succession, there will be a distribution of values centring around this true value (reflecting the errors in the measuring process). The 'TRUE' value then becomes the mean of all the measurements.

Precision: the ability of a test to reproduce the SAME result CONSISTENTLY in the same specimen. The spread of the results, or their distribution, reflects the variability of the method used. Variability is usually expressed as the standard deviation (SD).

NOTE (e.g.): a weighing balance with a fault in it (i.e. a bias) could give precise (very repeatable) but inaccurate (untrue) results. Always do some test for precision. What can be done?

Which is accurate and/or precise?

- Accurate but imprecise: depicts random error.
- Inaccurate and imprecise.
- Accurate and precise: this is the IDEAL state, but it can be difficult to achieve due to the many variables involved, e.g. lab staff, the steps involved, etc.
- Precise but not accurate: depicts systematic error.

Random error: test values are scattered around the true mean value but are imprecise, since they are more than 2 SD apart.

Systematic error: the assay has a constant bias against the true value. Results are grouped closely together, within 1 SD of their mean value, but different from the true mean.

Accuracy & Precision

Accuracy can be aided by:
- Use of properly standardised procedures
- Statistically valid comparisons of new methods with established reference methods
- Use of samples of known values (controls)
- Participation in proficiency testing (PT) programmes

Precision can be ensured by:
- Proper inclusion of standards, reference samples, or control solutions
- Statistically valid replicate determinations of a single sample
- Duplicate determinations of sufficient numbers of unknown samples
- Day-to-day and between-run precision, measured by inclusion of control samples

Bias – Systematic Error

Bias, or systematic error, is a form of measurement error that skews the results to one side. Causes include:
- Incorrectly calibrated instruments
- A change in reagent/calibrator lot
- Inadequate storage of reagents/calibrators
- A change in sample/reagent volumes due to pipettor misadjustments or misalignments
- A change in temperature of incubators and reaction blocks
- A change in procedure from one operator to another

To overcome bias:
- Use a carefully standardised procedure
- Try to measure a single variable in several different ways and see if you get the same results
- Work 'blind' when possible (such as the use of codes)

Measurement Error

Examples:
- Carelessness or mistakes in taking readings
- Faulty equipment
- Limits in the accuracy of the equipment
- Errors in preparing solutions or dilutions
- Calibration errors
- Errors due to interfering substances

To overcome measurement error:
- Take repeated readings
- Compare readings of one instrument to another
- Spike and measure recovery
- Keep records of batch numbers and measurements for preparations of solutions, etc.
- Check controls or standards, and construct a standard or calibration curve

In spike and recovery, a known amount of analyte is added (spiked) into the natural test sample matrix. The assay is then run to measure the response (recovery) of the spiked sample matrix compared to an identical spike in the standard diluent.

Validation Procedures

- Use Standard Operating Procedures (SOPs): https://www.ncbi.nlm.nih.gov/books/NBK379132/
- Calibrate assays using certified reference materials containing a known amount of analyte and traceable to a national reference lab
- Quality assurance (QA) and quality control (QC): QA is the system used to verify that the entire analytical process is operating within acceptable limits; QC comprises the mechanisms established to measure non-conforming method performance.

QA and QC are essential for all assays. They give confidence to the lab staff and to users of the service with regard to the precision and accuracy of all tests performed, and allow early detection of poor assay performance, so that proper corrective action can be taken to minimise the risk of patients receiving an incorrect result (anxiety!).
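The accuracy, precision, bias, and spike-and-recovery ideas above reduce to a few one-line statistics. A minimal sketch using only Python's standard library; all measurement values are hypothetical, and the "mean of blank + 3 SD" rule for estimating the LOD is a common convention rather than something stated in these slides.

```python
import statistics

# Hypothetical replicates of a control sample with a known value of 5.0 mmol/L
true_value = 5.0
replicates = [5.4, 5.5, 5.3, 5.5, 5.4, 5.6]

mean = statistics.mean(replicates)   # accuracy: closeness of the mean to the true value
sd = statistics.stdev(replicates)    # precision: spread of the repeated results
bias = mean - true_value             # systematic error: a constant offset
print(f"mean={mean:.2f} SD={sd:.2f} bias={bias:+.2f}")

# Spike and recovery: (spiked result - unspiked result) / amount added, as a %
def percent_recovery(spiked, unspiked, spike_amount):
    return (spiked - unspiked) / spike_amount * 100

print(round(percent_recovery(2.9, 2.0, 1.0), 1))  # near 100% suggests little matrix interference

# LOD rule of thumb (assumption, see lead-in): mean of blanks + 3 * SD of blanks
blanks = [0.012, 0.015, 0.011, 0.014, 0.013]  # hypothetical blank readings
lod = statistics.mean(blanks) + 3 * statistics.stdev(blanks)
print(f"LOD estimate: {lod:.4f}")
```

Here a small SD with a non-zero bias is the "precise but not accurate" case from the figure: repeatable results shifted away from the true value by a systematic error.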
Validation Procedures

Keep records of:
- Batch numbers
- Analysis performance
- Results
- Reagent temperature
- Calibrator temperatures

Being able to quote the performance of the assay at the time of measurement allows clinical labs to confidently defend the results, and also gives confidence in the results to clinicians dealing with patients.

Calibration

Calibration means aligning things to work together, or setting something at a precise point.

Why is it done? It ensures that:
- Measurements are accurate and precise
- Readings from an instrument are consistent with other instruments
- Bias in an instrument's readings is eliminated or reduced over a range, for all continuous values

How is it done? Reference standards with known values are used for selected points covering a range with the instrument in question. A functional relationship is established between the values of the standards and the corresponding measurements: calibration curves!

How to correct the instrument for bias:
1. Select reference standards with known values to cover the range of interest. N.B. standards have a known amount or concentration of the substance being measured.
2. Measure the reference standards using the instrument that needs calibration.
3. Plot the relationship between the response and the amount/concentration as a graph, called the 'calibration curve' or 'standard curve'.
4. Use this curve to determine the amount/concentration of the substance in test samples: all measurements are corrected by the inverse of the calibration curve.

Types of Calibration Curve

- Curve-linear calibration curve
- Sigmoid/S-shaped calibration curve
- Log-linear calibration curve
- Inverse calibration curve

Preparation of a Calibration Curve

1. Decide on an appropriate test method.
2. Select (a) amount or (b) concentration, and an appropriate range and number of standards. The range of standards should cover the levels expected from the test samples.
3. Prepare standards.
NB: poor standard preparation will always lead to inaccurate results. Pay attention to the grade of volumetric flask, and to the standing time and temperature of the solution, since these affect accuracy. Always include a blank or 'zero standard'. Leave to stand if lyophilised! Sometimes standards must be protected from light.
4. Assay the standards and your unknown (test) samples, preferably at the same time, and take replicate readings. NB: for instruments that need warming up, make sure you have let them warm up sufficiently (REM: the bulb).
5. Sometimes you can measure the standards, then the samples, and then, every so many measurements, read the zero and the highest concentration of standard and check that they give the same values.
6. Draw the standard curve or determine the underlying relationship. Determine what type of curve it is, and draw the 'line of best fit' for straight-line graphs. For linear curves you can quote r², which is a measure of 'fit'.
7. Determine the amount or concentration in each unknown sample. Read it from the curve or, better still, use the mathematical relationship, e.g. y = mx + c for linear curves.
8. Correct for dilution or concentration, for example if you diluted your test samples because the reading went above that of the highest standard (never extrapolate the graph: you cannot assume that the curve remains linear). If you assayed 0.2 ml of test sample, you would need to multiply the value by 5 to get the value per 1 ml.
9. Quote your test results to an appropriate number of significant figures. This should reflect the accuracy of the method used, not the size of your calculator's display. Be consistent with the number of significant figures.

When should calibration be carried out?

Preparing Serial Dilutions

Linear dilution series: the concentrations are separated by an equal amount, e.g. 0.0, 0.2, 0.4, 0.6, 0.8, 1.0 µg/ml.
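The curve-fitting and dilution arithmetic described above can be sketched end to end with the standard library alone. Every concentration, reading, and volume below is hypothetical, chosen only to make the calculation concrete.

```python
# Hypothetical standards (ug/mL) and instrument readings (e.g. absorbance)
concs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
reads = [0.01, 0.12, 0.21, 0.33, 0.42, 0.50]

# Least-squares line of best fit, y = m*x + c
n = len(concs)
mx, my = sum(concs) / n, sum(reads) / n
cov = sum((x - mx) * (y - my) for x, y in zip(concs, reads))
var_x = sum((x - mx) ** 2 for x in concs)
m = cov / var_x
c = my - m * mx

# r^2, the measure of fit quoted for linear curves
var_y = sum((y - my) ** 2 for y in reads)
r2 = cov ** 2 / (var_x * var_y)

# Read an unknown from the curve by inverting y = m*x + c,
# then correct for dilution (0.2 ml assayed -> multiply by 5 per 1 ml)
unknown_reading = 0.27
conc = (unknown_reading - c) / m * 5
print(f"m={m:.3f} c={c:.3f} r2={r2:.4f} unknown={conc:.2f} ug/mL")

# C1*V1 = C2*V2: stock volume needed for each standard in the linear series
stock_conc, final_vol = 10.0, 10.0  # C1 in ug/mL, V2 in mL (hypothetical)
for target in concs:                # the C2 values
    v1 = target * final_vol / stock_conc
    print(f"{target:.1f} ug/mL: {v1:.2f} mL stock + {final_vol - v1:.2f} mL diluent")
```

Note that the inversion is only valid within the linear range: as the text says, never extrapolate beyond the highest standard; a reading above it calls for dilution and re-assay, not extension of the line.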
Prepare a stock solution and use C1V1 = C2V2. The Tocris dilution calculator is a useful tool which allows you to calculate how to dilute a stock solution of known concentration, where:
- V1 = volume (mL) of standard solution
- V2 = volume of diluted solution (FIND!)
- C1 = starting concentration
- C2 = concentration of diluted solution

Clinical Evaluation & Validation

Clinical sensitivity describes the ability of an assay to detect patients with a particular disease process.

Clinical specificity describes the ability of an assay to detect only that disease process.

Clinical validation examines the probability that an analytically correct result is really possible for that patient. A result is auto-validated by:
- Delta check: examines any value obtained for an analyte against the previous result
- Range check: determines whether the result is physiologically possible
- Reference range: the range within which 95% of the healthy population fall

Literature

https://www.jove.com/v/10188/calibration-curves-principles-and-applications
https://sciencestruck.com/calibration-curve
Bishop ML, Fody EP, Schoeff LE (2010). Clinical Chemistry: Techniques, Principles, Correlations. 6th Edition.
