The Modern Clinical Chemistry Laboratory Uses High Automation
Summary
This document provides an overview of automation in modern clinical chemistry laboratories. It discusses the principles and components of automated chemistry and immunoassay analyzers, highlighting the history and evolution of these technologies. The document also examines the advantages of automation, such as increased efficiency, reduced variation in results, and decreased potential errors.
The modern clinical chemistry laboratory uses a high degree of automation. Clinical laboratory automation uses robotics and instrumentation to perform tasks that were traditionally performed manually by humans. Many steps in the total testing process can now be performed automatically, freeing laboratorians to focus on the manual and technical processes that still require them and increasing both efficiency and capacity. The total testing process can be divided into three major phases: pre-analytic, analytic, and post-analytic. These phases correspond to sample processing/preparation, analyte measurement, and data management/sample storage, respectively. Substantial innovation and improvements have occurred in all three areas during the past decade. This chapter focuses on the foundation of automated clinical laboratories, beginning with the principles and components of automated chemistry and immunoassay analyzers. Considerations specific to automated immunoassay methods are discussed. Automated chemistry and immunoassay analyzers allow testing to occur with minimal operator intervention and can be stand-alone instruments or integrated into modules. Steps in the pre-analytic phase can be automated by either stand-alone systems or systems that connect to the analytical modules with tracks that transport the specimens. Post-analytic refrigerated storage and retrieval systems can also be integrated with pre-analytic and analytic automation. These comprehensive automated systems are referred to as total laboratory automation (TLA) systems. The chapter will provide an overview of TLA and its functionality.

History and Evolution of Automated Analyzers

Following the introduction of the first automated analyzer by Technicon in 1957, automated instruments proliferated from many manufacturers.[1] This first "AutoAnalyzer" (AA) was a continuous flow, single-channel, sequential batch analyzer capable of providing a single test result on approximately 40 samples per hour.
In continuous flow, liquids (reagents, diluents, and samples) are pumped through a system of continuous tubing. Samples are introduced in a sequential manner, following each other through the same network and reaction path. A series of air bubbles at regular intervals serve to both separate samples and clean the tubing. Continuous flow also suits laboratories that need to run many samples requiring the same procedure. The more sophisticated continuous flow analyzers used parallel single channels to run multiple tests on each sample. The major drawbacks that contributed to the eventual demise of traditional continuous flow analyzers (i.e., the AA) in the marketplace were significant carryover problems and wasteful use of continuously flowing reagents. Technicon's (now Siemens) answer to these problems was a noncontinuous flow discrete analyzer (the RA1000), using random access fluid to reduce surface tension between samples/reagents and their tubing, thereby reducing carryover. Later, the Chem 1 was developed by Technicon to use Teflon tubing and Teflon oil, virtually eliminating carryover problems. The Chem 1 was a continuous flow analyzer but only remotely comparable to the original continuous flow principle.

CASE STUDY 5.1, PART 1
While Mía was training on the Roche Cobas, the following results were obtained: [results table not reproduced in this transcript]

The next generation of Technicon instruments to be developed was the Simultaneous Multiple Analyzer (SMA) series. SMA-6 and SMA-12 were analyzers with multiple channels (for different tests), working synchronously to produce 6 or 12 test results simultaneously at the rate of 360 or 720 tests per hour. It was not until the mid-1960s that these continuous flow analyzers had any significant competition in the marketplace. In 1970, the first commercial centrifugal analyzer was introduced as a spin-off technology from NASA space research. Dr.
Norman Anderson developed a prototype in 1967 at the Oak Ridge National Laboratory as an alternative to continuous flow technology, which as noted earlier had significant carryover problems and costly reagent waste. He wanted to perform analyses in parallel and also take advantage of advances in computer technology. The second generation of these instruments (1975) was more successful because of miniaturization of computers and advances in the polymer industry for high-grade, optical plastic cuvettes. Centrifugal analysis uses the force generated by centrifugation to transfer and then contain liquids in separate cuvettes for measurement at the perimeter of a spinning rotor. Centrifugal analyzers are capable of running multiple samples, one test at a time, in a batch. Batch analysis is their major advantage because reactions in all cuvettes are read virtually simultaneously, so that it takes the same amount of time to run a full rotor of about 30 samples as it would take to run just a few. Laboratories with a high-volume workload of individual tests for routine batch analysis may use these instruments. Again, each cuvette must be uniformly matched to the others to maintain quality handling of each sample. The Cobas-Bio (Roche Diagnostics), with a xenon flash lamp and longitudinal cuvettes,[2] and the Monarch (Fortress Diagnostics), with a fully integrated walk-away design, were two of the most widely used centrifugal analyzers. Another major development that revolutionized clinical chemistry instrumentation occurred in 1970 with the introduction of the Automatic Clinical Analyzer (ACA, DuPont [now Siemens]). It was the first noncontinuous flow, discrete analyzer, as well as the first instrument to have random access capabilities, whereby stat specimens could be analyzed out of sequence on an as-needed basis. Plastic test packs, positive patient identification, and infrequent calibration were among the unique features of the ACA.
Discrete analysis is the separation of each sample and accompanying reagents in a separate container (i.e., reaction chamber, cuvette, well). Discrete analyzers have the capability of running multiple tests on one sample at a time or multiple samples one test at a time. They are the most popular and versatile analyzers and have almost completely replaced continuous flow and centrifugal analyzers. However, because each sample is in a separate reaction container, uniformity of quality must be maintained in each cuvette so that a particular sample's quality is not affected. The high-volume chemistry and immunoassay analyzers listed in Table 5.1 are examples of contemporary discrete analyzers with random access capabilities.

TABLE 5.1 Summary of Features for Selected High-Volume Chemistry and Immunoassay Analyzers [table not reproduced in this transcript]

Other major milestones were the introduction of thin film analysis technology in 1976 and the production of the Kodak Ektachem (now VITROS) Analyzer (Ortho Clinical Diagnostics) in 1978. This instrument was the first to use microsample volumes and reagents on slides for dry chemistry analysis and to incorporate computer technology extensively into its design and use. This dry slide technology is still in use today on the VITROS analyzer and offers several unique advantages that will be discussed below. Since 1980, several primarily discrete analyzers have been developed that incorporate such characteristics as ion-selective electrodes (ISEs), fiberoptics, polychromatic analysis, continually more sophisticated computer hardware and software for data handling, and larger test menus. The differences among the manufacturers' instruments, operating principles, and technologies are less distinct now than they were in the beginning years of laboratory automation.

Driving Forces and Benefits of Automation

The pace of changes in current routine chemistry analyzers and the introduction of new ones has slowed considerably.
Certainly, analyzers are faster and easier to use as a result of continuous reengineering and electronic refinements. Methods are more precise, sensitive, and specific, although some of the same principles are found in today's instruments as in earlier models. Manufacturers have worked successfully toward automation with "walk-away" capabilities and minimal operator intervention.[3] Manufacturers have also responded to the physicians' desire to bring laboratory testing closer to the patient. The introduction of small, portable, easy-to-operate benchtop analyzers in physician office laboratories, as well as in surgical and critical care units that demand immediate laboratory results, has resulted in a hugely successful domain of point-of-care (POC) analyzers.[4] Another specialty area with a rapidly developing arsenal of analyzers is immunochemistry. Immunologic techniques for assaying drugs, specific proteins, tumor markers, and hormones have evolved to an increased level of automation. Instruments that use techniques such as fluorescence polarization immunoassay, nephelometry, and competitive and noncompetitive immunoassays with chemiluminescent detection have become popular in laboratories. The most recent milestone in chemistry analyzer development has been the combination of chemistry and immunoassay into a single modular analyzer. Modular analyzers combining chemistry and immunoassay capabilities are now available from several vendors that meet the needs of mid- and high-volume laboratories (Figure 5.1).

Figure 5.1 Modular chemistry/immunoassay analyzers. (A) Siemens Dimension Vista 500. (B) Roche Cobas modular analyzers. (C) Abbott ARCHITECT ci8200. (D) Beckman Coulter Synchron Lxi 725. (A) Courtesy of Siemens Medical Solutions USA, Inc.; (B) Photograph courtesy of Roche Diagnostics; (C) ARCHITECT is a trademark of Abbott or its related companies. Reproduced with permission of Abbott, © 2021. All rights reserved; (D) Photograph courtesy of Beckman Coulter, Inc.
Other forces are also driving the market toward more focused automation. Higher volumes of testing and faster turnaround times have resulted in fewer and more centralized core laboratories performing more comprehensive testing.[5] The use of laboratory panels or profiles has declined, with more diagnostically directed individual tests as dictated by recent policy changes from Medicare and Medicaid. Researchers have known for many years that chemistry panels only occasionally lead to new diagnoses in patients who appear healthy.[6] The expectation of quality results with higher accuracy and precision is ever present with the regulatory standards set by the Clinical Laboratory Improvement Amendments (CLIA), The Joint Commission (TJC), the College of American Pathologists (CAP), and others. Intense competition among instrument manufacturers has driven automation into more sophisticated analyzers with creative technologies and unique features. Furthermore, escalating costs have spurred health care reform. There are many advantages to automating procedures. One benefit is an increase in the number of tests performed by one laboratorian in a shift. Labor is an expensive commodity, and staffing shortages in clinical laboratories are not uncommon. Through mechanization, the labor component devoted to any single test is minimized, and this effectively lowers the cost per test. A second benefit is minimizing the variation in results from one laboratorian to another. By standardizing the procedure, the coefficient of variation is lowered, and reproducibility is increased. Accuracy is then not dependent on the skill or workload of a particular operator on a particular day. This allows better comparison of results from day to day and week to week. Automation, however, cannot correct for deficiencies inherent in methodology.
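The effect of standardization on the coefficient of variation described above can be made concrete. The following is a minimal Python sketch; the replicate values are illustrative, not real quality-control data:

```python
import statistics

def cv_percent(results):
    """Coefficient of variation: (standard deviation / mean) * 100."""
    return statistics.stdev(results) / statistics.mean(results) * 100

# Illustrative replicate glucose results (mg/dL) for the same specimen
automated_run = [99.0, 100.0, 101.0, 100.0, 100.0]   # standardized, automated pipetting
manual_run = [92.0, 104.0, 99.0, 108.0, 97.0]        # operator-dependent manual method

# The tighter automated replicates yield the lower CV (better reproducibility)
print(cv_percent(automated_run))  # ~0.7%
print(cv_percent(manual_run))     # ~6.2%
```

Both sets of replicates have the same mean, so only the spread, and therefore the CV, distinguishes the two methods.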
A third advantage is gained because automation eliminates the potential errors of manual analyses such as volumetric pipetting steps, calculation of results, and transcription of results. A fourth advantage accrues because instruments can use very small amounts of samples and reagents. This allows less blood to be drawn from each patient, and the use of small amounts of reagents decreases the cost of consumables. Consumables are components of a test that are "consumed" or used during analysis and then discarded, such as disposable cuvettes, pipette tips, and reagents. In addition, automation facilitates better space utilization through consolidation of analyzers.

Steps in Automated Analysis

The major processes performed by an automated analyzer can be divided into specimen identification and preparation, chemical reaction, and data collection and analysis. An overview of these operations is provided in Table 5.2.

TABLE 5.2 Summary of Chemistry Analyzer Operations

Identification and Preparation
1. Sample identification: The analyzer will scan/read the bar code on the labeled primary specimen tube or an aliquot tube. This information can also be entered manually.
2. Determine test(s) to perform: Upon bar code scanning, test order information is retrieved from the LIS and automatically sent to the analyzer via an interface.

Chemical Reaction
3. Reagent systems and delivery: One or more reagents can be dispensed into the reaction cuvette.
4. Specimen measurement and delivery: A small aliquot of the patient sample is introduced into the reaction cuvette.
5. Chemical reaction phase: The patient sample and reagents are mixed and incubated.
6. Measurement phase: Optical readings may be initiated before or after all reagents have been added.

Data Collection and Analysis
7. Signal processing and data handling: The analyte concentration (result) is estimated from a calibration curve that is stored in the analyzer.
8. Send result(s) to Middleware/LIS: The analyzer sends results for the ordered tests via an interface to the Middleware/LIS and subsequently to the electronic medical record.

Operations generally occur in the order listed from 1 to 8. However, there may be slight variations in the order. Some steps may be deleted or duplicated. Most analyzers have the capability to dilute the sample and repeat the testing process if the analyte concentration exceeds the linear range of the assay. LIS, laboratory information system/software. © Jones & Bartlett Learning.

Each step of automated analysis is explained in this section, and several different applications are discussed. Several instruments have been chosen because they have components that represent either common features used in chemistry instrumentation or a unique method of automating a step in a procedure. None of the representative instruments are completely described; rather, the important components are described in the text as examples.

Specimen Preparation and Identification

Most major automated chemistry and immunoassay analyzers can use the original labeled specimen collection tube (also known as the primary tube) after plasma or serum separation; the test tube itself can be used as the sample cup. Samples may also be assayed in labeled aliquot tubes or sample cups in scenarios where the specimen must be manipulated prior to analysis (e.g., small sample volume, filtration, dilution, concentration). The sample must be properly identified, and its location in the analyzer must be monitored throughout the system. The approach that is commonly used today employs a barcode label affixed to the primary collection or aliquot tube. This barcode label contains patient demographics and also includes specific physician-ordered test requests for that patient sample. Specimen identification is traceable throughout the automated analytic process.
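The path from a scanned barcode to a reported result can be sketched in miniature. This is a hypothetical Python illustration; the test orders, calibration values, and single-point linear calibration are invented for the example and do not represent any vendor's interface:

```python
# Hypothetical lookup tables standing in for the LIS interface and stored calibration
ORDERS = {"SPEC-0042": ["GLU", "BUN"]}                       # barcode -> ordered tests
CALIBRATION = {"GLU": (250.0, 0.002), "BUN": (80.0, 0.010)}  # test -> (slope, reagent blank)

def run_sample(barcode, absorbances):
    """Identify the tube, retrieve its orders, and convert each optical
    reading to a concentration via the stored calibration curve."""
    results = {}
    for test in ORDERS[barcode]:
        slope, blank = CALIBRATION[test]
        results[test] = slope * (absorbances[test] - blank)
    return results  # would then be sent on to middleware/LIS

results = run_sample("SPEC-0042", {"GLU": 0.402, "BUN": 0.210})
print(results)
```

The barcode serves only as a key: all clinical content (which tests to run, which calibration to apply) lives on the instrument and LIS side of the interface.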
Most automated analyzers read the linear 1D barcode, whereas 2D matrix barcodes (such as the QR code) are more commonly used for positive patient identification at the bedside.

Specimen Measurement and Delivery

Most instruments use either circular carousels or rectangular racks to hold primary/aliquot sample tubes or disposable sample cups in the loading or pipetting zone of the analyzer. The slots in the trays or racks are usually numbered to aid in sample identification. The trays or racks move automatically in one-position steps at preselected speeds. The speed determines the number of specimens to be analyzed per hour. As a convenience, the instrument can determine the slot number containing the last sample and terminate the analysis after that sample. The instrument's microprocessor holds the number of samples in memory and aspirates only in positions containing samples. On the VITROS analyzer, sample cup trays are quadrants that hold 10 samples per quadrant in cups with conical bottoms. The four quadrants fit on a tray carrier (Figure 5.2). Although the tray carrier accommodates only 40 samples, more trays of samples can be programmed and then loaded in place of completed trays while tests on other trays are in progress. Roche Cobas analyzers can use five-position racks to hold samples (Figure 5.3). A modular analyzer can accommodate as many as 60 of these racks at one time.

Figure 5.2 VITROS. The four quadrant trays, each holding 10 samples, fit on a tray carrier. Photograph courtesy of Ortho-Clinical Diagnostics.

Figure 5.3 Roche five-position rack. © Wolters Kluwer.

A limitation of contemporary systems is that samples are uncapped; exposure of the sample to air can lead to sample evaporation and errors in analysis, as well as expose the laboratorian to biohazards during the uncapping step. Evaporation of the sample may be significant and may cause the concentration of the constituents being analyzed to rise 50% in 4 hours.[7]
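Because the solute mass is fixed while solvent evaporates, the measured concentration rises in proportion to the volume lost. A short illustrative calculation (the volumes are invented for the example):

```python
def concentration_after_evaporation(initial_conc, initial_vol_uL, evaporated_uL):
    """Solute mass is unchanged, so concentration rises as c0 * V0 / (V0 - Vlost)."""
    return initial_conc * initial_vol_uL / (initial_vol_uL - evaporated_uL)

# Losing one-third of a 300-uL aliquot raises an analyte from 100 to 150 units: a 50% rise
print(concentration_after_evaporation(100.0, 300.0, 100.0))  # 150.0
```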
For instruments measuring electrolytes, the carbon dioxide present in the samples will be lost to the atmosphere, resulting in falsely low carbon dioxide values. Manufacturers have devised a variety of mechanisms to minimize this effect, such as lid covers for trays and pierceable individual caps, including closed-tube sampling from primary collection tubes.[8] Aliquots of the patient sample are aspirated into a probe. When the instrument is in operation, the probe automatically dips into each sample cup and aspirates a portion of the liquid. The sample probe then delivers the sample to a discrete reaction chamber or cuvette on the analyzer. Sampling probes on instruments using specific sampling cups are programmed or adjusted to reach a prescribed depth in those cups to maximize the use of available sample. Those analyzers capable of aspirating sample from primary collection tubes usually have a parallel liquid-level sensing probe that will control entry of the sampling probe to a minimal depth below the surface of the serum, allowing full aliquot aspiration while avoiding clogging of the probe with serum separator gel or clot (Figure 5.4).

Figure 5.4 Dual sample probes of a chemistry analyzer. Note the liquid-level sensor to the left of the probes. Photograph courtesy of Roche Diagnostics.

Certain pipettors use a disposable pipette tip and an air displacement syringe to measure and deliver both the patient sample and necessary reagents. When this design is used, the pipettor may be reprogrammed to measure sample and reagent for batches of different tests comparatively easily. Besides eliminating the effort of priming the reagent delivery system with the new solution, no reagent is wasted or contaminated because nothing but a disposable pipette tip contacts it. The cleaning of the probe and tubing after each dispensing to minimize the carryover of one sample into the next is a concern for many instruments.
In some systems, the reagent or diluent is also dispensed into the cuvette through the same tubing and probe. Deionized water may be dispensed into the cuvette after the sample to produce a specified dilution of the sample and also to rinse the dispensing system. If a separate probe or tip is used for each sample and discarded after use, carryover is not an issue. VITROS has a unique sample dispensing system. A proboscis presses into a tip on the sample tray, picks it up, and moves over the specimen to aspirate the volume required for the tests programmed for that sample. The tip is then moved over to the slide metering block. When a slide is in position to receive an aliquot, the proboscis is lowered so that a dispensed 10-μL drop touches the slide, where it is absorbed from the nonwetting tip. A stepper motor-driven piston controls aspiration and drop formation. The precision of dispensing is specified at ±5%. In several discrete systems, the probe is attached by means of non-wettable tubing to precision syringes. The syringes draw a specified amount of sample into the probe and tubing. Then the probe is positioned over a cuvette, and the sample is dispensed. The Roche/Hitachi chemistry analyzer used two sample probes to simultaneously aspirate a double volume of sample and deliver it into four individual test channels, all in one operational step (Figure 5.5). The loaded probes pass through a fine mist shower bath before delivery to wash off any sample residue adhering to the outer surface of the probes. After delivery, the probes move to a rinse bath station for cleaning the inside and outside surfaces of the probes.

Figure 5.5 Sampling operation of the Hitachi 736 analyzer. Courtesy of Roche Diagnostics.

Many chemistry analyzers use computer-controlled stepping motors to drive both the sampling and washout syringes.
Every few seconds, the sampling probe enters a specimen container, withdraws the required volume, moves to the cuvette, and dispenses the aliquot with a volume of water to wash the probe. The washout volume is adjusted to yield the final reaction volume. If a procedure's range of linearity is exceeded, the system will retrieve the original sample tube, repeat the test using a portion of the original sample volume for the repeat test, and calculate a new result, taking the dilution into consideration. Economy of sample size is a major consideration in developing automated procedures, but methodologies have limitations to maintain proper levels of sensitivity and specificity. The factors governing sample and reagent measurement are interdependent. Generally, if sample size is reduced, then either the size of the reaction cuvette and final reaction volume must be decreased or the reagent concentration must be increased to ensure sufficient color development for accurate photometric readings.

Reagent Systems and Delivery

Reagents may be classified as liquid or dry systems for use with automated analyzers. Liquid reagents may be purchased in bulk volume containers or in unit dose packaging as a convenience for stat testing on some analyzers. Dry reagents are packaged in various forms. They may be bottled as lyophilized powder, which requires reconstitution with water or a buffer. Unless the manufacturer provides the diluent, the quality of the water available in the laboratory is important. A second and unique type of dry reagent is the multilayered dry chemistry slide for the VITROS analyzer (rebranded in 2001 as VITROS MicroSlide technology). These slides have microscopically thin layers of dry reagents mounted on a plastic support. The slides are approximately the size and thickness of a postage stamp. Reagent handling varies according to instrument capabilities and methodologies.
Many test procedures use sensitive, short-lived working reagents, so contemporary analyzers use a variety of techniques to preserve them. One technique is to keep all reagents refrigerated until the moment of need and then quickly preincubate them to reaction temperature, or to store them in a refrigerated compartment on the analyzer that feeds directly to the dispensing area. Another means of preservation is to provide reagents in a dried, tablet form and reconstitute them when the test is to be run. A third is to manufacture the reagent in two stable components that will be combined at the moment of reaction. If this approach is used, the first component also may be used as a diluent for the sample. The various manufacturers often use combinations of these reagent-handling techniques. Reagents also must be dispensed and measured accurately. Many instruments use bulk reagents to decrease the preparation and changing of reagents. Instruments that do not use bulk reagents have unique reagent packaging. To deliver reagents, many discrete analyzers use techniques like those used to measure and deliver the samples. Syringes, driven by a stepping motor, pipette the reagents into reaction containers. Piston-driven pumps, connected by tubing, may also dispense reagents. Another technique for delivering reagents to reaction containers uses pressurized reagent bottles connected by tubing to dispensing valves. The computer controls the opening and closing of the valves. The fill volume of reagent into the reaction container is determined by the precise amount of time the valve remains open. The VITROS analyzers use slides to contain their entire reagent chemistry system. Multiple layers on the slide are backed by a clear polyester support. The coating itself is sandwiched in a plastic mount.
There are three or more layers: (1) a spreading layer, which accepts the sample; (2) one or more central layers, which can alter the aliquot; and (3) an indicator layer, where the analyte of interest may be quantified (Figure 5.6). The number of layers varies depending on the assay to be performed. The color developed in the indicator layer varies with the concentration of the analyte in the sample. Physical or chemical reactions can occur in one layer, with the product of these reactions proceeding to another layer, where subsequent reactions can occur. Each layer may offer a unique environment and the possibility to carry out a reaction comparable to that offered in a chemistry assay, or it may promote an entirely different activity that does not occur in the liquid phase. The ability to create multiple reaction sites allows the possibility of manipulating and detecting compounds in ways not possible in solution chemistries. Interfering materials can be left behind or altered in upper layers.

Figure 5.6 VITROS slide with multiple layers contains the entire reagent chemistry system. Photograph courtesy of Ortho Clinical Diagnostics.

Chemical Reaction Phase

This phase consists of mixing, separation, incubation, and reaction time. In most discrete analyzers, the chemical reactants are held in individual moving containers that are either disposable or reusable. These reaction containers also function as the cuvettes for optical analysis. If the cuvettes are reusable, then wash stations are set up immediately after the read stations to clean and dry these containers (Figure 5.7). This arrangement allows the analyzer to operate continuously without replacing cuvettes. Examples of this approach include the ADVIA Centaur (Siemens), ARCHITECT (Abbott Diagnostics), Cobas (Roche Diagnostics), and UniCel DxC Synchron (Beckman Coulter) analyzers.
Alternatively, the reactants may be placed in a stationary reaction chamber in which a flow-through process of the reaction mixture occurs before and after the optical reading.

Figure 5.7 Wash stations on a chemistry analyzer perform the following: (1) aspirate reaction waste and dispense water, (2) aspirate and dispense rinse water, (3) aspirate rinse water and dispense water for measurement of the cell blank, and (4) aspirate the cell blank water to dryness. Photograph courtesy of Roche Diagnostics.

Mixing

A vital component of each procedure is the adequate mixing of the reagents and sample. Instrument manufacturers go to great lengths to ensure complete mixing. Nonuniform mixtures can result in imprecision in discrete analysis. Most automated wet-chemistry analyzers use stirring paddles that dip into the reaction container for a few seconds to stir sample and reagents, after which they return to a wash reservoir (Figure 5.8). Other instruments use forceful dispensing to accomplish mixing.

Figure 5.8 Stirring paddles on a chemistry analyzer. Photograph courtesy of Roche Diagnostics.

Separation

In chemical reactions, undesirable constituents that will interfere with an analysis may need to be separated from the sample before the other reagents are introduced into the system. Protein causes major interference in many analyses. One approach that avoids separating protein is to use a very high reagent-to-sample ratio (the sample is highly diluted) so that turbidity caused by precipitated protein is not detected by the spectrophotometer. Another approach is to shorten the reaction time to eliminate slower-reacting interferents. In VITROS MicroSlide technology, the spreading layer of the slide not only traps cells, crystals, and other small particulate matter but also retains large molecules, such as protein. In essence, what passes through the spreading layer is a protein-free filtrate.
Most contemporary discrete analyzers have no automated methodology by which to separate interfering substances from the reaction mixture. Therefore, methods have been chosen that have few interferences or that have known interferences that can be compensated for by the instrument (i.e., using correction formulas).

Incubation

A heating bath in discrete analysis systems maintains the required temperature of the reaction mixture and provides the delay necessary to allow complete color development. The principal components of the heating bath are the heat transfer medium (i.e., water or air), the heating element, and the thermoregulator. A thermometer is located in the heating compartment of an analyzer and is monitored by the system's computer. On many discrete analyzer systems, the multicuvettes incubate in a water bath maintained at a constant temperature, usually 37°C. Slide technology incubates colorimetric slides at 37°C. There is a precondition station to bring the temperature of each slide close to 37°C before it enters the incubator. The incubator moves the slides at 12-second intervals in such a manner that each slide is at the incubator exit four times during the 5-minute incubation time. This feature is used for two-point rate methods and enables the first point reading to be taken partway through the incubation time. Potentiometric slides are held at 25°C. The slides are kept at this temperature for 3 minutes to ensure stability before reading.

Reaction Time

Before the optical reading by the spectrophotometer, the reaction time may depend on the rate of transport through the system to the "read" station, timed reagent additions with moving or stationary reaction chambers, or a combination of both processes. An environment conducive to the completion of the reaction must be maintained for a sufficient length of time before spectrophotometric analysis of the product is made. Time is a definite limitation.
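The two-point rate methods mentioned above derive a rate from the change in signal between two timed readings. A minimal sketch (the readings and times are illustrative):

```python
def two_point_rate(abs1, abs2, t1_seconds, t2_seconds):
    """Change in absorbance per minute between the two read points."""
    return (abs2 - abs1) / ((t2_seconds - t1_seconds) / 60.0)

# First reading partway through incubation, second at the end
rate = two_point_rate(0.120, 0.360, 60.0, 300.0)
print(rate)  # 0.24 dA over 4 minutes, i.e. 0.06 dA/min
```

Taking the first reading partway through incubation, as the slide incubator design permits, is what makes the rate calculation possible without extending total analysis time.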
To sustain the advantage of speedy multiple analyses, the instrument must produce results as quickly as possible. It is possible to monitor not only completion of a reaction but also the rate at which the reaction is proceeding. The instrument may delay the measurement for a predetermined time or may present the reaction mixtures for measurement at constant intervals of time. Use of rate reactions may have two advantages: the total analysis time is shortened, and interfering chromogens that react slowly may be negated. Reaction rate is controlled by temperature; therefore, the reagent, timing, and spectrophotometric functions must be coordinated to work in harmony with the chosen temperature. The environment of the cuvettes is maintained at a constant temperature by a liquid bath, containing water or some other fluid with good heat transfer properties, in which the cuvettes move.

Measurement Phase

After the reaction is completed, the formed end products must be quantified. Almost all available systems for measurement have been used, such as ultraviolet, fluorescent, and flame photometry; ion-selective electrodes; gamma counters; and luminometers. Still, the most common are visible and ultraviolet light spectrophotometry, although adaptations of traditional fluorescence measurement, such as fluorescence polarization, chemiluminescence, and bioluminescence, have become popular. Analyzers that measure light require a monochromator to achieve the desired component wavelength. Traditionally, analyzers have used filters or filter wheels to separate light. The old AAs used filters that were manually placed in position in the light path. Many instruments still use rotating filter wheels that are microprocessor controlled so that the appropriate filter is positioned in the light path. However, newer and more sophisticated systems offer the higher resolution afforded by diffraction gratings to achieve light separation into its component colors.
Many instruments now use such monochromators with either a mechanically rotating grating or a fixed grating that spreads its component wavelengths onto a fixed array of photodiodes, as in Roche analyzers (Figure 5.9). This latter grating arrangement, as well as rotating filter wheels, easily accommodates polychromatic light analysis, which offers improved sensitivity and specificity over monochromatic measurement. By recording optical readings at different wavelengths, the instrument's computer can use these data to correct for reaction mixture interferences that may occur at adjacent, as well as desired, wavelengths.

Figure 5.9 Photometer for a chemistry analyzer. A fixed diffraction grating separates light into specific wavelengths and reflects them onto a fixed array of 11 specific photodetectors. The photometer has no moving parts. Courtesy of Roche Diagnostics.

Many newer instruments use fiberoptics as a medium to transport light signals from remote read stations back to a central monochromator/detector box for analysis of these signals. The fiberoptic cables, or "light pipes" as they are sometimes called, run from multiple remote stations, where the reaction mixtures reside, to a centralized monochromator/detector unit that, in conjunction with the computer, sequences and analyzes a large volume of light signals from multiple reactions. The containers holding the reaction mixture also play a vital role in the measurement phase. In most discrete wet-chemistry analyzers, the cuvette used for analysis is also the reaction vessel in which the entire procedure has occurred. The reagent volume and, therefore, sample size, speed of analysis, and sensitivity of measurement are some aspects influenced by the method of analysis. A beam of light is focused through the container holding the reaction mixture. The amount of light that exits from the container is dictated primarily by the absorbance of light by the reaction mixture.
The exiting light strikes a photodetector, which converts the light into electrical energy. Filters and light-focusing components permit the desired light wavelength to reach the photodetector. The photometer continuously senses the sample photodetector output voltage and, as is the process in most analyzers, compares it with a reference output voltage. The electrical impulses are sent to a readout device, such as a printer or computer, for storage and retrieval. Slide technology depends on reflectance spectrophotometry, as opposed to traditional transmittance photometry, to provide a quantitative result. To read the amount of chromogen in the indicator layer, light passes through the indicator layer, is reflected from the bottom of a pigment-containing layer (usually the spreading layer), and returns through the indicator layer to a light detector. For colorimetric determinations, the light source is a tungsten–halogen lamp. The beam focuses on a filter wheel holding up to eight interference filters, which are separated by a dark space. The beam is focused at a 45° angle to the bottom surface of the slide, and a silicon photodiode detects the portion of the beam that is reflected downward. Three readings are taken for the computer to derive reflectance density: (1) the signal with the filter wheel blocking the beam, (2) the reflectance of a reference white surface with the programmed filter in the beam, and (3) the reflectance of the slide with the selected filter in the beam. After a slide is read, it is shuttled back in the direction from which it came, where a trap door allows it to drop into a waste bin. If the reading was the first for a two-point rate test, the trap door remains closed, and the slide reenters the incubator. The principles of automated immunoassays are discussed below, and the similarities between automated chemistry analyzers and automated immunoassay analyzers are worth noting.
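The three slide readings described above (beam blocked, white reference, and slide) can be combined into a reflection density, the reflectance analog of absorbance. The sketch below uses the generic densitometry calculation with made-up detector values; the exact transform applied by commercial slide analyzers is vendor specific:

```python
import math

def reflection_density(dark, white, slide):
    """Generic densitometry calculation from the three recorded signals:
    dark  -- signal with the filter wheel blocking the beam (detector offset)
    white -- reflectance of the reference white surface
    slide -- reflectance of the slide with the selected filter in the beam
    """
    reflectance = (slide - dark) / (white - dark)  # fraction of light reflected
    return -math.log10(reflectance)                # density; analogous to absorbance

# A slide reflecting 10% as much light as the white reference:
print(reflection_density(dark=0.02, white=1.02, slide=0.12))  # ~1.0
```

Subtracting the dark reading cancels the detector offset, and dividing by the white-reference reading normalizes for lamp and filter intensity, so the density depends only on the slide itself.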
There are many fully automated, random-access immunoassay systems, which use chemiluminescence or electrochemiluminescence technology for reaction analysis. In chemiluminescence assays, quantification of an analyte is based on the emission of light resulting from a chemical reaction. 9 The principles of chemiluminescent immunoassays are similar to those of radioimmunoassay and immunoradiometric assay, except that an acridinium ester is used as the tracer and paramagnetic particles are used as the solid phase. Sample, tracer, and paramagnetic particle reagent are added and incubated in disposable plastic cuvettes, depending on the assay protocol. After incubation, magnetic separation and washing of the particles are performed automatically. The cuvettes are then transported into a light-sealed luminometer chamber, where appropriate reagents are added to initiate the chemiluminescent reaction. On injection of the reagents into the sample cuvette, the system luminometer detects the chemiluminescent signal. Luminometers are like gamma counters in that they use a photomultiplier tube detector; however, unlike gamma counters, luminometers do not require a crystal to convert gamma rays to light photons. Light photons from the sample are detected directly, converted to electrical pulses, and then counted.

Signal Processing and Data Handling

Before results can be transmitted, the test system must be accurately calibrated; accurate calibration is essential to obtaining accurate information. Many variables enter into the use of calibration standards, and the matrices of the standards and the unknowns may differ. Depending on the methodology, this may or may not present problems. Primary or secondary standards may be used for calibration purposes. A primary standard is a highly purified chemical of known purity that can be measured directly to produce an exact known concentration. A secondary standard is made from a primary standard.
If secondary standards are used to calibrate an instrument, the methods used to derive the standard's constituent values should be known. Standards containing more than one analyte per vial may cause interference problems. Because no primary standards are available for enzymes, either secondary standards or calibration factors based on the molar extinction coefficients of the products of the reactions may be used. The advantage of a stored calibration on an automated instrument is the long-term stability of the standard curve, which then requires only daily monitoring with controls. Some analyzers use low- and high-concentration standards at the beginning of each run and then use the absorbances of the reactions produced by the standards to construct a standard curve electronically for each run. Other instruments are self-calibrating after analyzing standard solutions. Slide technology requires more sophisticated calculations to produce results. The calibration materials require a protein-based matrix because the calibrators must behave as serum when reacting with the various layers of the slides. Calibrator fluids are bovine serum based, and the concentration of each analyte is determined by reference methods. Endpoint reaction tests and enzymatic methods require three calibrators, while tests requiring a blank need four calibrators. Colorimetric tests use spline fits, a data analysis technique used to interpolate data when there are sudden slope changes in a curve, to produce the standardization. In enzyme analysis, a curve-fitting algorithm estimates the change in reflection density per unit time. This is converted to either an absorbance or a transmission-density change per unit time. Then, a quadratic equation converts the change in transmission density to volume activity (U/L) for each assay. All advanced automated instruments have some method of reporting results with a link to sample identification.
In sophisticated systems, the demographic sample information is entered in the instrument's computer together with the tests required. The sample identification is then printed with the test results. Most automated instruments report results to the laboratory information system (LIS) using a bidirectional interface, meaning that the analyzer can read data from the patient sample barcode and also transmit laboratory results back to the LIS in electronic formats. Most laboratories use barcode labels generated from the LIS to identify samples. Barcode-labeled samples can be loaded directly on the analyzer without the need to enter identifying information, tests, or other information manually. Microprocessors control the tests, reagents, and timing while verifying the barcode for each sample; this is the link between the results reported and the specimen identification. Even the simplest systems sequentially number the test results to provide a connection with the samples. Most modern automated analyzers provide computerized monitoring that flags for the operator such parameters as specimen integrity, assay linearity, quality control data (with various options for statistical display and interpretation), short sample sensing, abnormal patient results, clot detection, reaction vessel or test chamber temperature, and reagent inventories. The monitors can also display patients' results as well as the various data flags previously mentioned to assist the operator in troubleshooting prior to reporting the patient results. Data flags should be investigated prior to releasing patient results into the LIS. Most instrument manufacturers offer computer software for preventive maintenance schedules and algorithms for troubleshooting. Some manufacturers also install phone modems on the analyzer for a direct communication link between the instrument and their service center for immediate troubleshooting and technical service.
CASE STUDY 5.1, PART 2

Remember Mía, who is training on the Roche Cobas?

1. Can Mía verify the patient results? Why or why not?
2. What action should Mía take?

CASE STUDY 5.2, PART 1

Miles is performing a noncompetitive immunoassay for IgG subclasses. The results are below:

IgG (total): 1490 mg/dL
G1: 713 mg/dL
G2: 101 mg/dL
G3: 60.7 mg/dL
G4: 30.7 mg/dL

1. What is the calculated sum of the IgG subclasses?
2. Comparing the calculated sum of the IgG subclasses to the total measured IgG, what action should be taken?

Additional Considerations for Automated Immunoassays

Immunoassays were first developed by Dr. Rosalyn Yalow and colleagues, who in 1959 described a radioimmunoassay (RIA) for the measurement of insulin. 10 Dr. Yalow went on to share the 1977 Nobel Prize in Physiology or Medicine for this discovery. Today, immunoassays are among the essential analytical techniques used in clinical chemistry and can be highly automated. The basis of all immunoassays is the binding of antibody (Ab) to antigen (Ag) for the specific and sensitive detection of an analyte. The design, label, and detection system combine to create many different assays, which enable the measurement of analytes including proteins, hormones, metabolites, therapeutic drugs, and drugs of abuse.

Immunoassay Basics

In an immunoassay, an Ab molecule recognizes and binds to an Ag. Antibodies are immunoglobulin (Ig) molecules with a functional domain known as F(ab) that specifically binds to an antigenic determinant, or epitope, of the antigen (i.e., the site on the antigen). This binding depends on the concentration of each reactant, the specificity of the Ab for the Ag, the affinity and avidity, and the environmental conditions. The degree of binding is an important consideration in any immunoassay. Affinity refers to the strength of binding between a single binding site on the Ab and its epitope.
Under standard conditions, the affinity of an Ab is measured using a hapten (Hp) because the Hp is a low-molecular-weight Ag considered to have only one epitope. The affinity for the Hp is related to the likelihood of binding, or to the degree of complementary nature of each. The reversible reaction is summarized in Equation 5.1:

Hp + Ab ⇌ HpAb (Eq. 5.1)

The binding between an Hp and the Ab obeys the law of mass action and is expressed mathematically in Equation 5.2:

Ka = [HpAb] / ([Hp][Ab]) (Eq. 5.2)

Ka is the affinity or equilibrium constant and represents the reciprocal of the concentration of free Hp when 50% of the binding sites are occupied. The greater the affinity of the Hp for the Ab, the smaller the concentration of Hp needed to saturate 50% of the binding sites of the Ab. For example, if the affinity constant of a monoclonal antibody (Mab, one specific antibody) is 3 × 10^11 L/mol, then an Hp concentration of approximately 3.3 × 10^–12 mol/L (the reciprocal of Ka) is needed to occupy half of the binding sites. Typically, the affinity constant of Abs used in immunoassay procedures ranges from 10^9 to 10^11 L/mol, whereas the affinity constant for transport proteins ranges from 10^7 to 10^8 L/mol and the affinity for receptors ranges from 10^8 to 10^11 L/mol. As with all chemical reactions, the initial concentrations of the reactants and the products affect the extent of immune complex binding. In immunoassays, the reaction moves forward (to the right in Eq. 5.1) when the concentration of reactants (Ag and Ab) exceeds the concentration of the product (Ag–Ab complex) and when there is a favorable affinity constant. The forces that bring an antigenic determinant and an Ab together are noncovalent, reversible bonds that result from the cumulative effects of hydrophobic and hydrophilic interactions, hydrogen bonds, and van der Waals forces. The most important factor affecting the cumulative strength of bonding is the closeness of fit between the Ab and the Ag.
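The law-of-mass-action expression in Equation 5.2 can be rearranged into a fractional-occupancy form, θ = Ka[Hp] / (1 + Ka[Hp]), which makes the reciprocal relationship between Ka and the half-saturating Hp concentration easy to check numerically. A minimal sketch (the function name and example values are illustrative):

```python
def fraction_bound(ka, free_hp):
    """Fraction of Ab binding sites occupied at equilibrium, rearranged
    from the law of mass action (Eq. 5.2): theta = Ka*[Hp] / (1 + Ka*[Hp]).
    ka in L/mol, free_hp in mol/L."""
    return ka * free_hp / (1 + ka * free_hp)

KA = 3e11  # L/mol, the monoclonal Ab affinity constant used in the text

# At a free Hp concentration equal to 1/Ka, half of the sites are occupied.
print(fraction_bound(KA, 1 / KA))   # ~0.5
# A tenfold higher Hp concentration drives occupancy to about 10/11.
print(fraction_bound(KA, 10 / KA))  # ~0.91
```

The same function shows why high-affinity Abs are preferred for immunoassays: the larger Ka is, the lower the analyte concentration at which appreciable binding (and therefore signal) occurs.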
The strength of most of these interactive forces is inversely related to the distance between the interactive sites. The closer the Ab and Ag can physically approach one another, the greater the attractive forces. After the Ag–Ab complex is formed, its resistance to separation (which reflects the tightness of bonding) is referred to as avidity. Avidity is the cumulative strength of binding of all Ab–epitope pairs and exceeds the sum of the single Ab–epitope binding strengths. For example, IgG has only two epitope-binding sites, while an IgM pentamer has 10 epitope-binding sites and, therefore, a higher avidity. In general, affinity describes a single binding-site interaction, whereas avidity is a property of the whole multivalent Ab. The specificity of an Ab is most often described by the Ag that induced the Ab production, the homologous Ag. Ideally, the Ab would react only with that specific Ag. However, an Ab can react with an Ag that is structurally similar to the homologous Ag; this is referred to as cross-reactivity. Considering that an antigenic determinant can be as small as five or six amino acids or one immunodominant sugar, it is not surprising that Ag similarity is common. The greater the similarity between the cross-reacting Ag and the homologous Ag, the stronger the bond with the Ab. 11 Reagent Ab production is achieved by polyclonal or monoclonal techniques. In polyclonal Ab production, the stimulating Ag is injected into an animal responsive to the Ag; the animal detects this foreign Ag and mounts an immune response to eliminate it. If this immune response includes strong Ab production, blood is collected and the Ab is harvested, characterized, and purified to yield the commercial antiserum reagent. This polyclonal Ab reagent is a mixture of Abs of different specificities: some react with the stimulating epitopes, and some are endogenous to the host. Multiple Abs directed against the multiple epitopes on the Ag are present and can cross-link a multivalent Ag.
Polyclonal Abs are often used as "capture" Abs in sandwich or indirect immunoassays. In contrast, monoclonal Abs (mAbs) are produced by an immortal cell line; each line produces one specific Ab. This method developed as an extension of the hybridoma work published by Köhler and Milstein in 1975. 12 The process begins by selecting cells with the qualities that will allow the synthesis of a homogeneous Ab. First, a host (commonly a mouse) is immunized with the Ag to which an Ab is desired; later, the sensitized lymphocytes of the spleen are harvested. Second, an immortal cell line (usually a nonsecretory mouse myeloma cell line that is hypoxanthine-guanine phosphoribosyltransferase deficient) is required to ensure that continuous propagation in vitro is viable. These cells are then mixed in the presence of a fusion agent, such as polyethylene glycol, which promotes the fusion of two cells to form a hybridoma. In a selective growth medium, only the hybrid cells survive: B cells have a limited natural life span in vitro and cannot survive, and the unfused myeloma cells cannot survive because of their enzyme deficiency. If the viable fused cells synthesize Ab, the specificity and isotype of the Ab are evaluated. Commercial mAb reagent is produced by growing the hybridoma in tissue culture or in compatible animals. An important feature of an mAb reagent is that the Ab is homogeneous (a single Ab, not a mixture of Abs). Therefore, it recognizes only one epitope on a multivalent Ag and cannot cross-link a multivalent Ag.

Turbidimetry and Nephelometry

Turbidimetry (immunoturbidimetry) and nephelometry (immunonephelometry) are two related automated methods used to quantitate Ag–Ab complexes (see Chapter 4, Analytic Techniques). In general terms, turbidimetry measures the amount of light that can pass through a sample. As a sample becomes more turbid, more light is blocked by the particles in the sample, and less light passes through.
Nephelometry is similar conceptually; however, the scattered light is measured by placing a detector at a defined angle (e.g., 90°) from the incident light. When Ag and Ab combine, immune complexes are formed that act as particles in suspension and thus can scatter light. The size of the particles determines the type of scatter that will dominate when the solution interacts with nearly monochromatic light. 13 When the particle, such as albumin or IgG, is relatively small compared with the wavelength of incident light, the particle will scatter light symmetrically, both forward and backward. A minimum of scattered light is detectable at 90° from the incident light. Larger molecules and Ag–Ab complexes have diameters that approach the wavelength of incident light and scatter light with a greater intensity in the forward direction. The wavelength of light is selected based on its ability to be scattered in a forward direction and the ability of the Ag–Ab complexes to absorb the wavelength of light. To recap, turbidimetry measures the light transmitted and nephelometry measures the light scattered. Turbidimeters (spectrophotometers or colorimeters) are designed to measure the light passing through a solution, and the photodetector is placed at an angle of 180° from the incident light. If light scattering is insignificant, turbidity can be expressed as “optical density,” which is directly related to the concentration of suspended particles and path length. Nephelometers measure light at an angle other than 180° from the incident light; most measure forward light scattered at 90° or less because the sensitivity is increased. Both methods can be performed in an endpoint or a kinetic mode. In the endpoint assay, a measurement is taken at the beginning of the reaction (the background signal) and one is taken at a set time later in the reaction (plateau or endpoint signal). The concentration is determined using a calibration curve. 
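For an endpoint assay, converting the measured signal into a concentration amounts to reading the calibration curve. The sketch below uses hypothetical calibrator values and simple piecewise-linear interpolation; real analyzers apply vendor-specific curve-fitting models (including the spline fits mentioned earlier):

```python
def interpolate_concentration(signal, calibrators):
    """Piecewise-linear read-off of a calibration curve.
    calibrators: list of (signal, concentration) pairs sorted by signal.
    A minimal sketch only; not any vendor's actual curve model."""
    for (s0, c0), (s1, c1) in zip(calibrators, calibrators[1:]):
        if s0 <= signal <= s1:
            # Linear interpolation between the two bracketing calibrators
            return c0 + (signal - s0) * (c1 - c0) / (s1 - s0)
    raise ValueError("signal outside calibrated range")

# Hypothetical endpoint signals (blanked absorbance) vs concentration (mg/dL)
curve = [(0.00, 0), (0.10, 25), (0.25, 75), (0.40, 150)]
print(interpolate_concentration(0.175, curve))  # ~50 mg/dL
```

A signal outside the calibrated range raises an error rather than extrapolating, mirroring the analyzer practice of flagging results beyond assay linearity for dilution or repeat.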
In kinetic assays, the rate of complex formation is continuously monitored, and the peak rate is determined. The peak rate is directly related to the concentration of the Ag, although the relationship is not necessarily linear; thus, a calibration curve is required to determine the concentration in unknown samples. In general, turbidimetry is less sensitive than nephelometry. In turbidimetry, the decrease in the amount of light transmitted due to scattering by particles is measured relative to the intensity of the reference blank. Therefore, when there is no sample, there is 100% transmittance. If the sample concentration of the analyte is high, a large amount of precipitate is formed, resulting in a significant decrease in the amount of light transmitted that can easily be measured by turbidimetry. If, however, the amount of precipitate is small due to a low sample concentration, the amount of light transmitted decreases only minimally, and the instrument must measure the difference between two high-intensity light signals. Because in nephelometry the amount of light is measured at an angle of 90°, the reference blank produces no signal (no light scattering). When a microprecipitate is formed, the light-scattered signal is more analytically discernible when measured against a dark reference baseline.

Labeled Immunoassays

General Considerations

In all labeled immunoassays, a reagent (Ag or Ab) is usually labeled by attaching a particle or molecule that allows lower concentrations of Ag–Ab complexes to be detected. Therefore, the label improves analytic sensitivity. All assays have a binding reagent, which can bind to the Ag or ligand. If the binding reagent is an Ab, the assay is an immunoassay.
Immunoassays may be described based on the label, which reactant is labeled, the relative concentration and source of the Ab, the method used to separate free from bound labeled reagents, the signal that is measured, and the method used to assign the concentration of the analyte in the sample. Immunoassay design has many variables to consider, leading to diverse assays.

Labels

The simplest way to identify an assay is by the label used. Table 5.3 lists the commonly used labels and the methods used to detect them.

TABLE 5.3 Labels and Detection Methods

Immunoassay | Common Label | Detection Method
RIA | 3H | Liquid scintillation counter
RIA | 125I | Gamma counter
EIA | Horseradish peroxidase | Photometer, fluorometer, luminometer
EIA | Alkaline phosphatase | Photometer, fluorometer, luminometer
EIA | β-D-Galactosidase | Fluorometer, luminometer
EIA | Glucose-6-phosphate dehydrogenase | Photometer, luminometer
CLIA | Isoluminol derivative | Luminometer
CLIA | Acridinium esters | Luminometer
FIA | Fluorescein | Fluorometer
FIA | Europium | Fluorometer
FIA | Phycobiliproteins | Fluorometer
FIA | Rhodamine B | Fluorometer

RIA, radioimmunoassay; EIA, enzyme immunoassay; CLIA, chemiluminescent immunoassay; FIA, fluorescent immunoassay. © Jones & Bartlett Learning.

Radioactive Labels

As discussed previously, radioimmunoassay was the first immunoassay developed, and its developers won the 1977 Nobel Prize in Physiology or Medicine for the innovation. These original immunoassays used radioactive isotopes (125I, 131I, or 3H) to label the analyte. The emitted gamma rays were measured using a scintillation counter. Because of safety concerns over radioactive reagents and waste, RIA has been replaced by safer methods in the clinical laboratory.

Enzyme Labels

Enzymes are commonly used to label the Ag/Hp or Ab. 14,15 Horseradish peroxidase (HRP), alkaline phosphatase (ALP), and glucose-6-phosphate dehydrogenase are used most often.
Enzymes are biologic catalysts that increase the rate of conversion of substrate to product and are not consumed by the reaction. As such, one enzyme molecule can convert many substrate molecules, amplifying the amount of product generated. The enzyme activity may be monitored directly by measuring the product formed or by measuring the effect of the product on a coupled reaction. Depending on the substrate used, the detection can be photometric, fluorometric, or chemiluminescent. For example, a typical photometric reaction uses an HRP-labeled Ab (Ab-HRP) and a peroxide substrate to generate oxygen, which then oxidizes a reduced chromogen (such as reduced orthophenylenediamine [OPD]) to produce a colored compound (oxidized OPD) that is measured using a photometer (Eq. 5.3):

peroxide --(Ab-HRP)--> oxygen
oxygen + reduced OPD --> oxidized OPD (colored) (Eq. 5.3)

Fluorescent Labels

Fluorescent labels (fluorochromes or fluorophores) are compounds that absorb radiant energy of one wavelength and emit radiant energy of a longer wavelength in less than 10^–4 seconds. Generally, the emitted light is detected at an angle of 90° from the path of the excitation light using a fluorometer or a modified spectrophotometer. The difference between the excitation wavelength and the emission wavelength (the Stokes shift) usually ranges between 20 and 80 nm for most fluorochromes. Some fluorescence immunoassays simply substitute a fluorescent label (such as fluorescein) for an enzyme label and quantitate the fluorescence. 16 Another approach, time-resolved fluorescence immunoassay, uses a highly efficient fluorescent label such as a europium chelate, 17 whose fluorescence decays approximately 1000 times more slowly than the natural background fluorescence and which has a wide Stokes shift. The slow decay allows the fluorescent label to be detected with minimal interference from background fluorescence, and the long Stokes shift facilitates measurement of the emission radiation while excluding the excitation radiation. This assay shows high sensitivity with minimized background fluorescence.
Luminescent Labels

Luminescent labels emit a photon of light as the result of an electrical, biochemical, or chemical reaction. 18,19 Some organic compounds become excited when oxidized and emit light as they revert to the ground state. Oxidants include hydrogen peroxide, hypochlorite, or oxygen. Sometimes, a catalyst is needed, such as peroxidase, ALP, or metal ions. Luminol, the first chemiluminescent label used in immunoassays, is a cyclic diacylhydrazide that emits light energy under alkaline conditions in the presence of peroxide and peroxidase. Because peroxidase can serve as the catalyst, assays may use this enzyme as the label; the chemiluminogenic substrate, luminol, will produce light that is directly proportional to the amount of peroxidase present (Eq. 5.4):

luminol + 2H2O2 --(peroxidase, OH–)--> 3-aminophthalate + N2 + light (Eq. 5.4)

Acridinium esters, a popular class of chemiluminescent labels, are triple-ringed organic molecules linked by an ester bond to an organic chain. In the presence of hydrogen peroxide and under alkaline conditions, the ester bond is broken and an unstable molecule (N-methylacridone) remains. Light is emitted as the unstable molecule reverts to its more stable ground state (Eq. 5.5):

acridinium ester + H2O2 --(OH–)--> N-methylacridone* --> N-methylacridone + light (Eq. 5.5)

ALP conjugated to an Ab has been used in automated immunoassay analyzers to produce some of the most sensitive chemiluminescent assays. ALP catalyzes adamantyl 1,2-dioxetane aryl phosphate substrates to release light at 477 nm. The detection limit approaches 1 zmol, or approximately 602 enzyme molecules. 20,21

Assay Design

Competitive Immunoassays

In a competitive immunoassay, labeled Ag (Ag*) in the reagent competes with Ag in the patient sample for a limited number of Ab binding sites (Figure 5.10). In the competitive assay, the Ag* concentration is constant and limited. As the concentration of patient Ag increases, it competes for binding to the Ab, resulting in less Ag* bound. Thus, the signal is inversely proportional to the analyte concentration.

Figure 5.10 Examples of competitive immunoassays.
(A) Labeled antigen and antigen in the patient sample compete for binding to the antibody. (B) Immobilized Ag and Ag in the patient sample compete for binding to labeled antibody. © Jones & Bartlett Learning.

The Ag–Ab reaction can be accomplished in one step when labeled antigen (Ag*), unlabeled antigen (Ag), and reagent antibody (Ab) are simultaneously incubated together to yield bound labeled complex (Ag*Ab), bound unlabeled complex (AgAb), and free labeled Ag (Ag*), as shown in Figure 5.10A and Equation 5.6:

Ag + Ag* + Ab ⇌ AgAb + Ag*Ab + free Ag + free Ag* (Eq. 5.6)

Alternatively, the competitive assay may be accomplished in sequential steps. First, patient sample containing the Ag to be measured is incubated with the reagent Ab, and then labeled Ag is added. After a longer incubation time and a separation step, the bound labeled Ag is measured. This approach increases the analytic sensitivity of the assay. Consider the example in Table 5.4. A relatively small, yet constant, number of Ab combining sites is available to combine with a relatively large, constant amount of Ag* (tracer) and with calibrators of known Ag concentrations. Because the amount of tracer and Ab is constant, the only variable in the test system is the amount of unlabeled Ag. As the concentration of unlabeled Ag increases, the concentration (or percentage) of free tracer increases.

TABLE 5.4 Competitive Binding Assay Example

By using multiple calibrators, a dose–response curve is established. As the concentration of unlabeled Ag increases, the concentration of tracer that binds to the Ab decreases. In the example presented in Table 5.4, if the amount of unlabeled Ag is zero, maximum tracer will combine with the Ab. When no unlabeled Ag is present, maximum binding by the tracer is possible; this is referred to as B0, Bmax, maximum binding, or the zero standard. When the amount of unlabeled Ag is the same as the tracer, each will bind equally to the Ab.
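The competitive relationship just described can be illustrated with a toy mass-balance model in which a fixed pool of Ab sites is shared by tracer and unlabeled Ag in proportion to their amounts. The numbers below are arbitrary illustrative units, not the Table 5.4 data (which are not reproduced here):

```python
def bound_tracer(sites, tracer, unlabeled):
    """Toy competitive-binding model: a fixed number of Ab binding sites
    is shared by tracer (Ag*) and unlabeled patient Ag in proportion to
    their amounts. All quantities are in arbitrary, illustrative units."""
    total_ag = tracer + unlabeled
    occupied = min(sites, total_ag)      # the Ab sites are the limiting reagent
    return occupied * tracer / total_ag  # tracer's share of the occupied sites

SITES, TRACER = 1000, 2000

print(bound_tracer(SITES, TRACER, 0))     # → 1000.0 (B0, maximum binding)
print(bound_tracer(SITES, TRACER, 2000))  # → 500.0  (Ag equals tracer: equal binding)
print(bound_tracer(SITES, TRACER, 6000))  # → 250.0  (more Ag, less tracer bound)
```

Even in this crude model the defining behaviors appear: maximum tracer binding at zero analyte, equal partitioning when analyte equals tracer, and a bound-tracer signal that falls as analyte rises.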
As the concentration of Ag increases in a competitive assay, the amount of tracer that complexes with the binding reagent decreases. If the tracer is of low molecular weight, the free tracer is often measured; if the tracer is of high molecular weight, the bound tracer is measured. The data may be plotted in one of three ways: bound/free versus the arithmetic dose of unlabeled Ag, percentage bound versus the log dose of unlabeled Ag, and logit B/B0 versus the log dose of the unlabeled Ag (Figure 5.11).

Figure 5.11 Dose–response curves in a competitive assay. B, bound labeled antigen; F, free labeled antigen; B0, maximum binding; %B, B/B0 × 100. © Wolters Kluwer.

The bound fraction can be expressed in several different formats. Bound/free is the counts per minute (CPM) of the bound fraction compared with the CPM of the free fraction. Percent bound (%B) is the CPM of the bound fraction compared with the CPM of maximum binding of the tracer (B0), multiplied by 100. The logit B/B0 transformation is the natural log of (B/B0)/(1 − B/B0). When logit B/B0 is plotted on the ordinate and the log dose of the unlabeled Ag is plotted on the abscissa, a straight line with a negative slope is produced using linear regression. It is important to remember that the best type of curve-fitting technique is determined by experiment and that there is no assurance that a logit-log plot of the data will always generate a straight line. To determine the best method, several different methods of data plotting should be tried when a new assay is introduced. Every time the assay is performed, a dose–response curve should be prepared to check the performance of the assay. The relative error for all RIA dose–response curves is minimal when B/B0 = 0.5 and increases at both the high and low ends of the plot.
As shown in the plot of B/B0 versus the log of the Ag concentration (Figure 5.11), a relatively large change in concentration at either end of the curve produces little change in the B/B0 value. Patient values derived from a B/B0 value greater than 0.9 or less than 0.1 should be interpreted with caution. When the same data are displayed using the logit-log plot, it is easy to overlook the error at either end of the straight line.

Noncompetitive Immunoassays

Noncompetitive immunoassays, also known as sandwich assays, use a labeled reagent Ab to detect the Ag. Excess labeled Ab is required to ensure that the labeled Ab reagent does not limit the reaction. The concentration of the Ag is directly proportional to the bound labeled Ab, as shown in Figure 5.12. The relationship is linear up to a limit and then may be subject to the high-dose hook effect. The hook effect (also sometimes known as prozone) is an immunologic phenomenon that occurs when excess analyte overwhelms the test system, causing a false result.

Figure 5.12 Dose–response curve in a noncompetitive immunoassay. CPM, counts per minute. © Wolters Kluwer.

In the sandwich assay to detect Ag, immobilized unlabeled Ab captures the Ag. After washing to remove unreacted molecules, the labeled detector Ab is added. After another washing to remove free labeled detector Ab, the signal from the bound labeled Ab is proportional to the Ag captured. This format relies on the ability of each Ab reagent to react with a single epitope on the Ag. The specificity and quantity of mAbs have allowed the rapid expansion of diverse assays. A schematic is shown in Figure 5.13.

Figure 5.13 Two-site noncompetitive sandwich assay to detect antigen. Immobilized antibody captures the antigen. Then, labeled antibody is added, binds to the captured antigen, and is detected. © Jones & Bartlett Learning.

Separation Techniques

All automated immunoassays require that free labeled reactant be distinguished from bound labeled reactant.
The most common separation technique is the use of paramagnetic particles that can be quickly immobilized onto a solid phase by application of a magnetic field. Separation is accomplished by wash steps that occur while the magnetic particles are immobilized by a magnet. Coated microwells may also be used, but rarely. In heterogeneous assays, physical separation is necessary and is achieved by interaction with a solid phase. The better the separation of bound from free reactant, the more reliable the assay will be. The labeled, unbound analyte is separated or washed away, and the remaining labeled, bound analyte is measured. In contrast, homogeneous assays, in which the activity or expression of the label depends on whether the labeled reactant is free or bound, do not require a physical separation step. Therefore, no wash step is required.

Adsorption

The binding of the capture Ab to paramagnetic particles is the most common method used by automated immunoassay analyzers. Separation takes place by applying a powerful magnet, thereby adhering the bound complexes to the side of each reaction chamber (Figure 5.14). Unbound constituents and labels are removed by aspiration. Thereafter, the magnet is removed, enabling the pellet to be resuspended, and additional reagents are added to generate the analytical signal (usually a chemiluminescent reaction).

Figure 5.14 Separation by paramagnetic particles. (A) Addition of sample containing the analyte (circle) and other constituent (rhomboid) to a capture antibody containing a paramagnetic particle (solid triangle). (B) Application of a magnet to adhere capture antibodies and analyte to the side of the reaction chamber. The unbound constituent is removed by aspiration. Removal of the magnet and addition of signal antibody facilitate measurement (not shown). © Wolters Kluwer.

Solid Phase

The use of a solid phase to immobilize reagent Ab or Ag provides a method to separate free from bound labeled reactant after washing.
The solid-phase support is an inert surface to which reagent Ag or Ab is attached. The solid-phase support may be, but is not limited to, polystyrene surfaces, membranes, and magnetic beads. The immobilized Ag or Ab may be adsorbed or covalently bound to the solid-phase support; covalent linkage prevents spontaneous release of the immobilized Ag or Ab. Immunoassays using solid-phase separation are easier to perform and to automate and require less manipulation and time than other immunoassays. However, a relatively large amount of reagent Ab or Ag is required to coat the solid-phase surface, and consistent coverage of the solid phase is difficult to achieve.

Interferences with Sandwich Immunoassays

While an advantage of sandwich-type immunoassays is the production of linear calibration curves, the disadvantage is that these assays are subject to false-positive and false-negative interferences. Normally, the target analyte is required for the production of a positive analytical signal (Figure 5.15A). However, if the sample contains unusual Abs, such as human anti-mouse antibodies (HAMA) or heterophile Abs, these can bind to both the capture and labeled Abs, producing an analytical signal in the absence of the analyte (Figure 5.15B). Individuals who are exposed to mouse Ags can develop HAMA, which recognize monoclonal Abs derived from murine cell lines as Ags. 22 Heterophile Abs are formed in patients who have autoimmune disease and other disorders. 23 Although the principle is different, the hook effect is similar to the prozone effect in that excess concentrations of the Ag from the sample reduce the analytical signal. 24 As shown in Figure 5.16, excess Ag binds to free labeled Ab, preventing the labeled Ab from binding to the capture Ab (via the analyte), and the resulting signal is reduced after the wash step. Analytes that can be present at very high concentrations are subject to the hook effect.
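The hook effect can be reproduced with a toy mass-balance model: signal rises with antigen until the capture sites saturate, then falls as excess free antigen consumes the labeled Ab in solution. This sketch uses arbitrary units and 1:1 binding assumptions; it is not any vendor's kinetics.

```python
def sandwich_signal(antigen, capture=4.0, label=4.0):
    # Toy model of a one-step sandwich assay (arbitrary units).
    # Assumes 1:1 binding and no wash between the capture and label
    # steps, which is where the hook effect arises.
    captured = min(antigen, capture)             # antigen bound to solid phase
    free_antigen = antigen - captured            # excess antigen in solution
    label_left = max(0.0, label - free_antigen)  # label consumed by free antigen
    return min(captured, label_left)             # captured + labeled pairs signal

for a in (1, 4, 6, 100):
    print(a, sandwich_signal(a))  # signal rises to 4, then "hooks" back down
```

With 4 units each of capture and labeled Ab, 4 units of antigen give a signal of 4, but 100 units give a signal of 0: a grossly elevated sample can read as low or normal, which is why dilution protocols matter.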
Commercial assays must be designed either to be immune to the hook effect or to produce a warning flag alerting the analyst that the sample must be diluted to obtain an accurate result. In anticipation of the need for sample dilution, some laboratories automatically perform dilutions of a sample from a patient known to have high analyte values. The hook effect is less common in contemporary immunoassays owing to better assay design.

Figure 5.15 Heterophile or human anti-mouse interference. (A) Analytical signal in the presence of the analyte. The capture antibody (left), attached to a solid support, binds to the analyte (circle) at one epitope of the analyte. The labeled antibody (right) is added and binds to the analyte at a second analyte epitope. (B) False-positive signal in the presence of an interfering antibody. The interfering antibody (gray middle) binds to both the capture (left) and label (right) antibodies, forming a “sandwich” in the absence of the analyte, causing a false-positive signal. © Wolters Kluwer.

Figure 5.16 The hook effect. (A) Analytical signal in the presence of the analyte at a concentration that is within the dynamic range of the assay. There is an excess amount of capture (left) and labeled (right) antibodies such that all analytes are bound (none are free in solution). Excess free unbound labeled antibody is washed away, and the resulting signal is proportional to the number of captured labeled antibodies (4 units in this example). (B) Analytical signal in the presence of the analyte at a concentration that is above the dynamic range of the assay. All of the capture antibody is bound with the analyte. The excess antigen is found free in solution and binds to excess labeled antibody found free in solution. These labeled antibodies cannot bind to the analyte-bound capture antibody because the site is already occupied with the analyte. The free analyte-bound labeled antibody is washed away.
This leaves 4 units remaining, the same number as in example A, despite the much higher analyte concentration. © Wolters Kluwer.

Examples of Labeled Immunoassays

Particle-enhanced turbidimetric inhibition immunoassay is a homogeneous competitive immunoassay in which low molecular weight Hps bound to particles compete with unlabeled analyte for the specific Ab. The extent of particle agglutination is inversely proportional to the concentration of unlabeled analyte and is assessed by measuring the change in transmitted light. 25

CASE STUDY 5.2, PART 2

Miles begins troubleshooting, starting his investigation into the discrepant results by looking at the raw data from the analyzer, which are shown here:

Raw Data
Analyte  Dilution  Result
IgG      1:400     1490 mg/dL
         1:2000    (1450 mg/dL)
G1       1:100     713 mg/dL
         1:400     (242 mg/dL)
         1:2000    737 mg/dL
         1:8000    (702 mg/dL)

5. Do the total IgG dilutions match within 10%?
6. Do the G4 dilutions match within 10%?
7. What is a possible explanation for these results?

© dotshock/Shutterstock.

Enzyme-linked immunosorbent assays (ELISAs), a popular group of heterogeneous immunoassays, have an enzyme label and use a solid phase as the separation technique. Four formats are available: a competitive assay using labeled Ag, a competitive assay using labeled Ab, a noncompetitive assay to detect Ag, and a noncompetitive assay to detect Ab. ELISAs are widely used in clinical research, as commercial assays are available for hundreds of analytes. If an analyte has clinical value, an automated version is typically made available to clinical laboratorians. If the assay is used only for research purposes, for example, cytokine analysis, then ELISAs are the technique of choice because they can be easily produced by manufacturers and most research laboratories have ELISA plate readers. The disadvantage of ELISAs is that they typically use enzyme or fluorescence detection, which is not as sensitive as chemiluminescence or radiodetection.
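The dilution-agreement check Miles is applying in Case Study 5.2 can be written as a one-line rule: two dilution results, once corrected back to neat-sample units, should agree within a set tolerance of their mean. The helper below is hypothetical (the case study does not specify the exact formula the laboratory uses); the values are taken from the raw data above.

```python
def dilution_corrected_match(r1, r2, tolerance=0.10):
    # Check whether two results from different dilutions (already corrected
    # to neat-sample units) agree within a fractional tolerance of their mean.
    mean = (r1 + r2) / 2.0
    return abs(r1 - r2) / mean <= tolerance

# Values from the case study raw data (mg/dL):
print(dilution_corrected_match(1490, 1450))  # total IgG, 1:400 vs 1:2000
print(dilution_corrected_match(713, 242))    # subclass, 1:100 vs 1:400
```

The total IgG pair agrees well within 10%, while the 713 versus 242 mg/dL pair does not: exactly the kind of dilution nonlinearity that should prompt the troubleshooting questions in the case study.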
The assays are more labor-intensive than modern clinical assays, although some laboratories have automated plate-washing and reading stations to improve workflow if a high volume of testing is needed.

One of the earliest homogeneous assays was the enzyme multiplied immunoassay technique (EMIT), an enzyme immunoassay (Siemens Healthcare Corp). 26 As shown in Figure 5.17, the reactants in most test systems include an enzyme-labeled Ag (commonly a low molecular weight analyte, such as a drug), an Ab directed against the Ag, the substrate, and test Ag. The enzyme is catalytically active when the labeled Ag is free (not bound to the Ab). It is thought that when the Ab combines with the labeled Ag, the Ab sterically hinders the enzyme. The conformational changes that occur during Ag–Ab interaction inhibit the enzyme activity. In this homogeneous assay, the unlabeled Ag in the sample competes with the labeled Ag for the Ab-binding sites; as the concentration of unlabeled Ag increases, less enzyme-labeled Ag can bind to the Ab. Therefore, more labeled Ag is free, and the enzymatic activity is greater.

Figure 5.17 Enzyme-multiplied immunoassay technique. (A) When enzyme-labeled antigen is bound to the antibody, the enzyme activity is inhibited. (B) Free patient antigen binds to the antibody and prevents antibody binding to the labeled antigen. The substrate indicates the amount of free labeled antigen. © Wolters Kluwer.

Cloned enzyme donor immunoassays (CEDIAs) are competitive, homogeneous assays in which the genetically engineered label is β-galactosidase (Microgenics Corp). 27 The enzyme is in two inactive pieces: the enzyme acceptor and the enzyme donor. When these two pieces bind together, enzyme activity is restored. In the assay, the Ag labeled with the enzyme donor and the unlabeled Ag in the sample compete for specific Ab-binding sites.
When the Ab binds to the labeled Ag, the enzyme acceptor cannot bind to the enzyme donor; therefore, the enzyme is not restored and remains inactive. More unlabeled Ag in the sample results in more enzyme activity.

Fluorescence excitation transfer immunoassay is a competitive, homogeneous immunoassay using two fluorophores (such as fluorescein and rhodamine). 28 When the two labels are in close proximity, the emitted light from fluorescein will be absorbed by rhodamine. Thus, the emission from fluorescein is quenched. Fluorescein-labeled Ag and unlabeled Ag compete for rhodamine-labeled Ab. More unlabeled Ag lessens the amount of fluorescein-labeled Ag that binds; therefore, more fluorescence is present (less quenching).

Fluorescence polarization immunoassay (FPIA) is another assay that uses a fluorescent label. 29 This homogeneous immunoassay uses polarized light to excite the fluorescent label. Polarized light is created when light passes through special filters and consists of parallel light waves oriented in one plane. When polarized light is used to excite a fluorescent label, the emitted light may be polarized or depolarized. Small molecules, such as free fluorescent-labeled Hp, rotate rapidly and randomly, interrupting the polarized light. Larger molecules, such as those created when the fluorescent-labeled Hp binds to an Ab, rotate more slowly and emit polarized light parallel to the excitation polarized light. The polarized light is measured at a 90° angle to the path of the excitation light. In a competitive FPIA, fluorescent-labeled Hp and unlabeled Hp in the sample compete for limited Ab sites. When no unlabeled Hp is present, the labeled Hp binds maximally to the Ab, creating large complexes that rotate slowly and emit a high level of polarized light. When Hp is present, it competes with the labeled Hp for the Ab sites; as the Hp concentration increases, more labeled Hp is displaced and is free.
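The polarization just described is conventionally quantified as P = (I∥ − I⊥)/(I∥ + I⊥), the normalized difference between emission intensities parallel and perpendicular to the excitation plane. The sketch below models the observed polarization of a bound/free tracer mixture as a simple linear combination; the limiting P values for fully bound and fully free tracer are hypothetical, chosen only to illustrate the trend.

```python
def polarization(i_parallel, i_perpendicular):
    # Fluorescence polarization P = (I_par - I_perp) / (I_par + I_perp).
    # Slowly rotating (bound) tracer gives high P; free tracer gives low P.
    return (i_parallel - i_perpendicular) / (i_parallel + i_perpendicular)

def mixture_polarization(fraction_bound, p_bound=0.4, p_free=0.05):
    # Observed polarization of a bound/free tracer mixture, assuming a
    # simple linear combination (hypothetical limiting P values).
    return fraction_bound * p_bound + (1 - fraction_bound) * p_free

# As unlabeled hapten displaces labeled hapten from the antibody,
# fraction_bound falls and the measured polarization drops toward p_free.
for fb in (1.0, 0.5, 0.0):
    print(fb, round(mixture_polarization(fb), 3))
```

A rigorous treatment weights by fluorescence intensity rather than mixing P values linearly, but the qualitative behavior is the same: more unlabeled Hp in the sample means lower measured polarization.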
The free labeled Hp rotates rapidly and emits less polarized light. The polarized light signal is therefore inversely related to the amount of unlabeled Hp present.

Dissociation-enhanced lanthanide fluoroimmunoassay (DELFIA) is an automated system (Thermo Fisher Scientific) that measures time-resolved fluorescence from the label europium. The assay can be designed as a competitive, heterogeneous assay or a noncompetitive (sandwich), heterogeneous assay. 30

Total Laboratory Automation

Automated analyzers are now commonplace in clinical laboratories, and the current focus and most rapidly progressing areas in automation are nonanalytic automation and automating laboratory workflows from sample input to final result. 31,32 Automation of the pre-analytic, analytic, and post-analytic phases is referred to as total laboratory automation (TLA). While most TLA is still vendor-specific, some automation equipment vendors are developing open-architecture components that provide more flexibility in automation implementation. 33 An example of a commercial TLA system is shown in Figure 5.18.

Figure 5.18 Schematic of total laboratory automation system. Courtesy of Cerner Labotix.

Preanalytic Phase

Preparation of the sample for analysis has been, and remains, a manual process in most laboratories. The clotting time (if using serum), centrifugation, and transfer of the sample to an analyzer cup (unless primary tube sampling is used) can add delays and expense to the testing process. One alternative to manual preparation is to automate this process by using robotics, or front-end automation, to “handle” the specimen through these steps and load the specimen onto the analyzer. Automated processes are gradually replacing manual handling and presentation of the sample to the analyzer. Increasing efficiency while decreasing costs has been a major impetus for laboratories to start integrating some aspects of TLA into their operations.
Conceptually, TLA refers to automated devices and robots integrated with existing analyzers to perform all phases of laboratory testing. Most attention to date has been devoted to development of front-end systems that can identify and label specimens, centrifuge the specimen and prepare aliquots, and sort and deliver samples to the analyzer or to storage. 34 Back-end systems may include removal of specimens from the analyzer and transport to storage; retrieval from storage for retesting, realiquoting, or disposal; and comprehensive management of the data from the analyzer and interfacing with the LIS. Dr. Masahide Sasaki installed the first fully automated clinical laboratory in the world at Kochi Medical School in Japan 35; since then, the concept has gradually, but steadily, become a reality in the United States. The University of Nebraska and the University of Virginia have been pioneers in TLA system development. In 1992, a prototype of a laboratory automation platform was developed at the University of Nebraska, the key components being a conveyance system, bar-coded specimens, a computer software package to control specimen movement and tracking, and coordination of robots with the instruments as work cells. 36 Some of the first automated laboratories in the United States have reported their experiences with front-end automation, providing a wealth of information for others interested in the technology. 37,38 The first hospital laboratory to install an automated system was the University of Virginia Hospital in Charlottesville in 1995. Its Medical Automation Research Center cooperated with Johnson & Johnson and Coulter Corporation to use a VITROS 950 attached to a Coulter/IDS “U” lane for direct sampling from a specimen conveyor without using intervening robotics. 39 The first commercially available turnkey system was the Hitachi Clinical Laboratory Automation System (Boehringer-Mannheim Diagnostics; now Roche Diagnostics).
It couples the Hitachi line of analyzers to a conveyor belt system to provide a completely operational system with all interfaces. 40 Robotics and front-end automation are changing the face of the clinical laboratory. 41 Much of the benefit derivable from TLA can be realized merely by automating the front end. The planning, implementation, and performance evaluation of an automated transport and sorting system by a large reference laboratory have been described in detail. 42,43 Several instrument manufacturers are currently working on, or are already marketing, interfacing front-end devices together with software for their own chemistry analyzers. Johnson & Johnson introduced the VITROS 950 AT (Automation Technology) system in 1995 with an open-architecture design to allow laboratories to select from many front-end automation systems rather than being locked into a proprietary interface. A Lab-Track interface is now available on the Dimension RxL (Siemens) that is compatible with major laboratory automation vendors and allows for direct sampling from a track system. Also, the technology now exists for microcentrifugal separators to be integrated into clinical chemistry analyzers. 44 Several other systems are now on the market, including the Advia LabCell system (Siemens), which uses a modular approach to automation. The Power Processor Core System (Beckman Coulter) performs sorting, centrifugation, and cap removal. The enGen Series Automation System (Ortho-Clinical Diagnostics) provides sorting, centrifugation, uncapping, and sample archiving functions and interfaces directly with a VITROS 950 AT analyzer. The instruments listed in Table 5.5 are examples of current TLA solutions offered commercially. Much of the benefit of TLA is derived from automation of the front-end processing steps. Therefore, several manufacturers have developed stand-alone, automated front-end processing systems.
The Genesis FE500 (Tecan) is an example of a stand-alone front-end system that can centrifuge, uncap, aliquot into a labeled pour-off tube, and sort into analyzer racks. Systems with similar functionality are available from Labotix, Motoman, and PVT. An example of one such system is shown in Figure 5.19. Stand-alone automated sample uncappers and recappers are available from PVT and Sarstedt. These latter devices are less flexible than the complete stand-alone front-end systems and require samples to be presented to them in racks that will work with a single analyzer. Some laboratories have taken a modular approach, with devices for only certain automated functions. Ciba-Corning Clinical Laboratories installed Coulter/IDS robotic systems in several regional laboratories. 39 Recently, a thawing–mixing work cell that is compatible with a track system in a referral laboratory has been described. 45 The bottom line is that robotics and front-end automation are here to stay. As more and more clinical laboratories reengineer for TLA, they are building core laboratories containing all of their automated analyzers as the necessary first step to link the different instruments more easily into one TLA system. 46

TABLE 5.5 Summary of Features for Selected Laboratory Automation Systems and Work Cells

Figure 5.19 Schematic of pre-analytic automation system. Courtesy of Cerner Labotix.

Analytic Phase

There have been changes and improvements that are now common to many automated chemistry and immunoassay analyzers.
They include ever-smaller microsampling and reagent dispensing, with multiple additions possible from randomly replaced reagents; expanded onboard and total test menus, especially drugs and hormones; accelerated reaction times with chemistries for faster throughput and lower dwell time; higher-resolution optics with grating monochromators and diode arrays for polychromatic analysis; improved flow-through electrodes; enhanced user-friendly interactive software for quality control, maintenance, and diagnostics; integrated modems for online troubleshooting; LIS-interfacing data management systems; reduced frequencies of calibration and controls; automated modes for calibration, dilution, rerun, and maintenance; as well as ergonomic and physical design improvements for operator ease, serviceability, and maintenance reduction. The features and specifications of five mid-volume and five high-volume systems are summarized in Table 5.1. In addition to the improvements in automated analyses listed above, significant efficiency can be gained through modular analytics consisting of either multiple chemistry analyzers or connected chemistry and immunoassay analyzers. This functionality removes the need to split samples, performs onboard dilutions, and reprograms repeat testing, reducing the need for operator intervention during testing.

Postanalytic Phase

Specimen Storage and Retrieval

Post-analytic specimen storage may be integrated into total laboratory automation. Refrigerated storage units capable of holding thousands of specimen tubes can function as a post-analytic holding area for specimens prior to being discarded. Bidirectional track systems between the analyzers and storage units make the process of physician “add-ons,” where laboratory tests are added to the original order after initial results are obtained, an automated process. Specimens can also be automatically pulled from the storage unit for repeat or reflex testing using the automation/LIS software.
Data Management

Although most of the attention in recent years in the TLA concept has been devoted to front-end systems for sample handling, several manufacturers have been developing and enhancing back-end handling of data. Bidirectional communication between the analyzer(s) and the host computer or LIS has become an essential link to request tests and enter patient demographics, automatically transfer this customized information to the analyzer(s), and post the results in the patient’s record. Evaluation and management of data from the time of analysis until posting have become more sophisticated and automated with the integration of workstation managers into the entire communication system. 47 Most data management devices are personal computer–based modules with manufacturers’ proprietary software that interfaces with one or more of their analyzers and the host LIS. They offer automated management of quality control data, with storage and evaluation of quality control results against the laboratory’s predefined quality control parameters and multiple plotting, displaying, and reporting capabilities. Review and editing of patient results before verification and transmission to the host are enhanced by user-defined parameters for reportable range limits, panic value limits, delta checks, and quality control comparisons for clinical change, repeat testing, and algorithm analysis. Reagent inventory and quality control, along with monitoring of instrument functions, are also managed by the workstation’s software. Most LIS vendors have interfacing software available for all the major chemistry analyzers. Some data handling needs associated with automation cannot be adequately handled by most current LISs and require specialized middleware. For example, most current analyzers are capable of assessing the degree of sample hemolysis (H), icterus (I), and lipemia (L) (see Table 5.1).
The use of these automated serum indices to assess hemolysis, icterus, and lipemia on automated analyzers, and to determine specimen acceptability based on logic in the middleware data management systems, has revolutionized automated assessment of specimen integrity. However, making this information available and useful to the laboratorian in an automated fashion requires additional manipulation of the data. Ideally, the tests ordered on the sample, the threshold for interference of each test by each of the three agents, and whether the interference is positive or negative need to be determined. In the case of lipemia, the results for affected tests need to be held until the sample can be clarified and the tests rerun. One company, Data Innovations, has developed a middleware system called Instrument Manager, which links the analyzer to the LIS and provides the ability for the user to define rules for release of information to the LIS. In addition, flags can be displayed to the instrument operator to perform additional operations, such as sample clarification and reanalysis. The ability to fully automate data review using rules-based analysis is a key factor in moving toward TLA. The use of “autoverification” capabilities found with many post-analytic LISs has contributed to a significant reduction in result turnaround time.

CASE STUDY 5.1, PART 3

Remember Mía, who is training on the Roche Cobas.

2. What does I.H stand for? What does I.L stand for?

Mía referenced the SOP to look up the AST assay. Under interpretation, the Abs flag was explained; it stated that if the AST result is > 700 U/L and < 7000 U/L, the analyzer will automatically dilute X10.

AST 1078H > Abs
AST 1314H

3. What AST value should be reported?

© Ariel Skelley/DigitalVision/Getty Images.

Future Trends in Automation

Total laboratory automation continues to evolve at a rapid pace in the 21st century.
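Middleware rules of the kind described above (per-test HIL interference thresholds, automatic rerun-on-dilution flags, autoverification to the LIS) can be sketched as a small rules engine. This is only an illustrative sketch under stated assumptions: the test names, index thresholds, reportable range, and rule structure below are hypothetical and do not reflect Instrument Manager's actual rule syntax.

```python
# Hypothetical per-test interference thresholds (index units are arbitrary).
HIL_LIMITS = {
    "AST": {"H": 2},    # hold AST if hemolysis index exceeds 2
    "BILI": {"L": 3},   # hold bilirubin if lipemia index exceeds 3
}

def autoverify(test, result, indices, reportable=(0, 7000)):
    # Return 'release', 'hold', or 'dilute' for one result.
    # Rule order: HIL interference -> hold; above reportable range -> dilute;
    # otherwise autoverify (release to the LIS without operator review).
    for agent, limit in HIL_LIMITS.get(test, {}).items():
        if indices.get(agent, 0) > limit:
            return "hold"        # flag for operator review / sample clarification
    low, high = reportable
    if result > high:
        return "dilute"          # rerun on dilution, as in the AST case study
    return "release"             # autoverified result posts to the LIS

print(autoverify("AST", 250, {"H": 1}))   # clean sample, in range
print(autoverify("AST", 250, {"H": 4}))   # hemolyzed -> held
print(autoverify("AST", 9000, {"H": 0}))  # above range -> diluted and rerun
```

Real middleware adds delta checks, panic-value limits, and QC status to the same decision, but the structure is the same: every result passes through ordered rules, and only rule-clean results reach the LIS untouched.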
With most of the same forces driving the automation market as those discussed in this chapter, analyzers will continue to perform more cost-effectively and efficiently. Effective communication among all automation stakeholders for a given project is key to successful implementation. 48 In the coming years, more system and workflow integration will occur with robotics and data management for more inclusive TLA. 49 The incorporation of artificial intelligence and machine learning into analytic systems will likely evolve and expand within the clinical laboratory. 50,51 This will greatly advance the technologies of robotics, digital processing of data, computer-assisted diagnosis, and data integration with electronic patient records.

REFERENCES

1. Hodnett J. Automated analyzers have surpassed the test of time. Adv Med Lab. 1994;6:8.
2. Eisenwiener H, Keller M. Absorbance measurement in cuvettes lying longitudinal to the light beam. Clin Chem. 1979;25:117–121.
3. Schoeff LE, Williams RH. Principles of Laboratory Instruments. St. Louis, MO: Mosby–Yearbook; 1993.
4. Jacobs E, Simson E. Point of care testing and laboratory automation: the total picture of diagnostic testing at the beginning of the next century. Clin Lab News. 1999;25(12):12–14.
5. Boyce N. Why hospitals are moving to core labs. Clin Lab News. 1996;22:1–2.
6. Boyce N. Why labs should discourage routine testing. Clin Lab News. 1996;22:1, 9.
7. Burtis CA. Factors influencing evaporation from sample cups, and assessment of their effect on analytic error. Clin Chem. 1975;21:1907–1917.
8. Burtis CA, Watson JS.
Design and evaluation of an antievaporative cover for use with liquid containers. Clin Chem. 1992;38:768–775.
9. Dudley RF. Chemiluminescence immunoassay: an alternative to RIA. Lab Med. 1990;21:216.
10. Berson SA, Yalow RS. Assay of plasma insulin in human subjects by immunological methods. Nature. 1959;184:1648–1649.
11. Sheehan C. An overview of antigen–antibody interaction and its detection. In: Sheehan C, ed. Clinical Immunology: Principles and Laboratory Diagnosis. 2nd ed. Philadelphia, PA: Lippincott-Raven Publishers; 1997:109.
12. Kohler G, Milstein C. Continuous cultures of fused cells secreting antibody of predefined specificity. Nature. 1975;256:495.
13. Kusnetz J, Mansberg HP. Optical considerations: nephelometry. In: Ritchie RF, ed. Automated Immunoanalysis. Part 1. New York: Marcel Dekker; 1978.
14. Engvall E, Perlmann P. Enzyme-linked immunosorbent assay (ELISA). Quantitative assay of immunoglobulin G. Immunochemistry. 1971;8:871.
15. Van Weemen BK, Schuurs AHWM. Immunoassay using antigen-enzyme conjugates. FEBS Lett. 1971;15:232.
16. Nakamura RM, Bylund DJ. Fluorescence immunoassays. In: Rose NR, de Macario EC, Folds JD, et al., eds. Manual of Clinical Laboratory Immunology. 5th ed. Washington, DC: ASM Press; 1997:39.
17. Diamandis EP, Evangelista A, Pollack A, et al. Time-resolved fluoroimmunoassays with europium chelates as labels. Am Clin Lab. 1989;8(8):26.
18. Kricka LJ. Chemiluminescent and bioluminescent techniques. Clin Chem. 1991;37:1472.
19. Kricka LJ. Selected strategies for improving sensitivity and reliability of immunoassays. Clin Chem. 1994;40:347.
20. Bronstein I, Juo RR, Voyta JC. Novel chemiluminescent adamantyl 1,2-dioxetane enzyme substrates. In: Stanley PE, Kricka LJ, eds. Bioluminescence and Chemiluminescence: Current Status. Chichester: Wiley; 1991:73.
21. Edwards B, Sparks A, Voyta JC, et al. New chemiluminescent dioxetane enzyme substrates. In: Campbell AK, Kricka LJ, Stanley PE, eds.
Bioluminescence and Chemiluminescence: Fundamentals and Applied Aspects. Chichester: Wiley; 1994:56–59.
22. Kricka LJ. Human anti-animal antibody interferences in immunological assays. Clin Chem. 1999;45:942.
23. Kaplan IV, Levinson SS. When is a heterophile antibody not a heterophile antibody? When it is an antibody against a specific immunogen. Clin Chem. 1999;45:616.
24. Butch AW. Dilution protocols for detection of hook effects/prozone phenomenon. Clin Chem. 2000;46:1719.
25. Litchfield WJ. Shell-core particles for the turbidimetric immunoassays. In: Ngo TT, ed. Nonisotopic Immunoassay. New York, NY: Plenum Press; 1988.
26. Rubenstein KE, Schneider RS, Ullman EF. “Homogeneous” enzyme immunoassay. A new immunochemical technique. Biochem Biophys Res Commun. 1972;47:846.
27. Henderson DR, Freidman SB, Harris JD, et al. CEDIA, a new homogeneous immunoassay system. Clin Chem. 1986;32:1637.
28. Ullman EF, Schwartzberg M, Rubinstein KD. Fluorescent excitation transfer assay: a general method for determination of antigen. J Biol Chem. 1976;251:4172.
29. Dandliker WB, Kelly RJ, Dandiker BJ, et al. Fluorescence polarization immunoassay: theory and experimental methods. Immunochemistry. 1973;10:219.
30. Diamandis EP. Immunoassays with time-resolved fluorescence spectroscopy: principles and applications. Clin Biochem. 1988;21:139.
31. Armbruster DA, Overcash DR, Reyes J. Clinical chemistry laboratory automation in the 21st century – Amat Victoria curam (Victory loves careful preparation). Clin Biochem Rev. 2014;35:143–153.
32. Hawker CD. Nonanalytic laboratory automation: a quarter century of progress. Clin Chem. 2017;63:1074–1082.
33. Douglas L. Redefining automation: perspectives on today’s clinical laboratory. Clin Lab News. 1997;23:48–49.
34. Felder R. Front-end automation. In: Kost G, ed. Handbook of Clinical Laboratory Automation and Robotics. New York, NY: Wiley; 1995.
35. Sasaki M. A fully automated clinical laboratory. Lab Inform Mgmt. 1993;21:159–168.
36.
Markin R, Sasaki M. A laboratory automation platform: the next robotic step. MLO Med Lab Obs. 1992;24:24–29.
37. Bauer S, Teplitz C. Total laboratory automation: a view of the 21st century. MLO Med Lab Obs. 1995;27:22–25.
38. Bauer S, Teplitz C. Laboratory automation, part 2. Total lab automation: system design. MLO Med Lab Obs. 1995;27:44–50.
39. Felder R. Cost justifying laboratory automation. Clin Lab News. 1996;22:10–11, 17.
40. Felder R. Laboratory automation: strategies and possibilities. Clin Lab News. 1996;22:10–11.
41. Boyd J, Felder R, Savory J. Robotics and the changing face of the clinical laboratory. Clin Chem. 1996;42:1901–1910.
42. Hawker CD, Garr SB, Hamilton LT, et al. Automated transport and sorting system in a large reference laboratory: part 1. Evaluation of needs and alternatives and development of a plan. Clin Chem. 2002;48:1751–1760.
43. Hawker CD, Roberts WL, Garr SB, et al. Automated transport and sorting system in a large reference laboratory: part 2. Implementation of the system and performance measures over three years. Clin Chem. 2002;48:1761–1767.
44. Richardson P, Molloy J, Ravenhall R, et al. High speed centrifugal separator for rapid online sample clarification in biotechnology. J Biotechnol. 1996;49:111–118.
45. Hawker CD, Roberts WL, DaSilva A, et al. Development and validation of an automated thawing and mixing workcell. Clin Chem. 2007;53:2209–2211.
46. Zenie F. Re-engineering the laboratory. J Automat Chem. 1996;18:135–141.
47. Saboe T. Managing laboratory automation. J Automat Chem. 1995;17:83–88.
48. Fisher JA. Laboratory automation: communicating with all stakeholders is the key to success. Clin Lab News. 2000;26(7):38–40.
49. Brzezicki L. Workflow integration: does it make sense for your lab? Adv Lab. 1996;23:57–62.
50. Place J, Truchaud A, Ozawa K, et al. Use of artificial intelligence in analytical systems for the clinical laboratory. J Automat Chem. 1995;17:1–15.
51. Boyce N.
Neural networks in the lab: new hope or just hype? Clin Lab News. 1997;23:2–3.