UNIT-2 ANALYTICAL METHODS PDF

Summary

This document discusses analytical methods, focusing on errors in chemical analyses and on sampling, standardization, and calibration. It explains measurement uncertainty and related concepts such as replicates, precision, and accuracy.

Full Transcript


Chapter 5: Errors in Chemical Analyses

Measurements invariably involve errors and uncertainties. It is impossible to perform a chemical analysis that is totally free of errors or uncertainties; we can only hope to minimize errors and estimate their size with acceptable accuracy. Errors are caused by faulty calibrations or standardizations or by random variations and uncertainties in results. Frequent calibrations, standardizations, and analyses of known samples can sometimes be used to lessen all but the random errors and uncertainties.

The term error has two slightly different meanings:
1) Error refers to the difference between a measured value and the "true" or "known" value.
2) Error often denotes the estimated uncertainty in a measurement or experiment.

Every measurement is influenced by many uncertainties, which combine to produce a scatter of results.

Figure 5-1 Results from six replicate determinations of iron in aqueous samples of a standard solution containing 20.0 ppm iron(III). Note that the results range from a low of 19.4 ppm to a high of 20.3 ppm of iron. The average, or mean value, $\bar{x}$, of the data is 19.78 ppm, which rounds to 19.8 ppm.

Because measurement uncertainties can never be completely eliminated, measurement data can only give us an estimate of the "true" value. However, the probable magnitude of the error in a measurement can often be evaluated, and it is then possible to define limits within which the true value of a measured quantity lies with a given level of probability. Before beginning an analysis ask, "What maximum error can be tolerated in the result?" The answer to this question often determines the method chosen and the time required to complete the analysis.

5A Some Important Terms

To improve the reliability of results and to obtain information about their variability, two to five portions (replicates) of a sample are usually carried through an entire analytical procedure. Replicates are samples of about the same size that are carried through an analysis in exactly the same way.
- Individual results from a set of measurements are seldom the same.
- Usually, the "best" estimate is considered to be the central value for the set.
- The central value of a set should be more reliable than any of the individual results.
- Usually, the mean or the median is used as the central value for a set of replicate measurements.
An analysis of the variation in the data allows us to estimate the uncertainty associated with the central value.

The Mean and the Median

The mean, also called the arithmetic mean or the average, is obtained by dividing the sum of replicate measurements by the number of measurements in the set:

$\bar{x} = \frac{\sum_{i=1}^{N} x_i}{N}$

The symbol $\sum x_i$ means to add all of the values $x_i$ for the replicates; $x_i$ represents the individual values of x making up the set of N replicate measurements.
- The median is the middle value in a set of data that has been arranged in numerical order.
- The median is used advantageously when a set of data contains an outlier, a result that differs significantly from the others in the set.
- An outlier can have a significant effect on the mean of the set but has no effect on the median.
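To make the mean/median contrast concrete, here is a minimal Python sketch. The six replicate values are hypothetical, chosen only to be consistent with the range and mean reported in Figure 5-1, and the 24.0 ppm point is an invented outlier:

```python
from statistics import mean, median

# Hypothetical replicate results (ppm Fe); not the actual Figure 5-1 data.
results = [19.4, 19.5, 19.6, 19.8, 20.1, 20.3]

print(f"mean   = {mean(results):.2f} ppm")    # 19.78 ppm
print(f"median = {median(results):.2f} ppm")  # 19.70 ppm

# An outlier shifts the mean noticeably but barely moves the median.
with_outlier = results + [24.0]
print(f"mean with outlier   = {mean(with_outlier):.2f} ppm")    # 20.39 ppm
print(f"median with outlier = {median(with_outlier):.2f} ppm")  # 19.80 ppm
```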
Precision
* Precision describes the agreement among several results obtained in the same way; that is, it describes the reproducibility of measurements.
* Precision is readily determined by simply repeating the measurement on replicate samples.
* The precision of a set of replicate data may be expressed as the standard deviation, the variance, or the coefficient of variation.
* The deviation from the mean, $d_i$, is how much the individual result $x_i$ deviates from the mean:

$d_i = |x_i - \bar{x}|$

Accuracy

Accuracy indicates the closeness of the measurement to the true or accepted value and is expressed by the error. Accuracy measures agreement between a result and the accepted value. Accuracy is often more difficult to determine than precision because the true value is usually unknown; an accepted value must be used instead. Accuracy is expressed in terms of either absolute or relative error.

Figure 5-2 Note that we can have very precise results (upper right) with a mean that is not accurate, and an accurate mean (lower left) with data points that are imprecise.

Absolute Error

The absolute error of a measurement is the difference between the measured value and the true value:

$E = x_i - x_t$

If the measurement result is low, the sign is negative; if the measurement result is high, the sign is positive.

Relative Error

The relative error of a measurement is the absolute error divided by the true value:

$E_r = \frac{x_i - x_t}{x_t} \times 100\%$

Relative error may be expressed in percent, parts per thousand, or parts per million, depending on the magnitude of the result.
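The two error definitions translate directly into code. A minimal sketch; the pairing of a 19.8 ppm result with the 20.0 ppm iron(III) standard is only illustrative:

```python
def absolute_error(x_i: float, x_t: float) -> float:
    """Absolute error E = x_i - x_t (negative when the result is low)."""
    return x_i - x_t

def relative_error_pct(x_i: float, x_t: float) -> float:
    """Relative error E_r = (x_i - x_t) / x_t * 100%."""
    return (x_i - x_t) / x_t * 100.0

# A 19.8 ppm result for a 20.0 ppm standard:
print(absolute_error(19.8, 20.0))      # -0.2 (ppm)
print(relative_error_pct(19.8, 20.0))  # -1.0 (%)
```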
Types of Errors in Experimental Data

Results can be precise without being accurate and accurate without being precise. Each dot represents the error associated with a single determination, and each vertical line labeled $(x_i - x_t)$ is the absolute average deviation of the set from the true value.

Figure 5-3 Absolute error in the micro-Kjeldahl determination of nitrogen.

Chemical analyses are affected by at least two types of errors:
1. Systematic (or determinate) error, which causes the mean of a data set to differ from the accepted value.
2. Random (or indeterminate) error, which causes data to be scattered more or less symmetrically around a mean value.
A third type of error is gross error.
- These differ from indeterminate and determinate errors.
- They usually occur only occasionally, are often large, and may cause a result to be either high or low.
- They are often the product of human errors.
- Gross errors lead to outliers, results that appear to differ markedly from all other data in a set of replicate measurements.

5B Systematic Errors

Systematic errors have a definite value, an assignable cause, and are of the same magnitude for replicate measurements made in the same way. They lead to bias in measurement results. There are three types of systematic errors: instrumental errors, method errors, and personal errors.

Instrumental Errors
- These are caused by nonideal instrument behavior, by faulty calibrations, or by use under inappropriate conditions.
- Pipets, burets, and volumetric flasks may hold or deliver volumes slightly different from those indicated by their graduations.
- Calibration eliminates most systematic errors of this type.
- Electronic instruments can be influenced by noise, temperature, and pH, and are also subject to systematic errors.
- Errors of these types usually are detectable and correctable.

Method Errors
- The nonideal chemical or physical behavior of the reagents and reactions on which an analysis is based often introduces systematic method errors.
- Such sources of nonideality include the slowness of some reactions, the incompleteness of others, the instability of some species, the lack of specificity of most reagents, and the possible occurrence of side reactions that interfere with the measurement process.
- Errors inherent in a method are often difficult to detect and are thus usually the most difficult type to identify and correct.

Personal Errors
- These result from the carelessness, inattention, or personal limitations of the experimenter.
- Many measurements require personal judgments, for example, estimating the position of a pointer between two scale divisions, the color of a solution at the end point in a titration, or the level of a liquid with respect to a graduation in a pipet or buret. Judgments of this type are often subject to systematic, unidirectional errors.
- A universal source of personal error is prejudice, or bias.
- Number bias is another source of personal error that varies considerably from person to person. The most frequent number bias encountered in estimating the position of a needle on a scale involves a preference for the digits 0 and 5. Also common is a prejudice favoring small digits over large and even numbers over odd.
- Digital and computer displays on pH meters, laboratory balances, and other electronic instruments eliminate number bias because no judgment is involved in taking a reading.

5B-2 The Effect of Systematic Errors on Analytical Results

Systematic errors may be either constant or proportional.

Constant Errors
- The magnitude of a constant error stays essentially the same as the size of the quantity measured is varied. With constant errors, the absolute error is constant with sample size, but the relative error varies when the sample size is changed.
- One way of reducing the effect of a constant error is to increase the sample size until the error is acceptable.
- The excess of reagent needed to bring about a color change during a titration is an example of a constant error. This volume, usually small, remains the same regardless of the total volume of reagent required for the titration, so the relative error from this source becomes more serious as the total volume decreases.

Proportional Errors
- Proportional errors decrease or increase in proportion to the size of the sample.
- A common cause of proportional errors is the presence of interfering contaminants in the sample.
- For example, a widely used method for the determination of copper is based on the reaction of copper(II) ion with potassium iodide to give iodine. The quantity of iodine is then measured and is proportional to the amount of copper. Iron(III), if present, also liberates iodine from potassium iodide. Unless steps are taken to prevent this interference, high results are observed for the percentage of copper because the iodine produced will be a measure of both the copper(II) and the iron(III) in the sample.
- The size of this error is fixed by the fraction of iron contamination, which is independent of the size of sample taken. If the sample size is doubled, for example, the amount of iodine liberated by both the copper and the iron contaminant is also doubled. Thus, the magnitude of the reported percentage of copper is independent of sample size.

Source: slideplayer.com / Fundamentals of Analytical Chemistry, F.J. Holler, S.R. Crouch
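A small numeric sketch of the distinction, with entirely hypothetical numbers: a fixed 0.5 mg analyte loss acts as a constant error whose relative effect shrinks as the sample grows, while a contaminant at 1% of the sample mass acts as a proportional error whose relative effect stays fixed:

```python
# Hypothetical: constant error of -0.5 mg vs. proportional error of +1%.
for mass_mg in (50.0, 100.0, 500.0):
    constant_err = -0.5                # mg, independent of sample size
    proportional_err = 0.01 * mass_mg  # mg, scales with sample size
    print(f"{mass_mg:5.0f} mg sample: "
          f"constant {100 * constant_err / mass_mg:+.2f}%, "
          f"proportional {100 * proportional_err / mass_mg:+.2f}%")
```

The relative constant error falls from -1.00% to -0.10% as the sample grows tenfold, while the proportional error remains +1.00% throughout.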
5B-3 Detection and Elimination of Systematic (Instrumental and Personal) Errors

1. Periodic calibration of equipment is always desirable because the response of most instruments changes with time as a result of component aging, corrosion, or mistreatment.
2. Most personal errors can be minimized by careful, disciplined laboratory work.
3. It is a good habit to check instrument readings, notebook entries, and calculations systematically.
4. Errors due to limitations of the experimenter can usually be avoided by carefully choosing the analytical method or using an automated procedure.

5B-4 Detection of Systematic (Method) Errors

Bias in an analytical method is particularly difficult to detect. The best way to estimate the bias of an analytical method is by analyzing standard reference materials (SRMs).

Analysis of Standard Samples
- The overall composition of a synthetic standard material must closely approximate the composition of the samples to be analyzed.
- Great care must be taken to ensure that the concentration of analyte is known exactly.
- A synthetic standard may not reveal unexpected interferences, so the accuracy of determinations may not be known.

Independent Analysis
- If standard samples are not available, a second independent and reliable analytical method can be used in parallel.
- The independent method should differ as much as possible from the one under study. This practice minimizes the possibility that some common factor in the sample has the same effect on both methods.
- Again, a statistical test must be used to determine whether any difference is a result of random errors in the two methods or due to bias in the method under study.

Blank Determinations
- A blank contains the reagents and solvents used in a determination, but no analyte. Often, many of the sample constituents are added to simulate the analyte environment, which is called the sample matrix.
- In a blank determination, all steps of the analysis are performed on the blank material, and the results are then applied as a correction to the sample measurements.
- Blank determinations reveal errors due to interfering contaminants from the reagents and vessels employed in the analysis. Blanks are also used to correct titration data for the volume of reagent needed to cause an indicator to change color.

Variation in Sample Size
- As the size of a measurement increases, the effect of a constant error decreases. Thus, constant errors can often be detected by varying the sample size.

Suggested Problems 5.1, 5.3, 5.11, 5.12(a-d-f), 5.13(a-b-c)

Chapter 8: Sampling, Standardization, and Calibration

Because a chemical analysis uses only a small fraction of the available sample, the process of sampling is a very important operation. Knowing how much sample to collect and how to further subdivide the collected sample to obtain a laboratory sample is vital in the analytical process. Statistical methods are used to aid in the selection of a representative sample. The analytical sample must be processed in a dependable manner that maintains sample integrity without losing sample or introducing contaminants. Many laboratories use automated sample-handling methods.

8A Analytical Samples and Methods

Types of Samples and Methods
- Quantitative methods are traditionally classified as gravimetric methods, volumetric methods, and instrumental methods.
- Methods are also classified by the size of the sample and the level of the constituents.

Sample Size
- Techniques for handling very small samples are quite different from those for treating macro samples.

Constituent Types
- In some cases, analytical methods are used to determine major constituents, those present in the range of 1 to 100% by mass.
- Species present in the range of 0.01 to 1% are usually termed minor constituents.
- Those present in amounts between 100 ppm (0.01%) and 1 ppb are called trace constituents.
- Components present in amounts lower than 1 ppb are usually considered to be ultratrace constituents.

A general problem in trace procedures is that the reliability of results usually decreases dramatically with a decrease in analyte level. The relative standard deviation between laboratories increases as the level of analyte decreases: at the ultratrace level of 1 ppb, interlaboratory error (%RSD) is nearly 50%, and at lower levels the error approaches 100%.

Figure 8-3 Interlaboratory error as a function of analyte concentration.

Real Samples
- The analysis of real samples is complicated by the presence of the sample matrix, which can contain species with chemical properties similar to those of the analyte.
- If the interferences are caused by extraneous species in the matrix, they are often called matrix effects.
- Such effects can be induced not only by the sample itself but also by the reagents and solvents used to prepare the samples for the determination.
- Samples are analyzed, but constituents or concentrations are determined.

8B Sampling

The process by which a representative fraction is acquired from a material of interest is termed sampling (e.g., taking a few milliliters of water from a polluted lake). It is often the most difficult aspect of an analysis. Sampling for a chemical analysis necessarily requires the use of statistics because conclusions will be drawn about a much larger amount of material from the analysis of a small laboratory sample.

8B-1 Obtaining a Representative Sample
- The items chosen for analysis are often called sampling units or sampling increments.
- The collection of sampling units or increments is called the gross sample.
- For laboratory analysis, the gross sample is usually reduced in size and homogenized to create the laboratory sample.
- The composition of the gross sample and the laboratory sample must closely resemble the average composition of the total mass of material to be analyzed.

Figure 8-4 Steps in obtaining a laboratory sample. The laboratory sample consists of a few grams to at most a few hundred grams. It may constitute as little as 1 part in $10^7$ to $10^8$ of the bulk material.

Statistically, the goals of the sampling process are:
1. To obtain a mean analyte concentration that is an unbiased estimate of the population mean. This goal can be realized only if all members of the population have an equal probability of being included in the sample.
2. To obtain a variance in the measured analyte concentration that is an unbiased estimate of the population variance, so that valid confidence limits can be found for the mean and various hypothesis tests can be applied. This goal can be reached only if every possible sample is equally likely to be drawn.

Both goals require obtaining a random sample. A randomization procedure may be used wherein the samples are assigned a number and the samples to be tested are then selected from a table of random numbers. For example, suppose our sample is to consist of 10 pharmaceutical tablets drawn from 1000 tablets off a production line. One way to ensure the sample is random is to choose the tablets to be tested using random numbers. These can be conveniently generated from a random number table or from a spreadsheet, as shown in Figure 8-5. Here, we would assign each of the tablets a number from 1 to 1000 and use the sorted random numbers in column C of the spreadsheet to pick tablet 16, 33, 97, etc. for analysis.

Figure 8-5 Ten random numbers between 1 and 1000 generated using a spreadsheet. The random number function in Excel [=RAND()] generates random numbers between 0 and 1.
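The same randomization can be done outside a spreadsheet. A minimal Python sketch (the tablet numbers drawn will of course differ from those in Figure 8-5):

```python
import random

# Draw 10 distinct tablet numbers from 1..1000 without replacement,
# mirroring the spreadsheet procedure of Figure 8-5.
random.seed(42)  # fixed seed only so the example is reproducible
chosen = sorted(random.sample(range(1, 1001), k=10))
print(chosen)
```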
8B-2 Sampling Uncertainties
- Systematic errors can be eliminated by exercising care, by calibration, and by the proper use of standards, blanks, and reference materials.
- Random errors, which are reflected in the precision of data, can generally be kept at an acceptable level by close control of the variables that influence the measurements.
- Errors due to invalid sampling are unique in the sense that they are not controllable by the use of blanks and standards or by closer control of experimental variables.
- For random and independent uncertainties, the overall standard deviation $s_o$ for an analytical measurement is related to the standard deviation of the sampling process $s_s$ and to the standard deviation of the method $s_m$ by the relationship

$s_o^2 = s_s^2 + s_m^2$

An analysis of variance can reveal whether the between-samples variation (sampling plus measurement variance) is significantly greater than the within-samples variation (measurement variance). When $s_m \leq s_s/3$, there is no point in trying to improve the measurement precision. This result suggests that, if the sampling uncertainty is large and cannot be improved, it is often a good idea to switch to a less precise but faster method of analysis so that more samples can be analyzed in a given length of time. Since the standard deviation of the mean is lower by a factor of $\sqrt{N}$, taking more samples can improve precision.
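A quick numeric illustration of this additivity of variances, with hypothetical values in which $s_m \leq s_s/3$:

```python
import math

s_s = 0.40  # sampling standard deviation (hypothetical units)
s_m = 0.12  # method standard deviation; here s_m <= s_s / 3
s_o = math.sqrt(s_s**2 + s_m**2)
print(f"s_o = {s_o:.3f}")  # 0.418 -- barely larger than s_s alone
```

Even a much better method precision would leave $s_o$ essentially unchanged, which is why improving the measurement step is pointless once sampling dominates.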
8B-3 The Gross Sample

Ideally, the gross sample is a miniature replica of the entire mass of material to be analyzed. It is the collection of the individual sampling units and must be representative of the whole in composition and in particle-size distribution.

The size of the gross sample is determined by:
(1) the uncertainty that can be tolerated between the composition of the gross sample and that of the whole,
(2) the degree of heterogeneity of the whole, and
(3) the level of particle size at which heterogeneity begins.

- The number of particles, N, required in a gross sample ranges from a few particles to $10^{12}$ particles.
- The magnitude of this number depends on the uncertainty that can be tolerated and on how heterogeneous the material is.
- Large numbers of particles are not needed for homogeneous gases and liquids.
- The laws of probability govern the composition of a gross sample removed randomly from a bulk of material.

As an idealized example, presume that a pharmaceutical mixture contains just two types of particles: type A particles containing the active ingredient and type B particles containing only an inactive filler material. All particles are the same size. We wish to collect a gross sample that will allow us to determine the percentage of particles containing the active ingredient in the bulk material. Assume that the probability of randomly drawing an A-type particle is $p$ and that of randomly drawing a B-type particle is $(1-p)$. If N particles of the mixture are taken, the most probable value for the number of A-type particles is $pN$, while the most probable number of B-type particles is $(1-p)N$.

For such a binary population, the Bernoulli equation can be used to calculate the standard deviation of the number of A particles drawn, $\sigma_A$:

$\sigma_A = \sqrt{Np(1-p)}$

The relative standard deviation $\sigma_r$ of drawing A-type particles is

$\sigma_r = \frac{\sigma_A}{Np} = \sqrt{\frac{1-p}{Np}}$

Thus, the number of particles needed is

$N = \frac{1-p}{p\,\sigma_r^2}$

For example, if 80% of the particles are type A ($p = 0.8$) and the desired relative standard deviation is 1% ($\sigma_r = 0.01$), the number of particles making up the gross sample should be

$N = \frac{1-0.8}{0.8\,(0.01)^2} = 2500$

To determine what number of particles, and thus what mass, ensures a sample whose overall average percent of active ingredient is P with a sampling relative standard deviation of $\sigma_r$, we use

$N = p(1-p)\left(\frac{d_A d_B}{d^2}\right)^2 \left(\frac{P_A - P_B}{\sigma_r P}\right)^2$

where $d_A$ and $d_B$ are the densities of the two kinds of particles, d is the average density of the sample, and $P_A$ and $P_B$ are the percentages of active ingredient in particles A and B, respectively. The degree of heterogeneity as measured by $P_A - P_B$ has a large influence on the number of particles required, since N increases with the square of the difference in composition of the two components of the mixture. Rearranging this equation to calculate the relative standard deviation of sampling, $\sigma_r$, we get

$\sigma_r = \frac{P_A - P_B}{P} \cdot \frac{d_A d_B}{d^2} \sqrt{\frac{p(1-p)}{N}}$

If we assume that the sample mass m is proportional to the number of particles and that the other quantities are constant, the product of m and $\sigma_r^2$ should be a constant. This constant, $K_s$, is called the Ingamells sampling constant:

$K_s = m \times (\sigma_r \times 100\%)^2$

where the term $\sigma_r \times 100\%$ is the percent relative standard deviation.
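A minimal sketch of the two simplest relations above; the numbers reproduce the $p = 0.8$, $\sigma_r = 0.01$ worked example:

```python
import math

def particles_needed(p: float, sigma_r: float) -> float:
    """N = (1 - p) / (p * sigma_r**2) for a binary particle mixture."""
    return (1 - p) / (p * sigma_r**2)

def sampling_rsd(p: float, n: float) -> float:
    """sigma_r = sqrt((1 - p) / (N * p)) -- the same relation inverted."""
    return math.sqrt((1 - p) / (n * p))

print(particles_needed(p=0.8, sigma_r=0.01))  # 2500.0 particles
print(sampling_rsd(p=0.8, n=2500))            # 0.01 (1% RSD)
```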
- To simplify the problem of defining the mass of a gross sample of a multicomponent mixture, assume that the sample is a hypothetical two-component mixture.
- The problem of variable particle size can be handled by calculating the number of particles that would be needed if the sample consisted of particles of a single size; the gross sample mass is then determined by taking the particle-size distribution into account.
- One approach is to calculate the necessary mass by assuming that all particles are the size of the largest. This procedure is not very efficient because it usually calls for removal of a larger mass of material than necessary.
- The mass of the sample increases directly as the volume (or as the cube of the particle diameter), so a reduction in the particle size of a given material has a large effect on the mass required for the gross sample.

Sampling Homogeneous Solutions of Liquids and Gases
- Well-mixed solutions of liquids and gases require only a very small sample because they are homogeneous down to the molecular level.
- Gases can be sampled by several methods. For example, a sampling bag is simply opened and filled with the gas, or gases can be trapped in a liquid or adsorbed onto the surface of a solid.

Sampling Metals and Alloys
- Samples of metals and alloys are obtained by sawing, milling, or drilling.
- It is not safe to assume that chips of the metal removed from the surface are representative of the entire bulk; solid from the interior must be sampled as well.
- With some materials, a representative sample can be obtained by sawing across the piece at random intervals.

Sampling Particulate Solids
- It is often difficult to obtain a random sample from a bulky particulate material.
- Random sampling can best be accomplished while the material is being transferred.
- Mechanical devices have been developed for handling many types of particulate matter.

Figure 8-6 Sampling particulate solids.

8B-4 Preparing a Laboratory Sample

For heterogeneous solids, the mass of the gross sample may range from hundreds of grams to kilograms or more, so reduction of the gross sample to a finely ground and homogeneous laboratory sample of at most a few hundred grams is necessary. This process involves a cycle of operations that includes crushing and grinding, sieving, mixing, and dividing the sample (often into halves) to reduce its mass.

Number of Laboratory Samples
- The number of samples depends on the required confidence interval and the desired relative standard deviation of the method.
- If the sampling standard deviation $\sigma_s$ is known, we can use values of z from tables to get

CI for $\mu = \bar{x} \pm \frac{z\sigma_s}{\sqrt{N}}$

- Usually, an estimate $s_s$ of $\sigma_s$ is used with t instead of z:

CI for $\mu = \bar{x} \pm \frac{t s_s}{\sqrt{N}}$

If we divide the uncertainty term by the mean value $\bar{x}$, we can calculate the relative uncertainty $\sigma_r$ that is tolerable at a given confidence level:

$\sigma_r = \frac{t s_s}{\bar{x}\sqrt{N}}$

Solving this equation for the number of samples N gives

$N = \frac{t^2 s_s^2}{\sigma_r^2 \bar{x}^2}$
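Because t itself depends on N through the degrees of freedom, this equation is solved iteratively in practice. A sketch using SciPy's t quantiles; all input values are hypothetical:

```python
import math
from scipy.stats import t as t_dist

def samples_needed(s_s: float, x_bar: float, sigma_r: float,
                   confidence: float = 0.95) -> int:
    """Solve N = t^2 * s_s^2 / (sigma_r^2 * x_bar^2) by iteration,
    updating the t value as the degrees of freedom (N - 1) change."""
    n = 2.0
    for _ in range(100):
        t_val = t_dist.ppf(1 - (1 - confidence) / 2, df=max(1, round(n) - 1))
        n_new = (t_val * s_s / (sigma_r * x_bar)) ** 2
        if abs(n_new - n) < 0.5:
            break
        n = n_new
    return math.ceil(n_new)

# Hypothetical: s_s = 0.3, mean level 3.7, 2% tolerable relative uncertainty.
print(samples_needed(s_s=0.3, x_bar=3.7, sigma_r=0.02))  # about 66 samples
```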
8C Automated Sample Handling

Automated sample handling can lead to higher throughput (more analyses per unit time), higher reliability, and lower costs than manual sample handling.

Discrete (Batch) Methods
- These often mimic the operations that would be performed manually. Some discrete sample processors automate only the measurement step of the procedure, or a few chemical steps and the measurement step.

Continuous Flow Methods
- The sample is inserted into a flowing stream, where a number of operations can be performed on it prior to transporting it to a flow-through detector. These methods can perform not only sample-processing operations but also the final measurement step.
- Two types of continuous flow analyzers are the segmented flow analyzer and the flow injection analyzer.
- The segmented flow analyzer divides the sample into discrete segments separated by gas bubbles. The gas bubbles provide barriers that prevent the sample from spreading out along the tube due to dispersion processes. Dispersion is a band-spreading or mixing phenomenon that results from the coupling of fluid flow with molecular diffusion; diffusion is mass transport due to a concentration gradient.

Figure 8-7 Segmented continuous flow analyzer. The segmented sample is shown in more detail in (b). The analyte concentration profiles at the sampler and at the detector are shown in (c). Normally, the height of a sample peak is related to the concentration of the analyte.

Figure 8-8 Flow injection analyzer. Samples can be processed with FIA at rates varying from 60 to 300 samples per hour. The valve, shown in the load position, also has a second, inject position shown by the dotted lines. When switched to the inject position, the stream containing the reagent flows through the sample loop. Sample and reagent are allowed to mix and react in the mixing coil before reaching the detector. In this case, the sample plug is allowed to disperse prior to reaching the detector (b). The resulting concentration profile (detector response) depends on the degree of dispersion.

8D Standardization and Calibration
- Calibration determines the relationship between the analytical response and the analyte concentration. This relationship is usually determined by the use of chemical standards prepared from purified reagents.
- To reduce interferences from other constituents in the sample matrix, called concomitants, standards are added to the analyte solution (internal standard methods or standard addition methods), or matrix matching or modification is carried out.
- Almost all analytical methods require calibration with chemical standards. Gravimetric methods and some coulometric methods are absolute methods that do not rely on calibration with chemical standards.

8D-1 Comparison with Standards

Two types of comparison methods are direct comparison techniques and titration procedures.

Direct Comparison
- Some analytical procedures involve comparing a property of the analyte with standards such that the property being tested matches or nearly matches that of the standard. These are called null comparison or isomation methods.
- Some modern instruments use a variation of this procedure to determine whether an analyte concentration exceeds or falls below some threshold level. Such a comparator can be used, for example, to determine whether the aflatoxin level in a sample exceeds the threshold that would indicate a toxic situation.

Titrations
- Titrations are among the most accurate of all analytical procedures.
- In a titration, the analyte reacts with a standardized reagent (the titrant) in a known stoichiometric manner.
- The amount of titrant is varied until chemical equivalence is reached, as indicated by the color change of a chemical indicator or by the change in an instrument response. This is called the end point.
- The amount of the standardized reagent needed to achieve chemical equivalence can then be related to the amount of analyte present by means of the stoichiometry. Titration is thus a type of chemical comparison.

8D-2 External Standard Calibration
- A series of standard solutions is prepared separately from the sample.
- The standards are used to establish the instrument calibration function, which is obtained from analysis of the instrument response as a function of the known analyte concentration.
- The calibration function can be obtained graphically or in mathematical form.
- Generally, a plot of instrument response versus known analyte concentrations is used to produce a calibration curve, sometimes called a working curve.

Figure 8-9 Calibration curve of absorbance versus analyte concentration for a series of standards. The calibration curve is used in an inverse fashion to obtain the concentration of an unknown with an absorbance of 0.505: the absorbance is located on the line, and the concentration corresponding to that absorbance is then read from the concentration axis.

The Least-Squares Method

Statistical methods, such as the method of least squares, are routinely used to find the mathematical equation describing the calibration function. Two assumptions are made:

1. There is actually a linear relationship between the measured response y (absorbance) and the standard analyte concentration x. The mathematical relationship that describes this assumption is called the regression model, which may be represented as y = mx + b, where b is the y-intercept (the value of y when x is zero) and m is the slope of the line.

Figure 8-10 The slope-intercept form of a straight line.

2. Any deviation of the individual points from the straight line arises from error in the measurement. That is, we assume there is no error in the x values of the points (concentrations).

Whenever there is significant uncertainty in the x data, basic linear least-squares analysis may not give the best straight line, in which case a more complex correlation analysis may be used. It may also be necessary to apply different weighting factors to the points and perform a weighted least-squares analysis.
Finding the Least-Squares Line

The least-squares method forms the sum of the squares of the residuals, $SS_{resid}$, and minimizes it using calculus:

$SS_{resid} = \sum_{i=1}^{N} [y_i - (b + m x_i)]^2$

The slope and intercept are expressed in terms of the sums of squares

$S_{xx} = \sum (x_i - \bar{x})^2 = \sum x_i^2 - \frac{(\sum x_i)^2}{N}$

$S_{yy} = \sum (y_i - \bar{y})^2 = \sum y_i^2 - \frac{(\sum y_i)^2}{N}$

$S_{xy} = \sum (x_i - \bar{x})(y_i - \bar{y}) = \sum x_i y_i - \frac{\sum x_i \sum y_i}{N}$

where $x_i$ and $y_i$ are individual pairs of data for x and y, N is the number of pairs, and $\bar{x}$ and $\bar{y}$ are the average values of x and y. From these values one can derive:

(1) the slope of the line: $m = \frac{S_{xy}}{S_{xx}}$

(2) the intercept: $b = \bar{y} - m\bar{x}$

(3) the standard deviation about regression: $s_r = \sqrt{\frac{S_{yy} - m^2 S_{xx}}{N-2}}$

(4) the standard deviation of the slope: $s_m = \sqrt{\frac{s_r^2}{S_{xx}}}$

(5) the standard deviation of the intercept: $s_b = s_r \sqrt{\frac{\sum x_i^2}{N\sum x_i^2 - (\sum x_i)^2}} = s_r \sqrt{\frac{1}{N - (\sum x_i)^2 / \sum x_i^2}}$

(6) the standard deviation for results obtained from the calibration curve: $s_c = \frac{s_r}{m} \sqrt{\frac{1}{M} + \frac{1}{N} + \frac{(\bar{y}_c - \bar{y})^2}{m^2 S_{xx}}}$

where M is the number of replicate measurements of the unknown and $\bar{y}_c$ is their mean response.

The standard deviation about regression, also called the standard error of the estimate or just the standard error, is a rough measure of the magnitude of a typical deviation from the regression line:

$s_r = \sqrt{\frac{\sum_{i=1}^{N} [y_i - (b + m x_i)]^2}{N-2}} = \sqrt{\frac{SS_{resid}}{N-2}}$
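The formulas above are easy to implement directly. A self-contained sketch with hypothetical calibration data (in practice NumPy or a statistics package would normally be used):

```python
import math

# Hypothetical standards: concentration x (ppm) vs. absorbance y.
x = [0.0, 2.0, 4.0, 6.0, 8.0]
y = [0.002, 0.150, 0.294, 0.434, 0.570]

N = len(x)
x_bar, y_bar = sum(x) / N, sum(y) / N

S_xx = sum(xi**2 for xi in x) - sum(x)**2 / N
S_yy = sum(yi**2 for yi in y) - sum(y)**2 / N
S_xy = sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y) / N

m = S_xy / S_xx                                  # slope
b = y_bar - m * x_bar                            # intercept
s_r = math.sqrt((S_yy - m**2 * S_xx) / (N - 2))  # std. dev. about regression

print(f"m = {m:.4f}, b = {b:.4f}, s_r = {s_r:.5f}")

# Inverse use of the curve: concentration of an unknown whose mean
# response over M replicates is y_c, with its standard deviation s_c.
y_c, M = 0.505, 3
c_unknown = (y_c - b) / m
s_c = (s_r / m) * math.sqrt(1/M + 1/N + (y_c - y_bar)**2 / (m**2 * S_xx))
print(f"c = {c_unknown:.2f} ppm, s_c = {s_c:.3f} ppm")
```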
Interpretation of Least-Squares Results

The sum of the squares of the residuals, $SS_{resid}$, measures the variation in the observed values of the dependent variable (y values) that is not explained by the presumed linear relationship between x and y. The total variation is measured by

$SS_{tot} = S_{yy} = \sum (y_i - \bar{y})^2 = \sum y_i^2 - \frac{(\sum y_i)^2}{N}$

The coefficient of determination ($R^2$) measures the fraction of the observed variation in y that is explained by the linear relationship:

$R^2 = 1 - \frac{SS_{resid}}{SS_{tot}}$

The difference between $SS_{tot}$ and $SS_{resid}$ is the sum of the squares due to regression, $SS_{regr}$:

$SS_{regr} = SS_{tot} - SS_{resid}$, so that $R^2 = \frac{SS_{regr}}{SS_{tot}}$

- A significant regression is one in which the variation in the y values due to the presumed linear relationship is large compared to that due to error (residuals).
- The F value gives an indication of the significance of the regression; when the regression is significant, a large value of F occurs.

Transformed Variables

Linear least squares gives best estimates of the transformed variables, but these may not be optimal when transformed back to obtain estimates of the original parameters. For the original parameters, nonlinear regression methods may give better estimates.

Errors in External Standard Calibration
- When external standards are used, it is assumed that, when the same analyte concentration is present in the sample and in the standard, the same response will be obtained.
- The raw response from the instrument is usually not used. Instead, the raw analytical response is corrected by measuring a blank. The ideal blank is identical to the sample but without the analyte. A real blank is either a solvent blank, containing the same solvent in which the sample is dissolved, or a reagent blank, containing the solvent plus all the reagents used in sample preparation.
- Systematic errors can also occur during the calibration process. To avoid them, standards must be accurately prepared, and their chemical state must be identical to that of the analyte in the sample. The standards should be stable in concentration, at least during the calibration process.
- Random errors can also influence the accuracy of results obtained from calibration curves.

Figure 8-11 A calibration curve with confidence limits. Measurements made near the center of the curve will give less uncertainty in analyte concentration than those made at the extremes.

Minimizing Errors in Analytical Procedures

The overall accuracy and precision of an analysis is not necessarily limited by the measurement step; it may instead be limited by factors such as sampling, sample preparation, and calibration.

Separations
- Sample cleanup by separation methods is an important way to minimize errors from possible interferences in the sample matrix. Techniques such as filtration, precipitation, dialysis, solvent extraction, volatilization, ion exchange, and chromatography can be used. In some cases, separation may be the only way to eliminate an interfering species.

Saturation, Matrix Modification, and Masking
* The saturation method involves adding the interfering species to all the samples, standards, and blanks so that the interference effect becomes independent of the original concentration of the interfering species in the sample.
* A matrix modifier is a species, not itself an interfering species, added to samples, standards, and blanks in sufficient amounts to make the analytical response independent of the concentration of the interfering species.
* Sometimes, a masking agent is added that reacts selectively with the interfering species to form a complex that does not interfere.

Dilution and Matrix Matching
* The dilution method can sometimes be used if the interfering species produces no significant effect below a certain concentration level.
* The matrix-matching method attempts to duplicate the sample matrix by adding the major matrix constituents to the standard and blank solutions.
* Errors in procedures can thus be minimized by saturating with interfering species, by adding matrix modifiers or masking agents, by diluting the sample, or by matching the matrix of the sample.

Internal Standard Methods
* An internal standard is a reference species, chemically and physically similar to the analyte, that is added in a known amount to all the samples, standards, and blanks.
* The response signal is then not the analyte signal itself but the ratio of the analyte signal to the internal standard signal.
* A calibration curve is prepared in which the y-axis is this ratio of responses and the x-axis is the analyte concentration in the standards, as usual.
* This method can compensate for certain types of errors if these influence both the analyte and the reference species to the same proportional extent.

Figure 8-12 Illustration of the internal standard method.

Standard Addition Methods
* The method of standard additions is used when it is difficult or impossible to duplicate the sample matrix.
* A known amount of a standard solution of analyte is added to one portion of the sample, and the responses before and after the addition are measured and used to obtain the analyte concentration. Alternatively, multiple additions are made to several portions of the sample.
* The standard additions method assumes a linear response; linearity should always be confirmed, or the multiple additions method used to check it.
* The method is quite powerful provided two conditions hold: first, there must be a good blank measurement so that extraneous species do not contribute to the analytical response; second, the calibration curve for the analyte must be linear in the sample matrix.
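A minimal sketch of the single-addition arithmetic, assuming a linear response and that dilution by the spike is negligible or already corrected for (all numbers hypothetical):

```python
def standard_addition(signal_before: float, signal_after: float,
                      conc_added: float) -> float:
    """Single standard addition with linear response S = k*c:
    S1 = k*c_x and S2 = k*(c_x + dc)  =>  c_x = dc * S1 / (S2 - S1)."""
    return conc_added * signal_before / (signal_after - signal_before)

# Spiking the sample with 5.0 ppm raises the signal from 0.220 to 0.410:
print(standard_addition(0.220, 0.410, 5.0))  # about 5.8 ppm
```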
8E Figures of Merit for Analytical Methods

Analytical procedures are characterized by a number of figures of merit, such as accuracy, precision, sensitivity, detection limit, and dynamic range.

8E-1 Sensitivity and Detection Limit

The definition of sensitivity most often used is the calibration sensitivity, the change in the response signal per unit change in analyte concentration. The calibration sensitivity is thus the slope of the calibration curve. The calibration sensitivity does not, however, indicate what concentration differences can be detected: noise in the response signals must be taken into account in order to be quantitative about detectable differences. For this reason, the term analytical sensitivity is sometimes used. The analytical sensitivity is the ratio of the calibration curve slope to the standard deviation of the analytical signal at a given analyte concentration; it is usually a strong function of concentration.

The detection limit, DL, is the smallest concentration that can be reported with a certain level of confidence.

Figure 8-14 Calibration curve of response R versus concentration c. The slope of the calibration curve is called the calibration sensitivity m. The detection limit, DL, designates the lowest concentration that can be measured at a specified confidence level.

* Every analytical technique has a detection limit.
* The detection limit is the analyte concentration that produces a response equal to k times the standard deviation of the blank, $s_b$:

$DL = \frac{k s_b}{m}$

where k is called the confidence factor and m is the calibration sensitivity. The factor k is usually chosen to be 2 or 3. A k value of 2 corresponds to a confidence level of 92.1%, while a k value of 3 corresponds to a 98.3% confidence level.

Linear Dynamic Range
* The linear dynamic range of an analytical method most often refers to the concentration range over which the analyte can be determined using a linear calibration curve.
* The lower limit is generally considered to be the detection limit.
* The upper end is usually taken as the concentration at which the analytical signal or the slope of the calibration curve deviates by a specified amount from linearity; usually a deviation of 5% is considered the upper limit.

Quality Assurance of Analytical Results

Control Charts
* A control chart is a sequential plot of some quality characteristic that is important in quality assurance.
* The chart also shows the statistical limits of variation that are permissible for the characteristic being measured, the upper control limit (UCL) and the lower control limit (LCL):

$UCL = \mu + \frac{3\sigma}{\sqrt{N}} \qquad LCL = \mu - \frac{3\sigma}{\sqrt{N}}$

where $\mu$ is the population mean, $\sigma$ is the population standard deviation, and N is the number of replicates for each sample.

As an example, mass data were collected on twenty-four consecutive days for a 20.000-g standard mass certified by the National Institute of Standards and Technology. On each day, five replicate determinations were made. From independent experiments, estimates of the population mean and standard deviation were found to be $\mu$ = 20.000 g and $\sigma$ = 0.00012 g, respectively. For the mean of five measurements, $3\sigma/\sqrt{N} = 3 \times 0.00012/\sqrt{5} = 0.00016$ g. Hence, UCL = 20.00016 g and LCL = 19.99984 g.

Figure 8-15 A control chart for a modern analytical balance. As long as the mean mass remains between the LCL and the UCL, the balance is said to be in statistical control.
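Both calculations are one-liners in code. The balance numbers below reproduce the example above; the blank statistics are hypothetical:

```python
import math

def detection_limit(s_blank: float, slope: float, k: float = 3.0) -> float:
    """DL = k * s_b / m, with k usually chosen as 2 or 3."""
    return k * s_blank / slope

def control_limits(mu: float, sigma: float, n: int) -> tuple[float, float]:
    """UCL, LCL = mu +/- 3*sigma/sqrt(N) for means of N replicates."""
    half_width = 3 * sigma / math.sqrt(n)
    return mu + half_width, mu - half_width

# Hypothetical blank: s_b = 0.0079 absorbance units, slope m = 0.071 per ppm.
print(f"DL = {detection_limit(0.0079, 0.071):.2f} ppm")

# Balance example from the text: mu = 20.000 g, sigma = 0.00012 g, N = 5.
ucl, lcl = control_limits(20.000, 0.00012, 5)
print(f"UCL = {ucl:.5f} g, LCL = {lcl:.5f} g")  # 20.00016 g, 19.99984 g
```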
Figure 8-16 A control chart for monitoring the concentration of benzoyl peroxide in a commercial acne preparation. The manufacturing process went out of statistical control with sample 83 and exhibited a systematic change in the mean concentration.

Validation
- Validation determines the suitability of an analysis for providing the sought-for information; it can apply to samples, to methodologies, and to data.
- Validation is often done by the analyst, but it can also be done by supervisory personnel.
- There are several different ways to validate analytical methods. The most common include analysis of standard reference materials when available, analysis by a different analytical method, analysis of "spiked" samples, and analysis of synthetic samples approximating the chemical composition of the test samples.
- Individual analysts and laboratories often must periodically demonstrate the validity of the methods and techniques they use.
- Data validation is the final step before release of the results. This process starts with validating the samples and methods used. The data are then reported with statistically valid limits of uncertainty after a thorough check has been made to eliminate blunders in sampling and sample handling, mistakes in performing the analysis, errors in identifying samples, and mistakes in the calculations used.

Reporting Analytical Results
- Analytical results should be reported as the mean value and the standard deviation. Sometimes, the standard deviation of the mean is reported instead of that of the data set.
- When a confidence interval for the mean is given, the interval and its confidence level should be explicitly reported.
- The results of various statistical tests on the data should also be reported when appropriate, as should the rejection of any outlying results along with the rejection criterion.
- Significant figures are quite important when reporting results and should be based on statistical evaluation of the data.
- Whenever possible, graphical presentations should include error bars on the data points to indicate uncertainty.

Suggested Problems 8.4, 8.10, 8.13, 8.17, 8.20, 8.23
