Measurement Systems in Environmental Management PDF

Summary

This document covers measurement systems in environmental management, encompassing sensors, transducers, variable conversion, signal processing, and data recording. It delves into different types of instruments, their characteristics, and the significance of calibration. The text discusses static and dynamic characteristics of instruments, as well as cost, durability and maintenance considerations.

Full Transcript


3 Measurement Systems in Environmental Management

Parameters associated with the operation of an EMS must be measured and recorded to the degree of accuracy specified in the EMS manual. As explained in the last chapter, the level of accuracy specified is set according to the requirements of the EMS. In some cases, high levels of accuracy will be required but, in other cases, the accuracy requirement will be quite modest. To achieve the specified level of measurement accuracy, the measurement system and the measuring instruments used within it must be carefully designed.

The principal components in a measuring system are shown in Figure 3.1. The primary component is a sensor or transducer that captures the information about the magnitude of the variable measured. This is often followed by a variable conversion element that translates the output measurement into a more convenient form. After this, various signal-processing operations are applied that improve the quality of the measurement. The measurement then passes via a signal-transmission system to a data recorder.

Figure 3.1 Principal components in a measurement system.

Before leaving this discussion on measuring system components, it should be mentioned that a particular measurement system will not necessarily contain all of the components identified in Figure 3.1. For example, variable conversion, signal processing or signal transmission may not be needed in particular cases. It should also be noted that many commercial instruments combine several measurement system elements within one casing. Furthermore, intelligent instruments contain additional sensors/transducers to measure and compensate for disturbances in the environmental conditions of measurement.

Several conditions must be satisfied to achieve the quality of measurements specified in the EMS. Firstly, suitable measuring instruments must be chosen that have static and dynamic characteristics that are appropriate to the needs of the measurement situation, as discussed in the first part of this chapter. Secondly, the conditions in which instruments will have to operate must be assessed and suitable instruments will have to be chosen that are as insensitive as possible to the operating environment. Thirdly, every measuring instrument should have a designated person responsible for it, who must ensure that the instrument is calibrated at the correct times by approved personnel, so that its measuring characteristics are guaranteed when it is used under specified environmental conditions (approved personnel means either staff within the company who have attended all the necessary courses relevant to the calibration duties, or subcontractors outside the company who are verified as being able to provide calibration services satisfactorily). Fourthly, having eliminated calibration errors, all other error sources in the measurement system must be identified and dealt with, as discussed in Chapter 4. Finally, the effect on accuracy must be considered of all other processes undergone by the measurements, including any variable conversion elements applied, signal processing, signal transmission and data recording, as discussed in Chapter 5.
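To make the structure of Figure 3.1 concrete, the following minimal Python sketch chains the components of a measurement system as simple functions. It is not taken from the original text: the function names, the assumed sensor sensitivity and the simple moving-average filter are all illustrative assumptions.

# Minimal sketch of the measurement-system structure in Figure 3.1.
# All function names and numeric values are illustrative assumptions.

def sensor(true_pressure_bar):
    """Primary sensor/transducer: pressure (bar) -> raw voltage (V)."""
    return 0.5 * true_pressure_bar           # assumed sensitivity of 0.5 V/bar

def variable_conversion(raw_voltage):
    """Variable conversion element: scale the raw signal to a more convenient form."""
    return raw_voltage * 2.0                  # e.g. amplify to 1 V/bar

def signal_processing(signal, history):
    """Signal processing: a simple moving average to reduce random noise."""
    history.append(signal)
    window = history[-5:]
    return sum(window) / len(window)

def transmit(signal):
    """Signal transmission: in this sketch the value is simply passed on unchanged."""
    return signal

def record(value, log):
    """Data recording: append the processed measurement to a log."""
    log.append(value)

if __name__ == "__main__":
    history, log = [], []
    for true_value in (2.0, 2.1, 1.9, 2.0, 2.05):     # pressures in bar
        measurement = transmit(signal_processing(variable_conversion(sensor(true_value)), history))
        record(measurement, log)
    print(log)

As the text notes, a real system may omit some of these stages or combine several of them within one instrument casing.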
3.1 Choosing Suitable Measuring Instruments

When choosing measuring instruments for a particular measurement situation, it is necessary to ensure that the instrument will satisfy the requirements specified by the EMS and, in particular, will not be adversely affected by the conditions in which it has to operate. The necessary background for this is an awareness of the nature of different kinds of instrument and knowledge of the various static and dynamic characteristics that govern the suitability of instruments in different applications.

3.1.1 Different types of instrument

A proper understanding of the fundamental nature of instruments is a necessary prerequisite for assessing the possible error levels in measurements, and ensuring that the performance of the instrument chosen is satisfactory. A convenient approach to this is to classify instruments into different types and to study the characteristics of each. These subclassifications are useful in broadly establishing attributes of particular instruments, such as accuracy, cost, and general applicability to different applications.

Deflection/null-type instruments

The pressure gauge in Figure 3.2(a) is a good example of a deflection type of instrument, where the value of the quantity being measured is displayed in terms of the amount of movement of the pointer. In contrast, the dead-weight pressure gauge shown in Figure 3.2(b) is a null-type instrument. Here, weights are added on top of the piston until the piston reaches a datum level, known as the null point, where the downward force due to the weights is balanced by the upward force due to the fluid pressure. Pressure measurement is made in terms of the value of the weights needed to reach this null position.

Figure 3.2 Deflection/null types: (a) deflection-type pressure gauge; (b) dead-weight pressure gauge. From Morris (1997) Measurement and Calibration Requirements, © John Wiley & Sons, Ltd. Reproduced with permission.

The accuracy of these two instruments depends on different things. For the first one, it depends on the linearity and calibration of the spring, whilst for the second it relies on the calibration of the weights. As calibration of weights is much easier than careful choice and calibration of a linear-characteristic spring, this means that the second type of instrument will normally be more accurate. This is in accordance with the general rule that null-type instruments are more accurate than deflection types.

In terms of usage, the deflection-type instrument is clearly more convenient. It is far simpler to read the position of a pointer against a scale than to add and subtract weights until a null point is reached. Therefore, a deflection-type instrument is the one that would normally be used in the workplace. However, for calibration duties, the null-type instrument is preferable because of its superior accuracy. The extra effort required to use such an instrument is perfectly acceptable in this case because of the infrequent nature of calibration operations.

Active/passive instruments

Instruments are divided into active or passive ones according to whether the instrument output is entirely produced by the quantity being measured or whether the quantity being measured simply modulates the magnitude of some external power source.
The pressure gauge shown in Figure 3.2(a) is an example of a passive instrument, because the energy expended in moving the pointer is derived entirely from the change in pressure measured: there is no other energy input to the system. A petrol tank level indicator, as sketched in Figure 3.3, is an example of an active instrument. The change in petrol level moves a potentiometer arm, and the output signal consists of a proportion of the external voltage source applied across the two ends of the potentiometer. The energy in the output signal comes from the external power source: the primary transducer float system is merely modulating the value of the voltage from this external power source. It should be noted that, whilst the external power source is usually in electrical form, in some cases it can be other forms of energy, such as pneumatic or hydraulic.

Figure 3.3 Example of active instrument: petrol tank level indicator. From Morris (1997) Measurement and Calibration Requirements, © John Wiley & Sons, Ltd. Reproduced with permission.

One very important difference between active and passive instruments is the level of measurement resolution obtained. With the simple pressure gauge shown, the amount of movement made by the pointer for a particular pressure change is defined by the nature of the instrument. Whilst it is possible to increase measurement resolution by making the pointer longer, such that the pointer tip moves through a longer arc, the scope for such improvement is clearly bounded by the practical limit on what is a convenient length for the pointer. However, in an active instrument, adjustment of the magnitude of the external energy input allows much greater control over measurement resolution. Incidentally, whilst the scope for improving measurement resolution is much greater, it is not infinite, because of limitations placed on the magnitude of the external energy input, in consideration of heating effects and for safety reasons.

In terms of cost, passive instruments are normally of a more simple construction than active ones and are therefore cheaper to manufacture. Therefore, choice between active and passive instruments for a particular application involves carefully balancing the measurement-resolution requirements against cost.

Analogue/digital instruments

An analogue instrument gives an output that varies continuously as the quantity being measured changes. The output can have an infinite number of values within the range that the instrument is designed to measure. The deflection type of pressure gauge in Figure 3.2(a) is a good example of an analogue instrument. As the input value changes, the pointer moves with a smooth, continuous motion. Whilst the pointer can therefore be in an infinite number of positions within its range of movement, the number of different positions that the eye can discriminate between is strictly limited, this discrimination being dependent upon the size of the scale and how finely it is divided.

A digital instrument, such as the rev-counter sketched in Figure 3.4, has an output that varies in discrete steps, and so can only have a finite number of values. The cam of the rev-counter is attached to the revolving body whose motion is being measured, and opens and closes a switch on each revolution. The switching operations are counted by an electronic counter.
This system can only count whole revolutions and therefore cannot discriminate any motion that is less than a full revolution.

Figure 3.4 Rev-counter. From Morris (1997) Measurement and Calibration Requirements, © John Wiley & Sons, Ltd. Reproduced with permission.

The distinction between analogue and digital instruments has become particularly important with the rapid growth in the application of computers in measurement and control systems. In such applications, an instrument whose output is in digital form is particularly advantageous, as it can be interfaced directly to the computer. In contrast, analogue instruments must be interfaced to the microcomputer by an analogue-to-digital (A/D) converter.

Intelligent/nonintelligent instruments

The term 'intelligent instrument' is used to describe a package that incorporates a digital processor as well as one or more of the measurement system components identified in Figure 3.1. Intelligent instruments are also sometimes referred to by other names, such as: intelligent device, smart sensor and smart transmitter. There is no formal definition for any of these alternative names, so that there is considerable overlap between the characteristics of particular devices and the name given to them.

The processor within an intelligent instrument allows it to apply preprogrammed signal-processing and data-manipulation algorithms to improve the quality of measurements, although the additional features inevitably make an intelligent instrument more expensive to buy than a comparable nonintelligent one. One important function of most intelligent instruments is to compensate measurements for systematic errors caused by environmental disturbances. To achieve this, they are provided with one or more secondary sensors to monitor the value of environmental disturbances, in addition to the primary sensor that measures the principal variable of interest. Although automatic compensation for environmental disturbances is a very important attribute of intelligent instruments, many versions of such devices also perform additional functions, such as:

- providing switchable ranges (using several primary sensors within the instrument that each measure over a different range);
- providing for remote adjustment and control of instrument parameters;
- providing switchable output units (e.g. display in imperial or SI units);
- linearisation of the output;
- correction for the loading effect of measurement on the measured system;
- providing signal damping with selectable time constants;
- self-diagnosis of faults.

By contrast, nonintelligent instruments, as the name implies, do not have any form of computational power within them. Therefore, all mechanisms for carrying out any improvements to the quality of the measurements have to be achieved by devices external to the instrument.

3.1.2 Static instrument characteristics

The static characteristics of an instrument consist of a set of parameters that collectively describe the quality of the steady-state output measurement provided*. Some examples of static characteristics are: accuracy, sensitivity, linearity and the reaction to ambient temperature changes. All relevant static characteristics are given in the data sheet for a particular instrument, but it must be noted that values quoted in a data sheet only apply when the instrument is used under specified, standard calibration conditions.
Due allowance must be made for variations in the characteristics when the instrument is used in other conditions. The important static characteristics to consider when choosing an instrument for a particular application are defined in the following paragraphs.

* 'Steady-state output measurement' means the non-changing output after any dynamic effects in the output reading have died out.

Accuracy

Accuracy is the extent to which a reading might be wrong, and is often quoted as a percentage of the full-scale reading of an instrument. If, for example, a pressure gauge of range 0–10 bar has a quoted inaccuracy of ±1.0% f.s. (±1% of full-scale reading), then the maximum error to be expected in any reading is 0.1 bar. This means that when the instrument is reading 1.0 bar, the possible error is 10% of this value. For this reason, it is an important system design rule that instruments are chosen such that their range is appropriate to the spread of values being measured, in order that the best possible accuracy be maintained in instrument readings. Thus, if we were measuring pressures with expected values between 0 and 1 bar, we would not use an instrument with a range of 0–10 bar.

Tolerance

Tolerance is a term that is closely related to accuracy, and it defines the maximum error that is to be expected in some value. Whilst it is not, strictly speaking, a static characteristic of measuring instruments, it is mentioned here because the accuracy of some instruments is sometimes quoted as a tolerance figure. Tolerance, when used correctly, describes the maximum deviation of a manufactured component from some specified value. For example, if resistors have a quoted tolerance of 5%, one resistor chosen at random from a batch having a nominal value of 1000 Ω might have an actual value anywhere between 950 Ω and 1050 Ω.

Precision/repeatability/reproducibility

Precision is a term that describes an instrument's degree of freedom from random errors. If a large number of readings are taken of the same quantity by a high-precision instrument, then the spread of readings will be very small. High precision does not imply anything about measurement accuracy. Hence, a high-precision instrument might actually have a low accuracy. Low-accuracy measurements from a high-precision instrument are usually caused by a bias in the measurements, which is removable by recalibration.

The terms 'repeatability' and 'reproducibility' mean approximately the same, but are applied in different contexts, as given below. Repeatability describes the closeness of output readings when the same input is applied repetitively over a short period of time, with the same measurement conditions, same instrument and observer, same location and same conditions of use maintained throughout. Reproducibility describes the closeness of output readings for the same input when there are changes in the method of measurement, observer, measuring instrument, location, conditions of use and time of measurement. Thus, both terms describe the spread of output readings for the same input. This spread is referred to as repeatability if the measurement conditions are constant, and as reproducibility if the measurement conditions vary. The degree of repeatability or reproducibility in measurements is an alternative way of expressing precision. Figure 3.5 explains precision more clearly.
This shows the results of testing three industrial robots that were programmed to place components at a particular point on a table. The target point was at the centre of the concentric circles shown, and the black dots represent the points where each robot actually deposited components at each attempt. Both the accuracy and precision of Robot 1 are shown to be low in this trial. Robot 2 consistently puts the component down at approximately the same place, but this is the wrong point. Therefore, it has high precision but low accuracy. Finally, Robot 3 has both high precision and high accuracy, because it consistently places the component at the correct target position.

Figure 3.5 Explanation of precision.

Range or span

The range or span of an instrument defines the minimum and maximum values of a quantity that the instrument is designed to measure. Instruments must not be used to measure values whose magnitude is outside the specified measurement range, since this could result in large measurement errors.

Linearity

It is normally desirable that the output reading of an instrument is linearly proportional to the quantity being measured. The Xs marked on Figure 3.6(a) show a plot of the typical output readings of an instrument when a sequence of input quantities is applied to it. Normal procedure is to draw a good-fit straight line through the Xs, as shown in this figure. (Whilst this can often be done with reasonable accuracy by eye, it is always preferable to apply a mathematical least-squares line-fitting technique.) The nonlinearity is then defined as the maximum deviation of any of the output readings marked X from this straight line. Nonlinearity is usually expressed as a percentage of the full-scale reading.

Sensitivity of measurement

The sensitivity of measurement is a measure of the change in instrument output that occurs when the quantity being measured changes by a given amount. Sensitivity is thus the ratio:

    sensitivity = scale deflection / value of measured quantity

Figure 3.6 Instrument sensitivity: (a) standard instrument output characteristic; (b) effect on characteristic of drift: (i) sensitivity drift, (ii) zero drift (bias), (iii) sensitivity drift plus zero drift.

The sensitivity of measurement is therefore the slope of the straight line drawn on Figure 3.6(a). For example, if a pressure of 2 bar produces a deflection of 10 degrees in a pressure transducer, the sensitivity of the instrument is 5 degrees/bar (assuming that the relationship between pressure and the instrument reading is a straight-line one).

Sensitivity to disturbance

All calibrations and specifications of an instrument are only valid under controlled conditions of temperature, pressure, etc. These standard ambient conditions are usually defined in the instrument specification. As variations occur in the ambient temperature, etc., certain static instrument characteristics change, and the sensitivity to disturbance is a measure of the magnitude of this change. Such environmental changes affect instruments in two main ways, known as sensitivity drift and zero drift (bias).

Sensitivity drift (scale factor drift)

Sensitivity drift or scale factor drift defines the amount by which an instrument's sensitivity of measurement varies as ambient conditions change.
Many components within an instrument are affected by environmental fluctuations, such as temperature changes: for instance, the modulus of elasticity of a spring is temperature-dependent. Line (i) on Figure 3.6(b) shows the typical effect of sensitivity drift on the output characteristic of an instrument. For the pressure gauge shown in Figure 3.2(a), in which the output characteristic is expressed in units of angular degrees/bar, the sensitivity drift would be expressed in units of the form (angular degrees/bar)/°C if the spring was affected by temperature change.

Zero drift or bias

Zero drift, also known as bias, describes the effect where the zero reading of an instrument is modified by a change in ambient conditions. This causes a constant error over the full range of measurement of the instrument. Bathroom scales are a common example of instruments that are prone to bias. If there is a bias of 1 kg, then the reading would be 1 kg with no one standing on the scales. If someone of known weight 70 kg were to get on the scales, then the reading would be 71 kg, and if someone of known weight 100 kg were to get on the scales, the reading would be 101 kg. Instruments prone to zero drift normally have a means of adjustment (a thumbwheel in the case of bathroom scales) that allows the drift to be removed, so that measurements made with the instrument are unaffected. Typical units by which zero drift is measured are volts/°C, in the case of a voltmeter affected by ambient temperature changes. A typical change in the output characteristic of a pressure gauge subject to zero drift is shown by line (ii) in Figure 3.6(b). If the instrument suffers both zero drift and sensitivity drift at the same time, then the typical modification of the output characteristic is shown by line (iii) in Figure 3.6(b).

Resolution

When an instrument is showing a particular output reading, there is a lower limit on the magnitude of the change in the input measured quantity that produces an observable change in the instrument output. Resolution is sometimes specified as an absolute value and sometimes as a percentage of full-scale deflection. One of the major factors influencing the resolution of an instrument is how finely its output scale is divided into subdivisions. For example, a car speedometer typically has subdivisions of 20 km/h. This means that, when the pointer is between the scale markings, we cannot estimate speed more accurately than to the nearest 5 km/h. This figure of 5 km/h thus represents the resolution of the instrument.

3.1.3 Dynamic instrument characteristics

The static characteristics of a measuring instrument are concerned only with the steady-state reading that the instrument settles down to, such as the accuracy of the reading. The dynamic characteristics describe the behaviour between the time that a measured quantity changes value and the time when the instrument output attains a steady value in response. As with static characteristics, any values for dynamic characteristics quoted in instrument data sheets only apply when the instrument is used under specified environmental conditions. Outside these calibration conditions, some variation in the dynamic parameters can be expected. Various types of dynamic characteristics can be classified, known as zero-order, first-order and second-order characteristics.
Fortunately, the practical effects of dynamic characteristics in the output of an instrument can be understood without resorting to formal mathematical analysis.

Figure 3.7 First-order characteristic. From Morris (1997) Measurement and Calibration Requirements, © John Wiley & Sons, Ltd. Reproduced with permission.

In a zero-order instrument, the dynamic characteristics are negligible, and the instrument output reaches its final reading almost instantaneously following a step change in the measured quantity applied at its input. A potentiometer, which measures motion, is a good example of such an instrument, where the output voltage changes approximately instantaneously as the slider is displaced along the potentiometer track.

In a first-order instrument, the output quantity qo in response to a step change in the measured quantity qi varies with time in the manner shown in Figure 3.7. The time constant τ of the step response is the time taken for the output quantity qo to reach 63% of its final value. The liquid-in-glass thermometer is a good example of a first-order instrument. It is well known that, if a mercury thermometer at room temperature is plunged into boiling water, the mercury does not rise instantaneously to a level indicating 100 °C, but instead approaches a reading of 100 °C in the manner indicated by Figure 3.7. A large number of other instruments also belong to this first-order class. The main practical effect of first-order characteristics in an instrument is that the instrument must be allowed to settle to a steady reading before the output is read. Fortunately, the time constant of many first-order instruments is small relative to the dynamics of the process being measured, and so no serious problems are created.

It is convenient to describe the characteristics of second-order instruments in terms of three parameters: K (static sensitivity), ω (undamped natural frequency) and ε (damping ratio). The manner in which the output reading changes following a change in the measured quantity applied to its input depends on the value of these three parameters. The damping ratio parameter, ε, controls the shape of the output response, and the responses of a second-order instrument for various values of ε are shown in Figure 3.8.

Figure 3.8 Second-order characteristic.

For case (A), where ε = 0, there is no damping, and the instrument output exhibits constant-amplitude oscillations when disturbed by any change in the physical quantity measured. For light damping of ε = 0.2, represented by case (B), the response to a step change in input is still oscillatory, but the oscillations gradually die down. Further increase in the value of ε reduces oscillations and overshoot still more, as shown by curves (C) and (D), and finally the response becomes very overdamped, as shown by curve (E), where the output reading creeps up slowly towards the correct reading. Clearly, the extreme response curves (A) and (E) are grossly unsuitable for any measuring instrument. If an instrument were to be only ever subjected to step inputs, then the design strategy would be to aim towards a damping ratio of 0.707, which gives the critically damped response (C). Unfortunately, most of the physical quantities that instruments are required to measure do not change in the mathematically convenient form of steps, but rather in the form of ramps of varying slopes.
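The behaviour just described can be reproduced numerically. The short Python sketch below is not from the original text, and the parameter values (τ = 2, ω = 2, a unit step input) are arbitrary assumed figures; it integrates a first-order response to confirm the 63% time-constant rule and a second-order response for several damping ratios ε to show how the overshoot changes.

# A minimal numerical sketch (not from the book) of the first- and second-order
# step responses described above. All parameter values are arbitrary, assumed figures.

def first_order_step(K=1.0, tau=2.0, qi=1.0, t_end=10.0, dt=0.01):
    """Euler integration of tau*dqo/dt + qo = K*qi for a step input qi."""
    t, qo, resp = 0.0, 0.0, []
    while t <= t_end:
        resp.append((t, qo))
        qo += dt * (K * qi - qo) / tau
        t += dt
    return resp

def second_order_step(K=1.0, omega=2.0, eps=0.707, qi=1.0, t_end=10.0, dt=0.001):
    """Euler integration of d2qo/dt2 + 2*eps*omega*dqo/dt + omega**2*qo = K*omega**2*qi."""
    t, qo, v, resp = 0.0, 0.0, 0.0, []
    while t <= t_end:
        resp.append((t, qo))
        accel = K * omega**2 * qi - 2 * eps * omega * v - omega**2 * qo
        v += dt * accel
        qo += dt * v
        t += dt
    return resp

if __name__ == "__main__":
    resp = first_order_step()
    final = resp[-1][1]
    # The output should pass 63% of its final value at roughly t = tau
    t63 = next(t for t, qo in resp if qo >= 0.63 * final)
    print(f"first-order: 63% of final value reached at t = {t63:.2f} (tau = 2.0)")

    for eps in (0.0, 0.2, 0.707, 1.0, 2.0):
        resp = second_order_step(eps=eps)
        overshoot = max(0.0, max(qo for _, qo in resp) - 1.0)   # final value is K*qi = 1.0
        print(f"second-order, damping ratio {eps}: overshoot = {overshoot:.2f}")

With these assumed figures, the first-order output passes 63% of its final value at about t = 2 (the chosen τ), and the printed overshoot falls steadily as ε increases.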
As the form of the input variable changes, so the best value for ε varies, and the choice of ε becomes a compromise between those values that are best for each type of input variable behaviour anticipated. Commercial second-order instruments, of which the accelerometer is a common example, are generally designed to have a damping ratio (ε) somewhere in the range of 0.6–0.8. Thus, as for first-order instruments, it is necessary to allow second-order instruments to settle before the output reading is read, since there is a time lag between the measured quantity changing in value and the measuring instrument settling to a constant reading. This may limit the frequency at which the instrument output can be read, and can cause consequential difficulties in measuring rapidly changing variables.

3.1.4 Cost, durability and maintenance considerations in instrument choice

The static and dynamic characteristics discussed so far are those features that form the technical basis for a comparison between the relative merits of different instruments. However, in assessing the relative suitability of different instruments for a particular measurement situation, considerations of cost, durability and maintenance are also of great importance. Cost is very strongly correlated with the performance of an instrument, as measured by its characteristics. For example, increasing the accuracy or resolution of an instrument can only be done at a penalty of increasing its manufacturing cost. Therefore, instrument choice proceeds by specifying the minimum characteristics required by the measurement situation and then searching manufacturers' catalogues to find an instrument whose characteristics match those required. As far as accuracy is concerned, it is usual to specify maximum measurement uncertainty levels that are 10% of the tolerance levels of the parameter to be measured. To select an instrument whose accuracy and other characteristics are superior to the minimum levels required would only mean paying more than necessary for a level of performance that is greater than that needed.

As well as purchase cost, other important factors in the assessment exercise are the maintenance requirements and the instrument's durability. Maintenance requirements must be taken into account, as they also have cost implications. With regard to durability, it would not be sensible to spend £400 on a new instrument whose projected life was five years if an instrument of equivalent specification with a projected life of 10 years was available for £500. However, this consideration is not necessarily simple, as the projected life of instruments often depends on the conditions in which the instrument will have to operate.

As a general rule, a good assessment criterion is obtained if the total purchase cost and estimated maintenance costs of an instrument over its life are divided by the period of its expected life. The figure obtained is thus a cost per year. However, this rule becomes modified where instruments are being installed on a process whose life is expected to be limited, perhaps in the manufacture of a particular model of car. Then, the total costs can only be divided by the period of time for which an instrument is expected to be used, unless an alternative use for the instrument is envisaged at the end of this period.
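As a small worked illustration of the cost-per-year rule just described, the Python sketch below compares the two instruments from the example above; the purchase prices come from the text, but the annual maintenance figure is an invented assumption used purely for illustration.

# Worked illustration of the cost-per-year assessment rule described above.
# Purchase prices come from the example in the text; the annual maintenance
# figure is an invented assumption for illustration only.

def cost_per_year(purchase_cost, annual_maintenance_cost, expected_life_years):
    """Total purchase plus maintenance cost over the instrument's life, divided by that life."""
    total_cost = purchase_cost + annual_maintenance_cost * expected_life_years
    return total_cost / expected_life_years

instrument_a = cost_per_year(purchase_cost=400.0, annual_maintenance_cost=30.0, expected_life_years=5)
instrument_b = cost_per_year(purchase_cost=500.0, annual_maintenance_cost=30.0, expected_life_years=10)

print(f"Instrument A: £{instrument_a:.2f} per year")   # £110.00 per year
print(f"Instrument B: £{instrument_b:.2f} per year")   # £80.00 per year

On this criterion the longer-lived instrument works out cheaper per year, although, as noted above, the calculation must use the expected life of the process itself where that is shorter.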
To summarise, therefore, instrument choice is a compromise between performance characteristics, ruggedness and durability, maintenance requirements, and purchase cost. To carry out such an evaluation properly, the instrument engineer must have a wide knowledge of the range of instruments available for measuring particular physical quantities, and he/she must also have a deep understanding of how instrument characteristics are affected by particular measurement situations and operating conditions.

3.2 Calibration of Measuring Instruments

Whatever instrument is chosen for a particular measurement application, its characteristics will change over a period of time and affect the relationship between the input and output. Changes in characteristics are brought about by factors such as mechanical wear, and the effects of dirt, dust, fumes and chemicals in the operating environment. To a great extent, the magnitude of the drift in characteristics depends on the amount of use that an instrument receives, and hence on the amount of wear and the length of time that it is subjected to the operating environment. However, some drift also occurs even in storage, as a result of ageing effects in components within the instrument. Thus, in order to maintain the accuracy of measurements made, all instruments should be calibrated at some predetermined frequency. It should also be emphasised that all elements in the measurement system, including the final signal recorder, must be included in the calibration exercise.

3.2.1 The calibration process

Calibration consists of comparing the output of the instrument being calibrated against the output of a standard instrument of known accuracy, when the same input (measured quantity) is applied to both instruments. During this calibration process, the instrument is tested over its whole range by repeating the comparison procedure for a range of inputs.

The instrument used as a standard for this procedure must be one that is kept solely for calibration duties. It must never be used for other purposes. Most particularly, it must not be regarded as a spare instrument that can be used for normal measurements if the instrument normally used for that purpose breaks down. Proper provision for instrument failures must be made by keeping a spare set of instruments. Standard calibration instruments must be kept totally separate.

To ensure that these conditions are met, the calibration function must be managed and executed in a professional manner. This will normally mean setting aside a particular place within the instrumentation department of a company where all calibration operations take place and where all instruments used for calibration are kept. As far as possible, this should take the form of a separate room, rather than a sectioned-off area in a room used for other purposes as well. This will enable better environmental control to be applied in the calibration area, and will also offer better protection against unauthorised handling or use of the calibration instruments.

Calibration instruments usually have a greater inherent accuracy (often at least ten times better) than the instruments that they are used to calibrate. Where instruments are only used for calibration purposes, this greater accuracy can often be achieved by specifying a type of instrument that would be unsuitable for normal measurements.
For example, ruggedness is not required in calibration instruments, and freedom from this constraint opens up a much wider range of possible instruments. In practice, high-accuracy, null-type instruments are commonly used for calibration duties, because their requirement for a human operator is not a problem in these circumstances.

3.2.2 Standards laboratories

The calibration facilities provided within the instrumentation department of a company provide the first link in the calibration chain. An instrument used for calibration at this level is known as a working standard. As this working standard instrument is one that is kept by the instrumentation department for calibration duties, and for no other purpose, it can be assumed that it will maintain its accuracy over a reasonable period of time, because use-related deterioration in accuracy is largely eliminated. However, over the longer term, even the characteristics of such a standard instrument will drift, mainly due to ageing effects in components within it. Therefore, over this longer term, a programme must be instituted for calibrating the working standard instrument against one of yet higher accuracy at appropriate intervals of time. The instrument used for calibrating working standard instruments is known as a secondary reference standard. This must obviously be a well-engineered instrument that gives high accuracy and is stabilised against drift in its performance over time. This implies that it will be an expensive instrument to buy. It also requires that the environmental conditions in which it is used are carefully controlled in respect of ambient temperature, humidity, etc.

Because of the expense involved in providing secondary reference standard instruments and the controlled environment that they need to operate in, the establishment of a company standards laboratory to provide such a calibration facility is economically viable only in the case of very large companies, where large numbers of instruments need to be calibrated across several factories. In the case of small- to medium-sized companies, the cost of buying and maintaining such equipment is not justified. Instead, they would normally use the services of one of the specialist companies that have developed a suitable standards laboratory for providing calibration at this level.

When the working standard instrument has been calibrated by an authorised standards laboratory, a calibration certificate will be issued [1]. This will contain at least the following information:

- the identification of the equipment calibrated;
- the calibration results obtained;
- the measurement uncertainty;
- any use limitations on the equipment calibrated;
- the date of calibration;
- the authority under which the certificate is issued.

3.2.3 Validation of standards laboratories

In the United Kingdom, the appropriate national standards organisation for validating standards laboratories is the National Physical Laboratory (in the United States of America, the equivalent body is the National Bureau of Standards). This has established a National Measurement Accreditation Service (NAMAS) that monitors both instrument calibration and mechanical testing laboratories. The formal structure for accrediting instrument calibration in standards laboratories is known as the British Calibration Service (BCS), and that for accrediting testing facilities is known as the National Testing Laboratory Accreditation Scheme (NATLAS).
Although each country has its own structure for the maintenance of standards, each of these different frameworks tends to be equivalent in its effect. To achieve confidence in the goods and services that move across national boundaries, international agreements have established the equivalence of the different accreditation schemes in existence.

A standards laboratory has to meet strict conditions [2] before it is approved. These conditions control laboratory management, environment, equipment and documentation. The person appointed as head of the laboratory must be suitably qualified, and independence of operation of the laboratory must be guaranteed. The management structure must be such that any pressure to rush or skip calibration procedures for production reasons can be resisted. As far as the laboratory environment is concerned, proper temperature and humidity control must be provided, and high standards of cleanliness and housekeeping must be maintained. All equipment used for calibration purposes must be maintained to reference standards, and supported by calibration certificates that establish this traceability. Finally, full documentation must be maintained. This should describe all calibration procedures, maintain an index system for recalibration of equipment, and include a full inventory of apparatus and traceability schedules.

Having met these conditions, a standards laboratory becomes an accredited laboratory for providing calibration services and issuing calibration certificates. This accreditation is reviewed at approximately 12-monthly intervals to ensure that the laboratory is continuing to satisfy the conditions laid down for approval.

3.2.4 Primary reference standards

Primary reference standards describe the highest level of accuracy that is achievable in the measurement of any particular physical quantity. All items of equipment used in standards laboratories as secondary reference standards have to be calibrated themselves against primary reference standards at appropriate intervals of time. This procedure is acknowledged by the issue of a calibration certificate in the standard way. National standards organisations maintain suitable facilities for this calibration, although, in certain cases, such primary reference standards can be located outside national standards organisations. For example, the primary reference standard for dimension measurement is defined by the wavelength of the orange–red line of krypton light, and this can be realised in any laboratory equipped with an interferometer.

In certain cases (e.g. the measurement of viscosity), such primary reference standards are not available, and reference standards for calibration are achieved by collaboration between several national standards organisations that perform measurements on identical samples under controlled conditions [3].

3.2.5 Traceability

What has emerged from the foregoing discussion is that calibration has a chain-like structure, in which every instrument in the chain is calibrated against a more accurate instrument immediately above it in the chain, as shown in Figure 3.9(a). All of the elements in the calibration chain must be known, so that the calibration of process instruments at the bottom of the chain is traceable to the fundamental measurement standards.
This knowledge of the full chain of instruments involved in the calibration procedure is known as traceability, and is specified as a mandatory requirement in satisfying standards such as ISO 9001 and ISO 14001. Documentation must exist which shows that process instruments are calibrated by standard instruments that are linked by a chain of increasing accuracy back to national reference standards. There must be clear evidence to show that there is no break in this chain.

Figure 3.9 Calibration chains: (a) typical structure; (b) calibration chain for micrometers. From Morris (1997) Measurement and Calibration Requirements, © John Wiley & Sons, Ltd. Reproduced with permission.

To illustrate a typical calibration chain, consider the calibration of micrometers shown in Figure 3.9(b). A typical shop-floor micrometer has an uncertainty (inaccuracy) of less than 1 in 10^4. These would normally be calibrated in the instrumentation department laboratory of a company against laboratory-standard gauge blocks with a typical uncertainty of less than 1 in 10^5. A specialist calibration service company would provide facilities for calibrating these laboratory-standard gauge blocks against reference-grade gauge blocks with a typical uncertainty of less than 1 in 10^6. More accurate calibration equipment still is provided by national standards organisations. The National Physical Laboratory (UK) maintains two sets of standards for this type of calibration, a working standard and a primary standard. Spectral lamps are used to provide a working reference standard with an uncertainty of less than 1 in 10^7. The primary standard is provided by an iodine-stabilised helium–neon laser, which has a specified uncertainty of less than 1 in 10^9. All of the links in this calibration chain must be shown in the measurement and calibration system documentation.

3.2.6 Practical implementation of calibration procedures

Having laid down these theoretical foundations of calibration procedures, the practical aspects of implementing these procedures must be considered. In practice, what is sensible, practical, achievable and affordable in any given situation may differ in substantial respects from the ideal. The most appropriate person to give advice about what standards of measurement accuracy and calibration are appropriate and acceptable in any given situation is a consultant with extensive experience of the particular industry involved.

As far as management of calibration procedures is concerned, it is important that the performance of all calibration operations is assigned as the clear responsibility of just one person. That person should have total control over the calibration function, and be able to limit access to the calibration laboratory to designated approved personnel only. Only by giving this appointed person total control over the calibration function can the function be expected to operate efficiently and effectively. Lack of such rigid management will inevitably lead to unintentional neglect of the calibration system, and result in the use of equipment in an out-of-date state of calibration. Professional management is essential, so that the customer can be assured that an efficient calibration system is in operation, and that the accuracy of measurements is guaranteed.
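Returning to the micrometer example above, the calibration chain lends itself to a simple machine-readable representation. The following Python sketch is illustrative only: the fractional uncertainty figures are the "less than 1 in 10^n" values quoted above, and the check simply confirms that each instrument is calibrated against one of higher accuracy, as traceability requires.

# Illustrative representation of the micrometer calibration chain in Figure 3.9(b).
# The fractional uncertainties are the "less than 1 in 10^n" values quoted above.

calibration_chain = [
    # (level, instrument, fractional uncertainty)
    ("process",             "shop-floor micrometer",            1e-4),
    ("company laboratory",  "laboratory-standard gauge blocks", 1e-5),
    ("calibration service", "reference-grade gauge blocks",     1e-6),
    ("national (working)",  "spectral lamp",                    1e-7),
    ("national (primary)",  "iodine-stabilised He-Ne laser",    1e-9),
]

# Traceability requires every link to be calibrated against a more accurate one above it.
for lower, upper in zip(calibration_chain, calibration_chain[1:]):
    assert upper[2] < lower[2], f"{upper[1]} must be more accurate than {lower[1]}"
    print(f"{lower[1]} (1 in {1/lower[2]:,.0f}) is calibrated against {upper[1]} (1 in {1/upper[2]:,.0f})")

Documentation of exactly this kind of chain, with no missing links, is what an auditor would expect to find in the measurement and calibration system records.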
As instrument calibration is an essential component in an EMS, the clause in ISO 14001 that requires all persons performing EMS-related functions to be adequately trained clearly extends to personnel who calibrate measuring instruments. Thus, the manager in charge of the calibration function must ensure that adequate training is provided and targeted at the particular needs of the calibration systems involved. People must understand what they need to know, and especially why they must have this information. Successful completion of training courses should be marked by the award of qualification certificates. These attest to the proficiency of personnel involved in calibration duties, and are a convenient way of demonstrating that the training requirement has been satisfied.

All instrument characteristics are affected to some extent by environmental conditions, and any parameters given in data sheets only apply for specified conditions. Therefore, as far as practicable, these same environmental conditions should be reproduced during calibration procedures. However, specification of the level of environmental control required should be considered carefully with due regard to the level of accuracy needed in the calibration procedure, as overspecification will lead to unnecessary expense. In practice, full air-conditioning is not normally required for calibration at this level, as it is very expensive, but sensible precautions should be taken to guard the area from extremes of heat or cold, and good standards of cleanliness should also be maintained.

For various reasons, it is not always possible to perform calibration operations in a controlled environment. For example, it may not be convenient or possible to remove instruments from process plant, and in such cases it is standard practice to calibrate them in situ. In these circumstances, appropriate corrections must be made for the deviation in the calibration environmental conditions away from those specified. However, this practice does not obviate the need to protect calibration instruments and to maintain them in constant conditions in a calibration laboratory at all times other than when they are involved in such calibration duties on plant.

The effect of environmental conditions that differ from those specified for calibration can be quantified by the following procedure: whilst keeping all other environmental parameters at some constant level, the environmental parameter under investigation is varied in steps over a range of values, causing the instrument output to vary in steps over a corresponding range of readings. This allows an input/output relationship to be drawn for that particular environmental parameter. Following this quantification of the effect of any environmental parameters that differ from the standard value specified for calibration, the instrument calibration exercise can proceed.

Determining the calibration frequency required

The characteristics of an instrument are only guaranteed immediately after calibration. Thereafter, as time progresses, the characteristics will change because of factors such as ageing effects, mechanical wear, long-term environmental changes, and the effects of dust, dirt, fumes and chemicals in the operating atmosphere. Fortunately, a certain amount of degradation in characteristics can be allowed before the instrument needs to be recalibrated.
For example, if an instrument is required to measure a parameter to an accuracy of ±2% and its accuracy is ±1% following calibration, then its accuracy can degrade from ±1% to ±2% before recalibration is necessary.

Susceptibility to the factors that cause characteristics to change will vary according to the type of instrument involved and the frequency and conditions of usage. Often, an experienced engineer or the instrument manufacturer will be able to estimate the likely rate at which characteristics will change according to the conditions in which an instrument is used, and so the necessary calibration frequency can be calculated accordingly. However, it is prudent not to rely too much on such a priori predictions, because some significant effect on the instrument may have been overlooked. Thus, as long as the circumstances permit, it is preferable to start from basics in deriving the required calibration frequency, and not to use past-history information about the instrument and its operating environment. Applying this philosophy, the following frequency for checking the characteristics of new instruments might be appropriate (assuming 24 hours/day working):

- week 1: once per day;
- weeks 2–3: twice per week;
- weeks 4–8: once per week;
- months 3–6: twice per month;
- months 7–12: once per month;
- year 2: every three months;
- thereafter: every six months.

The frequency of calibration checks should be reduced according to the above scheme, until a point is reached where deterioration in the instrument's accuracy is first detected. Comparison of the amount of performance degradation with the inaccuracy level that is permissible in the instrument will show whether the instrument should be calibrated immediately at this point or whether it can be safely left for a further period. If the above pattern of calibration checks were followed for an instrument, and the check at week 8 showed deterioration in accuracy that was close to the permissible limit, then this would determine that the calibration frequency for the instrument should be every eight weeks.

The above method of establishing the optimum calibration frequency is clearly an ideal which cannot always be achieved in practice, and indeed for some types of instrument this level of rigour is unnecessary. When used on many production processes, for instance, it would be unacceptable to interrupt production every hour to recheck instrument calibrations, unless a very good case could be made for why this was necessary. Also, the nature of some instruments, for example a mercury-in-glass thermometer, means that calibration checks will only ever be required infrequently. However, for instruments such as unprotected base-metal thermocouples, initial calibration checks after one hour of operation would not be at all inappropriate.

3.2.7 Procedure following calibration

When the instrument is calibrated against a standard instrument, its accuracy will be shown to be either inside or outside the required measurement accuracy limits. If the instrument is found to be inside the required measurement limits, the only course of action required is to record the calibration results in the instrument's record sheet and then put it back into use until the next scheduled time for calibration. The options available if the instrument is found to be outside the required measurement limits depend on whether its characteristics can be adjusted and the extent to which this is possible.
If the instrument has adjustment screws, these should be turned until the characteristics of the instrument are within the specified measurement limits. Following this, the adjustment screws must be sealed to prevent tampering during the instrument's subsequent use. In some cases, it is possible to redraw the output scale of the instrument. After such adjustments have been made, the instrument can be returned to its normal location for further use.

The second possible course of action if the instrument is outside measurement limits covers the case where no adjustment is possible or the range of possible adjustment is insufficient to bring the instrument back within measurement limits. In this event, the instrument must be withdrawn from use, and this withdrawal must be marked prominently on it to prevent it from being reused inadvertently. The options available then are either to send the instrument for repair, if this is feasible, or to scrap it.

3.2.8 Calibration procedure review

Whatever system and frequency of calibration are established, it is important to review these from time to time to ensure that the system remains effective and efficient. It may happen that a cheaper (but equally effective) method of calibration becomes available with the passage of time, and such an alternative system must clearly be adopted in the interests of cost-efficiency. However, the main item under scrutiny in this review is normally whether the calibration frequency is still appropriate. Records of the calibration history of the instrument will be the primary basis on which this review is made. It may happen that an instrument starts to go out of calibration more quickly after a period of time, either because of ageing factors within the instrument or because of changes in the operating environment. The conditions or mode of usage of the instrument may also be subject to change. As the environmental and usage conditions of an instrument may change beneficially as well as adversely, there is the possibility that the recommended calibration interval may increase as well as decrease.

3.3 Documentation of Measurement and Calibration Systems

An essential element in the maintenance of measurement systems and the operation of calibration procedures is the provision of full documentation. The documentation must give a full description of the measurement requirements throughout the workplace, the instruments used, and the calibration system and procedures operated. Individual calibration records for each instrument must be included within this. This documentation is a necessary part of the EMS manual, although it may physically exist as a separate volume if this is more convenient. An overriding constraint on the style in which the documentation is presented is that it should be simple and easy to read. This is often greatly facilitated by a copious use of appendices.

The starting point in the documentation must be a statement of what measurement limits have been defined for each measurement system documented. It is customary to express the measurement limits as ±2 standard deviations, i.e. within 95% confidence limits (see Chapter 4 for further explanation).

Following this, the instruments specified for each measurement situation must be listed. This list must be accompanied by full instructions about the proper use of the instruments concerned.
These instructions will include details about any environmental control or other special precautions that must be taken to ensure that the instruments provide measurements of sufficient accuracy to meet the measurement limits defined. The proper training courses appropriate to personnel who will use the instruments must also be specified.

Having dealt with the instruments that are used, the documentation must go on to cover the subject of calibration. A formal procedure for calibration must be defined, and the standard instruments used must be specified. This procedure must include instructions for the storage and handling of standard calibration instruments, and must specify the required environmental conditions under which calibration is to be performed. However, where a calibration procedure for a particular instrument uses standard practices that are documented elsewhere, it is sufficient to include a reference to that standard practice in the documentation, rather than reproduce the whole procedure. Finally, whatever calibration system is established, the documentation must define a formal and regular review procedure that ensures its continued effectiveness. The results of each review must also be documented in a formal way.

An important part of calibration procedures is to maintain proper records of all calibrations carried out and the results obtained. A standard format for the recording of calibration results in record sheets should be defined in the documentation and, where appropriate, the documentation must also define the manner in which calibration results are to be recorded on the instruments themselves. A separate record, similar to that shown in Figure 3.10, must be kept for every measuring instrument, irrespective of whether it is in use or kept as a spare. Each record should include a description of the instrument, its serial number, the required calibration frequency and the person responsible for calibration, the date of each calibration, and the calibration results in terms of the deviation from the required characteristics and the action taken to correct it.

The documentation must also specify procedures to be followed if an instrument is found to be outside the calibration limits. This may involve adjustment, redrawing its scale or withdrawing it, depending upon the nature of the discrepancy and the type of instrument involved. Withdrawn instruments will either be repaired or scrapped but, until faults have been rectified, their status must be clearly marked on them to prevent them being accidentally put back into use.

Two other items must also be covered by the calibration document. The traceability of the calibration system back to national reference standards must be defined and supported by calibration certificates (see Section 3.2). Training procedures must also be documented, specifying the particular training courses to be attended by various personnel and what, if any, refresher courses are required.

Figure 3.10 Typical format for instrument record sheets. From Morris (1997) Measurement and Calibration Requirements, © John Wiley & Sons, Ltd. Reproduced with permission.

All aspects of these documented calibration procedures will be given consideration as part of the periodic audit of the EMS.
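A per-instrument record of the kind shown in Figure 3.10 maps naturally onto a small data structure. The following Python sketch is purely illustrative: the field names, example values and dates are assumptions rather than a prescribed format, and it includes a simple check of whether recalibration is due given the required calibration frequency.

# Illustrative data structure for a per-instrument calibration record of the kind
# shown in Figure 3.10. Field names and example values are assumptions, not a
# prescribed format.

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class CalibrationEntry:
    calibration_date: date
    deviation_found: str          # deviation from the required characteristics
    corrective_action: str        # action taken to correct it

@dataclass
class InstrumentRecord:
    description: str
    serial_number: str
    calibration_interval_days: int
    person_responsible: str
    history: list = field(default_factory=list)

    def calibration_due(self, today: date) -> bool:
        """True if no calibration exists or the last one is older than the interval."""
        if not self.history:
            return True
        last = max(entry.calibration_date for entry in self.history)
        return today - last > timedelta(days=self.calibration_interval_days)

record = InstrumentRecord(
    description="Pressure gauge, 0-10 bar",
    serial_number="PG-0042",
    calibration_interval_days=56,            # e.g. every eight weeks
    person_responsible="J. Smith",
)
record.history.append(CalibrationEntry(date(2004, 1, 12), "+0.05 bar at full scale", "zero adjusted and resealed"))
print(record.calibration_due(today=date(2004, 4, 1)))   # True: more than 56 days since the last calibration

Keeping such records in a consistent, queryable form also makes it straightforward to produce the evidence that auditors look for, as described next.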
Whilst the basic responsibility for choosing a suitable interval between calibration checks rests with the engineers responsible for the instruments concerned, the auditor will require to see the results of tests which show that the calibration interval has been chosen correctly and that instruments are not going outside allowable measurement uncertainty limits between calibrations. Audits will check in particular for the existence of procedures that are instigated in response to instruments found to be out of calibration. Evidence that such procedures are effective in avoiding degradation in the environmental management function will also be required.

References

1. NAMAS Document B 5103: Certificates of Calibration (NAMAS Executive, National Physical Laboratory, Middlesex, UK), 1985.
2. ISO/IEC Guide 25, 1990, General Requirements for the Calibration and Competence of Testing Laboratories (International Organisation for Standards, Geneva).
3. ISO 5725, 1986, Accuracy of Measurement Test Methods and Results (International Organisation for Standards, Geneva); also published by the British Standards Institution as BS ISO 5725.
