Measurement and Instrumentation Principles
Alan S. Morris, 2001
Summary
This textbook, "Measurement and Instrumentation Principles" by Alan S. Morris, covers the fundamental concepts of measurement and instrumentation. It details different types of instruments, their characteristics, and errors within the measurement process. The book delves into calibration techniques, noise reduction, and signal processing.
Full Transcript
Measurement and Instrumentation Principles

To Jane, Nicola and Julia

Measurement and Instrumentation Principles
Alan S. Morris

OXFORD  AUCKLAND  BOSTON  JOHANNESBURG  MELBOURNE  NEW DELHI

Butterworth-Heinemann
Linacre House, Jordan Hill, Oxford OX2 8DP
225 Wildwood Avenue, Woburn, MA 01801-2041
A division of Reed Educational and Professional Publishing Ltd
A member of the Reed Elsevier plc group

First published 2001
© Alan S. Morris 2001

All rights reserved. No part of this publication may be reproduced in any material form (including photocopying or storing in any medium by electronic means and whether or not transiently or incidentally to some other use of this publication) without the written permission of the copyright holder except in accordance with the provisions of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London, England W1P 9HE. Applications for the copyright holder's written permission to reproduce any part of this publication should be addressed to the publishers.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

ISBN 0 7506 5081 8

Typeset in 10/12pt Times Roman by Laser Words, Madras, India
Printed and bound in Great Britain

Contents

Preface
Acknowledgements

Part 1: Principles of Measurement

1 INTRODUCTION TO MEASUREMENT
  1.1 Measurement units
  1.2 Measurement system applications
  1.3 Elements of a measurement system
  1.4 Choosing appropriate measuring instruments

2 INSTRUMENT TYPES AND PERFORMANCE CHARACTERISTICS
  2.1 Review of instrument types
    2.1.1 Active and passive instruments
    2.1.2 Null-type and deflection-type instruments
    2.1.3 Analogue and digital instruments
    2.1.4 Indicating instruments and instruments with a signal output
    2.1.5 Smart and non-smart instruments
  2.2 Static characteristics of instruments
    2.2.1 Accuracy and inaccuracy (measurement uncertainty)
    2.2.2 Precision/repeatability/reproducibility
    2.2.3 Tolerance
    2.2.4 Range or span
    2.2.5 Linearity
    2.2.6 Sensitivity of measurement
    2.2.7 Threshold
    2.2.8 Resolution
    2.2.9 Sensitivity to disturbance
    2.2.10 Hysteresis effects
    2.2.11 Dead space
  2.3 Dynamic characteristics of instruments
    2.3.1 Zero order instrument
    2.3.2 First order instrument
    2.3.3 Second order instrument
  2.4 Necessity for calibration
  2.5 Self-test questions

3 ERRORS DURING THE MEASUREMENT PROCESS
  3.1 Introduction
  3.2 Sources of systematic error
    3.2.1 System disturbance due to measurement
    3.2.2 Errors due to environmental inputs
    3.2.3 Wear in instrument components
    3.2.4 Connecting leads
  3.3 Reduction of systematic errors
    3.3.1 Careful instrument design
    3.3.2 Method of opposing inputs
    3.3.3 High-gain feedback
    3.3.4 Calibration
    3.3.5 Manual correction of output reading
    3.3.6 Intelligent instruments
  3.4 Quantification of systematic errors
  3.5 Random errors
    3.5.1 Statistical analysis of measurements subject to random errors
    3.5.2 Graphical data analysis techniques – frequency distributions
  3.6 Aggregation of measurement system errors
    3.6.1 Combined effect of systematic and random errors
    3.6.2 Aggregation of errors from separate measurement system components
    3.6.3 Total error when combining multiple measurements
  3.7 Self-test questions
  References and further reading

4 CALIBRATION OF MEASURING SENSORS AND INSTRUMENTS
  4.1 Principles of calibration
  4.2 Control of calibration environment
  4.3 Calibration chain and traceability
  4.4 Calibration records
  References and further reading

5 MEASUREMENT NOISE AND SIGNAL PROCESSING
  5.1 Sources of measurement noise
    5.1.1 Inductive coupling
    5.1.2 Capacitive (electrostatic) coupling
    5.1.3 Noise due to multiple earths
    5.1.4 Noise in the form of voltage transients
    5.1.5 Thermoelectric potentials
    5.1.6 Shot noise
    5.1.7 Electrochemical potentials
  5.2 Techniques for reducing measurement noise
    5.2.1 Location and design of signal wires
    5.2.2 Earthing
    5.2.3 Shielding
    5.2.4 Other techniques
  5.3 Introduction to signal processing
  5.4 Analogue signal filtering
    5.4.1 Passive analogue filters
    5.4.2 Active analogue filters
  5.5 Other analogue signal processing operations
    5.5.1 Signal amplification
    5.5.2 Signal attenuation
    5.5.3 Differential amplification
    5.5.4 Signal linearization
    5.5.5 Bias (zero drift) removal
    5.5.6 Signal integration
    5.5.7 Voltage follower (pre-amplifier)
    5.5.8 Voltage comparator
    5.5.9 Phase-sensitive detector
    5.5.10 Lock-in amplifier
    5.5.11 Signal addition
    5.5.12 Signal multiplication
  5.6 Digital signal processing
    5.6.1 Signal sampling
    5.6.2 Sample and hold circuit
    5.6.3 Analogue-to-digital converters
    5.6.4 Digital-to-analogue (D/A) conversion
    5.6.5 Digital filtering
    5.6.6 Autocorrelation
    5.6.7 Other digital signal processing operations
  References and further reading

6 ELECTRICAL INDICATING AND TEST INSTRUMENTS
  6.1 Digital meters
    6.1.1 Voltage-to-time conversion digital voltmeter
    6.1.2 Potentiometric digital voltmeter
    6.1.3 Dual-slope integration digital voltmeter
    6.1.4 Voltage-to-frequency conversion digital voltmeter
    6.1.5 Digital multimeter
  6.2 Analogue meters
    6.2.1 Moving-coil meters
    6.2.2 Moving-iron meter
    6.2.3 Electrodynamic meters
    6.2.4 Clamp-on meters
    6.2.5 Analogue multimeter
    6.2.6 Measuring high-frequency signals
    6.2.7 Thermocouple meter
    6.2.8 Electronic analogue voltmeters
    6.2.9 Calculation of meter outputs for non-standard waveforms
  6.3 Cathode ray oscilloscope
    6.3.1 Cathode ray tube
    6.3.2 Channel
    6.3.3 Single-ended input
    6.3.4 Differential input
    6.3.5 Timebase circuit
    6.3.6 Vertical sensitivity control
    6.3.7 Display position control
  6.4 Digital storage oscilloscopes
  References and further reading

7 VARIABLE CONVERSION ELEMENTS
  7.1 Bridge circuits
    7.1.1 Null-type, d.c. bridge (Wheatstone bridge)
    7.1.2 Deflection-type d.c. bridge
    7.1.3 Error analysis
    7.1.4 A.c. bridges
  7.2 Resistance measurement
    7.2.1 D.c. bridge circuit
    7.2.2 Voltmeter–ammeter method
    7.2.3 Resistance-substitution method
    7.2.4 Use of the digital voltmeter to measure resistance
    7.2.5 The ohmmeter
    7.2.6 Codes for resistor values
  7.3 Inductance measurement
  7.4 Capacitance measurement
    7.4.1 Alphanumeric codes for capacitor values
  7.5 Current measurement
  7.6 Frequency measurement
    7.6.1 Digital counter-timers
    7.6.2 Phase-locked loop
    7.6.3 Cathode ray oscilloscope
    7.6.4 The Wien bridge
  7.7 Phase measurement
    7.7.1 Electronic counter-timer
    7.7.2 X–Y plotter
    7.7.3 Oscilloscope
    7.7.4 Phase-sensitive detector
  7.8 Self-test questions
  References and further reading

8 SIGNAL TRANSMISSION
  8.1 Electrical transmission
    8.1.1 Transmission as varying voltages
    8.1.2 Current loop transmission
    8.1.3 Transmission using an a.c. carrier
  8.2 Pneumatic transmission
  8.3 Fibre-optic transmission
    8.3.1 Principles of fibre optics
    8.3.2 Transmission characteristics
    8.3.3 Multiplexing schemes
  8.4 Optical wireless telemetry
  8.5 Radio telemetry (radio wireless transmission)
  8.6 Digital transmission protocols
  References and further reading

9 DIGITAL COMPUTATION AND INTELLIGENT DEVICES
  9.1 Principles of digital computation
    9.1.1 Elements of a computer
    9.1.2 Computer operation
    9.1.3 Interfacing
    9.1.4 Practical considerations in adding computers to measurement systems
  9.2 Intelligent devices
    9.2.1 Intelligent instruments
    9.2.2 Smart sensors
    9.2.3 Smart transmitters
    9.2.4 Communication with intelligent devices
    9.2.5 Computation in intelligent devices
    9.2.6 Future trends in intelligent devices
  9.3 Self-test questions
  References and further reading

10 INSTRUMENTATION/COMPUTER NETWORKS
  10.1 Introduction
  10.2 Serial communication lines
    10.2.1 Asynchronous transmission
  10.3 Parallel data bus
  10.4 Local area networks (LANs)
    10.4.1 Star networks
    10.4.2 Ring and bus networks
  10.5 Gateways
  10.6 HART
  10.7 Digital fieldbuses
  10.8 Communication protocols for very large systems
    10.8.1 Protocol standardization
  10.9 Future development of networks
  References and further reading

11 DISPLAY, RECORDING AND PRESENTATION OF MEASUREMENT DATA
  11.1 Display of measurement signals
    11.1.1 Electronic output displays
    11.1.2 Computer monitor displays
  11.2 Recording of measurement data
    11.2.1 Mechanical chart recorders
    11.2.2 Ultra-violet recorders
    11.2.3 Fibre-optic recorders (recording oscilloscopes)
    11.2.4 Hybrid chart recorders
    11.2.5 Magnetic tape recorders
    11.2.6 Digital recorders
    11.2.7 Storage oscilloscopes
  11.3 Presentation of data
    11.3.1 Tabular data presentation
    11.3.2 Graphical presentation of data
  11.4 Self-test questions
  References and further reading

12 MEASUREMENT RELIABILITY AND SAFETY SYSTEMS
  12.1 Reliability
    12.1.1 Principles of reliability
    12.1.2 Laws of reliability in complex systems
    12.1.3 Improving measurement system reliability
    12.1.4 Software reliability
  12.2 Safety systems
    12.2.1 Introduction to safety systems
    12.2.2 Operation of safety systems
    12.2.3 Design of a safety system
  12.3 Self-test questions
  References and further reading

Part 2: Measurement Sensors and Instruments

13 SENSOR TECHNOLOGIES
  13.1 Capacitive and resistive sensors
  13.2 Magnetic sensors
  13.3 Hall-effect sensors
  13.4 Piezoelectric transducers
  13.5 Strain gauges
  13.6 Piezoresistive sensors
  13.7 Optical sensors (air path)
  13.8 Optical sensors (fibre-optic)
    13.8.1 Intrinsic sensors
    13.8.2 Extrinsic sensors
    13.8.3 Distributed sensors
  13.9 Ultrasonic transducers
    13.9.1 Transmission speed
    13.9.2 Direction of travel of ultrasound waves
    13.9.3 Directionality of ultrasound waves
    13.9.4 Relationship between wavelength, frequency and directionality of ultrasound waves
    13.9.5 Attenuation of ultrasound waves
    13.9.6 Ultrasound as a range sensor
    13.9.7 Use of ultrasound in tracking 3D object motion
    13.9.8 Effect of noise in ultrasonic measurement systems
    13.9.9 Exploiting Doppler shift in ultrasound transmission
    13.9.10 Ultrasonic imaging
  13.10 Nuclear sensors
  13.11 Microsensors
  References and further reading

14 TEMPERATURE MEASUREMENT
  14.1 Principles of temperature measurement
  14.2 Thermoelectric effect sensors (thermocouples)
    14.2.1 Thermocouple tables
    14.2.2 Non-zero reference junction temperature
    14.2.3 Thermocouple types
    14.2.4 Thermocouple protection
    14.2.5 Thermocouple manufacture
    14.2.6 The thermopile
    14.2.7 Digital thermometer
    14.2.8 The continuous thermocouple
  14.3 Varying resistance devices
    14.3.1 Resistance thermometers (resistance temperature devices)
    14.3.2 Thermistors
  14.4 Semiconductor devices
  14.5 Radiation thermometers
    14.5.1 Optical pyrometers
    14.5.2 Radiation pyrometers
  14.6 Thermography (thermal imaging)
  14.7 Thermal expansion methods
    14.7.1 Liquid-in-glass thermometers
    14.7.2 Bimetallic thermometer
    14.7.3 Pressure thermometers
  14.8 Quartz thermometers
  14.9 Fibre-optic temperature sensors
  14.10 Acoustic thermometers
  14.11 Colour indicators
  14.12 Change of state of materials
  14.13 Intelligent temperature-measuring instruments
  14.14 Choice between temperature transducers
  14.15 Self-test questions
  References and further reading

15 PRESSURE MEASUREMENT
  15.1 Diaphragms
  15.2 Capacitive pressure sensor
  15.3 Fibre-optic pressure sensors
  15.4 Bellows
  15.5 Bourdon tube
  15.6 Manometers
  15.7 Resonant-wire devices
  15.8 Dead-weight gauge
  15.9 Special measurement devices for low pressures
  15.10 High-pressure measurement (greater than 7000 bar)
  15.11 Intelligent pressure transducers
  15.12 Selection of pressure sensors

16 FLOW MEASUREMENT
  16.1 Mass flow rate
    16.1.1 Conveyor-based methods
    16.1.2 Coriolis flowmeter
    16.1.3 Thermal mass flow measurement
    16.1.4 Joint measurement of volume flow rate and fluid density
  16.2 Volume flow rate
    16.2.1 Differential pressure (obstruction-type) meters
    16.2.2 Variable area flowmeters (Rotameters)
    16.2.3 Positive displacement flowmeters
    16.2.4 Turbine meters
    16.2.5 Electromagnetic flowmeters
    16.2.6 Vortex-shedding flowmeters
    16.2.7 Ultrasonic flowmeters
    16.2.8 Other types of flowmeter for measuring volume flow rate
  16.3 Intelligent flowmeters
  16.4 Choice between flowmeters for particular applications
  References and further reading

17 LEVEL MEASUREMENT
  17.1 Dipsticks
  17.2 Float systems
  17.3 Pressure-measuring devices (hydrostatic systems)
  17.4 Capacitive devices
  17.5 Ultrasonic level gauge
  17.6 Radar (microwave) methods
  17.7 Radiation methods
  17.8 Other techniques
    17.8.1 Vibrating level sensor
    17.8.2 Hot-wire elements/carbon resistor elements
    17.8.3 Laser methods
    17.8.4 Fibre-optic level sensors
    17.8.5 Thermography
  17.9 Intelligent level-measuring instruments
  17.10 Choice between different level sensors
  References and further reading

18 MASS, FORCE AND TORQUE MEASUREMENT
  18.1 Mass (weight) measurement
    18.1.1 Electronic load cell (electronic balance)
    18.1.2 Pneumatic/hydraulic load cells
    18.1.3 Intelligent load cells
    18.1.4 Mass-balance (weighing) instruments
    18.1.5 Spring balance
  18.2 Force measurement
    18.2.1 Use of accelerometers
    18.2.2 Vibrating wire sensor
  18.3 Torque measurement
    18.3.1 Reaction forces in shaft bearings
    18.3.2 Prony brake
    18.3.3 Measurement of induced strain
    18.3.4 Optical torque measurement

19 TRANSLATIONAL MOTION TRANSDUCERS
  19.1 Displacement
    19.1.1 The resistive potentiometer
    19.1.2 Linear variable differential transformer (LVDT)
    19.1.3 Variable capacitance transducers
    19.1.4 Variable inductance transducers
    19.1.5 Strain gauges
    19.1.6 Piezoelectric transducers
    19.1.7 Nozzle flapper
    19.1.8 Other methods of measuring small displacements
    19.1.9 Measurement of large displacements (range sensors)
    19.1.10 Proximity sensors
    19.1.11 Selection of translational measurement transducers
  19.2 Velocity
    19.2.1 Differentiation of displacement measurements
    19.2.2 Integration of the output of an accelerometer
    19.2.3 Conversion to rotational velocity
  19.3 Acceleration
    19.3.1 Selection of accelerometers
  19.4 Vibration
    19.4.1 Nature of vibration
    19.4.2 Vibration measurement
  19.5 Shock

20 ROTATIONAL MOTION TRANSDUCERS
  20.1 Rotational displacement
    20.1.1 Circular and helical potentiometers
    20.1.2 Rotational differential transformer
    20.1.3 Incremental shaft encoders
    20.1.4 Coded-disc shaft encoders
    20.1.5 The resolver
    20.1.6 The synchro
    20.1.7 The induction potentiometer
    20.1.8 The rotary inductosyn
    20.1.9 Gyroscopes
    20.1.10 Choice between rotational displacement transducers
  20.2 Rotational velocity
    20.2.1 Digital tachometers
    20.2.2 Stroboscopic methods
    20.2.3 Analogue tachometers
    20.2.4 Mechanical flyball
    20.2.5 The rate gyroscope
    20.2.6 Fibre-optic gyroscope
    20.2.7 Differentiation of angular displacement measurements
    20.2.8 Integration of the output from an accelerometer
    20.2.9 Choice between rotational velocity transducers
  20.3 Measurement of rotational acceleration
  References and further reading

21 SUMMARY OF OTHER MEASUREMENTS
  21.1 Dimension measurement
    21.1.1 Rules and tapes
    21.1.2 Callipers
    21.1.3 Micrometers
    21.1.4 Gauge blocks (slip gauges) and length bars
    21.1.5 Height and depth measurement
  21.2 Angle measurement
  21.3 Flatness measurement
  21.4 Volume measurement
  21.5 Viscosity measurement
    21.5.1 Capillary and tube viscometers
    21.5.2 Falling body viscometer
    21.5.3 Rotational viscometers
  21.6 Moisture measurement
    21.6.1 Industrial moisture measurement techniques
    21.6.2 Laboratory techniques for moisture measurement
    21.6.3 Humidity measurement
  21.7 Sound measurement
  21.8 pH measurement
    21.8.1 The glass electrode
    21.8.2 Other methods of pH measurement
  21.9 Gas sensing and analysis
    21.9.1 Catalytic (calorimetric) sensors
    21.9.2 Paper tape sensors
    21.9.3 Liquid electrolyte electrochemical cells
    21.9.4 Solid-state electrochemical cells (zirconia sensor)
    21.9.5 Catalytic gate FETs
    21.9.6 Semiconductor (metal oxide) sensors
    21.9.7 Organic sensors
    21.9.8 Piezoelectric devices
    21.9.9 Infra-red absorption
    21.9.10 Mass spectrometers
    21.9.11 Gas chromatography
  References and further reading

APPENDIX 1 Imperial–metric–SI conversion tables
APPENDIX 2 Thévenin's theorem
APPENDIX 3 Thermocouple tables
APPENDIX 4 Solutions to self-test questions
INDEX

Preface

The foundations of this book lie in the highly successful text Principles of Measurement and Instrumentation by the same author. The first edition of this was published in 1988, and a second, revised and extended edition appeared in 1993. Since that time, a number of new developments have occurred in the field of measurement. In particular, there have been significant advances in smart sensors, intelligent instruments, microsensors, digital signal processing, digital recorders, digital fieldbuses and new methods of signal transmission. The rapid growth of digital components within measurement systems has also created a need to establish procedures for measuring and improving the reliability of the software that is used within such components. Formal standards governing instrument calibration procedures and measurement system performance have also extended beyond the traditional area of quality assurance systems (BS 5781, BS 5750 and more recently ISO 9000) into new areas such as environmental protection systems (BS 7750 and ISO 14000). Thus, an up-to-date book incorporating all of the latest developments in measurement is strongly needed. With so much new material to include, the opportunity has been taken to substantially revise the order and content of material presented previously in Principles of Measurement and Instrumentation, and several new chapters have been written to cover the many new developments in measurement and instrumentation that have occurred over the past few years. To emphasize the substantial revision that has taken place, a decision has been made to publish the book under a new title rather than as a third edition of the previous book. Hence, Measurement and Instrumentation Principles has been born.

The overall aim of the book is to present the topics of sensors and instrumentation, and their use within measurement systems, as an integrated and coherent subject. Measurement systems, and the instruments and sensors used within them, are of immense importance in a wide variety of domestic and industrial activities. The growth in the sophistication of instruments used in industry has been particularly significant as advanced automation schemes have been developed. Similar developments have also been evident in military and medical applications. Unfortunately, the crucial part that measurement plays in all of these systems tends to get overlooked, and measurement is therefore rarely given the importance that it deserves. For example, much effort goes into designing sophisticated automatic control systems, but little regard is given to the accuracy and quality of the raw measurement data that such systems use as their inputs.
This disregard of measurement system quality and performance means that such control systems will never achieve their full potential, as it is very difficult to increase their performance beyond the quality of the raw measurement data on which they depend.

Ideally, the principles of good measurement and instrumentation practice should be taught throughout the duration of engineering courses, starting at an elementary level and moving on to more advanced topics as the course progresses. With this in mind, the material contained in this book is designed both to support introductory courses in measurement and instrumentation, and also to provide in-depth coverage of advanced topics for higher-level courses. In addition, besides its role as a student course text, it is also anticipated that the book will be useful to practising engineers, both to update their knowledge of the latest developments in measurement theory and practice, and also to serve as a guide to the typical characteristics and capabilities of the range of sensors and instruments that are currently in use.

The text is divided into two parts. The principles and theory of measurement are covered first in Part 1 and then the ranges of instruments and sensors that are available for measuring various physical quantities are covered in Part 2. This order of coverage has been chosen so that the general characteristics of measuring instruments, and their behaviour in different operating environments, are well established before the reader is introduced to the procedures involved in choosing a measurement device for a particular application. This ensures that the reader will be properly equipped to appreciate and critically appraise the various merits and characteristics of different instruments when faced with the task of choosing a suitable instrument.

It should be noted that, whilst measurement theory inevitably involves some mathematics, the mathematical content of the book has deliberately been kept to the minimum necessary for the reader to be able to design and build measurement systems that perform to a level commensurate with the needs of the automatic control scheme or other system that they support. Where mathematical procedures are necessary, worked examples are provided throughout the book to illustrate the principles involved. Self-assessment questions are also provided in critical chapters to enable readers to test their level of understanding, with answers being provided in Appendix 4.

Part 1 is organized such that all of the elements in a typical measurement system are presented in a logical order, starting with the capture of a measurement signal by a sensor and then proceeding through the stages of signal processing, sensor output transducing, signal transmission and signal display or recording. Ancillary issues, such as calibration and measurement system reliability, are also covered. Discussion starts with a review of the different classes of instrument and sensor available, and the sort of applications in which these different types are typically used. This opening discussion includes analysis of the static and dynamic characteristics of instruments and exploration of how these affect instrument usage. A comprehensive discussion of measurement system errors then follows, with appropriate procedures for quantifying and reducing errors being presented.
The importance of calibration procedures in all aspects of measurement systems, and particularly to satisfy the requirements of standards such as ISO 9000 and ISO 14000, is recognized by devoting a full chapter to the issues involved. This is followed by an analysis of measurement noise sources, and discussion on the various analogue and digital signal-processing procedures that are used to attenuate noise and improve the quality of signals. After coverage of the range of electrical indicating and test instruments that are used to monitor electrical measurement signals, a chapter is devoted to presenting the range of variable conversion elements (transducers) and techniques that are used to convert non-electrical sensor outputs into electrical signals, with particular emphasis on electrical bridge circuits. The problems of signal transmission are considered next, and various means of improving the quality of transmitted signals are presented. This is followed by an introduction to digital computation techniques, and then a description of their use within intelligent measurement devices. The methods used to combine a number of intelligent devices into a large measurement network, and the current status of development of digital fieldbuses, are also explained. Then the final element in a measurement system, the display, recording and presentation of measurement data, is covered. To conclude Part 1, the issues of measurement system reliability, and the effect of unreliability on plant safety systems, are discussed. This discussion also includes the subject of software reliability, since computational elements are now embedded in many measurement systems.

Part 2 commences in the opening chapter with a review of the various technologies used in measurement sensors. The chapters that follow then provide comprehensive coverage of the main types of sensor and instrument that exist for measuring all the physical quantities that a practising engineer is likely to meet in normal situations. However, whilst the coverage is as comprehensive as possible, the distinction is emphasized between (a) instruments that are current and in common use, (b) instruments that are current but not widely used except in special applications, for reasons of cost or limited capabilities, and (c) instruments that are largely obsolete as regards new industrial implementations, but are still encountered on older plant that was installed some years ago. As well as emphasizing this distinction, some guidance is given about how to go about choosing an instrument for a particular measurement application.

Acknowledgements

The author gratefully acknowledges permission by John Wiley and Sons Ltd to reproduce some material that was previously published in Measurement and Calibration Requirements for Quality Assurance to ISO 9000 by A. S. Morris (published 1997). The material involved comprises Tables 1.1, 1.2 and 3.1, Figures 3.1, 4.2 and 4.3, parts of sections 2.1, 2.2, 2.3, 3.1, 3.2, 3.6, 4.3 and 4.4, and Appendix 1.

Part 1: Principles of Measurement

1 Introduction to measurement

Measurement techniques have been of immense importance ever since the start of human civilization, when measurements were first needed to regulate the transfer of goods in barter trade to ensure that exchanges were fair. The industrial revolution during the nineteenth century brought about a rapid development of new instruments and measurement techniques to satisfy the needs of industrialized production techniques.
Since that time, there has been a large and rapid growth in new industrial technology. This has been particularly evident during the last part of the twentieth century, encouraged by developments in electronics in general and computers in particular. This, in turn, has required a parallel growth in new instruments and measurement techniques. The massive growth in the application of computers to industrial process control and monitoring tasks has spawned a parallel growth in the requirement for instruments to measure, record and control process variables. As modern production techniques dictate working to tighter and tighter accuracy limits, and as economic forces limiting production costs become more severe, so the requirement for instruments to be both accurate and cheap becomes ever harder to satisfy. This latter problem is at the focal point of the research and development efforts of all instrument manufacturers. In the past few years, the most cost-effective means of improving instrument accuracy has been found in many cases to be the inclusion of digital computing power within instruments themselves. These intelligent instruments therefore feature prominently in current instrument manufacturers' catalogues.

1.1 Measurement units

The very first measurement units were those used in barter trade to quantify the amounts being exchanged and to establish clear rules about the relative values of different commodities. Such early systems of measurement were based on whatever was available as a measuring unit. For purposes of measuring length, the human torso was a convenient tool, and gave us units of the hand, the foot and the cubit. Although generally adequate for barter trade systems, such measurement units are of course imprecise, varying as they do from one person to the next. Therefore, there has been a progressive movement towards measurement units that are defined much more accurately.

The first improved measurement unit was a unit of length (the metre), defined as 10⁻⁷ times the polar quadrant of the earth. A platinum bar made to this length was established as a standard of length in the early part of the nineteenth century. This was superseded by a superior quality standard bar in 1889, manufactured from a platinum–iridium alloy. Since that time, technological research has enabled further improvements to be made in the standard used for defining length. Firstly, in 1960, a standard metre was redefined in terms of 1.65076373 × 10⁶ wavelengths of the radiation from krypton-86 in vacuum. More recently, in 1983, the metre was redefined yet again as the length of path travelled by light in an interval of 1/299 792 458 seconds. In a similar fashion, standard units for the measurement of other physical quantities have been defined and progressively improved over the years. The latest standards for defining the units used for measuring a range of physical variables are given in Table 1.1.

The early establishment of standards for the measurement of physical quantities proceeded in several countries at broadly parallel times, and in consequence, several sets of units emerged for measuring the same physical variable. For instance, length can be measured in yards, metres, or several other units. Apart from the major units of length, subdivisions of standard units exist such as feet, inches, centimetres and millimetres, with a fixed relationship between each fundamental unit and its subdivisions.
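As a minimal illustration of such fixed relationships, the short Python sketch below converts lengths in several Imperial and metric subdivisions into metres. The dictionary and function names are illustrative choices of ours, not anything defined in the book; the factors themselves are the standard exact definitions (1 inch = 25.4 mm exactly).

```python
# A minimal sketch of fixed unit relationships: every unit is related
# to the metre by a constant factor (factors are exact by definition).
TO_METRES = {
    "mm": 1e-3,
    "cm": 1e-2,
    "metre": 1.0,
    "inch": 0.0254,        # exact: 1 inch = 25.4 mm
    "foot": 0.3048,        # 12 inches
    "yard": 0.9144,        # 3 feet
    "mile": 1609.344,      # 1760 yards
}

def to_metres(value: float, unit: str) -> float:
    """Convert a length expressed in 'unit' into metres."""
    return value * TO_METRES[unit]

print(to_metres(1.0, "mile"))    # 1609.344
print(to_metres(36.0, "inch"))   # 0.9144..., i.e. one yard
```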
Table 1.1 Definitions of standard units

  Length (metre): the length of path travelled by light in an interval of 1/299 792 458 seconds.

  Mass (kilogram): the mass of a platinum–iridium cylinder kept in the International Bureau of Weights and Measures, Sèvres, Paris.

  Time (second): 9.192631770 × 10⁹ cycles of radiation from vaporized caesium-133 (an accuracy of 1 in 10¹², or 1 second in 36 000 years).

  Temperature (kelvin): the temperature difference between absolute zero and the triple point of water is defined as 273.16 kelvin.

  Current (ampere): one ampere is the current flowing through two infinitely long parallel conductors of negligible cross-section placed 1 metre apart in a vacuum and producing a force of 2 × 10⁻⁷ newtons per metre length of conductor.

  Luminous intensity (candela): one candela is the luminous intensity in a given direction from a source emitting monochromatic radiation at a frequency of 540 terahertz (540 × 10¹² Hz) and with a radiant density in that direction of 1.4641 mW/steradian. (1 steradian is the solid angle which, having its vertex at the centre of a sphere, cuts off an area of the sphere surface equal to that of a square with sides of length equal to the sphere radius.)

  Matter (mole): the number of atoms in a 0.012 kg mass of carbon-12.

Table 1.2 Fundamental and derived SI units

(a) Fundamental units

  Quantity            Standard unit   Symbol
  Length              metre           m
  Mass                kilogram        kg
  Time                second          s
  Electric current    ampere          A
  Temperature         kelvin          K
  Luminous intensity  candela         cd
  Matter              mole            mol

(b) Supplementary fundamental units

  Quantity     Standard unit   Symbol
  Plane angle  radian          rad
  Solid angle  steradian       sr

(c) Derived units

  Quantity                     Standard unit                   Symbol   Derivation formula
  Area                         square metre                    m²
  Volume                       cubic metre                     m³
  Velocity                     metre per second                m/s
  Acceleration                 metre per second squared        m/s²
  Angular velocity             radian per second               rad/s
  Angular acceleration         radian per second squared       rad/s²
  Density                      kilogram per cubic metre        kg/m³
  Specific volume              cubic metre per kilogram        m³/kg
  Mass flow rate               kilogram per second             kg/s
  Volume flow rate             cubic metre per second          m³/s
  Force                        newton                          N        kg m/s²
  Pressure                     newton per square metre         N/m²
  Torque                       newton metre                    N m
  Momentum                     kilogram metre per second       kg m/s
  Moment of inertia            kilogram metre squared          kg m²
  Kinematic viscosity          square metre per second         m²/s
  Dynamic viscosity            newton second per square metre  N s/m²
  Work, energy, heat           joule                           J        N m
  Specific energy              joule per cubic metre           J/m³
  Power                        watt                            W        J/s
  Thermal conductivity         watt per metre kelvin           W/m K
  Electric charge              coulomb                         C        A s
  Voltage, e.m.f., pot. diff.  volt                            V        W/A
  Electric field strength      volt per metre                  V/m
  Electric resistance          ohm                             Ω        V/A
  Electric capacitance         farad                           F        A s/V
  Electric inductance          henry                           H        V s/A
  Electric conductance         siemens                         S        A/V
  Resistivity                  ohm metre                       Ω m
  Permittivity                 farad per metre                 F/m
  Permeability                 henry per metre                 H/m
  Current density              ampere per square metre         A/m²
  Magnetic flux                weber                           Wb       V s
  Magnetic flux density        tesla                           T        Wb/m²
  Magnetic field strength      ampere per metre                A/m
  Frequency                    hertz                           Hz       s⁻¹
  Luminous flux                lumen                           lm       cd sr
  Luminance                    candela per square metre        cd/m²
  Illumination                 lux                             lx       lm/m²
  Molar volume                 cubic metre per mole            m³/mol
  Molarity                     mole per kilogram               mol/kg
  Molar energy                 joule per mole                  J/mol

Yards, feet and inches belong to the Imperial System of units, which is characterized by having varying and cumbersome multiplication factors relating fundamental units to subdivisions, such as 1760 (miles to yards), 3 (yards to feet) and 12 (feet to inches). The metric system is an alternative set of units, which includes for instance the unit of the metre and its centimetre and millimetre subdivisions for measuring length. All multiples and subdivisions of basic metric units are related to the base by factors of ten, and such units are therefore much easier to use than Imperial units. However, in the case of derived units such as velocity, the number of alternative ways in which these can be expressed in the metric system can lead to confusion. As a result of this, an internationally agreed set of standard units (SI units, or Système International d'Unités) has been defined, and strong efforts are being made to encourage the adoption of this system throughout the world. In support of this effort, the SI system of units will be used exclusively in this book. However, it should be noted that the Imperial system is still widely used, particularly in America and Britain. The European Union has just deferred planned legislation to ban the use of Imperial units in Europe in the near future, and the latest proposal is to introduce such legislation to take effect from the year 2010. The full range of fundamental SI measuring units and the further set of units derived from them are given in Table 1.2. Conversion tables relating common Imperial and metric units to their equivalent SI units can also be found in Appendix 1.

1.2 Measurement system applications

Today, the techniques of measurement are of immense importance in most facets of human civilization. Present-day applications of measuring instruments can be classified into three major areas. The first of these is their use in regulating trade, applying instruments that measure physical quantities such as length, volume and mass in terms of standard units. The particular instruments and transducers employed in such applications are included in the general description of instruments presented in Part 2 of this book.

The second application area of measuring instruments is in monitoring functions. These provide information that enables human beings to take some prescribed action accordingly. The gardener uses a thermometer to determine whether he should turn the heat on in his greenhouse or open the windows if it is too hot. Regular study of a barometer allows us to decide whether we should take our umbrellas if we are planning to go out for a few hours.
Whilst there are thus many uses of instrumentation in our normal domestic lives, the majority of monitoring functions exist to provide the information necessary to allow a human being to control some industrial operation or process. In a chemical process for instance, the progress of chemical reactions is indicated by the measurement of temperatures and pressures at various points, and such measurements allow the operator to take correct decisions regarding the electrical supply to heaters, cooling water flows, valve positions etc. One other important use of monitoring instruments is in calibrating the instruments used in the automatic process control systems described below.

Use as part of automatic feedback control systems forms the third application area of measurement systems. Figure 1.1 shows a functional block diagram of a simple temperature control system in which the temperature Ta of a room is maintained at a reference value Td. The value of the controlled variable Ta, as determined by a temperature-measuring device, is compared with the reference value Td, and the difference e is applied as an error signal to the heater. The heater then modifies the room temperature until Ta = Td. The characteristics of the measuring instruments used in any feedback control system are of fundamental importance to the quality of control achieved. The accuracy and resolution with which an output variable of a process is controlled can never be better than the accuracy and resolution of the measuring instruments used. This is a very important principle, but one that is often inadequately discussed in many texts on automatic control systems. Such texts explore the theoretical aspects of control system design in considerable depth, but fail to give sufficient emphasis to the fact that all gain and phase margin performance calculations etc. are entirely dependent on the quality of the process measurements obtained.

[Fig. 1.1 Elements of a simple closed-loop control system: a comparator forms the error signal (Td − Ta) from the reference value Td and the measured room temperature Ta and applies it to the heater; a temperature-measuring device feeds the room temperature back to the comparator.]

1.3 Elements of a measurement system

A measuring system exists to provide information about the physical value of some variable being measured. In simple cases, the system can consist of only a single unit that gives an output reading or signal according to the magnitude of the unknown variable applied to it. However, in more complex measurement situations, a measuring system consists of several separate elements as shown in Figure 1.2. These components might be contained within one or more boxes, and the boxes holding individual measurement elements might be either close together or physically separate. The term measuring instrument is commonly used to describe a measurement system, whether it contains only one or many elements, and this term will be widely used throughout this text.

The first element in any measuring system is the primary sensor: this gives an output that is a function of the measurand (the input applied to it). For most but not all sensors, this function is at least approximately linear. Some examples of primary sensors are a liquid-in-glass thermometer, a thermocouple and a strain gauge. In the case of the mercury-in-glass thermometer, the output reading is given in terms of the level of the mercury, and so this particular primary sensor is also a complete measurement system in itself. However, in general, the primary sensor is only part of a measurement system.
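To make the idea of a sensor's input–output function concrete, here is a minimal Python sketch of an approximately linear primary sensor, loosely modelled on a thermocouple. The sensitivity figure (roughly that of a type K device) and all names are illustrative assumptions, not values taken from this book.

```python
# Sketch: a primary sensor maps the measurand to an output signal.
# A thermocouple's e.m.f. is roughly linear in temperature over a
# limited range; ~41 uV/degC is a typical type K figure (assumed).

SENSITIVITY_V_PER_DEGC = 41e-6

def thermocouple_emf(hot_junction_degc: float,
                     reference_degc: float = 0.0) -> float:
    """Approximate e.m.f. (volts) of a thermocouple whose reference
    junction is held at reference_degc."""
    return SENSITIVITY_V_PER_DEGC * (hot_junction_degc - reference_degc)

print(thermocouple_emf(250.0))   # ~0.010 V: only a few millivolts,
                                 # hence the need for later amplification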
The types of primary sensors available for measuring a wide range of physical quantities are presented in Part 2 of this book.

Variable conversion elements are needed where the output variable of a primary transducer is in an inconvenient form and has to be converted to a more convenient form. For instance, the displacement-measuring strain gauge has an output in the form of a varying resistance. The resistance change cannot be easily measured and so it is converted to a change in voltage by a bridge circuit, which is a typical example of a variable conversion element. In some cases, the primary sensor and variable conversion element are combined, and the combination is known as a transducer.*

Signal processing elements exist to improve the quality of the output of a measurement system in some way. A very common type of signal processing element is the electronic amplifier, which amplifies the output of the primary transducer or variable conversion element, thus improving the sensitivity and resolution of measurement. This element of a measuring system is particularly important where the primary transducer has a low output. For example, thermocouples have a typical output of only a few millivolts. Other types of signal processing element are those that filter out induced noise and remove mean levels etc. In some devices, signal processing is incorporated into a transducer, which is then known as a transmitter.*

* In some cases, the word 'sensor' is used generically to refer to both transducers and transmitters.

In addition to these three components just mentioned, some measurement systems have one or two other components, firstly to transmit the signal to some remote point and secondly to display or record the signal if it is not fed automatically into a feedback control system. Signal transmission is needed when the observation or application point of the output of a measurement system is some distance away from the site of the primary transducer. Sometimes, this separation is made solely for purposes of convenience, but more often it follows from the physical inaccessibility or environmental unsuitability of the site of the primary transducer for mounting the signal presentation/recording unit. The signal transmission element has traditionally consisted of single or multi-cored cable, which is often screened to minimize signal corruption by induced electrical noise. However, fibre-optic cables are being used in ever increasing numbers in modern installations, in part because of their low transmission loss and imperviousness to the effects of electrical and magnetic fields.

The final optional element in a measurement system is the point where the measured signal is utilized. In some cases, this element is omitted altogether because the measurement is used as part of an automatic control scheme, and the transmitted signal is fed directly into the control system. In other cases, this element in the measurement system takes the form either of a signal presentation unit or of a signal-recording unit. These take many forms according to the requirements of the particular measurement application, and the range of possible units is discussed more fully in Chapter 11.

[Fig. 1.2 Elements of a measuring instrument: the measured variable (measurand) passes through a sensor, a variable conversion element and signal processing, with optional signal transmission to a remote point and final signal presentation or recording.]
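The chain of Figure 1.2 can be sketched end to end in a few lines of Python, using the strain-gauge example from the text: the gauge's resistance change is converted to a voltage by a bridge circuit (variable conversion) and then amplified (signal processing). All component values, and the quarter-bridge arrangement itself, are illustrative assumptions rather than a design from the book.

```python
# Sketch of the measurement chain of Figure 1.2 for a strain gauge:
# primary sensor -> variable conversion (bridge) -> signal processing.
# Component values below are typical but assumed, not from the book.

V_SUPPLY = 10.0       # bridge excitation voltage
R_NOMINAL = 120.0     # nominal gauge resistance (ohms)

def strain_gauge(strain: float, gauge_factor: float = 2.0) -> float:
    """Primary sensor: strain -> resistance change (ohms)."""
    return R_NOMINAL * gauge_factor * strain

def bridge_voltage(delta_r: float) -> float:
    """Variable conversion element: a quarter bridge with three fixed
    arms of R_NOMINAL converts the resistance change to volts."""
    r_active = R_NOMINAL + delta_r
    return V_SUPPLY * (r_active / (r_active + R_NOMINAL) - 0.5)

def amplify(volts: float, gain: float = 1000.0) -> float:
    """Signal processing element: bring the millivolt-level bridge
    output up to a conveniently measurable level."""
    return gain * volts

reading = amplify(bridge_voltage(strain_gauge(500e-6)))  # 500 microstrain
print(reading)   # roughly 2.5 V
```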
1.4 Choosing appropriate measuring instruments

The starting point in choosing the most suitable instrument to use for measurement of a particular quantity in a manufacturing plant or other system is the specification of the instrument characteristics required, especially parameters like the desired measurement accuracy, resolution, sensitivity and dynamic performance (see next chapter for definitions of these). It is also essential to know the environmental conditions that the instrument will be subjected to, as some conditions will immediately either eliminate the possibility of using certain types of instrument or else will create a requirement for expensive protection of the instrument. It should also be noted that protection reduces the performance of some instruments, especially in terms of their dynamic characteristics (for example, sheaths protecting thermocouples and resistance thermometers reduce their speed of response). Provision of this type of information usually requires the expert knowledge of personnel who are intimately acquainted with the operation of the manufacturing plant or system in question. Then, a skilled instrument engineer, having knowledge of all the instruments that are available for measuring the quantity in question, will be able to evaluate the possible list of instruments in terms of their accuracy, cost and suitability for the environmental conditions and thus choose the most appropriate instrument. As far as possible, measurement systems and instruments should be chosen that are insensitive to the operating environment, although this requirement is often difficult to meet because of cost and other performance considerations. The extent to which the measured system will be disturbed during the measuring process is another important factor in instrument choice. For example, significant pressure loss can be caused to the measured system in some techniques of flow measurement.

Published literature is of considerable help in the choice of a suitable instrument for a particular measurement situation. Many books are available that give valuable assistance in the necessary evaluation by providing lists and data about all the instruments available for measuring a range of physical quantities (e.g. Part 2 of this text). However, new techniques and instruments are being developed all the time, and therefore a good instrumentation engineer must keep abreast of the latest developments by reading the appropriate technical journals regularly.

The instrument characteristics discussed in the next chapter are the features that form the technical basis for a comparison between the relative merits of different instruments. Generally, the better the characteristics, the higher the cost. However, in comparing the cost and relative suitability of different instruments for a particular measurement situation, considerations of durability, maintainability and constancy of performance are also very important because the instrument chosen will often have to be capable of operating for long periods without performance degradation and a requirement for costly maintenance. In consequence of this, the initial cost of an instrument often has a low weighting in the evaluation exercise.

Cost is very strongly correlated with the performance of an instrument, as measured by its static characteristics. Increasing the accuracy or resolution of an instrument, for example, can only be done at a penalty of increasing its manufacturing cost.
Instrument choice therefore proceeds by specifying the minimum characteristics required by a measurement situation and then searching manufacturers' catalogues to find an instrument whose characteristics match those required. To select an instrument with characteristics superior to those required would only mean paying more than necessary for a level of performance greater than that needed.

As well as purchase cost, other important factors in the assessment exercise are instrument durability and the maintenance requirements. Assuming that one had £10 000 to spend, one would not spend £8000 on a new motor car whose projected life was five years if a car of equivalent specification with a projected life of ten years was available for £10 000. Likewise, durability is an important consideration in the choice of instruments. The projected life of instruments often depends on the conditions in which the instrument will have to operate. Maintenance requirements must also be taken into account, as they also have cost implications.

As a general rule, a good assessment criterion is obtained if the total purchase cost and estimated maintenance costs of an instrument over its life are divided by the period of its expected life. The figure obtained is thus a cost per year. However, this rule becomes modified where instruments are being installed on a process whose life is expected to be limited, perhaps in the manufacture of a particular model of car. Then, the total costs can only be divided by the period of time that an instrument is expected to be used for, unless an alternative use for the instrument is envisaged at the end of this period.

To summarize therefore, instrument choice is a compromise between performance characteristics, ruggedness and durability, maintenance requirements and purchase cost. To carry out such an evaluation properly, the instrument engineer must have a wide knowledge of the range of instruments available for measuring particular physical quantities, and he/she must also have a deep understanding of how instrument characteristics are affected by particular measurement situations and operating conditions.

2 Instrument types and performance characteristics

2.1 Review of instrument types

Instruments can be subdivided into separate classes according to several criteria. These subclassifications are useful in broadly establishing several attributes of particular instruments such as accuracy, cost, and general applicability to different applications.

2.1.1 Active and passive instruments

Instruments are divided into active or passive ones according to whether the instrument output is entirely produced by the quantity being measured or whether the quantity being measured simply modulates the magnitude of some external power source. This is illustrated by the following examples.

An example of a passive instrument is the pressure-measuring device shown in Figure 2.1. The pressure of the fluid is translated into a movement of a pointer against a scale. The energy expended in moving the pointer is derived entirely from the change in pressure measured: there are no other energy inputs to the system.

An example of an active instrument is a float-type petrol tank level indicator as sketched in Figure 2.2. Here, the change in petrol level moves a potentiometer arm, and the output signal consists of a proportion of the external voltage source applied across the two ends of the potentiometer.
The energy in the output signal comes from the external power source: the primary transducer float system is merely modulating the value of the voltage from this external power source.

[Fig. 2.1 Passive pressure gauge: fluid pressure acts on a piston against a spring, moving a pointer about a pivot over a scale.]

[Fig. 2.2 Petrol-tank level indicator: a float on a pivoted arm drives a potentiometer to give an output voltage.]

In active instruments, the external power source is usually in electrical form, but in some cases, it can be other forms of energy such as a pneumatic or hydraulic one. One very important difference between active and passive instruments is the level of measurement resolution that can be obtained. With the simple pressure gauge shown, the amount of movement made by the pointer for a particular pressure change is closely defined by the nature of the instrument. Whilst it is possible to increase measurement resolution by making the pointer longer, such that the pointer tip moves through a longer arc, the scope for such improvement is clearly restricted by the practical limit of how long the pointer can conveniently be. In an active instrument, however, adjustment of the magnitude of the external energy input allows much greater control over measurement resolution. Whilst the scope for improving measurement resolution is much greater, it is not infinite because of limitations placed on the magnitude of the external energy input, in consideration of heating effects and for safety reasons. In terms of cost, passive instruments are normally of a more simple construction than active ones and are therefore cheaper to manufacture. Therefore, choice between active and passive instruments for a particular application involves carefully balancing the measurement resolution requirements against cost.

2.1.2 Null-type and deflection-type instruments

The pressure gauge just mentioned is a good example of a deflection type of instrument, where the value of the quantity being measured is displayed in terms of the amount of movement of a pointer. An alternative type of pressure gauge is the deadweight gauge shown in Figure 2.3, which is a null-type instrument. Here, weights are put on top of the piston until the downward force balances the fluid pressure. Weights are added until the piston reaches a datum level, known as the null point. Pressure measurement is made in terms of the value of the weights needed to reach this null position.

[Fig. 2.3 Deadweight pressure gauge: weights are added to a piston until it settles at a datum level.]

The accuracy of these two instruments depends on different things. For the first one it depends on the linearity and calibration of the spring, whilst for the second it relies on the calibration of the weights. As calibration of weights is much easier than careful choice and calibration of a linear-characteristic spring, this means that the second type of instrument will normally be the more accurate. This is in accordance with the general rule that null-type instruments are more accurate than deflection types.

In terms of usage, the deflection type instrument is clearly more convenient. It is far simpler to read the position of a pointer against a scale than to add and subtract weights until a null point is reached. A deflection-type instrument is therefore the one that would normally be used in the workplace. However, for calibration duties, the null-type instrument is preferable because of its superior accuracy.
The extra effort required to use such an instrument is perfectly acceptable in this case because of the infrequent nature of calibration operations.

2.1.3 Analogue and digital instruments

An analogue instrument gives an output that varies continuously as the quantity being measured changes. The output can have an infinite number of values within the range that the instrument is designed to measure. The deflection-type of pressure gauge described earlier in this chapter (Figure 2.1) is a good example of an analogue instrument. As the input value changes, the pointer moves with a smooth continuous motion. Whilst the pointer can therefore be in an infinite number of positions within its range of movement, the number of different positions that the eye can discriminate between is strictly limited, this discrimination being dependent upon how large the scale is and how finely it is divided.

A digital instrument has an output that varies in discrete steps and so can only have a finite number of values. The rev counter sketched in Figure 2.4 is an example of a digital instrument. A cam is attached to the revolving body whose motion is being measured, and on each revolution the cam opens and closes a switch. The switching operations are counted by an electronic counter. This system can only count whole revolutions and cannot discriminate any motion that is less than a full revolution.

[Fig. 2.4 Rev counter: a cam on the revolving body operates a switch whose closures are tallied by an electronic counter.]

The distinction between analogue and digital instruments has become particularly important with the rapid growth in the application of microcomputers to automatic control systems. Any digital computer system, of which the microcomputer is but one example, performs its computations in digital form. An instrument whose output is in digital form is therefore particularly advantageous in such applications, as it can be interfaced directly to the control computer. Analogue instruments must be interfaced to the microcomputer by an analogue-to-digital (A/D) converter, which converts the analogue output signal from the instrument into an equivalent digital quantity that can be read into the computer. This conversion has several disadvantages. Firstly, the A/D converter adds a significant cost to the system. Secondly, a finite time is involved in the process of converting an analogue signal to a digital quantity, and this time can be critical in the control of fast processes where the accuracy of control depends on the speed of the controlling computer. Degrading the speed of operation of the control computer by imposing a requirement for A/D conversion thus impairs the accuracy by which the process is controlled.

2.1.4 Indicating instruments and instruments with a signal output

The final way in which instruments can be divided is between those that merely give an audio or visual indication of the magnitude of the physical quantity measured and those that give an output in the form of a measurement signal whose magnitude is proportional to the measured quantity. The class of indicating instruments normally includes all null-type instruments and most passive ones. Indicators can also be further divided into those that have an analogue output and those that have a digital display. A common analogue indicator is the liquid-in-glass thermometer. Another common indicating device, which exists in both analogue and digital forms, is the bathroom scale.
2.1.4 Indicating instruments and instruments with a signal output

The final way in which instruments can be divided is between those that merely give an audio or visual indication of the magnitude of the physical quantity measured and those that give an output in the form of a measurement signal whose magnitude is proportional to the measured quantity.

The class of indicating instruments normally includes all null-type instruments and most passive ones. Indicators can also be further divided into those that have an analogue output and those that have a digital display. A common analogue indicator is the liquid-in-glass thermometer. Another common indicating device, which exists in both analogue and digital forms, is the bathroom scale. The older mechanical form of this is an analogue type of instrument that gives an output consisting of a rotating pointer moving against a scale (or sometimes a rotating scale moving against a pointer). More recent electronic forms of bathroom scale have a digital output consisting of numbers presented on an electronic display. One major drawback with indicating devices is that human intervention is required to read and record a measurement. This process is particularly prone to error in the case of analogue output displays, although digital displays are not very prone to error unless the human reader is careless.

Instruments that have a signal-type output are commonly used as part of automatic control systems. In other circumstances, they can also be found in measurement systems where the output measurement signal is recorded in some way for later use. This subject is covered in later chapters. Usually, the measurement signal involved is an electrical voltage, but it can take other forms in some systems, such as an electrical current, an optical signal or a pneumatic signal.

2.1.5 Smart and non-smart instruments

The advent of the microprocessor has created a new division in instruments between those that do incorporate a microprocessor (smart) and those that don't. Smart devices are considered in detail in Chapter 9.

2.2 Static characteristics of instruments

If we have a thermometer in a room and its reading shows a temperature of 20°C, then it does not really matter whether the true temperature of the room is 19.5°C or 20.5°C. Such small variations around 20°C are too small to affect whether we feel warm enough or not. Our bodies cannot discriminate between such close levels of temperature, and therefore a thermometer with an inaccuracy of ±0.5°C is perfectly adequate. If we had to measure the temperature of certain chemical processes, however, a variation of 0.5°C might have a significant effect on the rate of reaction or even the products of a process. A measurement inaccuracy much less than ±0.5°C is therefore clearly required.

Accuracy of measurement is thus one consideration in the choice of instrument for a particular application. Other parameters, such as sensitivity, linearity and the reaction to ambient temperature changes, are further considerations. These attributes are collectively known as the static characteristics of instruments, and are given in the data sheet for a particular instrument. It is important to note that the values quoted for instrument characteristics in such a data sheet only apply when the instrument is used under specified standard calibration conditions. Due allowance must be made for variations in the characteristics when the instrument is used in other conditions. The various static characteristics are defined in the following paragraphs.

2.2.1 Accuracy and inaccuracy (measurement uncertainty)

The accuracy of an instrument is a measure of how close the output reading of the instrument is to the correct value. In practice, it is more usual to quote the inaccuracy figure rather than the accuracy figure for an instrument. Inaccuracy is the extent to which a reading might be wrong, and is often quoted as a percentage of the full-scale (f.s.) reading of an instrument. If, for example, a pressure gauge of range 0–10 bar has a quoted inaccuracy of ±1.0% f.s. (±1% of full-scale reading), then the maximum error to be expected in any reading is 0.1 bar. This means that when the instrument is reading 1.0 bar, the possible error is 10% of this value. For this reason, it is an important system design rule that instruments are chosen such that their range is appropriate to the spread of values being measured, in order that the best possible accuracy is maintained in instrument readings. Thus, if we were measuring pressures with expected values between 0 and 1 bar, we would not use an instrument with a range of 0–10 bar. The term measurement uncertainty is frequently used in place of inaccuracy.
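The arithmetic behind this design rule is shown in the short sketch below, which uses the 0–10 bar gauge from the example: the fixed ±0.1 bar error bound is simply re-expressed as a percentage of several possible readings.

```python
# Inaccuracy quoted as a percentage of full-scale (f.s.) reading: the
# absolute error bound is fixed, so the relative error grows as the
# measured value falls towards the bottom of the range.

FULL_SCALE = 10.0     # bar (the 0-10 bar gauge from the text)
INACCURACY_FS = 0.01  # quoted inaccuracy of +/-1.0% f.s.

max_error = FULL_SCALE * INACCURACY_FS  # = 0.1 bar, whatever the reading

for reading in (10.0, 5.0, 1.0):
    relative = 100 * max_error / reading
    print(f"reading {reading:4.1f} bar -> possible error "
          f"{max_error:.1f} bar ({relative:.0f}% of reading)")

# reading 10.0 bar -> possible error 0.1 bar (1% of reading)
# reading  5.0 bar -> possible error 0.1 bar (2% of reading)
# reading  1.0 bar -> possible error 0.1 bar (10% of reading)
```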
2.2.2 Precision/repeatability/reproducibility

Precision is a term that describes an instrument's degree of freedom from random errors. If a large number of readings are taken of the same quantity by a high-precision instrument, then the spread of readings will be very small. Precision is often, though incorrectly, confused with accuracy. High precision does not imply anything about measurement accuracy. A high-precision instrument may have a low accuracy. Low-accuracy measurements from a high-precision instrument are normally caused by a bias in the measurements, which is removable by recalibration.

The terms repeatability and reproducibility mean approximately the same but are applied in different contexts, as given below. Repeatability describes the closeness of output readings when the same input is applied repetitively over a short period of time, with the same measurement conditions, same instrument and observer, same location and same conditions of use maintained throughout. Reproducibility describes the closeness of output readings for the same input when there are changes in the method of measurement, observer, measuring instrument, location, conditions of use and time of measurement. Both terms thus describe the spread of output readings for the same input. This spread is referred to as repeatability if the measurement conditions are constant and as reproducibility if the measurement conditions vary. The degree of repeatability or reproducibility in measurements from an instrument is an alternative way of expressing its precision.

Figure 2.5 illustrates this more clearly. The figure shows the results of tests on three industrial robots that were programmed to place components at a particular point on a table. The target point was at the centre of the concentric circles shown, and the black dots represent the points where each robot actually deposited components at each attempt. Both the accuracy and precision of Robot 1 are shown to be low in this trial. Robot 2 consistently puts the component down at approximately the same place, but this is the wrong point. Therefore, it has high precision but low accuracy. Finally, Robot 3 has both high precision and high accuracy, because it consistently places the component at the correct target position.

Fig. 2.5 Comparison of accuracy and precision: (a) low precision, low accuracy (Robot 1); (b) high precision, low accuracy (Robot 2); (c) high precision, high accuracy (Robot 3).
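These ideas translate directly into two statistics: the bias of a set of readings from the true value (accuracy) and their spread about their own mean (precision). The sketch below computes both for the three robots; the one-dimensional placement data are invented for illustration and are not taken from Figure 2.5.

```python
# Accuracy versus precision for repeated measurements of the same point.
# Bias (mean offset from the target) reflects poor accuracy; spread
# (standard deviation about the mean) reflects poor precision.
from statistics import mean, stdev

def bias_and_spread(readings, target):
    """Return (bias, spread) of a set of 1-D readings about a target."""
    m = mean(readings)
    return m - target, stdev(readings)

target = 0.0  # true position (a 1-D stand-in for the target point)

robot1 = [-4.1, 3.8, -2.5, 5.0, -3.9]   # scattered widely
robot2 = [2.9, 3.1, 3.0, 2.8, 3.2]      # tight cluster, wrong place
robot3 = [-0.1, 0.1, 0.0, -0.05, 0.05]  # tight cluster, right place

for name, data in (("Robot 1", robot1), ("Robot 2", robot2),
                   ("Robot 3", robot3)):
    bias, spread = bias_and_spread(data, target)
    print(f"{name}: bias = {bias:+.2f}, spread = {spread:.2f}")

# Robot 2 shows a large bias but a small spread: high precision but low
# accuracy -- exactly the kind of error that recalibration can remove.
```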
2.2.3 Tolerance

Tolerance is a term that is closely related to accuracy and defines the maximum error that is to be expected in some value. Whilst it is not, strictly speaking, a static characteristic of measuring instruments, it is mentioned here because the accuracy of some instruments is sometimes quoted as a tolerance figure. When used correctly, tolerance describes the maximum deviation of a manufactured component from some specified value. For instance, crankshafts are machined with a diameter tolerance quoted as so many microns (10⁻⁶ m), and electric circuit components such as resistors have tolerances of perhaps 5%. One resistor chosen at random from a batch having a nominal value of 1000 Ω and tolerance 5% might have an actual value anywhere between 950 Ω and 1050 Ω.

2.2.4 Range or span

The range or span of an instrument defines the minimum and maximum values of a quantity that the instrument is designed to measure.

2.2.5 Linearity

It is normally desirable that the output reading of an instrument is linearly proportional to the quantity being measured. The Xs marked on Figure 2.6 show a plot of the typical output readings of an instrument when a sequence of input quantities are applied to it. Normal procedure is to draw a good-fit straight line through the Xs, as shown in Figure 2.6. (Whilst this can often be done with reasonable accuracy by eye, it is always preferable to apply a mathematical least-squares line-fitting technique, as described in Chapter 11.) The non-linearity is then defined as the maximum deviation of any of the output readings marked X from this straight line. Non-linearity is usually expressed as a percentage of full-scale reading.

Fig. 2.6 Instrument output characteristic: output reading plotted against measured quantity; gradient = sensitivity of measurement.

2.2.6 Sensitivity of measurement

The sensitivity of measurement is a measure of the change in instrument output that occurs when the quantity being measured changes by a given amount. Thus, sensitivity is the ratio:

    sensitivity = scale deflection / value of measurand producing deflection

The sensitivity of measurement is therefore the slope of the straight line drawn on Figure 2.6. If, for example, a pressure of 2 bar produces a deflection of 10 degrees in a pressure transducer, the sensitivity of the instrument is 5 degrees/bar (assuming that the deflection is zero with zero pressure applied).

Example 2.1
The following resistance values of a platinum resistance thermometer were measured at a range of temperatures. Determine the measurement sensitivity of the instrument in ohms/°C.

    Resistance (Ω)    Temperature (°C)
    307               200
    314               230
    321               260
    328               290

Solution
If these values are plotted on a graph, the straight-line relationship between resistance change and temperature change is obvious. For a change in temperature of 30°C, the change in resistance is 7 Ω. Hence the measurement sensitivity = 7/30 = 0.233 Ω/°C.
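Since real calibration data rarely lie exactly on a straight line, the slope is best obtained with the least-squares fit recommended above for Figure 2.6. The sketch below recovers the sensitivity of Example 2.1 in this way and also extracts a non-linearity figure from the same fit; taking the fitted line as the reference line is an assumption of this sketch.

```python
# Least-squares estimate of measurement sensitivity (the slope of the
# output/input line), applied to the data of Example 2.1. The same fit
# yields the non-linearity: the maximum deviation of any reading from
# the fitted line, expressed as a percentage of full-scale reading.

def least_squares_fit(x, y):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

temperature = [200, 230, 260, 290]  # degrees C
resistance = [307, 314, 321, 328]   # ohms

a, b = least_squares_fit(temperature, resistance)
print(f"sensitivity = {a:.3f} ohms/degC")  # 0.233, as in Example 2.1

# Non-linearity: worst deviation from the fitted line, as % of full scale.
full_scale = max(resistance)
worst = max(abs(r - (a * t + b)) for t, r in zip(temperature, resistance))
print(f"non-linearity = {100 * worst / full_scale:.2f}% f.s.")  # 0.00 here
```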
2.2.7 Threshold

If the input to an instrument is gradually increased from zero, the input will have to reach a certain minimum level before the change in the instrument output reading is of a large enough magnitude to be detectable. This minimum level of input is known as the threshold of the instrument. Manufacturers vary in the way that they specify threshold for instruments. Some quote absolute values, whereas others quote threshold as a percentage of full-scale readings. As an illustration, a car speedometer typically has a threshold of about 15 km/h. This means that, if the vehicle starts from rest and accelerates, no output reading is observed on the speedometer until the speed reaches 15 km/h.

2.2.8 Resolution

When an instrument is showing a particular output reading, there is a lower limit on the magnitude of the change in the input measured quantity that produces an observable change in the instrument output. Like threshold, resolution is sometimes specified as an absolute value and sometimes as a percentage of f.s. deflection. One of the major factors influencing the resolution of an instrument is how finely its output scale is divided into subdivisions. Using a car speedometer as an example again, this has subdivisions of typically 20 km/h. This means that when the needle is between the scale markings, we cannot estimate speed more accurately than to the nearest 5 km/h. This figure of 5 km/h thus represents the resolution of the instrument.
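Converting between the absolute and percentage-of-full-scale forms of these two specifications is simple arithmetic, as the sketch below shows for the speedometer figures quoted above; the 200 km/h full-scale reading is an assumed value used only for illustration.

```python
# Threshold and resolution may be quoted as absolute values or as a
# percentage of full-scale (f.s.) deflection; converting between the two
# forms only requires the instrument's full-scale reading.

def as_percent_fs(absolute_value: float, full_scale: float) -> float:
    """Express an absolute threshold or resolution as % of full scale."""
    return 100 * absolute_value / full_scale

FULL_SCALE = 200.0  # km/h -- assumed full-scale reading of the speedometer

print(f"threshold  15 km/h = {as_percent_fs(15, FULL_SCALE):.1f}% f.s.")  # 7.5
print(f"resolution  5 km/h = {as_percent_fs(5, FULL_SCALE):.1f}% f.s.")   # 2.5
```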