Introduction to Instrumentation and Control Engineering PDF
Batangas State University
Summary
This document provides an introduction to instrumentation and control engineering, covering basic concepts, classifications of instruments, and control loop elements. It details the different types of signals, primary and secondary elements, and control systems, including open and closed-loop examples. Key topics of this document include instrument, sensors, and transducers.
Introduction to Instrumentation and Control Engineering

Objectives:
1. To gain an understanding of the basic concepts and principles in Instrumentation and Control Engineering.
2. To explain the differences among the classifications of instruments.
3. To know the basic elements of a control loop.

Measurement
Measurement (also called metrology) is the science of determining the values of physical variables. It is a method of obtaining information regarding the physical values of a variable. Measurement of a given quantity is essentially an act or result of comparison between the quantity (whose magnitude is unknown) and a predetermined or predefined standard. When the two quantities are compared, the result is expressed in numerical values.

Why do we measure? For a few minutes, think of an answer to this question and be ready to share it with the class.

The Need for Measurement
For the weight of precious stones such as diamond, the carat is used. The carat was originally the weight of four carob (keçiboynuzu) beans. Today the carat is standardized as 0.2 g.

Metric System
SI Units: Système International d'Unités. Two classes of units are defined:
- Fundamental Units
- Derived Units

Standards
International Organization for Standardization (ISO)
International Electrotechnical Commission (IEC)
American National Standards Institute (ANSI)
Standards Council of Canada (SCC)
British Standards (BS)
Institute of Turkish Standards (TSE)

Standard Bodies
1. International standards: defined by international agreements.
2. Primary standards: maintained at institutions around the world. Their main function is checking the accuracy of secondary standards.

Instrumentation
Instrumentation is used in almost every industrial process and generating system where consistent and reliable operation is required. Instrumentation provides the means of monitoring, recording and controlling a process to maintain it at a desired state.
A typical industrial plant, such as an electric generating station, yields many process variables that have to be measured and manipulated.

Process Variables
Variables such as boiler level, temperature, pressure, turbine speed, generator output and many others have to be controlled prudently to ensure safe and efficient operation. With instrumentation, automatic control of such processes can be achieved. Specific instrumentation can be selected to measure and indicate process conditions so that corrective action can be initiated if required.

Instrumentation, based on industrial application: "It is a collection of instruments, devices, hardware or functions, or their application, for the purpose of measuring, monitoring or controlling an industrial process or machine, or any combination of these."

What is an instrument? "It is a device used for direct or indirect measurement, monitoring, and/or control of a variable, including indicators, controllers, and other devices such as annunciators, switches and pushbuttons."

Measurement Instrument
A measurement instrument is a device capable of detecting change, physical or otherwise, in a particular process. It then converts these physical changes into some form of information understandable by the user.

Classification of Instruments
Critical Instrument - an instrument which, if not conforming to specification, could potentially compromise product or process quality and safety.
Non-critical Instrument - an instrument whose function is not critical to product or process quality, but whose function is of operational significance.
Reference Only Instrument - an instrument whose function is not critical to product quality, not significant to equipment operation, and not used for making quality decisions.

Control Systems and Process Control
Control, in the process industries, refers to the regulation, command or direction of all aspects of the process.
Two Types of Control: Manual Control and Automatic Control.

A system is an arrangement, set or collection of physical components connected or related in such a manner as to form and/or act as an entire unit. A control system, therefore, is an arrangement of physical components connected or related in such a manner as to command, direct or regulate itself or another system.

A process simply refers to the methods of changing or refining raw materials to create end products.

Process Control
Process control plays an important role in how a plant process upset can be controlled and subsequent emergency actions executed. Without adequate and reliable process controls, an unexpected process occurrence cannot be monitored, controlled, and eliminated. Process controls can range from simple manual actions to computer logic controllers, remote from the required action point, with supplemental instrumentation feedback systems.

Control Systems
A system whose output can be managed, controlled or regulated by varying its input is called a control system. A control system can also be a combination of smaller control systems, and these are normally used to obtain a desired/required output. If we look around, we will find many control systems in our surroundings, e.g. refrigerators, air conditioners, washing machines, etc.

Block Diagram of Control System
The figure represents a simple control system, and we can think of this control system as a mathematical equation, i.e. X + 5 = Y, where X is the input, Y is the output, and the constant 5 acts as the control system. So, by changing the value of the input (X), we change the output value (Y). Similarly, if we want a particular output value, we can achieve it by fixing the input value. Explain the shown example of a simple control system.

Control systems are classified into two main categories:
1. Open Control Loop - exists when the process variable is not compared, and action is taken without regard to the condition of the process variable.
2.
Closed Control Loop - exists when a process variable is measured, compared to a setpoint, and action is taken to correct any deviation from the setpoint.

In Open Loop Control Systems, we have three main components: input, controller and output. The input signal is fed directly to the controller, which utilizes it and generates the required output. In open loop systems, the generated output has no effect on the input signal, i.e. no feedback is provided.

Example of an Open Loop Control System: A clothes dryer is a very simple example of an open loop system. When damp clothes are put in the dryer, the operator/user sets the time for drying the clothes. This time acts as the input signal for the dryer. At the end of that time, the machine stops and the clothes can be taken out. The thing to note here is that whether or not the clothes are dry enough, the machine will stop because of the time (input signal) fed to it. So the output of the system does not affect the input in this case. For a better understanding, the block diagram of a clothes dryer control system is shown below:

A traffic light system is another easy-to-understand example of an open loop system. Certain input signals are fed to the controller, which then displays one of the three lights at the output, turn by turn. The direct input signals can be altered to change the output light, but the output has no effect on the input, as we are not passing any feedback, i.e. which light is turned ON or OFF.

A Closed Loop Control System (feedback control system) is an advanced automated system which generates the desired output using inputs, controllers and feedback elements. These systems use a feedback element to feed the output back to the controller. By doing that, we can compare the current output with the input to obtain the error. Here's the block diagram of a Closed Loop Control System:

The block diagram above is an excellent representation of a closed loop control system.
As seen, the system output is fed back to the controller through an error detector. The function of the error detector is to find the difference between the input and output signals and feed this difference to the controller so that the output can be adjusted. In this way the system output is automatically adjusted all the time with the help of the feedback signal, and the operator does not have to worry about it.

Example of a Closed Loop Control System: An air conditioner is a very typical example of a closed loop control system. The input signal, in the form of the required room temperature, is fed into the controller of the air conditioner. The compressor, along with its various electrical and mechanical components, helps in achieving the required temperature. Whenever the room temperature changes, the temperature sensor at the output senses the change, and the signal from the sensor is processed by the error detector and fed back to the controller through the feedback loop to maintain the required room temperature. In this way the required output is always maintained automatically, without any manual interference. The block diagram illustration of this process is shown below:

Four Basic Elements of a Control Loop
1. Primary Element/Sensor
2. Secondary Element/Signal-generating Element
3. Controlling Element/Controller
4. Final Control Element

Primary Elements
A primary element measures process parameters and variables. Measurement of the variables or properties is based on certain unique phenomena, such as physical, chemical, or thermo-electrical factors.
Note: Process variables sensed by the primary element cannot be transmitted unless converted to an electrical (or pneumatic) signal by a secondary element.

Primary Element Examples
1. Sensors - an integral part of the loop that first senses the value of a process variable, assumes a corresponding predetermined state, and generates an output signal indicative of or proportional to the process variable.
2.
Detectors - a device that is used to detect the presence of something, such as flammable or toxic gases or discrete parts.

Secondary Elements/Signal-generating Elements
1. Transducer - Transducers are often employed at the boundaries of automation, measurement, and control systems, where electrical signals are converted to and from other physical quantities (energy, force, torque, light, motion, position, etc.). The process of converting one form of energy to another is known as transduction.
2. Converter - Converters are used to convert AC power to DC power. Virtually all electronic devices require converters. They are also used to detect amplitude-modulated radio signals. A power electronic converter uses power electronic components such as SCRs, TRIACs, IGBTs, etc. to control and convert electric power. The main aim of the converter is to produce conditioned power for a particular application.
3. Transmitter - As its name implies, the general purpose of a transmitter is to transmit signals. These signals contain information, which can be audio, video, or data. It converts a reading from a sensor or transducer into a standard signal and transmits that signal to a monitor or controller. Transducers and transmitters are virtually the same thing, the main difference being the kind of electrical signal each sends: a transducer sends a signal in volts (V) or millivolts (mV), and a transmitter sends a signal in milliamps (mA).

Types of Signal
1. Analog Signal - a signal that has no discrete positions or states and changes value continuously.
Pneumatic: 3-15 psi
Electrical: 4-20 mA (current); 1-5 VDC (voltage)
2. Digital Signal - a signal that generates or uses binary digit signals to represent continuous values or discrete states.

Controlling Element
Known as the controller, this is the brain of the control system. It performs appropriate functions to maintain the desired level (set point) of parameters and preserve the quality and rate of production.
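The compare-to-setpoint behavior of a controller can be sketched as a minimal on/off feedback loop. This is an illustrative sketch with made-up numbers (the function name, deadband, and heating/cooling rates are all assumptions, not any particular product's logic):

```python
def on_off_controller(setpoint, measured, deadband=0.5):
    """Error detector + on/off controller: returns True when the final
    control element (e.g. a compressor) should run to cool the process."""
    error = measured - setpoint      # compare measurement to setpoint
    return error > deadband          # act only on a significant deviation

# Minimal closed-loop simulation: the room warms by 0.3 degC per step,
# and the compressor, when running, cools it by 1.0 degC per step.
temperature, setpoint = 26.0, 22.0
for _ in range(20):
    cooling = on_off_controller(setpoint, temperature)  # feedback path
    temperature += 0.3 - (1.0 if cooling else 0.0)      # process response
# The temperature settles into a narrow band around the 22 degC setpoint.
```

The deadband keeps the controller from cycling on every tiny deviation, which is the same reason real thermostats have a switching differential.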
A controller is a device that receives data from a measurement instrument, compares that data to a programmed setpoint, and, if necessary, signals a control element to take corrective action.

Common examples of controllers:
Programmable Logic Controller (PLC) - usually computers connected to a set of input/output (I/O) devices. The computers are programmed to respond to inputs by sending outputs to maintain all processes at setpoint.
Distributed Control System (DCS) - controllers that, in addition to performing control functions, provide readings of the status of the process, maintain databases, and provide an advanced man-machine interface.

Final Control Element
The part of the control system that acts to physically change the manipulated variable. Typically used to increase or decrease fluid flow.

Common Final Control Elements
1. Actuator - the part of a final control device that causes a physical change in the final control device when signaled to do so.
2. Control Valves - manipulate the flow rate of a gas or liquid, whereas control switches manipulate the electrical energy entering a system.

Instrument applications:
❖ Factory automation instruments
❖ Plant safety or safeguarding instruments
❖ Product quality monitoring/control instruments
❖ Environmental condition monitoring/control instruments
❖ Process variable measurement and control instruments

"You are worth more than just your grades."
-end-

Piping & Instrumentation Diagram Fundamentals
Objectives: Understanding a P&ID Layout
- Symbology
- Piping that connects the equipment
- Lines and instruments used to monitor and control the process
- Tag numbers and functional identifiers

Piping and Instrumentation Diagram
It is the overall design document for a process plant. It shows the interconnection of process equipment and the instrumentation used to control the process. A set of symbols is used to depict mechanical equipment, piping, piping components, valves, equipment drivers, and instrumentation and controls.
P&IDs
- Piping & Instrumentation Drawing (original)
- Process & Instrumentation Diagram (also used)
- Process Flow Diagram - PFD (simplified version of the P&ID)
- Piping and Instrumentation Diagrams, or simply P&IDs, are the "schematics" used in the field of instrumentation and control (automation).

Who Uses P&IDs?
Planning a job
Writing a job safety analysis (JSA)
Lockout before a repair
Troubleshooting when problems arise
Process hazard review
Training new employees

Types of Instrumentation Symbols
Instrument Symbols
Line Symbols
Valves and Actuators

Instrument Symbols
Symbols such as circles, lines, letters, and numbers are used to provide information about the process. Symbols may represent devices in the system or indicate how devices are connected to each other. In this notation, shapes denote function while the lines in the middle denote location or mounting.

Line Symbols
Line symbols indicate how instruments are connected to each other and to the process, and represent the types of signals transmitted in the process. A line symbol can be either a process line symbol or a signal line symbol.

Process Line Symbols
Used to represent process lines and instrument connections. Process piping is generally shown with thick solid lines. Thin solid lines indicate instrument-to-process connections or instrument tubing.

Signal Line Symbols
Signal line symbols indicate the type of signal that connects two instruments.

Valve and Actuator Symbols
Indicate the action of actuation in a valve-actuator instrument, and also indicate the position during fail mode. Valves are usually drawn as a bow-tie-shaped symbol.

Tag Numbers
An Instrumentation Identification Number, or Tag Number, is an alphanumeric code that provides specific information about an instrument or its function. It contains two pieces of information:
- Functional Identification
- Loop Identification

Loop Identification
Loop identification numbers indicate the loop/system to which an instrument belongs.
Functional Identifier
A functional identifier is a series of letters, or letter code, that identifies the function of the instrument. The first letter identifies the measured or initiating variable. The succeeding letters designate one or more readout or passive functions and/or output functions.

Different Engineering Documents
Process Flow Diagram
Piping and Instrumentation Diagram (P&ID)
Instrument List
Logic Diagrams
Instrument Loop Diagram
Installation Details
Location Plans

Process Flow Diagram
It is the fundamental representation of a process that schematically depicts the conversion of raw materials to finished products without delving into the details of how that conversion occurs. It defines the flow of material and utilities, the basic relationships between major pieces of equipment, and establishes the flow, pressure and temperature ratings of the process.

Instrument List
An alphanumeric list of data related to a facility's instrumentation and control system components and functions. It references the various documents that contain the information needed to define the total installation.

Logic Diagrams
Drawings used to design and define the on-off or sequential part of a continuous process plant. They may involve the action of a simple switch, or they may entail a series of steps comprising a complex automatic system.

Instrument Loop Diagrams
A schematic representation of a single control loop, including its hydraulic, electric, magnetic and pneumatic components.

Installation Details
Used to show how the instrumentation and control system components are connected and interconnected to the process. They define the requirements to correctly install an instrumentation and control component.

Location Plans
Orthographic views of the plant, drawn to scale, that show the locations of instruments and control system components. They also show other control system hardware, including marshalling panels, termination racks, local control panels, junction boxes, instrument racks, and power panels.
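The way a functional identifier combines with a loop number can be sketched as a small tag parser. The letter tables below are a tiny illustrative subset of the common letter-code convention, not the full standard, and the helper name is my own:

```python
# Hypothetical tag parser: first letter = measured variable,
# succeeding letters = readout/output functions, suffix = loop number.
VARIABLES = {"F": "Flow", "T": "Temperature", "P": "Pressure", "L": "Level"}
FUNCTIONS = {"I": "Indicator", "C": "Controller", "T": "Transmitter", "R": "Recorder"}

def parse_tag(tag):
    """Split a tag like 'FIC-101' into (variable, functions, loop number)."""
    letters, loop = tag.split("-")
    variable = VARIABLES.get(letters[0], "?")
    functions = [FUNCTIONS.get(c, "?") for c in letters[1:]]
    return variable, functions, loop

# parse_tag("FIC-101") -> ("Flow", ["Indicator", "Controller"], "101")
# parse_tag("TT-205")  -> ("Temperature", ["Transmitter"], "205")
```

So a tag such as FIC-101 reads as "flow indicating controller in loop 101", which is exactly how the functional and loop identifications divide the code.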
"You know more than you think you do"
-end-

Example 1: A flow transmitter is ranged 0 to 350 gallons per minute, 4-20 mA output, direct responding. Calculate the current signal value at a flow rate of 204 GPM.

Example 2: An electronic loop controller outputs a signal of 8.55 mA to a direct-responding control valve (where 4 mA is shut and 20 mA is wide open). How far open should the control valve be at this MV signal level?

Example 3: A pneumatic temperature transmitter is ranged 50 to 140 degrees Fahrenheit and has a 3-15 psi output signal. Calculate the pneumatic output pressure if the temperature is 79 degrees Fahrenheit.

Calculating and substituting the slope (m) value for this equation, using the full rise-over-run of the linear function:

y = ((15 − 3)/(140 − 50)) x + b = (12/90) x + b

The y-intercept value will be different for this example than it was for previous examples, since the measurement range is not zero-based. However, the procedure for finding this value is the same - plug any corresponding x and y values into the equation and solve for b. In this case, I will use the values of 3 psi for y and 50 °F for x:

3 = (12/90)(50) + b
3 = 6.67 + b
b = −3.67

Therefore, our customized linear equation for this temperature transmitter is as follows:

y = (12/90) x − 3.67

At a sensed temperature of 79 °F, the transmitter's output pressure will be 6.86 psi:

y = (12/90)(79) − 3.67
y = 10.53 − 3.67
y = 6.86 psi

Example 4: A pH transmitter has a calibrated range of 4 pH to 10 pH, with a 4-20 mA output signal. Calculate the pH sensed by the transmitter if its output is 11.3 mA.

Example 5: A current-to-pressure transducer is used to convert a 4-20 mA electronic signal into a 3-15 psi pneumatic signal. This particular transducer is configured for reverse action instead of direct, meaning that its pressure output at 4 mA should be 15 psi and its pressure output at 20 mA should be 3 psi. Calculate the necessary current signal value to produce an output pressure of 12.7 psi.
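All of these examples reduce to the same percent-of-span linear scaling. A minimal sketch (the function names are my own, not from any library):

```python
def to_percent(value, lrv, urv):
    """Percent of span: (value - LRV) / (URV - LRV) * 100."""
    return (value - lrv) / (urv - lrv) * 100.0

def from_percent(percent, lrv, urv):
    """Scale a percent-of-span back onto an output range:
    (URV - LRV) * (% / 100) + LRV. For a reverse-acting instrument,
    pass the output range swapped (lrv=20, urv=4)."""
    return (urv - lrv) * (percent / 100.0) + lrv

# Example 1: 0-350 GPM flow at 204 GPM, direct 4-20 mA output
pct = to_percent(204, 0, 350)          # 58.29 %
ma = from_percent(pct, 4, 20)          # 13.33 mA

# Example 5: reverse-acting I/P transducer at 12.7 psi output
pct = to_percent(12.7, 3, 15)          # 80.83 %
ma_reverse = from_percent(pct, 20, 4)  # 7.07 mA
```

Reverse action needs no separate formula: swapping the output LRV and URV makes the same line slope downward, which is why Example 5's 80.83% lands at 7.07 mA instead of 16.93 mA.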
FORMULA:
% = ((measured value − LRV) / (URV − LRV)) × 100%
output = (URVout − LRVout) × (% / 100%) + LRVout

Example 1: A flow transmitter is ranged 0 to 350 gallons per minute, 4-20 mA output, direct responding. Calculate the current signal value at a flow rate of 204 GPM.
% = ((204 − 0)/(350 − 0)) × 100% = 58.29%
mA = (20 − 4)(0.5829) + 4 = 13.33 mA

Example 2: An electronic loop controller outputs a signal of 8.55 mA to a direct-responding control valve (where 4 mA is shut and 20 mA is wide open). How far open should the control valve be at this current signal level?
% = ((8.55 − 4)/(20 − 4)) × 100% = 28.44%

Example 3: A pneumatic temperature transmitter is ranged 50 to 140 degrees Fahrenheit and has a 3-15 psi output signal. Calculate the pneumatic output pressure if the temperature is 79 degrees Fahrenheit.
% = ((79 − 50)/(140 − 50)) × 100% = 32.22%
psi = (15 − 3)(0.3222) + 3 = 6.87 psi

Example 4: A pH transmitter has a calibrated range of 4 pH to 10 pH, with a 4-20 mA output signal. Calculate the pH sensed by the transmitter if its output is 11.3 mA.
% = ((11.3 − 4)/(20 − 4)) × 100% = 45.625%
pH = (10 − 4)(0.45625) + 4 = 6.74 pH

Example 5: A current-to-pressure transducer is used to convert a 4-20 mA electronic signal into a 3-15 psi pneumatic signal. This particular transducer is configured for reverse action instead of direct, meaning that its pressure output at 4 mA should be 15 psi and its pressure output at 20 mA should be 3 psi. Calculate the necessary current signal value to produce an output pressure of 12.7 psi.
% = ((12.7 − 3)/(15 − 3)) × 100% = 80.83%
mA = (4 − 20)(0.8083) + 20 = 7.07 mA

Process Measurement

Measurement
Measurement is an important subsystem of a mechatronics system. Its main function is to collect information on system status and to feed it to the microprocessor(s) for controlling the whole system. A measurement system comprises sensors, transducers and signal processing devices.
Methods of Measurement
Direct Method - the process variable is directly measured in units that represent the basic nature of that variable.
Inferential Method - the measurement of a process variable indirectly, by using another variable.

Direct Method
The level in this tank is measured directly in units of height, since the level of the tank is seen directly through a (scaled) sight glass representing the current level of the tank.

Inferential Method
The level of liquid is measured based on the hydrostatic pressure below the tank. Since pressure is directly proportional to the height of the liquid, any change in level will produce a proportionate change in the readout pressure.

Types of Measurement
Single Point Type - the measurement depends on a fixed value of the process variable; the reading is indicated as either high or low.
Continuous Type - the measurement indicates the actual value of the process variable.

Single Point Type Measurement
Sensor A and Sensor B will only trigger when the level reaches the set high and low heights, respectively.

Continuous Type Measurement
The magnetic float indicates the current measurement of the level through a sight glass according to its range, regardless of the current height of the liquid in the tank. The actual level is monitored in real time.

Instrument Range - refers to the capability of the instrument to measure a variable.
Calibration Range - refers to the set of values within the instrument measuring range over which the scaled output (4-20 mA, 3-15 psi or 1-5 V) is set during calibration.

Instrument Span
The distance (or difference) between the upper range value (URV) and the lower range value (LRV).
Upper Range Value (URV) - the highest value of the measured process variable that the output of a transmitter is currently configured to measure.
Lower Range Value (LRV) - the lowest value of the measured process variable that the analog output of a transmitter is currently configured to measure.

Discrete Process Measurement
In engineering, a "discrete" variable or measurement refers to a true-or-false condition. Thus, a discrete sensor is one that is only able to indicate whether the measured variable is above or below a specified setpoint. Discrete sensors typically take the form of "switches", built to trip when the measured quantity either exceeds or falls below a specified value. These devices are less sophisticated than so-called continuous sensors capable of reporting an analog value, but they are quite useful in industry.

"Normal" Status of a Switch
The "normal" status for a switch is the status its electrical contacts are in under a condition of minimum physical stimulus. For a momentary-contact pushbutton switch, this would be the status of the switch when it is not being pressed. Electrical switch contacts are typically classified as either normally-open or normally-closed, referring to the open or closed status of the contacts under "normal" conditions.

Normally-Open Status
The lamp will energize only if someone presses the switch, holding its normally-open contacts in the closed position. Normally-open switches are sometimes referred to in the electrical industry as form-A contacts.

Normally-Closed Status
The lamp will energize only if the switch is left alone, and it will turn off if anyone presses the switch. Normally-closed switches are sometimes referred to in the electrical industry as form-B contacts.

Hand Switches
A hand switch is an electrical switch actuated by a person's hand motion. It may take the form of a toggle, pushbutton or rotary switch.

Limit Switches
A limit switch detects the physical motion of an object by direct contact with that object.
A limit switch will be in its "normal" status when it is not in contact with anything.

Proximity Switches
A proximity switch detects the proximity (closeness) of an object. By definition, these switches are non-contact sensors, using magnetic, electric or optical means to sense the proximity of objects.

Pressure Switches
A pressure switch detects the presence of fluid pressure. Pressure switches often use diaphragms or bellows as the pressure-sensing elements, the motion of which actuates one or more switch contacts.

Level Switches
A level switch detects the level of liquid or solid (granules or powder) in a vessel. Level switches often use floats as the level-sensing element, the motion of which actuates one or more switch contacts.

Temperature Switches
A temperature switch detects the temperature of an object. Temperature switches often use bimetallic strips as the temperature-sensing element.

Flow Switches
A flow switch detects the flow of some fluid through a pipe. Flow switches often use "paddles" as the flow-sensing element, the motion of which actuates one or more switch contacts.

Discrete Control Elements
On/Off Valves
An on/off valve is the fluid equivalent of an electrical switch: a device that either allows unimpeded flow or acts to prevent flow altogether. Valve styles commonly used for on/off service include ball, plug, butterfly, gate and globe valves.

Continuous Process Measurement
Analog Electronic Instrumentation
An "analog" electronic signal is a voltage or current whose magnitude represents some physical measurement or control quantity. An instrument is often classified as "analog" simply by virtue of using an analog standard to communicate information.

4 to 20 mA Analog Current Signals
The most popular form of signal transmission used in modern industrial instrumentation systems is the 4 to 20 mA DC standard. This is an analog signal standard, meaning that the electric current is used to proportionately represent measurements or command signals.
Relating 4 to 20 mA signals to instrument variables
To calculate the equivalent milliamp value for any given percentage of signal range, the equation takes the form of the standard slope-intercept line equation y = mx + b, where:
y = equivalent current in milliamps
x = the desired percentage of signal
m = the span of the 4-20 mA range (16 mA)
b = the offset value, or the "live zero" of 4 mA

Example 1: A flow transmitter is ranged 0 to 350 gallons per minute, 4-20 mA output, direct responding. Calculate the current signal value at a flow rate of 204 GPM.
Example 2: An electronic loop controller outputs a signal of 8.55 mA to a direct-responding control valve (where 4 mA is shut and 20 mA is wide open). How far open should the control valve be at this MV signal level?
Example 3: A pneumatic temperature transmitter is ranged 50 to 140 degrees Fahrenheit and has a 3-15 psi output signal. Calculate the pneumatic output pressure if the temperature is 79 degrees Fahrenheit.
Example 4: A pH transmitter has a calibrated range of 4 pH to 10 pH, with a 4-20 mA output signal. Calculate the pH sensed by the transmitter if its output is 11.3 mA.
Example 5: A current-to-pressure transducer is used to convert a 4-20 mA electronic signal into a 3-15 psi pneumatic signal. This particular transducer is configured for reverse action instead of direct, meaning that its pressure output at 4 mA should be 15 psi and its pressure output at 20 mA should be 3 psi. Calculate the necessary current signal value to produce an output pressure of 12.7 psi.

"You don't want to look back and know you could've done better." -Anonymous
-end-

Temperature
Temperature (sometimes called thermodynamic temperature) is a measure of how hot or cold something is: specifically, a measure of the average kinetic energy of the particles in a system. While there is no theoretical maximum temperature, there is a minimum temperature, known as absolute zero, at which all molecular motion stops. Temperature is by far the most measured parameter.
It impacts the physical, chemical and biological world in numerous ways. All matter is made of particles - atoms or molecules - that are in constant motion. Because the particles are in motion, they have kinetic energy. The faster the particles are moving, the more kinetic energy they have, and the more kinetic energy the particles of an object have, the higher the temperature of the object. The higher the temperature, the faster the molecules of the substance move, on average.

History of Temperature Measurement
1592 - Galileo Galilei invented the liquid-in-glass thermometer.
1643 - Athanasius Kircher invented the first mercury thermometer.
1714 - Daniel Gabriel Fahrenheit invented both the mercury and the alcohol thermometer, with the Fahrenheit scale following in 1724.
1742 - Anders Celsius proposed a centigrade scale.
1800s - William Thomson (later Lord Kelvin) postulated the existence of an absolute zero.
1821 - Thomas Seebeck discovered the principle behind the thermocouple: the existence of the thermoelectric current.
1821 - Sir Humphry Davy noted the temperature dependence of the resistance of metals.
1932 - C.H. Meyers built the first Resistance Temperature Detector (RTD).
1948 - The name of the centigrade scale was changed to Celsius.
20th century - Temperature sensor technology became fully developed.

Temperature measurement, also known as thermometry, describes the process of measuring a current local temperature for immediate or later evaluation. Temperature measurement can be classified into a few general categories:
a) Thermometers
b) Probes
c) Non-contact

International Practical Temperature Scale
The International Practical Temperature Scale is the basis of most present-day temperature measurements. The scale was established by an international commission in 1948, with a text revision in 1960. A revision of the scale was formally adopted in 1990 and is still being used today.

Nonelectric Temperature Sensors
Liquid-in-Glass Thermometers
Most versions have used mercury as the liquid.
The element mercury is liquid in the temperature range of about −40 to 700°F (−38.9 to 356.7°C). As a liquid, mercury expands as it gets warmer; its expansion rate is linear. Because of mercury's toxicity and the strict governing laws, the use of the mercury-in-glass thermometer has declined.

Bimetallic Thermometers
Bonding two dissimilar metals with different coefficients of expansion produces a bimetallic element. These are used in bimetallic thermometers, temperature switches, and thermostats having a range of −100 to 1000°F (−73 to 537°C). Solids tend to expand when heated. The amount that a solid sample will expand with increased temperature depends on the size of the sample, the material it is made of, and the amount of temperature rise.

One way to amplify the motion resulting from thermal expansion is to bond two strips of dissimilar metals together, such as copper and iron. If we were to take two equally-sized strips of copper and iron, lay them side-by-side, and then heat both of them to a higher temperature, we would see the copper strip lengthen slightly more than the iron strip. If we bond these two strips of metal together, this differential growth will result in a bending motion that greatly exceeds the linear expansion. This device is called a bi-metal strip. This bending motion is significant enough to drive a pointer mechanism, activate an electromechanical switch, or perform any number of other mechanical tasks, making this a very simple and useful primary sensing element for temperature.

Filled-bulb Systems
Filled system thermometers have been used for decades. They have a useful range of −125°F to 1200°F. Filled-bulb systems exploit the principle of fluid expansion to measure temperature. If a fluid is enclosed in a sealed system and then heated, the molecules in that fluid will exert a greater pressure on the walls of the enclosing vessel.
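The differential expansion that drives a bi-metal strip can be estimated from the linear-expansion relation ΔL = L·α·ΔT. A minimal sketch; the expansion coefficients are typical handbook values assumed here, not figures from the text:

```python
def thermal_expansion(length_m, alpha_per_c, delta_t_c):
    """Linear growth of a heated strip: delta_L = L * alpha * delta_T."""
    return length_m * alpha_per_c * delta_t_c

# Typical coefficients of linear expansion, per degree C (assumed values)
ALPHA_COPPER = 17e-6
ALPHA_IRON = 12e-6

# Two equal 100 mm strips heated by 100 degrees C
growth_cu = thermal_expansion(0.100, ALPHA_COPPER, 100)  # copper grows more
growth_fe = thermal_expansion(0.100, ALPHA_IRON, 100)
differential = growth_cu - growth_fe  # this mismatch bends the bonded strip
```

The individual growths are tiny (fractions of a millimeter), which is why the bonded-strip geometry is needed to turn the mismatch into usable pointer motion.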
By measuring this pressure, and/or by allowing the fluid to expand under constant pressure, we may infer the temperature of the fluid. There are basically four types of filled-bulb temperature sensors in use in industrial applications. They are:

Liquid Filled Systems Temperature Sensors (Class I)
Class I systems use a liquid fill fluid. Here, the volumetric expansion of the liquid drives an indicating mechanism to show temperature as shown. The steel bulb, stem and indicator are completely filled under pressure with a liquid. The system is totally filled to provide a constant volume. Expansion of the fluid in the tube is converted to pressure. This pressure expands the Bourdon tube, which moves the pointer on the scale. The filling fluid is usually an inert hydrocarbon, such as xylene.

Vapor Filled Systems Temperature Sensors (Class II)
The vapor filled system uses a volatile liquid/vapor combination to generate a temperature-dependent fluid expansion. This form of measurement is based on the vapor-pressure curves of the fluid, and measurement occurs at the transition between the liquid and vapor phases. This transition occurs in the bulb, and will move slightly with temperature, but it is the pressure that is affected and causes the measurement. If the temperature is raised, more liquid will vaporize and the pressure will increase. A decrease in temperature will result in condensation of some of the vapor, and the pressure will decrease.

Gas Filled Systems Temperature Sensors (Class III)
Here, the change in pressure with temperature allows us to sense the bulb's temperature. As the volume is kept constant, the pressure varies in direct proportion to the absolute temperature. Gas filled systems do provide a faster response than other filled devices, and as the gas converts temperature directly into pressure, this type is particularly useful in pneumatic systems. Nitrogen is quite commonly used with gas filled systems.
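For the constant-volume Class III gas-filled system, pressure tracking absolute temperature is just Gay-Lussac's law, P1/T1 = P2/T2. A minimal sketch, assuming ideal-gas behavior and illustrative starting conditions:

```python
def gas_bulb_pressure(p1_kpa, t1_c, t2_c):
    """Constant-volume gas law: P2 = P1 * (T2 / T1), temperatures in kelvin."""
    t1_k = t1_c + 273.15
    t2_k = t2_c + 273.15
    return p1_kpa * (t2_k / t1_k)

# A nitrogen-filled bulb at 100 kPa and 25 C, heated to 125 C
p2 = gas_bulb_pressure(100.0, 25.0, 125.0)  # pressure rises with absolute T
```

Note the conversion to kelvin before taking the ratio; using Celsius directly would break the proportionality.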
Mercury Filled Systems Temperature Sensors (Class V)
Mercury expansion systems are different from other liquid filled systems because of the properties of the metal. Mercury is toxic, can affect some industrial processes, and is used less in filled systems. Mercury filled systems provide the widest range of operation (−40°C to 650°C).

Bistate/Phase Change Sensors
These low cost nonelectric sensors are made from heat-sensitive fusible crystalline solids that change decisively from a solid to a liquid with a different color at a fixed temperature depending on the blend of ingredients. They are available as crayons, lacquers, pellets, or labels over a wide range of temperatures from 100 to 3000°F (38 to 1650°C). All these devices undergo a change in color or appearance depending upon the temperature variations. "They are used, for instance, with steam traps – when a trap exceeds a certain temperature, a white dot on a sensor label attached to the trap will turn black. Response time typically takes minutes, so these devices often do not respond to transient temperature changes." The major uses are where a quick check of the temperature of an object is desired, or, in the case of the temperature labels or stickers, a record of whether the object has exceeded a certain temperature.

Electronic Thermometers/Sensors
Thermocouples
A thermocouple is an assembly of two wires of unlike metals joined at one end, designated the hot end. At the other end, referred to as the cold junction, the open circuit voltage is measured. Called the Seebeck voltage, this voltage (electromotive force) depends on the difference in temperature between the hot and the cold junction and the Seebeck coefficient of the two metals.

1.) Peltier Effect - If the junctions of a thermocouple are at the same temperature and a current is passed through the circuit of the thermocouple, HEAT is produced at one junction and ABSORBED at the other.
2.
) Thomson Effect - The absorption or evolution of heat when current is passed through an unequally heated conductor.
3.) Seebeck Effect - When two dissimilar metal wires are joined and their junctions are at different temperatures, an emf or voltage is produced.

When two dissimilar metal wires are joined together at one end, a voltage is produced at the other end that is approximately proportional to temperature. That is to say, the junction of two different metals behaves like a temperature-sensitive battery. This phenomenon provides us with a simple and direct way to electrically infer temperature: simply measure the voltage produced by the junction, and you can tell the temperature of that junction.

Three Laws that Apply to Thermocouples

Law of Intermediate Metals
This law is interpreted to mean that the addition of different metals to a circuit will not affect the voltage the circuit creates, provided the added junctions are at the same temperature as the junctions in the circuit. For example, a third metal such as copper leads may be added to help take a measurement. This is why thermocouples may be used with digital multi-meters or other electrical components. It is also why solder may be used to join metals to form thermocouples.

Law of Homogeneous Materials
This law states that a thermocouple circuit made with a homogeneous wire cannot generate an emf, even if it is at different temperatures and thicknesses throughout. In other words, a thermocouple must be made from at least two different materials in order to generate a voltage. A change in the area of the cross section of a wire, or a change in the temperature in different places in the wire, will not produce a voltage.

Law of Intermediate Temperature
This law allows a thermocouple that is calibrated with a reference temperature to be used with another reference temperature. It also allows extra wires with the same thermoelectric characteristics to be added to the circuit without affecting its total emf.
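The junction voltage described above is, to a first approximation, proportional to the hot/cold junction temperature difference. A minimal sketch using an assumed Seebeck sensitivity of roughly 41 µV/°C (a typical figure for a type K thermocouple; real thermometry uses standardized polynomial reference tables, not a constant):

```python
SEEBECK_UV_PER_C = 41.0  # assumed approximate type K sensitivity, microvolts/C

def thermocouple_mv(t_hot_c, t_cold_c):
    """Approximate Seebeck voltage in millivolts for a junction pair."""
    return SEEBECK_UV_PER_C * (t_hot_c - t_cold_c) / 1000.0

def infer_hot_junction(v_mv, t_cold_c):
    """Invert the linear model to estimate the hot-junction temperature."""
    return t_cold_c + (v_mv * 1000.0) / SEEBECK_UV_PER_C

# 300 C process with a 25 C reference (cold) junction
v = thermocouple_mv(300.0, 25.0)  # roughly 11.3 mV
```

The inversion step is why the cold-junction temperature must be known (or compensated for) in any practical thermocouple readout: the instrument measures only the difference.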
Thermocouple Types Thermocouples exist in many different types, each with its own color codes for the dissimilar-metal wires. Resistance Temperature Detectors (RTD) A Resistance Temperature Detector or simply RTD is a temperature sensor which measures temperature using the principle that the resistance of a metal changes with temperature. For most metals the change in electrical resistance is directly proportional to its change in temperature and is linear over a range of temperatures. This constant factor called the temperature coefficient of electrical resistance is the basis of RTDs. RTDs work on a basic correlation between metals and temperature. As the temperature of a metal increases, the metal's resistance to the flow of electricity increases. Similarly, as the temperature of the RTD resistance element increases, the electrical resistance, measured in ohms (Ω), increases. RTD elements are commonly specified according to their resistance in ohms at zero degrees Celsius (0° C). The most common RTD specification is 100 Ω, which means that at 0° C the RTD element should demonstrate 100 Ω of resistance. Thermistors Like the RTD, the thermistor is also a resistive device that changes its resistance predictably with temperature. Its benefit is a very large change in resistance per degree change in temperature, allowing very sensitive measurements over narrow spans. Due to its very large resistance, lead wire errors are not significant. Difference between RTDs and Thermistors Thermistors are devices made of metal oxide which either increase in resistance with increasing temperature (a positive temperature coefficient) or decrease in resistance with increasing temperature (a negative temperature coefficient). RTDs are devices made of pure metal (usually platinum or copper) which always increase in resistance with increasing temperature. 
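The proportional resistance-temperature relation described above is usually written R(T) = R0·(1 + αT) for an RTD over its linear range. A minimal sketch using common Pt100 figures (R0 = 100 Ω, α ≈ 0.00385 per °C — standard values assumed here, not given in the text):

```python
R0 = 100.0       # Pt100 resistance at 0 C, ohms
ALPHA = 0.00385  # typical platinum temperature coefficient, per C (assumed)

def rtd_resistance(t_c):
    """Linear RTD model: R(T) = R0 * (1 + alpha * T)."""
    return R0 * (1.0 + ALPHA * t_c)

def rtd_temperature(r_ohms):
    """Invert the linear model to recover temperature from resistance."""
    return (r_ohms / R0 - 1.0) / ALPHA

r = rtd_resistance(100.0)  # 138.5 ohms at 100 C
```

A thermistor, by contrast, would need the nonlinear Beta (exponential) model rather than this straight-line fit, which is exactly the linearity difference the text goes on to describe.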
The major difference between thermistors and RTDs is linearity: thermistors are highly sensitive and nonlinear, whereas RTDs are relatively insensitive but very linear.

Pyrometers
Pyrometers, also called radiation thermometers, were invented by Josiah Wedgwood. They are non-contact temperature sensors that measure temperature from the amount of thermal electromagnetic radiation received from a spot on the object of measurement. Pyrometers are mainly divided into two types: a.) Radiation Pyrometers b.) Optical Pyrometers

Pyrometers are used to measure temperatures that are difficult to measure by other means. They are non-contact devices, used to measure temperatures above 1500 degrees Celsius, at which contact devices may melt.

Radiation Pyrometers
A radiation pyrometer, also referred to as an infrared (IR) thermometer, is a noncontact radiant energy detector. Every object in the world radiates IR energy. The amount of radiant energy emitted is proportional to the temperature of an object. Noncontact thermometers measure the intensity of the radiant energy and produce a signal proportional to the target temperature. The physics behind this broadcasting of energy is called Planck's Law of Thermal Radiation.

As shown in the figure, the radiation pyrometer has an optical system, including a lens, a mirror and an adjustable eye piece. The heat energy emitted from the hot body is passed on to the optical lens, which collects it and focuses it on to the detector with the help of the mirror and eye piece arrangement. The detector may either be a thermistor or photomultiplier tubes. Thus, the heat energy is converted to its corresponding electrical signal by the detector and is sent to the output temperature display device.

Optical Pyrometer
Optical pyrometers work on the basic principle of using the human eye to match the brightness of the hot object to the brightness of a calibrated lamp filament inside the instrument.
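The radiant-energy relation behind the radiation pyrometer (Planck's law, integrated over all wavelengths) yields the Stefan-Boltzmann law: total emitted power grows with the fourth power of absolute temperature. A minimal sketch, assuming an ideal blackbody target (emissivity 1):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiant_power(t_c, emissivity=1.0):
    """Power radiated per square meter of surface: P = e * sigma * T^4."""
    t_k = t_c + 273.15
    return emissivity * SIGMA * t_k ** 4

# A furnace-range target radiates enormously more than a warm one:
p_hot = radiant_power(1500.0)
p_warm = radiant_power(100.0)
```

The steep T^4 dependence is what makes noncontact measurement of very hot objects practical: even a modest temperature rise produces a large, easily detected increase in radiant intensity.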
In an optical pyrometer, a brightness comparison is made to measure the temperature. The device compares the brightness produced by the radiation of the object whose temperature is to be measured with that of a reference source. The reference brightness is produced by a lamp whose intensity can be adjusted until it becomes equal to the brightness of the source object.

The radiation from the source is emitted and the optical objective lens captures it. The lens helps in focusing the thermal radiation on to the reference bulb. The observer watches the process through the eye piece and corrects it in such a manner that the reference lamp filament is in sharp focus and the filament is superimposed on the temperature source image. The observer then adjusts the rheostat, which changes the current in the reference lamp and, in turn, its intensity. This change in current can be observed in three different ways:
1. The filament is dark. That is, cooler than the temperature source.
2. The filament is bright. That is, hotter than the temperature source.
3. The filament disappears. Thus, there is equal brightness between the filament and the temperature source.
At this point, the current that flows in the reference lamp is measured, as its value is a measure of the temperature of the radiated light in the temperature source, when calibrated.

Pressure is one of the key thermodynamic parameters. It is an intensive property. Pressure is defined as a ratio between a force and a unit area, perpendicular to the direction of that force, on which the force acts. Mathematically this definition is expressed as:

P = F / A

where P is the pressure, F is the applied force, and A is the area over which the force acts. In its most basic form, pressure is defined as the amount of force being applied to an area. As this force is distributed over a specific area, it acts to change the motion of the surface it presses on.
It is important to remember that a force is simply an influence that causes an object to move, either accelerating or decelerating it. The amount of movement of that object depends upon the amount of force. So why is pressure important? In everyday activities, pressure may not be a concern whatsoever; however, in a process environment, pressure is a key component to keeping a system functional.

PRESSURE OF A FLUID (P)
All fluid molecules are in constant and random motion, called "Brownian motion", due to which a fluid at rest in a vessel exerts a force on all the walls of the vessel with which it is in contact. The total pressure of a fluid at a given point consists of two elements:
1. Static pressure - also referred to as "hydrostatic pressure", the pressure of a fluid at rest.
2. Dynamic pressure - the pressure of a fluid in motion.
Fluid - any substance that does not conform to a fixed shape, such as a liquid or gas.

Static pressure
Defined as the pressure not associated with the fluid's motion, but with its state. It is the pressure which would be indicated by a gauge moving together with the fluid.

Dynamic Pressure
A measure of the kinetic energy of a moving fluid; it depends on the fluid's velocity and density.

PRESSURE MEASUREMENT
At the end of the 16th century, the Italian Galileo Galilei (1564-1642) was granted the patent for a water pump system used in irrigation. Galileo found that 10 meters was the limit to which water would rise in the suction pump, but had no explanation for this phenomenon. Scientists then devoted themselves to finding the cause. In 1643, the Italian physicist Evangelista Torricelli (1608-1647) invented the barometer, with which he could evaluate the atmospheric pressure. His research on mercury columns paved the way for his discovery of the vacuum. Five years later, the French physicist Blaise Pascal used the barometer to show that the air pressure was smaller at the top of mountains.
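The two pressure components defined above can be computed directly: static pressure from the height of a fluid column (P = ρgh) and dynamic pressure from velocity (q = ½ρv²). A minimal sketch, assuming water at a nominal density of 1000 kg/m³:

```python
RHO_WATER = 1000.0  # kg/m^3, assumed nominal density of water
G = 9.81            # m/s^2, standard gravity

def static_pressure(depth_m, rho=RHO_WATER):
    """Hydrostatic pressure of a fluid column: P = rho * g * h (pascals)."""
    return rho * G * depth_m

def dynamic_pressure(velocity_ms, rho=RHO_WATER):
    """Pressure due to fluid motion: q = 0.5 * rho * v^2 (pascals)."""
    return 0.5 * rho * velocity_ms ** 2

p_static = static_pressure(10.0)   # ~98.1 kPa, about one atmosphere
p_dynamic = dynamic_pressure(2.0)  # 2 kPa for water moving at 2 m/s
```

The first result illustrates Galileo's 10-meter suction limit mentioned above: a 10 m column of water weighs roughly as much as one atmosphere can push, so no suction pump can lift water further.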
He also determined the weight of air and called it "pressure". In 1849, Eugène Bourdon was granted the Bourdon Tube patent, used to this day in relative pressure measurements.

Most common types of Pressure Measurement
As a function of the reference, pressure measurement can be classified as: gauge, absolute, and differential or relative.

Absolute pressure: measured in relation to a perfect vacuum, namely, the pressure difference between a given measurement point and vacuum (absolute zero). Normally the ABS notation is used when this quantity is indicated. Example: the absolute pressure applied by the atmosphere at sea level is 760 mmHg.

Differential pressure: the pressure difference measured between two points. When any point other than vacuum or atmosphere is used as the reference, it is a differential pressure. For example, the differential pressure found across an orifice plate.

Gauge pressure: measured in relation to the ambient pressure, namely, in relation to the atmosphere. It is always important to register on the notation that it is a relative measurement. Example: 10 kgf/cm² relative (gauge) pressure.

Manometers
A very simple device used to measure pressure is the manometer: a fluid-filled tube where an applied gas pressure causes the fluid height to shift proportionately. As you can see, a manometer is fundamentally an instrument of differential pressure measurement, indicating the difference between two pressures by a shift in liquid column height.

Working Principle of Manometer:
The term manometer is derived from the ancient Greek words 'manós', meaning thin or rare, and 'métron', meaning measure. A manometer works on the principle of hydrostatic equilibrium and is used for measuring the pressure (static pressure) exerted by a still liquid or gas. Hydrostatic equilibrium states that the pressure at any point in a fluid at rest is just the weight of the overlying fluid per unit area.
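Hydrostatic equilibrium lets a manometer convert a column-height difference directly into pressure: Δp = ρ·g·Δh. A minimal sketch, assuming a water-filled U-tube and an approximate density for mercury:

```python
G = 9.81  # m/s^2, standard gravity

def manometer_dp(height_diff_m, rho_kg_m3=1000.0):
    """Differential pressure indicated by a column height shift (pascals)."""
    return rho_kg_m3 * G * height_diff_m

def column_height(dp_pa, rho_kg_m3=1000.0):
    """Invert: how far the columns shift for a given pressure difference."""
    return dp_pa / (rho_kg_m3 * G)

dp = manometer_dp(0.25)  # a 250 mm water column reads ~2.45 kPa
# One atmosphere supports roughly 0.76 m of mercury (density ~13,560 kg/m^3)
h_hg = column_height(101325.0, 13560.0)
```

The mercury result is Torricelli's barometer in miniature: the denser the fill fluid, the shorter the column needed for a given pressure, which is why mercury rather than water was used for barometers.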
In its simplest form, a manometer is a U-shaped tube containing an incompressible fluid like water or mercury. It is inexpensive and does not need calibration.

Manometer Types
Manometers come in a variety of forms, as follows:
1. U-Tube Manometers
2. Well Manometers
3. Raised-Well Manometers
4. Inclined Manometers

U-Tube Manometers
It consists of a glass tube bent like the letter 'U'. In this type of manometer, a column of liquid is balanced by another column of the same or another liquid. One end of the U-tube is attached to the point where pressure is to be measured, while the other end is open to atmospheric pressure.

Well Manometers
As shown in the figure, the well area is larger than the area of the tube, denoted by A. The rise in liquid level in the tube is considered, while that in the well is ignored. If p1 and p2 are absolute pressures applied as shown in the figure.

Inclined Manometers
Similar to a well type manometer in construction, the only difference being that the vertical column limb is inclined at an angle θ. Inclined manometers are used for accurate measurement of small pressures.

Sphygmomanometer and Digital Manometer
A sphygmomanometer, a type of manometer, is commonly used to check blood pressure in humans. The systolic pressure reading is the mercury reading on the pressure gauge when the pulse is first heard, while the diastolic pressure reading is taken when the pulse can first no longer be heard. A digital manometer uses a microprocessor and pressure transducer to sense slight changes in pressure. It gives the pressure readout on a digital screen. It measures differential pressure across two inputs. An analog/digital output in proportion to the instantaneous pressure can be obtained.

Mechanical Pressure Elements
Mechanical pressure-sensing elements include the bellows, the diaphragm, and the bourdon tube. Each of these devices converts a fluid pressure into a force.
Bellows
Bellows resemble an accordion constructed from metal instead of fabric. Increasing pressure inside a bellows unit causes it to elongate. They are thin-walled metallic cylinders, with deep convolutions, of which one end is sealed and the other end remains open. The closed end can move freely while the open end is fixed.

Bellows Principle of Operation:
When pressure is applied to the closed end, the bellows will be compressed. The closed end will move upwards and the link, which is the rod between the closed end of the bellows and the transmission mechanism, will go up and rotate the pointer.

Diaphragms
A diaphragm is nothing more than a thin disk of material which bows outward under the influence of a fluid pressure. Many diaphragms are constructed from metal, which gives them spring-like qualities. Some diaphragms are intentionally constructed out of materials with little strength, such that there is negligible spring effect. These are called slack diaphragms, and they are used in conjunction with external mechanisms that produce the necessary restraining force to prevent damage from applied pressure.

Diaphragm Principle of Operation:
A fluid in contact with a flexible membrane pushes on that membrane, bending it. The pressure is a measure of how hard it pushes. When the outside pressure is low, the reference pressure bends the membrane out. As the outside pressure increases, it pushes back on the membrane, bending it back the other way. By measuring how far the membrane bends, the gauge can detect the outside pressure.

Bourdon Tubes
Bourdon tubes are made of spring-like metal alloys bent into a circular shape. Under the influence of internal pressure, a bourdon tube "tries" to straighten out into its original shape before being bent at the time of manufacture. The Bourdon tube is the namesake of Eugène Bourdon, a French watchmaker and engineer who invented the Bourdon gauge in 1849.
Over the years, the Bourdon tube has entrenched itself as the elastic element in most pressure gauges in application today.

Bourdon Tube Working Principle:
The Bourdon pressure gauge operates on the principle that, when pressurized, a flattened tube tends to straighten or regain its circular form in cross-section. When a gauge is pressurized, the Bourdon tube creates the dial tip travel that enables pressure measurement. The higher the pressure requirement of the application, the stiffer the Bourdon tube needs to be.

Forms of Bourdon Tube: C-type, helical type, and spiral type.

Electrical Pressure Elements
Several different technologies exist for the conversion of fluid pressure into an electrical signal response. These technologies form the basis of electronic pressure transmitters: devices designed to measure fluid pressure and transmit that information via electrical signals such as the 4-20 mA analog standard, or in digital form such as HART or FOUNDATION Fieldbus.

Piezoresistive Sensors
Piezoresistive means "pressure-sensitive resistance," or a resistance that changes value with applied pressure. The strain gauge is a classic example of a piezoresistive element. A strain gauge is a sensor whose resistance varies with applied force; it converts force, pressure, tension, weight, etc., into a change in electrical resistance which can then be measured. A strain gauge is an elastically deformable transducer that transforms an applied force or a mechanical displacement into a change in resistance. It is the underlying mechanism for the working of a strain gauge load cell.

Strain Gauge Working Principle:
When external forces are applied to a stationary object, stress and strain are the result. Stress is defined as the object's internal resisting forces, and strain is defined as the displacement and deformation that occur.
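A strain gauge's resistance change is conventionally characterized by its gauge factor, GF = (ΔR/R)/ε. A minimal sketch, assuming a typical metallic-foil gauge factor of about 2 and a 350 Ω nominal resistance (both assumptions; the text gives no values):

```python
GAUGE_FACTOR = 2.0  # assumed typical value for metallic foil gauges
R_NOMINAL = 350.0   # ohms, a common unstrained gauge resistance (assumed)

def resistance_change(strain):
    """delta_R = GF * strain * R.  Strain is dimensionless (e.g. 1e-3)."""
    return GAUGE_FACTOR * strain * R_NOMINAL

def strain_from_resistance(delta_r_ohms):
    """Invert the gauge-factor relation to recover strain."""
    return delta_r_ohms / (GAUGE_FACTOR * R_NOMINAL)

# 1000 microstrain (0.1% elongation) on a 350-ohm gauge
dr = resistance_change(1e-3)  # only a 0.7 ohm change
```

The tiny resistance change for a substantial strain is why strain gauges are almost always read with a Wheatstone bridge circuit rather than a plain ohmmeter.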
Applications of the Strain Gauges
Strain gauges are used for two main purposes:
1) Measurement of strain: Whenever any material is subjected to high loads, it comes under strain, which can be measured easily with strain gauges. The strain can also be used to carry out stress analysis of the member.
2) Measurement of other quantities: The principle of change in resistance due to applied force can also be calibrated to measure a number of other quantities like force, pressure, displacement, acceleration, etc., since all these parameters are related to each other.

Differential capacitance sensors
Another common electrical pressure sensor design works on the principle of differential capacitance. Like the strain gauge, differential capacitance sensors use a change in electrical characteristics to infer pressure; here a change in capacitance is used to infer the pressure measurement. A capacitor is a device that stores electrical charge. It consists of two metal plates separated by an electrical insulator. The metal plates are connected to an external electrical circuit through which electrical charge can be transferred from one metal plate to the other. In this design, the sensing element is a taut metal diaphragm located equidistant between two stationary metal surfaces, forming a complementary pair of capacitances.
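The complementary capacitance pair formed by the sensing diaphragm can be modeled with the parallel-plate relation C = εA/d: as the diaphragm deflects toward one stationary plate, one gap shrinks while the other grows. A minimal sketch (the plate geometry values are illustrative assumptions, and a vacuum gap is assumed instead of the real fill fluid):

```python
EPSILON = 8.854e-12  # permittivity of free space, F/m (vacuum gap assumed)
AREA = 1e-4          # plate area, m^2 (assumed)
GAP = 1e-4           # rest gap between diaphragm and each plate, m (assumed)

def capacitance(gap_m):
    """Parallel-plate capacitance: C = epsilon * A / d."""
    return EPSILON * AREA / gap_m

def differential_ratio(deflection_m):
    """(C1 - C2) / (C1 + C2) for a diaphragm displaced by deflection_m.

    Algebraically this ratio reduces to deflection/gap, i.e. it is
    linear in diaphragm displacement - the property transmitters exploit.
    """
    c1 = capacitance(GAP - deflection_m)  # gap shrinks -> C1 rises
    c2 = capacitance(GAP + deflection_m)  # gap grows -> C2 falls
    return (c1 - c2) / (c1 + c2)
```

Taking the normalized difference of the two capacitances, rather than either one alone, cancels the nonlinearity of each individual C = εA/d curve.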
An electrically insulating fill fluid (usually a liquid silicone compound) transfers motion from the isolating diaphragms to the sensing diaphragm, and also doubles as an effective dielectric for the two capacitors. A classic example of a pressure instrument based on the differential capacitance sensor is the Rosemount model 1151 differential pressure transmitter, shown in assembled form in the following photograph. The concentric corrugations in the metal of the diaphragm allow it to easily flex with applied pressure, transmitting process fluid pressure through the silicone fill fluid to the taut sensing diaphragm inside the differential capacitance cell.

Differential pressure transmitters
One of the most common, and most useful, pressure measuring instruments in industry is the differential pressure transmitter. This device senses the difference in pressure between two ports and outputs a signal representing that pressure in relation to a calibrated range. Regardless of make or model, every differential pressure ("DP", "d/p", or ΔP) transmitter has two pressure ports to sense different process fluid pressures. One of these ports is labeled "high" and the other is labeled "low". This labeling does not necessarily mean that the "high" port must always be at a greater pressure than the "low" port. What these labels represent is the effect that a pressure at that point will have on the output signal.

The universe doesn't give you what you ask for with your thoughts; it gives you what you demand with your actions. - Steve Maraboli

-end-

Level Measurement
Learning Objectives: Upon completion of this chapter, the student should be able to:
Know and understand concepts about level measurement.
List and explain the different types of level measuring instruments and their principles of operation.
Level
The measurement of level is defined as the "determination of the position of an existing interface between two media." These media are usually fluids, but they may be solids or a combination of a solid and a fluid. The interface can exist between a liquid and its vapour, two liquids, or a granular or fluidized solid and gas. Liquid level was probably the first of the process variables to be measured and controlled. History records early examples of level control in dams used for the storage and orderly release of water for agricultural use.

Reasons for Level Measurement
Safety - in boilers, a dangerous state can develop if the water level varies outside certain limits.
Economy - good level control of solids is also desirable; excessive build-up in hoppers can be expensive to clear.
Monitoring - monitoring of level in bulk storage tanks and process vessels is necessary in order that:
o Plant efficiency may be assessed and optimized.
o Stock records may be kept.
o Costs may be correctly allocated.

In the oil and gas industries, level measurement is necessary to achieve the following objectives:
1. Compute tank inventories of hydrocarbon liquids and utility liquids.
2. Protect equipment such as columns, compressors, turbines and pumps from damage.
3. Protect operating and maintenance personnel against injury resulting from hydrocarbon, corrosive or toxic spillage.
4. Protect the environment from the release of objectionable liquids into rivers and the sea.
5. Control phase separation processes and product loading operations.

METHODS OF LEVEL MEASUREMENT
Two methods are used to measure level: direct and indirect.

DIRECT METHOD
Direct level measurement is simple, almost straightforward, and economical; it uses a direct measurement of the distance (usually height) from the datum line, and is used primarily for local indication. It is now easily adapted to signal transmission techniques for remote indication or control.
Direct method examples are dip sticks, sight glasses and floats.

Dip Sticks and Lead Lines
Flexible lines fitted with end weights, called chains or lead lines, have been used for centuries by seafaring men to gauge the depth of water under their ships. Steel tapes having plumb-bob-like weights, stored conveniently in a reel, are still used extensively for measuring level in fuel oil bunkers and petroleum storage tanks. Crude as this method seems, it is accurate to about 0.1% with ranges up to about 20 ft.

Although the dip stick and lead line methods of level measurement are unrivalled in accuracy, reliability, and dependability, there are drawbacks to this technique. First, it requires an action to be performed, thus causing the operator to interrupt his duty to carry out this measurement. Another limitation to this measuring principle is the inability to successfully and conveniently measure level values in pressurized vessels. These disadvantages limit the effectiveness of these means of visual level measurement.

Sight Glass
Another simple method is called the sight glass or level glass. It is quite straightforward in use; the level in the glass seeks the same position as the level in the tank. It provides a continuous visual indication of liquid level in a process vessel or a small tank, and is more convenient than a dip stick, dip rod or manual gauging tape.

Level gauges (sightglasses)
Level gauges are perhaps the simplest indicating instrument for liquid level in a vessel. They are often found in industrial level-measurement applications, even when another level-measuring instrument is present, to serve as a direct indicator for an operator to monitor in case there is doubt about the accuracy of the other instrument. The level gauge, or sightglass, is to liquid level measurement as the manometer is to pressure measurement: a very simple and effective technology for direct visual indication of process level.
Chain or Float Gauge
The visual means of level measurement previously discussed are rivaled in simplicity and dependability by float-type measurement devices. Many forms of float-type instruments are available, but each uses the principle of a buoyant element that floats on the surface of the liquid and changes position as the liquid level varies. Many methods have been used to give an indication of level from a float position, with the most common being a float and cable arrangement.

A person lowers a float down into a storage vessel using a flexible measuring tape, until the tape goes slack due to the float coming to rest on the material surface. At that point, the person notes the length indicated on the tape (reading off the lip of the vessel access hole). This distance is called the ullage, being the distance from the top of the vessel to the surface of the process material. Fillage of the vessel may be determined by subtracting this "ullage" measurement from the known height of the vessel. Obviously, this method of level measurement is tedious and may pose risk to the person conducting the measurement. If the vessel is pressurized, this method is simply not applicable.

INDIRECT METHODS
Indirect or inferred methods of level measurement depend on the material having a physical property which can be measured and related to level. Many physical and electrical properties have been used for this purpose and are well suited to producing proportional output signals for remote transmission. This method employs even the very latest technology in its measurement.

Buoyancy/Displacer
Uses Archimedes' principle, which states that the buoyant force produced when a body is submerged in a liquid of constant density is equal to the weight of the fluid displaced; this means that when a body is fully or partially immersed in any liquid, it is reduced in weight by an amount equal to the weight of the volume of liquid displaced.
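Archimedes' principle translates directly into displacer arithmetic: apparent weight = dry weight − weight of liquid displaced so far. A minimal sketch using the chapter's spring-scale figures (a 3 lb displacer that loses 2 lb when fully submerged):

```python
DRY_WEIGHT_LB = 3.0     # displacer weight in air, i.e. at 0% level
FULL_BUOYANCY_LB = 2.0  # weight of liquid displaced at 100% level

def apparent_weight(level_fraction):
    """Spring-scale reading as liquid rises along the displacer (0.0-1.0)."""
    return DRY_WEIGHT_LB - FULL_BUOYANCY_LB * level_fraction

def level_from_weight(weight_lb):
    """Invert: infer the level fraction from the measured net weight."""
    return (DRY_WEIGHT_LB - weight_lb) / FULL_BUOYANCY_LB

w_half = apparent_weight(0.5)  # 2 lb at 50% level
```

Because the displacer has a uniform cross-section, the buoyant force (and therefore the scale reading) varies linearly with the submerged fraction, which is what makes the inversion a simple straight-line calculation.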
In diagram A, the displacer is suspended by a spring scale that shows the weight of the displacer in air. This represents '0%' in the level measurement application; the full weight of the displacer (3 lb) is entirely supported by the spring. In diagram B, the water is at a level that represents '50%' of the full measurement span. Note that the scale indicates a weight of 2 lb; the loss in weight of the displacer (1 lb) is equal to the weight of the volume of water displaced. When the water level is increased to full scale (diagram C), the net weight of the displacer is 1 lb, which represents '100%' of the measurement: it has lost 2 lb as the water level rose along the longitudinal axis of the displacer. We can see that the apparent weight of the displacer decreases linearly as the liquid level rises in the chamber where the displacer is immersed.

Hydrostatic Pressure
A vertical column of fluid generates a pressure at the bottom of the column owing to the action of gravity on that fluid. The greater the vertical height of the fluid, the greater the pressure, all other factors being equal. This principle allows us to infer the level (height) of liquid in a vessel from a pressure measurement. A vertical column of fluid exerts a pressure due to the column's weight. The relationship between column height and fluid pressure at the bottom of the column is constant for any particular fluid (density) regardless of vessel width or shape.

Air Bubblers
One of the oldest and simplest methods of level measurement is called the air bubbler, air purge, or dip tube. With the supply air blocked, the water level in the tube will be equal to that in the tank. When the air pressure from the regulator is increased until the water in the tube is displaced by air, the air pressure in the tube is equal to the hydrostatic head of the liquid above the tube's open end. The pressure set at the regulator must overcome the liquid head and bubble up through the measured liquid.
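The hydrostatic relationship P = ρgh can be inverted to infer level from a pressure reading taken at the bottom of a vessel or at a bubbler dip tube. A minimal sketch, assuming water near room temperature:

```python
G = 9.81  # m/s^2, standard gravity

def depth_from_pressure(gauge_pressure_pa: float, density_kg_m3: float = 998.0) -> float:
    """Invert P = rho * g * h to get the liquid height above the measurement point."""
    return gauge_pressure_pa / (density_kg_m3 * G)

# 24.5 kPa of hydrostatic head corresponds to roughly 2.5 m of water:
print(round(depth_from_pressure(24500.0), 2))  # 2.5
```

Note that the inferred level depends on the assumed density, which is why hydrostatic instruments must be recalibrated when the process fluid changes.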
This will be indicated by a continuous flow, which is evidenced by the formation of bubbles rising to the level of the liquid in the tank. The deeper you submerge the straw, the harder it becomes to blow bubbles out the end with your breath. The hydrostatic pressure of the water at the straw's tip becomes translated into air pressure in your mouth as you blow, since the air pressure must just exceed the water's pressure in order to escape out the end of the straw. So long as the flow rate of air is modest (no more than a few bubbles per second), the air pressure will be very nearly equal to the water pressure, allowing measurement of water pressure (and therefore water depth) at any point along the length of the air tube. If we lengthen the straw and measure pressure at all points throughout its length, it will be the same as the pressure at the submerged tip of the straw. This is how industrial "bubbler" level measurement systems work: a purge gas is slowly introduced into a "dip tube" submerged in the process liquid, so that no more than a few bubbles per second of gas emerge from the tube's end. Gas pressure inside all points of the tubing system will (very nearly) equal the hydrostatic pressure of the liquid at the tube's submerged end. Any pressure-measuring device tapped anywhere along the length of this tubing system will sense this pressure and be able to infer the depth of the liquid in the process vessel without having to directly contact the process liquid.

Echo-Based Level Instruments
A completely different way of measuring liquid level in vessels is to bounce a traveling wave off the surface of the liquid – typically from a location at the top of the vessel – using the time-of-flight for the waves as an indicator of distance, and therefore an indicator of liquid height inside the vessel.
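The time-of-flight idea can be reduced to simple arithmetic. This sketch assumes an ultrasonic instrument and a nominal speed of sound in air; a radar instrument would use the speed of light instead:

```python
SOUND_SPEED_AIR = 343.0  # m/s, assumed speed of sound in air at ~20 degC

def level_from_echo(vessel_height_m: float, round_trip_s: float,
                    wave_speed_m_s: float = SOUND_SPEED_AIR) -> float:
    """One-way distance (ullage) is v*t/2; fillage is vessel height minus ullage."""
    ullage = wave_speed_m_s * round_trip_s / 2.0
    return vessel_height_m - ullage

# A 10 m vessel and a 23.3 ms round trip -> about 6 m of product:
print(round(level_from_echo(10.0, 0.0233), 1))  # 6.0
```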
Echo-based level instruments enjoy the distinct advantage of immunity to changes in liquid density, a factor crucial to the accurate calibration of hydrostatic and displacement level instruments. In this regard, they are quite comparable with float-based level measurement systems.

Ultrasonic Level Instrument
Ultrasonic level instruments measure the distance from the transmitter (located at some high point) to the surface of a process material located farther below using reflected sound waves. The frequency of these waves extends beyond the range of human hearing, which is why they are called ultrasonic. The time-of-flight for a sound pulse indicates this distance, and is interpreted by the transmitter electronics as process level. These transmitters may output a signal corresponding either to the fullness of the vessel (fillage) or the amount of empty space remaining at the top of a vessel (ullage).

Radar Level Instrument
Radar level instruments measure the distance from the transmitter (located at some high point) to the surface of a process material located farther below in much the same way as ultrasonic transmitters – by measuring the time-of-flight of a traveling wave. The fundamental difference between a radar instrument and an ultrasonic instrument is the type of wave used: radio waves instead of sound waves. Radio waves are electromagnetic in nature and very high frequency. Sound waves are mechanical vibrations of much lower frequency than radio waves.

Laser Level Instrument
The least-common form of echo-based level measurement is laser, which uses pulses of laser light reflected off the surface of a liquid to detect the liquid level. Perhaps the most limiting factor with laser measurement is the necessity of having a sufficiently reflective surface for the laser light to "echo" off.
Many liquids are not reflective enough for this to be a practical measurement technique, and the presence of dust or thick vapors in the space between the laser and the liquid will disperse the light, weakening the light signal and making the level more difficult to detect.

Magnetostrictive Level Instrument
In a magnetostrictive level instrument, liquid level is sensed by a lightweight, donut-shaped float containing a magnet. This float is centered around a long metal rod called a waveguide, hung vertically in the process vessel (or hung vertically in a protective cage like the type used for displacement-style level instruments) so that the float may rise and fall with process liquid level. The magnetic field from the float's magnet at that point, combined with the magnetic field produced by an electric current pulse periodically sent through the rod, generates a torsional stress pulse at the precise location of the float.

Weight-based Level Instrument
Weight-based level instruments sense process level in a vessel by directly measuring the weight of the vessel. If the vessel's empty weight (tare weight) is known, process weight becomes a simple calculation of total weight minus tare weight. Obviously, weight-based level sensors can measure both liquid and solid materials, and they have the benefit of providing inherently linear mass storage measurement. Load cells are typically the primary sensing element of choice for detecting vessel weight.

Capacitance Level Instrument
Capacitance is the property of a circuit that stores electrons and thus opposes a change in voltage in the circuit. A capacitor is an electrical component that consists of two conductors separated by a dielectric or insulator. The capacitance value of a capacitor is measured in farads (F), and the value is determined by the area of the conductors (usually called plates), the distance between the plates, and the dielectric constant of the insulator between the plates.
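For an ideal parallel-plate capacitor the relationship is C = ε0·εr·A/d, which shows why a rising liquid (whose dielectric constant exceeds that of the vapor above it) increases the measured capacitance. A sketch with assumed, illustrative dimensions:

```python
EPS0 = 8.854e-12  # F/m, permittivity of free space

def plate_capacitance(area_m2: float, gap_m: float, rel_permittivity: float) -> float:
    """C = eps0 * eps_r * A / d for an ideal parallel-plate capacitor."""
    return EPS0 * rel_permittivity * area_m2 / gap_m

# Same geometry, water (eps_r ~ 80) versus air (eps_r ~ 1) as the dielectric:
c_water = plate_capacitance(0.01, 0.005, 80.0)
c_air = plate_capacitance(0.01, 0.005, 1.0)
print(c_water > c_air)  # True: more liquid between the plates means more capacitance
```

A real capacitance probe is a probe rod and the vessel wall rather than flat plates, but the same proportionality to dielectric constant drives the level reading.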
Review of Fundamental Principles

Pascal's principle: changes in fluid pressure are transmitted evenly throughout an enclosed fluid volume. Relevant to pressure measurement, as fluid pressure in all parts of an enclosed system will experience the same changes in pressure.

Hydrostatic pressure: fluids having substantial weight generate pressure proportional to their density and to their vertical height (P = γh and P = ρgh). Relevant to pressure offsets generated in vertical spans of impulse or capillary tubing, causing a pressure instrument to register more or less pressure than that at the process vessel connection.

Archimedes' principle: the buoyant force experienced by an object submerged in liquid is equal to the weight of the fluid that object displaces, which is equal to the volume displaced multiplied by the weight density of the fluid (Fbuoyant = γV). Relevant to displacer-type instruments, which work by sensing the buoyant force exerted on an object as liquid rises around it.

Time, velocity, and distance: x = vt, describing the relationship between velocity (v), time of travel (t), and distance traveled (x). Relevant to all types of "echo" level instruments, where the travel time of a wave is used to measure distance.

Flow Measurement
Learning Objectives: Upon completion of this chapter, the student should be able to:
Know and understand concepts about flow measurement.
List and explain the different types of flow measuring instruments and their principles of operation.

Flow
"Flow" is defined as the volume or mass quantity of fluid that flows through the section of a pipe or channel per unit time. It may refer to volumetric flow (the number of fluid volumes passing by per unit time), mass flow (the number of fluid mass units passing by per unit time), or even standardized volumetric flow (the number of gas volumes flowing, supposing different pressure and temperature values than what the actual process line operates at).
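The distinction between volumetric and mass flow drawn above is just a density factor; a minimal sketch:

```python
def mass_flow_kg_s(volumetric_flow_m3_s: float, density_kg_m3: float) -> float:
    """Mass flow = density * volumetric flow."""
    return density_kg_m3 * volumetric_flow_m3_s

# 0.02 m^3/s of water (~998 kg/m^3) is about 19.96 kg/s:
print(round(mass_flow_kg_s(0.02, 998.0), 2))  # 19.96
```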
The major factors affecting the flow of fluids through pipes are:
the velocity of the fluid;
the friction of the fluid in contact with the pipe;
the viscosity of the fluid;
the density of the fluid.

Fluid velocity depends on the head pressure that forces the fluid through the pipe. The greater the head pressure, the faster the fluid flow rate (all other factors remaining constant), and consequently, the greater the volume of flow. Pipe size also affects the flow rate. For example, doubling the diameter of a pipe increases the potential flow rate by a factor of four. Pipe friction reduces the flow rate of fluids through pipes and is, therefore, considered a negative factor. Because of the friction of a fluid in contact with a pipe, the flow rate of the fluid is slower near the walls of the pipe than at the center. The smoother, cleaner, and larger a pipe is, the less effect pipe friction has on the overall fluid flow rate. Viscosity, or the molecular friction within a fluid, negatively affects the flow rate of fluids. Viscosity and pipe friction decrease the flow rate of a fluid near the walls of a pipe. Viscosity increases or decreases with changing temperature, but not always as might be expected. In liquids, viscosity typically decreases with increasing temperature. However, in some fluids viscosity can begin to increase above certain temperatures. Generally, the higher a fluid's viscosity, the lower the fluid flow rate (other factors remaining constant). Viscosity is measured in units of centipoise. Another type of viscosity, called kinematic viscosity, is measured in units of centistokes. It is obtained by dividing the dynamic viscosity in centipoise by the fluid's specific gravity. Density affects flow rates in that a denser fluid requires more head pressure to maintain a desired flow rate.
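The centistokes conversion described above is a one-line calculation; the water values below are approximate, illustrative figures:

```python
def kinematic_viscosity_cst(dynamic_viscosity_cp: float, specific_gravity: float) -> float:
    """Kinematic viscosity (cSt) = dynamic viscosity (cP) / specific gravity."""
    return dynamic_viscosity_cp / specific_gravity

# Water at ~20 degC: about 1.0 cP and SG ~1.0 -> about 1.0 cSt
print(kinematic_viscosity_cst(1.0, 1.0))  # 1.0
```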
Also, the fact that gases are compressible, whereas liquids essentially are not, often requires that different methods be used for measuring the flow rates of liquids, gases, or liquids with gases in them. It has been found that the most important flow factors can be correlated together into a dimensionless parameter called the Reynolds number.

Physical Nature of Flow
Laminar and Turbulent Fluid Flow (Reynolds number)
Laminar flow refers to an orderly motion in which every particle of the fluid moves parallel to the pipe axis. However, the fluid flowing close to the wall slows down due to friction and viscosity. As the fluid speeds up further, the flow becomes turbulent. The Reynolds number (Re) tells us whether the flow is laminar or turbulent:
If less than 2000, it is laminar.
If more than 4000, it is turbulent.

Importance of Flow Measurement
Measurement of flow, whether of a liquid or a gas, is commonly a critical parameter in many processes. In most operations it is important to know that the right fluid is at the right place at the right time. Some critical applications require the ability to conduct accurate flow measurements to ensure product quality. Health and safety is always an important factor when working with liquids and gases; investment in ensuring your team can operate in a safe and productive environment is very important, and measuring flow and pressure can provide this security to the process and personnel. Fluid flow metering systems provide vital information for the following purposes:
Production Planning - the quantities of product supplied to customers generally vary according to seasonal demand. Usually an average rate of production is planned per calendar day, which takes into account any periods of shutdown necessary for maintenance and inspection.
Product Quality - flow controllers are necessary in the proportional blending of intermediate products to produce on-specification finished products of consistent quality.
Control of Process - sometimes flow meters are used for control of other main process variables. For example, in a separator column, liquid levels are kept constant by varying the flow rate of the process fluid passing through the column, and column pressure is likewise kept constant by varying the flow rate of the cooling medium.

A. Pressure-based Flowmeters
Differential pressure flow instruments create a differential pressure in a fluid flowing through a pipe which can be measured and presented in terms of rate of flow. A restriction placed in the line of a flowing fluid produces a differential pressure across the restriction element, and the flow rate is proportional to the square root of that differential pressure. The Bernoulli equation finds its major use in this type of flow measurement. Flow meters that employ this principle are orifice plates, venturis, Pitot tubes, and flow nozzles.

Orifice Plates
An orifice plate is simply a disc with a hole. The orifice plate functions as the primary element; it creates a restriction as well as a differential pressure between the upstream and downstream sides. The pressure on the upstream side is higher than the pressure on the downstream side, and this pressure difference varies with the velocity and rate of flow of the fluid. The bore or hole in the orifice is normally concentric, but not always; some are eccentric or at the top of the plate, especially if the fluid contains a lot of dissolved gases, to prevent gas build-up at the plate.

Orifice Plate Taps
Taps are where the impulse lines are joined to the orifice plate carrier. There are several different tap arrangements; the most common are described below. Flange taps are the most popular tap location for orifice meter runs on large pipes.
Flanges may be manufactured with tap holes pre-drilled and finished before the flange is even welded to the pipe, making this a very convenient pressure tap configuration. Most of the other tap configurations require drilling into the pipe after installation, which is not only labor-intensive, but may possibly weaken the pipe at the locations of the tap holes. Vena contracta taps offer the greatest differential pressure for any given flow rate, but require precise calculations to properly locate the downstream tap position. Radius taps are an approximation of vena contracta taps for large pipe sizes (one-half pipe diameter downstream for the low-pressure tap location). An unfortunate characteristic of both these taps is the requirement of drilling through the pipe wall. Not only does this weaken the pipe, but the practical necessity of drilling the tap holes in the installed location rather than in a controlled manufacturing environment means there is considerable room for installation error. Corner taps must be used on small pipe diameters where the vena contracta is so close to the downstream face of the orifice plate that a downstream flange tap would sense pressure in the highly turbulent region (too far downstream). Corner taps obviously require special (i.e. expensive) flange fittings, which is why they tend to be used only when necessary. Care should be taken to avoid measuring downstream pressure in the highly turbulent region following the vena contracta. This is why the pipe tap (also known as full-flow tap) standard calls for a downstream tap location eight pipe diameters away from the orifice: to give the flow stream room to stabilize for more consistent pressure readings.

Types of Orifice Plates
1. Square-edge Orifice Plate: Concentric, Eccentric, and Segmental Orifice Plates
2.
Non-square-edge Orifice Plate: Quadrant-edge and Conical-entrance Orifice Plates

Venturi Tubes
A venturi tube is a pipe purposefully narrowed to create a region of low pressure. If the fluid going through the venturi tube is a liquid under relatively low pressure, we may vividly show the pressure at different points in the tube by means of piezometers, which are transparent tubes allowing us to view liquid column heights. The greater the height of the liquid column in a piezometer, the greater the pressure at that point in the flowstream. The classic venturi tube pioneered by Clemens Herschel in 1887 has been adapted in a variety of forms broadly classified as flow tubes.

Variations of the Venturi Tube: flow nozzle, V-cone, and segmental wedge.
A flow nozzle is designed to be clamped between the faces of two pipe flanges in a manner similar to an orifice plate. The goal here is to achieve simplicity of installation approximating that of an orifice plate while improving performance (less permanent pressure loss) over orifice plates. The V-cone may be thought of as a venturi tube or orifice plate in reverse: instead of narrowing the tube's diameter to cause fluid acceleration, fluid must flow around a cone-shaped obstruction placed in the middle of the tube. The tube's effective area will be reduced by the presence of this cone, causing fluid to accelerate through the restriction just as it would through the throat of a classic venturi tube. Segmental wedge elements are special pipe sections with wedge-shaped restrictions built in. These devices are useful for measuring the flow rates of slurries, especially when pressure is sensed by the transmitter through remote-seal diaphragms (to eliminate the possibility of impulse tube plugging).

Pitot Tubes
The Pitot tube senses pressure as the fluid stagnates (comes to a complete stop) against the open end of a forward-facing tube. The Pitot tube is, in principle, a variable-head velocity measuring device.
It consists of two concentric tubes bent at a right angle. The inner tube faces the impinging flow and hence senses both the static and dynamic pressures, while the outer tube measures the static pressure alone. The tube lying along the flow axis, with an open end facing into the flow, is called the impact probe or tip. The second tube resides at the pipe wall and has a hole tangential to the flow; it is called the static probe.

B. Turbine Meters
The turbine meter is very commonly used for measuring condensate, crude oil, and diesel. The flow of the liquid causes the rotor to spin at an angular velocity proportional to the velocity of the liquid. The speed of the rotor is detected by a pick-up on the outside of the tube, usually an electromagnetic detector, providing a pulsed electrical signal proportional to flow rate. The rotor is driven by a small pressure head drop across the turbine. There are some mechanical friction effects, but these are negligible except at low flows. Turbine meters are generally used for liquid flow only. They are delicate and do not tolerate sudden high flows caused by gas pockets or sudden valve openings.

C. Positive Displacement Meter
This form of flowmeter divides the flowing fluid into known volume packets. These devices trap a known volume of fluid and allow it to pass from meter inlet to outlet; the number of trapped volumes passing through the meter is counted to obtain the total volume passed. If the volume delivered over a particular time is monitored, the volume flow rate is established. The term displacement means that the fluid flowing through the meter replaces (displaces) the volume of fluid that flowed through the meter immediately before.

D. Magnetic Flowmeter
This type of flow meter uses the principle of induced voltage/current in accordance with Faraday's law and Lenz's law. Two metal electrodes are fitted into the wall of the tubing, flush with the inner wall, at opposite sides of the pipe.
Two specially shaped magnetic coils are then attached to the pipe to produce a uniform magnetic field at right angles to the pipe. The meter works by using the flowing liquid as a conductor moving across the meter's magnetic field. A voltage is induced across the moving liquid, and the amplitude of this voltage is proportional to the velocity of the liquid and the strength of the magnetic field. This induced voltage is fed to the measuring amplifier by the electrode pair. The magnetic flow meter has no moving parts, offers no restriction to the flow, and causes no pressure drop. Its accuracy does not depend on viscosity, since it measures by volume, so it can be used for highly viscous slurries or liquids with varying viscosities.

E. Ultrasonic Flowmeter
The term "ultrasonic" describes pressure waves at frequencies higher than the human ear can detect. These waves travel through the fluid at the speed of sound in that fluid. If an ultrasonic beam is transmitted across a pipeline at an angle to the flow direction, the time taken for the pulse to reach the receiver is a function of the flow velocity of the fluid as well as the velocity of sound in the fluid. Thus, this type of flowmeter operates on the principle of transit time differences. An acoustic (ultrasonic) signal is transmitted from one sensor to another, either in the direction of flow (downstream) or against the direction of flow (upstream). The transit time the signal requires to arrive at the receiver is then measured. According to physical principles, the signal sent against the direction of flow takes longer to arrive than the signal sent in the direction of flow. The difference in transit times is directly proportional to the velocity of flow.

F. Vortex Flowmeter
Vortex flowmeters operate on the physical principle of the Kármán vortex street.
When a fluid flows past a bluff body, vortices are alternately formed on the sides of that body and then detached or shed by the flow. The frequency of vortex shedding is proportional to the mean flow velocity and, therefore, to the volumetric flow (for Re > 4000). Alternating pressure changes caused by the vortices are transmitted via lateral ports into the bluff body. The DSC sensor is located within the bluff body and is well protected from water hammer and temperature or pressure shocks. The sensor detects the pressure pulses and converts them into electrical signals.

G. Coriolis-Effect Meter
It is so called because the instrument employs the Coriolis principle, which states that "a body of mass M, moving with constant linear velocity and subject to an angular velocity (or vibrating), experiences an inertial force at right angles to the direction of motion". During operation, a drive coil located at the centre of the bend in the tube is energized periodically and causes the sensor tube to oscillate (move up and down) about the support axis, as shown in the figure. The tube vibrates rapidly at a rate of 40-200 cycles per second, and through a distance of just a few hundredths of a centimeter.

H. Rotameter (Variable Area Flow Meter)
Rotameters are a common type of variable area flow meter. Besides being used as standalone meters, they can also be found on the level bubbler system and on the caissons. The rotameter consists of a tapered glass metering tube with a float inside that is free to move up and down. A scale is engraved on the outside of the tube in flow units. As the flow varies, the float rises and falls, and the flow value can be read against the scale on the glass. The flow has to pass through the gap between the float and the walls of the tube, so there is a pressure drop.

Learning Objectives: Upon completion of this chapter, the student should be able to:
Know and understand concepts about analytical measurement.
List and explain the different types of conductivity and pH measuring instrument and their principle of operation. Conductivity Conductivity is a measure of how well a solution conducts electricity. To carry a current a solution must contain charged particles, or ions. Most conductivity measurements are made in aqueous solutions, and the ions responsible for the conductivity come from electrolytes dissolved in the water. Salts (like sodium chloride and magnesium sulfate), acids (like hydrochloric acid and acetic acid), and bases (like sodium hydroxide and ammonia) are all electrolytes. Although water itself is not an electrolyte, it does have a very small conductivity, implying that at least some ions are present. The ions are hydrogen and hydroxide, and they originate from the dissociation of molecular water. Conductivity is not specific. It measures the total concentration of ions in solution. It cannot distinguish one electrolyte or ion from another. Not all aqueous solutions have conductivity. Solutions of non-electrolytes, for example sugar or alcohol, have no conductivity because neither sugar nor alcohol contains ions nor do they produce ions when dissolved in water. Applications of Conductivity Water treatment. Raw water as it comes from a lake, river, or the tap is rarely suitable for industrial use. The water contains contaminants, largely ionic, that if not removed will cause scaling and corrosion in plant equipment, particularly in heat exchangers, cooling towers, and boilers. There are many ways to treat water, and different treatments have different goals. Often the goal is demineralization, which is the removal of all or nearly all of the contaminants. In other cases the goal is to remove only certain contaminants, for example hardness ions (calcium and magnesium). Because conductivity is a measure of the total concentration of ions, it is ideal for monitoring demineralizer performance. 
It is rarely suitable for measuring how well specific ionic contaminants are being removed. Conductivity is also used to monitor the build up of dissolved ionic solids in evaporative cooling water systems and in boilers. When the conductivity gets too high, indicating a potentially harmful accumulation of solids, a quantity of water is drained out of the system and replaced with water having lower conductivity. Leak detection. Water used for cooling in heat exchangers and surface condensers usually contains large amounts of dissolved ionic solids. Leakage of the cooling water into the process liquid can result in potentially harmful contamination. Measuring conductivity in the outlet of a heat exchanger or in the condenser hot well is an easy way of detecting leaks. Clean in place. In the pharmaceutical and food and beverage industries, piping and vessels are periodically cleaned and sanitized in a procedure called clean-in-place (CIP). Conductivity is used to monitor both the concentration of the CIP solution, typically sodium hydroxide, and the completeness of the rinse. Interface detection. If two liquids have appreciably different conductivity, a conductivity sensor can detect the interface between them. Interface detection is important in a variety of industries including chemical processing and food and beverage manufacturing. Desalination. Drinking water desalination plants, both thermal (evaporative) and membrane (reverse osmosis), make extensive use of conductivity to monitor how completely dissolved ionic solids are being removed from the brackish raw water. Conductivity Measurement There are two types of conductivity measurement: contacting and inductive. The choice of which to use depends on the amount of conductivity, the corrosiveness of the liquid, and the amount of suspended solids. Generally, the inductive method is better when the conductivity is high, the liquid is corrosive, or suspended solids are present. 
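For a contacting sensor, the chain from raw electrical readings to a conductivity value can be sketched as follows. The drive voltage, current, and cell constant below are assumed, illustrative values, not figures from any particular analyzer:

```python
def conductivity_us_cm(voltage_v: float, current_a: float,
                       cell_constant_per_cm: float) -> float:
    """Ohm's law gives resistance; conductance is its reciprocal; conductivity is
    conductance times the sensor's cell constant (1/cm). Result in microsiemens/cm."""
    resistance_ohm = voltage_v / current_a   # R = V / I
    conductance_s = 1.0 / resistance_ohm     # G = 1 / R
    return conductance_s * cell_constant_per_cm * 1e6

# 1 V drive, 10 microamps of ionic current, cell constant 1.0 /cm -> 10 uS/cm
print(round(conductivity_us_cm(1.0, 10e-6, 1.0), 6))
```

The cell constant (electrode spacing divided by electrode area) is what lets the same electronics serve sensors of different geometries.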
Contacting Conductivity Meter
Most contacting conductivity sensors consist of two metal electrodes, usually stainless steel or titanium, in contact with the electrolyte solution (see Figure 3). The analyzer applies an alternating voltage to the electrodes. The electric field causes the ions to move back and forth, producing a current. Because the charge carriers are ions, the current is called an ionic current. The analyzer measures the current and uses Ohm's law to calculate the resistance of the solution (resistance = voltage/current). The conductance of the solution is the reciprocal of the resistance. In the four-electrode measurement, the analyzer injects an alternating current through the outer electrodes and measures the voltage across the inner electrodes. The analyzer calculates the conductance of the electrolyte solution from the current and voltage. Because the voltage-measuring circuit draws very little current, charge transfer effects at the metal-liquid interface are largely absent in four-electrode sensors. As a result, a single four-electrode sensor has a much wider dynamic range than a two-electrode sensor. Contacting conductivity measurements are restricted to applications where the conductivity is fairly low (although four-electrode sensors have a higher end operating range) and the sample is non-corrosive and free of suspended solids. Two-electrode sensors are ideal for measuring high purity water in semiconductor, steam electric power, and pharmaceutical plants.

Inductive Conductivity Meter
Inductive conductivity is sometimes called toroidal or electrodeless conductivity. An inductive sensor consists of two wire-wound metal toroids encased in a corrosion-resistant plastic body. One toroid is the drive coil, the other is the receive coil. The sensor is immersed in the conductive liquid. The analyzer applies an alternating voltage to