Summary

This chapter introduces system theory, focusing on systems, fields, and waves. It explains the engineering definition of a system as an interconnected set of components achieving a desired function. The chapter also explores the application of mathematical logic and methods to analyze system behavior.

Full Transcript

CHAPTER 1
Introduction

1.1. System theory in general

This is a course about systems, fields, and waves. The engineering definition of a system is an interconnected set of components that accomplishes a desired function. As engineers, we want to build useful things. These things are based on scientific knowledge of the world around us. The particular kind of knowledge that concentrates on applicable theories and phenomena we call engineering science, to distinguish it from pure science, for which the understanding of the world is an end in itself.

• Your basic physics course taught you what a force is, how forces combine, and how they affect the motion of particles and rigid bodies. After that, you might take an engineering science course in solid mechanics, which applies these physical principles to engineered structures like bridges, beams, and gears.
• In a physics course you learn how certain materials store electric charge and how other materials conduct or impede the flow of electric current. In an engineering science course you learn about electrical components called capacitors and resistors that can be incorporated into electronic circuits.
• In your chemistry class you learn how temperature, pressure, and concentration affect the course of chemical reactions. In an engineering science course on reactor design you learn how the basic chemical principles are applied in practical industrial processes.
• In your mathematics classes you learn about functions, their properties, and how to manipulate them with calculus. In an engineering science course you learn how to use mathematical functions to model physical phenomena, engineered components, and interconnections of components (systems).

The application of mathematics—its logic and methods—to gain insight into the behavior of systems of all types is called system theory. Thayer School offers three introductory courses in system theory.
Engs 22 is concerned with those kinds of systems in which the interesting dynamic variables—e.g., voltage, current, displacement, velocity—may be measured at discrete points. These points are regarded as the terminals of discrete, or lumped, components. Each component obeys a physical element law relating a pair of dynamic variables at its terminals, e.g., voltage and current, or force and velocity. The interconnection of these components according to balance and conservation principles results in mathematical models which have the form of ordinary differential equations. The dynamic variables are functions of time only; the lumped assumption removes any continuous spatial dependence.

In Engs 23 the lumped component assumption is relaxed. Whereas in a lumped analysis an axle is modeled by the angular displacements of its two ends, a distributed model considers how much the axle is twisted at each point along its length. A lumped analysis of an insulated wall considers only the temperature at the surfaces of the wall, but a distributed analysis considers the details of how heat flows within the wall. The dynamic variables now depend on both space and time, and are modeled by vector fields and partial differential equations.

Engs 22 and 23 demonstrate one way to classify systems—lumped vs. distributed. There are other classifications. In both Engs 22 and 23 the time variable is assumed to be continuous. In Engs 23 the spatial variables are also assumed to be continuous. The models purport to be valid for all time (and space) within their physical boundaries. Such system models are called continuous. In contrast, discrete-time (or discrete-space) systems are defined for time and/or spatial variables which are discrete, or sampled, e.g., t = kΔt, where Δt is called the sampling interval. One way that discrete systems arise is in the numerical solution of differential equations when purely analytic methods fail.
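Sampling at t = kΔt is exactly what happens when a differential equation is solved numerically. As a minimal sketch (the first-order RC circuit and all parameter values here are illustrative assumptions, not taken from the text), the forward Euler method replaces the continuous lumped model RC·v̇ = −v with a discrete-time recursion:

```python
# Forward Euler discretization of the lumped RC-circuit model
#   RC * dv/dt = -v   (a capacitor discharging through a resistor).
# The continuous solution is v(t) = v0 * exp(-t / (R*C)).
# All values are illustrative, not from the text.
import math

R, C = 1.0e3, 1.0e-6      # 1 kOhm, 1 uF  ->  time constant RC = 1 ms
dt = 1.0e-5               # sampling interval Delta-t (10 us), well below RC
v0 = 5.0                  # initial capacitor voltage, volts

v = v0
t = 0.0
for k in range(100):      # advance to t = 100*dt = 1 ms (one time constant)
    v = v + dt * (-v / (R * C))   # v[k+1] = v[k] + dt * f(v[k])
    t += dt

exact = v0 * math.exp(-t / (R * C))
print(f"Euler: {v:.4f} V, exact: {exact:.4f} V")  # both close to 5/e volts
```

Shrinking Δt drives the discrete samples v[k] toward the continuous solution, which is why discrete-time models arise naturally from continuous ones.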
You were introduced to these in Engs 22, and if time permits we will take up the subject again in Engs 23. You can pursue it further, if you wish, in Engs 91 and 105. Similar approaches are taken in signal processing (Engs 110) and control (Engs 145). Other systems are discrete by their very nature, not as discrete approximations to continuous systems. These include digital systems (Engs 31), queuing and inventory processes (Engs 52), and communication systems (Engs 68). Engs 27 is the course to take if you want to prepare for study in these areas.

A third classification concerns the predictability (or variability) of a system's parameters or dynamic variables. In Engs 22 and 23 we assume that we have exact knowledge of a system's parameters (e.g., masses, spring constants), that we have exact knowledge of all inputs to the system, and that we make measurements without error. A system in which all parameters, inputs, and outputs are known perfectly is called deterministic. In principle, the equations describing the system's behavior can be solved to give completely accurate predictions of the system response. A system in which the parameters, inputs, or outputs are subject to random variations is called probabilistic or stochastic. Examples include otherwise deterministic systems with noise corrupting the measured outputs, and digital communications systems with noisy inputs that lead to flipped bits. Probabilistic systems are the subject of Engs 27.

Some of the most fertile ground for research in system theory in recent years has been in the area of nonlinear systems. With few exceptions, the systems studied in Engs 22 and 23 are linear, which means that the dynamic variables—voltage, current, etc.—appear in the equations of motion as linear terms, i.e., of the form av or bv̇, where a and b are constants, and not as v², vv̇, or eᵛ.
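Linearity is what makes superposition work. A minimal numeric sketch (the particular functions and values are invented for illustration): a term of the form av respects additivity, while a term of the form v² does not.

```python
# Superposition test: f(v1 + v2) == f(v1) + f(v2) holds for a linear
# term but fails for a nonlinear one. Example values are illustrative.
a = 3.0

def linear_term(v):
    return a * v          # a term of the form a*v

def nonlinear_term(v):
    return v ** 2         # a term of the form v^2

v1, v2 = 2.0, 5.0

# Linear: additivity holds exactly (21 == 6 + 15).
print(linear_term(v1 + v2) == linear_term(v1) + linear_term(v2))  # True

# Nonlinear: the cross term 2*v1*v2 spoils additivity (49 != 4 + 25).
print(nonlinear_term(v1 + v2) == nonlinear_term(v1) + nonlinear_term(v2))  # False
```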
It can truly be said of linear systems that if you know, to good approximation, the equations of motion, initial conditions, and driving functions, then you can predict accurately the behavior of the system for all time. Some nonlinear systems can be approximated as linear over restricted ranges of operation, as you may have seen in Engs 22. But in general, even very simple nonlinear systems are known to exhibit exquisite sensitivity to parameter values and to initial conditions, so that even minute inaccuracies in knowledge of these values can result in wildly different behaviors. Some of these behaviors even have the appearance of random fluctuations, though they are generated by completely deterministic equations of motion. Such apparently random behavior is known as chaos and appears in a surprising variety of real-world systems. Nonlinear system dynamics are covered in Math 53 and Engs 202.

A system can be classified along all four of these axes: lumped/distributed, continuous/discrete, linear/nonlinear, deterministic/stochastic. In Engs 23 we will concentrate on distributed, continuous, linear, deterministic systems (Figure 1.1). If all goes well, when you are done with Engs 23 you will be prepared for further study of distributed systems and fields in engineering, e.g., electrical (Engs 61, 120, 123, 124, 125), mechanical (Engs 34, 142, 148, 156), chemical (Engs 36, 156), and environmental (Engs 43, 151).

Figure 1.1: Classification of systems. This book concentrates on systems that are deterministic, distributed, linear, and continuous. [Diagram: the real physical world (stochastic, continuous, time-varying, distributed, nonlinear) is modeled by successive approximations—deterministic (time-invariant); distributed (PDE) or lumped (ODE); nonlinear or linear; continuous (analytical) or discrete (numerical).]

1.2. Fields: Background and Some Definitions

1.2.1.
The idea of a field

Much of what we do in science and engineering is to describe or predict the outcome of a "cause". For example, if we place a small heater in the proximity of an ice cube, the ice cube will begin to melt. Somehow the thermal energy from the small heater has managed to affect its surroundings such that the ice cube is receiving part of this thermal energy. How can that be? Assuming that the experiment is performed in, e.g., our kitchen, the thermal energy has "traveled" via transfer from one air molecule to another through collisions. This type of material transfer of energy is inherently appealing to us since it relates to everyday macroscopic observations of "cause and effect". For this particular example it is reasonable to speak of an "action at a distance". However, when we try to explain (or rather describe) how the different planets affect each other, the action-at-a-distance concept breaks down, since there is no matter to transfer...whatever it is that is transferred from one planet to the other in the gravitational attraction.

To "explain" the interaction between two massive bodies, scientists in the eighteenth and nineteenth centuries reverted to an idea of Aristotle, namely the ether. The ether is presumed to be some type of matter in which information can be transferred by action at a distance. In particular, Newton had the notion that in our physical universe, all forms of interactions (including light) could be explained by particles (small mass points) interacting in an ether-like medium. However, the concept of an ether that mechanically conveys light was refuted by an elegant experiment performed by Michelson and Morley in the late nineteenth century. By this time many scientists had started to question the appropriateness of the ether hypothesis and the action-at-a-distance concept, in favor of the field idea.
Michael Faraday, one of the early proponents of the field concept, suggested that, e.g., the current in a wire acts as a source for a magnetic field which exists everywhere in space. Faraday also postulated that there is a gravitational field surrounding all material bodies. Faraday's heuristic idea of a field was formalized and put into a self-consistent mathematical language by James Clerk Maxwell. The importance of Maxwell's work has been summarized by Albert Einstein: "Before Clerk Maxwell people conceived of physical reality – in so far as it is supposed to represent events in nature – as material points, whose changes consist exclusively of motions which are subject to total differential equations. After Maxwell they conceived physical reality as represented by continuous fields, not mechanically explicable, which are subject to partial differential equations. This change in the conception of reality is the most profound and fruitful one that has come to physics since Newton." Later, Einstein took the idea one step further by postulating that the field had taken the place of the ether as the medium by which physical information is conveyed.

It is important to remember, however, that a field is just a model, an abstraction concocted by humans to describe and predict physical phenomena. The gravitational field does a great job of describing and predicting our everyday encounters with gravity. However, it does nothing to explain what gravity is or why it exists.

Perhaps the area in which the concept of a field has had the most profound impact on how we describe physical phenomena is in the context of forces. The force acting on mass m1 (Figure 1.2) is, in the action-at-a-distance description,

    F = G m1 m2 / r²    (1.1)

where G is the gravitational constant and the other variables have their usual meaning.
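As a quick numeric sketch of Equation 1.1 (the masses and separation are illustrative values, not from the text): two 1 kg masses 1 m apart attract with a force numerically equal to G, and doubling the separation quarters the force.

```python
# Numeric sketch of the action-at-a-distance force law F = G*m1*m2/r^2.
# Masses and separations are illustrative; G is the gravitational constant.
G = 6.674e-11        # N m^2 / kg^2

def gravity_force(m1, m2, r):
    return G * m1 * m2 / r**2

# Two 1 kg masses, 1 m apart: F = G, about 6.7e-11 N -- tiny.
print(gravity_force(1.0, 1.0, 1.0))

# Doubling the separation quarters the force (inverse-square law).
print(gravity_force(1.0, 1.0, 2.0) / gravity_force(1.0, 1.0, 1.0))  # 0.25
```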
This same gravitational interaction between two bodies is handled in the field description by a separation into two parts; the first part creates a field from one of the bodies (say m1, Figure 1.2) according to the source law

    g1 = −G m1 r / r³    (1.2)

where r is the vector from m1 to the field point and r is its length. This field permeates all space and produces, at point P, a force on mass m2 through the force law

    F1 = m2 g1    (1.3)

The force F1 is directed along the line joining the two bodies. The argument can be reversed with respect to m1 and m2, producing the reaction force F2 = −F1 according to Newton's third law.

Figure 1.2: Gravitational forces.

This two-step approach, that one source generates a field which interacts with a second source to produce a force acting on this second source, can be used for all types of vector fields: fluid flow, heat conduction, electric, magnetic, gravity, to name a few. The field approach in Equations 1.2 and 1.3 appears to offer more powerful mathematical tools and more insight than the action-at-a-distance formulation in Equation 1.1. In the action-at-a-distance picture the inverse-square force law appears accidental; at a superficial level it might just as well have been an inverse-cube law. In the field picture, that law is a consequence of the fact that the field from a point source falls off as the inverse square of the radius. This in turn follows from the fact that in Euclidean geometry the surface area of a sphere is proportional to the square of its radius.

1.2.2. Linear superposition

Throughout this book we are assuming that our fields are linearly dependent on the sources that are creating them, allowing us to use the superposition principle. Using the example of a gravitational field, if masses m1, m2, etc., produce fields g1, g2, etc., respectively, then the total field due to all the masses is simply the (vector) sum of the individual fields,

    g_total = g1 + g2 + · · ·
        + gn    (1.4)

Linearity greatly simplifies complex calculations, when it applies. The superposition principle breaks down if the fields are very strong. An example of this is when intense laser light interacts with matter, creating electric field strengths of the same order of magnitude as the field within the atoms of the material. In cases such as this one, nonlinear interactions between different fields have to be incorporated.

Dividing physical interactions into two parts using the source law and force law naturally puts a lot of attention on the sources themselves. We shall see in a later chapter that sources are conveniently divided into two categories, so-called flux sources and circulation sources. It can be shown that any vector field may always be decomposed into a sum of two vector fields, one due solely to flux sources and the other solely to circulation sources. The fields from ideal geometric sources (point, line, sheet, sphere, cylinder, slab) have particularly simple mathematical forms and, together with superposition, enable the solution of complex field problems.

1.3. Analogies

The fact that different physical systems can be described by the same mathematical model has led to the principle of analogy. The beauty of analogies is that we can generalize the knowledge from a specific field to a broader understanding of seemingly unrelated phenomena in other fields, or, as Feynman put it, "The same equations have the same solutions." You have previously encountered, in Engs 22, analogies between voltage and temperature, current and heat flow, etc. There you may have noticed that the analogy is always made to an electrical system. The main reason for this is that it is relatively easy and cheap to set up and make measurements on an electric circuit that has both dissipative (resistive) and energy-storing (capacitive and inductive) elements.
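Feynman's dictum can be made concrete with a brief numeric sketch (the parameter values are illustrative assumptions, not from the text): a discharging RC circuit and a cooling object obey the same first-order equation, so with matched time constants they have the same normalized solution.

```python
# "The same equations have the same solutions": the RC circuit
#   RC * dv/dt = -v
# and lumped Newtonian cooling
#   R_th*C_th * d(dT)/dt = -dT   (dT = temperature above ambient)
# both have the form  tau * dy/dt = -y.  Values are illustrative.
import math

tau = 2.0                       # common time constant, seconds

def decay(y0, t):
    """Solution of tau * dy/dt = -y with y(0) = y0."""
    return y0 * math.exp(-t / tau)

v0 = 10.0                       # initial capacitor voltage, V
dT0 = 40.0                      # initial temperature above ambient, K

t = 3.0
v = decay(v0, t)                # electrical response at time t
dT = decay(dT0, t)              # thermal response at time t

# The normalized responses coincide: the two systems are analogous.
print(abs(v / v0 - dT / dT0) < 1e-12)   # True
```

This is why measurements on a cheap electric circuit can stand in for experiments on a system that is expensive or awkward to instrument.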
Via analogies we can then use an electrical circuit to model, e.g., a large, complex, and expensive fluid system. The analogies between different fields can be shown to originate in the geometry of the sources creating the fields. For example, any point flux source (small heater, charged particle, tiny mass particle) will always create a radial vector field whose magnitude decreases inversely with the square of the distance. Even though analogies are very useful and are stressed throughout this book, it is also prudent to keep in mind that whenever two fundamentally different physical systems produce the same mathematical equations, there are likely several approximations involved. If we were to describe our systems on a microscopic scale we would in most cases find that the equations would not be the same for different systems.

1.4. Applicability of Engs 23

To get a sense of how many and varied are the applications of distributed-parameter systems and fields, here are a few of the field-flow-wave phenomena involved in the manufacturing and operation of your personal computer:

• diffusion in silicon, to fabricate the chips
• optical lithography, by which circuits are patterned onto the chips
• heat conduction, to melt the solder that attaches the chips to the circuit board
• heat conduction, to get the heat off the chip and out to the heat sinks
• air flow, to convectively cool the chips
• parasitic electromagnetic coupling between conductors on a circuit board, especially at GHz clock rates
• magnetic fields, to read and write bits on the hard drive
• electric fields, to twist the liquid crystals in the screen
• optical waves, to carry the image from the screen to your eye
• guided wave propagation on the ethernet cable
• free space wave propagation for the WiFi
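The two-step field description of Section 1.2 underlies many of these applications. As a closing sketch (the masses, positions, and observation points are invented for illustration), the source law (1.2), superposition (1.4), and the force law (1.3) combine directly:

```python
# Gravitational field at a point from several point masses, using the
# source law g_i = -G*m_i*r_vec/|r_vec|^3 (r_vec from source to point),
# superposition g_total = g_1 + g_2 + ..., and the force law F = m*g.
# Masses, positions, and observation points are illustrative values.
import math

G = 6.674e-11  # N m^2 / kg^2

def field_at(point, sources):
    """Sum the source-law fields of (mass, position) pairs at `point`."""
    gx = gy = gz = 0.0
    for m, (sx, sy, sz) in sources:
        rx, ry, rz = point[0] - sx, point[1] - sy, point[2] - sz
        r3 = math.sqrt(rx*rx + ry*ry + rz*rz) ** 3
        gx += -G * m * rx / r3      # field points back toward the source
        gy += -G * m * ry / r3
        gz += -G * m * rz / r3
    return (gx, gy, gz)

sources = [(1.0e6, (0.0, 0.0, 0.0)),    # 1000-tonne mass at the origin
           (1.0e6, (2.0, 0.0, 0.0))]    # identical mass at x = 2 m

# By symmetry the two fields cancel at the midpoint between the masses.
g = field_at((1.0, 0.0, 0.0), sources)
print(g)   # (0.0, 0.0, 0.0)

# Force on a 5 kg test mass placed off-axis, via the force law F = m*g.
g2 = field_at((1.0, 1.0, 0.0), sources)
F = tuple(5.0 * c for c in g2)  # pulled straight down toward the pair
```

The same decomposition into ideal point, line, and sheet sources, summed by superposition, recurs throughout the later chapters for heat flow, electric, and magnetic fields.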
