Signals and Systems Notebook
This document is a Signals and Systems notebook containing explanations and worked examples for various topics in the field, including system properties, signals, Fourier analysis, sampling, and quantization. It is intended as a resource for engineering students.
SIGNALS AND SYSTEMS

Contents

Introduction
Overview of System Properties
Signal Operations
Signals: Unit Step and Delta
Euler's Formula and Trigonometry
Trigonometric and Exponential Signals
Periodic Signals
Even and Odd Signals
Energy and Power Signals
Linearity: Definition
Linearity: Examples
Time Invariance: Conceptual
Time Invariance: Mathematics
System Stability
Linearity and Time Invariance Example
System State: Conceptual Introduction
Characterization of System Response
Linear Time Invariant (LTI) Systems
Impulse Response and Convolution
Convolution Properties
Convolution Example: Unit Step with Exponential
Convolution Example: Two Rectangular Pulses
Convolution Example: Triangle with Rectangle
Fourier Series
Complex Exponential Fourier Series
Fourier Series Example: Square Wave
Orthogonal Functions
Trigonometric Fourier Series
Fourier Series Example: Square Wave
Fourier Series Example: Impulse Train
Fourier Transform
Fourier Transform Example: Rectangular Pulse
Linearity of the Fourier Transform
Fourier Transform: Symmetry Property
Fourier Transform: Scaling Property
Fourier Transform: Shifting Property
Fourier Transform: The Convolution Property
Inverse Fourier Transform
Parseval's Relation
Fourier Transform Examples
Sampling
Sampling Theorem
Quantization
Uniform Quantization
Quantization Examples

Introduction

A system maps inputs to outputs: the input signal x(t) goes into the system, and the output signal y(t) comes out.

Example 1: Cruise control
Input: desired speed
System: car
Output: actual speed of the car

Example 2: Cellphone
Input: speech
System: the part of the cellphone that converts speech to a radio signal
Output: transmitted radio signal

In the study of signals and systems, we treat inputs and outputs mathematically. The system takes the input function and maps it to the output function.
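In code terms, a system can be pictured as a map from an input function to an output function. The following is a minimal illustrative sketch (the gain system here is an assumed example for illustration, not one from the notes):

```python
# A "system" as a function that takes an input signal x and returns an
# output signal y. Here the system simply multiplies its input by 2.

def gain_system(x):
    """Hypothetical example system: y(t) = 2 * x(t)."""
    def y(t):
        return 2 * x(t)
    return y

x = lambda t: t ** 2      # some input signal
y = gain_system(x)        # the system maps the function x to the function y
print(y(3.0))             # 2 * 3^2 = 18.0
```
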
Overview of System Properties

Characteristics of interest:
How many inputs and outputs the system has
Stability
Causality

Single Input Single Output (SISO) system: one input, one output.
Multiple Input Multiple Output (MIMO) system: several inputs, several outputs.
Example: a car. Inputs: steering wheel, gas pedal. Outputs: direction, speed.
There are other possibilities: Single Input Multiple Output (SIMO) and Multiple Input Single Output (MISO) systems.

Stability

Example: cruise control in a car. Input: desired speed. Output: actual speed. If the system is stable, then whenever the input value is bounded, the output value also stays bounded. An example of an unstable system: you set the input to a certain speed, but the output (the speed of the car) keeps increasing without bound. Naturally, we want the system to be stable. The mathematical term we will use for stability is bounded-input bounded-output (BIBO).

Causality

The mathematical description: the output y(t) at a particular time t depends on the input x(τ) only for τ ≤ t. So a causal system cannot look into the future to determine its output. A non-causal system would have to predict the future (which, in the real world, of course does not happen). Non-causal systems are still important, however, since the input and output may be functions of space rather than time. Example: image processing (2-D).

Linearity and Time Invariance

If the system is linear time invariant (LTI), we can use transform analysis: Laplace, to see how the system behaves over time, or Fourier, to see how it behaves over frequency.

Linearity means that the principle of superposition holds:
if x₁(t) → y₁(t) and x₂(t) → y₂(t), then a₁x₁(t) + a₂x₂(t) → a₁y₁(t) + a₂y₂(t).

Time invariance means that the system does not change through time: if we run a signal through the system today and run the same signal through it tomorrow, we get the same result.
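These two properties can be illustrated with a small discrete sketch. The two-point moving average below is an assumed example system (not from the notes): it obeys superposition, and delaying its input simply delays its output.

```python
# A two-point moving average (assumed example): linear and time invariant.

def moving_average(x):
    """y[n] = (x[n] + x[n-1]) / 2, with x[-1] taken as 0."""
    return [(x[n] + (x[n - 1] if n > 0 else 0.0)) / 2 for n in range(len(x))]

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [0.0, -1.0, 1.0, 0.5]
a1, a2 = 2.0, -3.0

# Superposition: the response to a1*x1 + a2*x2 equals a1*y1 + a2*y2.
combined = moving_average([a1 * u + a2 * v for u, v in zip(x1, x2)])
separate = [a1 * u + a2 * v
            for u, v in zip(moving_average(x1), moving_average(x2))]
assert combined == separate

# Time invariance: shifting the input by one sample shifts the output by one.
shifted_in = [0.0] + x1[:-1]
assert moving_average(shifted_in)[1:] == moving_average(x1)[:-1]
print("superposition and shift invariance verified")
```
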
Linearity and time invariance together are useful because we can build signals out of sums of time-shifted signals; if we know how the system responds to one such signal, we can determine how it responds to the whole sum. We can then characterize the system by its transfer function.

Signal Operations

Scaling. Let y(t) = 2x(t). We are scaling x(t) by a factor of 2: at every point in time we take the value of the signal and multiply it by 2. We can likewise let y(t) = (1/2)x(t).

Addition. Let y(t) = x₁(t) + x₂(t).

Time Shifting. Writing x(t − 1) shifts x(t) to the right by 1.
Let t = 1: y(1) = x(1 − 1) = x(0).
Let t = 2: y(2) = x(2 − 1) = x(1).
Writing x(t + 1) shifts x(t) to the left by 1.
Let t = 0: y(0) = x(0 + 1) = x(1).
Let t = 1: y(1) = x(1 + 1) = x(2).
In general, we can form x(t − τ) and x(t + τ).

Signals: Unit Step and Delta

Unit step function:
u(t) = 0 for t < 0; u(t) = 1 for t ≥ 0.

Delta function. Start with a rectangle of unit area and make it infinitely tall and infinitely narrow, so that the area under the rectangle stays 1. The limit is the delta function δ(t), also called an impulse. The impulse is defined by its sifting property: for any function f(t),
∫_{−∞}^{∞} f(t) δ(t) dt = f(0)
(When an impulse is applied to a system's input, the resulting output is called the impulse response.)

Euler's Formula and Trigonometry

The sine and cosine as coordinates of the unit circle.

Definition (cosine and sine): given a point on the unit circle at a counterclockwise angle θ from the positive x-axis, cos(θ) is the x-coordinate of the point and sin(θ) is the y-coordinate. [Figure: the unit circle with the point (cos(θ), sin(θ)).]

Some trigonometric identities follow immediately from this definition. In particular, since the unit circle consists of all points in the plane whose coordinates satisfy x² + y² = 1, we have
cos²(θ) + sin²(θ) = 1
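A quick numeric sketch (illustrative only) that the points (cos θ, sin θ) defined above do lie on the unit circle, i.e. that cos²(θ) + sin²(θ) = 1 for every angle:

```python
# Sample a few angles and check the Pythagorean identity from the unit circle.

import math

for theta in [0.0, 0.5, 1.0, math.pi / 3, 2.5, math.pi]:
    x, y = math.cos(theta), math.sin(theta)   # the point's coordinates
    assert abs(x * x + y * y - 1.0) < 1e-12
print("all sampled points lie on the unit circle")
```
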
The complex plane

A complex number c is given as the sum c = a + jb, where a and b are real numbers; a is called the real part of c, a = ℛℯ{c}, and b is called the imaginary part of c, b = ℐ𝓂{c}. Note that j denotes the imaginary unit, j = √−1.

For a complex number c, one defines its conjugate by changing the sign of the imaginary part:
c* = a − jb
The length-squared of a complex number c is
c c* = (a + jb)(a − jb) = a² + b²
which is a real number. To extract the real and imaginary parts of a given complex number, one can compute
ℛℯ{c} = (1/2)(c + c*)
ℐ𝓂{c} = (1/(2j))(c − c*)
To divide by a complex number c, one can instead multiply by c*/(c c*), in which form the only division is by a real number, the length-squared of c.

Instead of parametrizing points on the plane by pairs (x, y) of real numbers, one can use a single complex number z = x + jy; the plane parametrized in this way is often called the complex plane. Points on the unit circle are now given by the complex numbers cos(θ) + j sin(θ), which go around the circle once, starting at θ = 0 and ending up back at the same point when θ = 2π. [Figure: the unit circle in the complex plane, with the point cos(θ) + j sin(θ) at angle θ and the axis points ±1 and ±j.]

A remarkable property of complex numbers is that, since multiplying two complex numbers gives a third complex number, they provide something new and not at all obvious: a consistent way of multiplying points on the plane.

Euler's formula:
e^{jθ} = cos(θ) + j sin(θ)
Using the formulas above for the real part and the imaginary part of a complex number, we obtain
cos(θ) = (1/2)(e^{jθ} + e^{−jθ})
sin(θ) = (1/(2j))(e^{jθ} − e^{−jθ})

Trigonometric identities

cos(θ₁ + θ₂) = ℛℯ{e^{j(θ₁+θ₂)}} = ℛℯ{e^{jθ₁} e^{jθ₂}}
= ℛℯ{[cos(θ₁) + j sin(θ₁)][cos(θ₂) + j sin(θ₂)]}
= ℛℯ{[cos(θ₁)cos(θ₂) − sin(θ₁)sin(θ₂)] + j[cos(θ₁)sin(θ₂) + sin(θ₁)cos(θ₂)]}
= cos(θ₁)cos(θ₂) − sin(θ₁)sin(θ₂)

cos(θ₁ − θ₂) = ℛℯ{e^{j(θ₁−θ₂)}} = ℛℯ{e^{jθ₁} e^{−jθ₂}}
= ℛℯ{[cos(θ₁) + j sin(θ₁)][cos(θ₂) − j sin(θ₂)]}
= ℛℯ{[cos(θ₁)cos(θ₂) + sin(θ₁)sin(θ₂)] + j[sin(θ₁)cos(θ₂) − cos(θ₁)sin(θ₂)]}
= cos(θ₁)cos(θ₂) + sin(θ₁)sin(θ₂)

sin(θ₁ + θ₂) = ℐ𝓂{e^{j(θ₁+θ₂)}} = ℐ𝓂{e^{jθ₁} e^{jθ₂}}
= ℐ𝓂{[cos(θ₁)cos(θ₂) − sin(θ₁)sin(θ₂)] + j[cos(θ₁)sin(θ₂) + sin(θ₁)cos(θ₂)]}
= cos(θ₁)sin(θ₂) + sin(θ₁)cos(θ₂)

sin(θ₁ − θ₂) = ℐ𝓂{e^{j(θ₁−θ₂)}} = ℐ𝓂{e^{jθ₁} e^{−jθ₂}}
= ℐ𝓂{[cos(θ₁)cos(θ₂) + sin(θ₁)sin(θ₂)] + j[sin(θ₁)cos(θ₂) − cos(θ₁)sin(θ₂)]}
= sin(θ₁)cos(θ₂) − cos(θ₁)sin(θ₂)

It follows that
cos(θ₁ + θ₂) + cos(θ₁ − θ₂) = 2 cos(θ₁)cos(θ₂)
cos(θ₁ − θ₂) − cos(θ₁ + θ₂) = 2 sin(θ₁)sin(θ₂)
sin(θ₁ + θ₂) + sin(θ₁ − θ₂) = 2 sin(θ₁)cos(θ₂)
sin(θ₁ + θ₂) − sin(θ₁ − θ₂) = 2 cos(θ₁)sin(θ₂)

Note that
cos(nθ) + j sin(nθ) = e^{jnθ} = [e^{jθ}]ⁿ = [cos(θ) + j sin(θ)]ⁿ
If n = 2:
cos(2θ) = ℛℯ{[cos(θ) + j sin(θ)]²} = ℛℯ{[cos²(θ) − sin²(θ)] + j[2 cos(θ)sin(θ)]} = cos²(θ) − sin²(θ)
sin(2θ) = ℐ𝓂{[cos(θ) + j sin(θ)]²} = ℐ𝓂{[cos²(θ) − sin²(θ)] + j[2 cos(θ)sin(θ)]} = 2 cos(θ)sin(θ)

Trigonometric and Exponential Signals

Sinusoid: a sinusoid is a periodic signal; cos(ω₀t) and sin(ω₀t) are special cases of the general sinusoid definition.

Exponential: let us introduce the complex exponential e^{jω₀t} (where j = √−1):
e^{jω₀t} = cos(ω₀t) + j sin(ω₀t)
with cos(ω₀t) as the real part and sin(ω₀t) as the imaginary part.

Periodic Signals

Definition: x(t) = x(t + T), where T is the period of the signal. A periodic signal looks the same when shifted by its period.
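The identities derived in the previous sections can be spot-checked numerically. This is an illustrative sketch using Python's complex arithmetic; the particular angles and the frequency ω₀ are arbitrary assumed values:

```python
# Spot-check Euler's formula, a product-to-sum identity, and periodicity.

import cmath, math

theta1, theta2 = 0.7, 1.3            # arbitrary test angles

# Euler's formula: e^{j θ} = cos θ + j sin θ
z = cmath.exp(1j * theta1)
assert abs(z - complex(math.cos(theta1), math.sin(theta1))) < 1e-12

# Product-to-sum: cos(θ1+θ2) + cos(θ1−θ2) = 2 cos(θ1) cos(θ2)
lhs = math.cos(theta1 + theta2) + math.cos(theta1 - theta2)
assert abs(lhs - 2 * math.cos(theta1) * math.cos(theta2)) < 1e-12

# Periodicity: cos(ω0 t) repeats after T = 2π/ω0
w0 = 3.0                             # assumed frequency
T = 2 * math.pi / w0
t = 0.4
assert abs(math.cos(w0 * (t + T)) - math.cos(w0 * t)) < 1e-12
print("Euler's formula, product-to-sum, and periodicity all check out")
```
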
Fundamental period: the smallest shift T₀ by which we can shift the signal so that it looks the same. If the signal has fundamental period T₀, it is also periodic for T = kT₀, where k is an integer; so it is periodic at 2T₀, 3T₀, and so on.

We can also determine the fundamental frequency
f₀ = 1/T₀ [cycles/second]
Example: T₀ = 2 seconds gives f₀ = 1/2 = 0.5 cycles/second.
We can also define the radial frequency
ω₀ = 2πf₀ = 2π/T₀ [radians/second]
Example: cos(ω₀t) has fundamental period T₀.

Even and Odd Signals

Even signal: x(t) = x(−t). If we flip the signal about the t = 0 axis, it looks the same (a symmetric signal).

Odd signal: x(t) = −x(−t). If we flip the signal about the t = 0 axis, it looks as if it had been multiplied by −1; multiplying the flipped signal by −1 recovers the original signal (an anti-symmetric signal).

Any signal x(t) can be represented as the sum of an even part and an odd part:
x(t) = xₑ(t) + xₒ(t)
where
xₑ(t) = (1/2)[x(t) + x(−t)]
xₒ(t) = (1/2)[x(t) − x(−t)]

Example: decomposing the unit step function into its even and odd parts. [Figures: u(t), its flip u(−t), the even part xₑ(t) = 1/2 for all t, and the odd part xₒ(t) = −1/2 for t < 0 and +1/2 for t > 0.] We can verify that summing the even and odd parts gives back the unit step function.

Energy and Power Signals

Consider a simple circuit: a source drives a resistor R, with current i(t) through it and voltage v(t) across it. We are interested in the power associated with the resistor (the power dissipated as the current flows through it):
P = v²(t)/R = i²(t)R
Power is proportional to v²(t) and to i²(t) (with different proportionality constants): P ∼ v²(t) and P ∼ i²(t). For that reason, when we consider signals, we say that the power associated with a signal is proportional to the magnitude of the signal squared, P ∼ |x(t)|². This gives us the instantaneous power. Recall that power is energy per unit time.
Since power is energy per unit time, we get energy by integrating the power over all time:
E = ∫_{−∞}^{∞} |x(t)|² dt
Signals that have finite energy are known as energy signals. Energy signals have zero average power.

Now suppose we have a sinusoidal signal. The energy is the area under |x(t)|², and since the signal extends to infinity in both directions, the energy is infinite, E → ∞. However, at any point in time the signal has finite instantaneous power, so it also has finite average power:
P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt
If x(t) is a sinusoid between −1 and 1, the average power is P = 1/2:
P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} cos²(ω₀t) dt
= lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} (1/2)[1 + cos(2ω₀t)] dt
= (1/2) lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} dt + (1/2) lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} cos(2ω₀t) dt
= (1/2) lim_{T→∞} (1/T) [T/2 − (−T/2)] + 0
= (1/2) lim_{T→∞} (1/T) · T = 1/2
Signals that have infinite energy but finite power are known as power signals.

Linearity: Definition

Homogeneity: if x(t) → y(t), then ax(t) → ay(t). The system is homogeneous if scaling the input scales the output by the same factor.

Additivity: if x₁(t) → y₁(t) and x₂(t) → y₂(t), then x₁(t) + x₂(t) → y₁(t) + y₂(t).

Linearity: Examples

Example: gain of 2, y(t) = 2x(t).
First check homogeneity: does ax(t) → ay(t)? If ax(t) goes into the system, the output is 2ax(t); since y(t) = 2x(t), this output equals ay(t). So the system is homogeneous.
Next, check additivity. With x₁(t) → y₁(t) = 2x₁(t) and x₂(t) → y₂(t) = 2x₂(t), if x₁(t) + x₂(t) goes into the system the output is
2[x₁(t) + x₂(t)] = 2x₁(t) + 2x₂(t) = y₁(t) + y₂(t)
So the system is additive. Since the system is both homogeneous and additive, we conclude that the gain-of-2 system is linear.
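The two checks above can be written out as a small numeric sketch (point values stand in for whole signals, since the gain acts pointwise):

```python
# Numeric sketch of the homogeneity and additivity checks for y(t) = 2 x(t).

def system(x):
    return 2.0 * x

a, x = 5.0, 3.0
# Homogeneity: input a*x produces a times the response to x.
assert system(a * x) == a * system(x)

x1, x2 = 1.5, -4.0
# Additivity: input x1 + x2 produces y1 + y2.
assert system(x1 + x2) == system(x1) + system(x2)
print("gain of 2 is homogeneous and additive, hence linear")
```
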
Example: squarer, y(t) = [x(t)]².
First check homogeneity: does ax(t) → ay(t)? If ax(t) goes into the system, the output is
[ax(t)]² = a²x²(t) = a²y(t)
So even though the input is ax(t), the output is a²y(t), not ay(t). The system is not homogeneous.
Next, check additivity. With x₁(t) → y₁(t) = x₁²(t) and x₂(t) → y₂(t) = x₂²(t): if the input is x₁(t) + x₂(t) and the system is additive, the output should be y₁(t) + y₂(t). Let's check:
[x₁(t) + x₂(t)]² = x₁²(t) + 2x₁(t)x₂(t) + x₂²(t) = y₁(t) + 2x₁(t)x₂(t) + y₂(t)
Because of the cross term 2x₁(t)x₂(t), this is not y₁(t) + y₂(t), so the system is not additive.
The system is neither homogeneous nor additive. This means that the system is not linear.

Time Invariance: Conceptual

The basic idea is that a time-invariant system does not change as a function of time.

Example: gain of 2. Feed x(t) into the system to get y(t); then feed the delayed input x_D(t) = x(t − τ) into the system to get y_D(t). The question is: does y_D(t) = y(t − τ)? Indeed it does, and the system is time invariant. This is expected, since the system has constant gain; the gain never changes.

Example: modulator, y(t) = x(t) cos(ω₀t). We will assume the frequency ω₀ is very low, so the gain cos(ω₀t) changes slowly. The gain of the system is not constant but equals cos(ω₀t). Does y_D(t) = y(t − τ)? Clearly, y_D(t) ≠ y(t − τ), so this is a time-varying system. This makes sense, because the gain of the system changes as a function of time.

Time Invariance: Mathematics

If x(t) → y(t), the system is time invariant when the delayed input x_D(t) = x(t − τ) produces the correspondingly delayed output y_D(t) = y(t − τ).
To test time invariance: delay the input, x_D(t) = x(t − τ), pass it through the system to get y_D(t), and check whether y_D(t) equals y(t − τ).

Example: gain of 2.
x(t) → y(t) = 2x(t), so y(t − τ) = 2x(t − τ).
x_D(t) = x(t − τ) → y_D(t) = 2x_D(t) = 2x(t − τ).
These agree, so this is an example of a time-invariant system.

Example: modulator with gain cos(ω₀t).
x(t) → y(t) = x(t) cos(ω₀t), so y(t − τ) = x(t − τ) cos[ω₀(t − τ)].
x_D(t) = x(t − τ) → y_D(t) = x_D(t) cos(ω₀t) = x(t − τ) cos(ω₀t).
These differ, so this is an example of a time-varying system.

System Stability

Bounded-Input Bounded-Output (BIBO) Stability. If |x(t)| < a for all t implies |y(t)| < b for all t, the system is BIBO stable. In other words, as long as the input stays bounded, the output remains bounded as well.

Example: gain G, y(t) = Gx(t). Is this a BIBO stable system? Assume the input is bounded, |x(t)| < a. Then
|y(t)| = |Gx(t)| = |G||x(t)| < |G|a = b
so |y(t)| < b. This means that a gain is a BIBO stable system. (Note that an input such as x(t) = e^{αt} gets very large; it does not satisfy |x(t)| < a for any a, so it is not a bounded input.)

Now suppose we have a system where the output and input are related as
y(t) = ∫_0^t x(τ) dτ
Is this system BIBO stable? The way to show that a system is not BIBO stable is to find one bounded input that gives an unbounded output. Suppose the input is x(t) = u(t), the unit step function, which is bounded (|u(t)| < a is satisfied for any a > 1). Then the output is
y(t) = ∫_0^t 1 dτ = t for t ≥ 0
This is not a bounded output: t grows without limit, and it is not possible to find a constant b that satisfies |y(t)| < b for all t. So this is an example of a system where a bounded input gives an unbounded output.

In summary: to show that a system is BIBO stable, we need to show that every possible bounded input produces a bounded output.
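The running-integral example above can be sketched with a simple discretization (the step size dt and the time horizon are assumed values): the bounded unit-step input eventually drives the output past any candidate bound b.

```python
# Discretized running integral y(t) = ∫_0^t x(τ) dτ with a unit-step input.
# The output grows like t, so no fixed bound b holds for all time.

dt = 0.01
y = 0.0
samples = []
for k in range(100_000):          # simulate out to t = 1000 seconds
    y += 1.0 * dt                 # x(t) = u(t) = 1 for t >= 0
    samples.append(y)

b = 50.0                          # try any candidate bound...
assert max(samples) > b           # ...the output exceeds it
print(round(samples[4999]), round(samples[-1]))  # ≈ 50 and ≈ 1000: still growing
```
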
Conversely, to show that a system is not BIBO stable, we just need to find one bounded input that gives an unbounded output.

Linearity and Time Invariance Example

We consider an RC circuit driven by a source x(t); the output y(t) is the voltage across the capacitor. The current is
i(t) = [x(t) − y(t)]/R   (the voltage across the resistor divided by the resistance)
i(t) = C dy(t)/dt
Equating the two expressions,
[x(t) − y(t)]/R = C dy(t)/dt
dy(t)/dt + (1/RC) y(t) = (1/RC) x(t)

First we look into homogeneity. With a unit-step input x(t) and initial condition y(0) = 0 V, the capacitor charges to 1 V; with input 2x(t) it charges to 2 V, so doubling the input doubles the output. The system satisfies homogeneity as long as the initial condition is y(0) = 0 V. Suppose instead y(0) = 2 V: then doubling the input does not double the output, and homogeneity is not satisfied.

Next, we look into additivity, again assuming y(0) = 0 V. [Figures: the step responses y₁(t) and y₂(t) to inputs x₁(t) and x₂(t), and the response to x₁(t) + x₂(t).] We observe that if x₁(t) → y₁(t) and x₂(t) → y₂(t), then x₁(t) + x₂(t) → y₁(t) + y₂(t). Additivity is satisfied.

Next, we consider a different initial condition, y(0) = 2 V. In this case x₁(t) + x₂(t) ↛ y₁(t) + y₂(t): additivity is not satisfied. This is easy to see at t = 0: y₁(0) = y₂(0) = 2, so y₁(0) + y₂(0) = 4, but the output in response to x₁(t) + x₂(t) at t = 0 is 2.

We have discovered that the circuit is linear if y(0) = 0. It is, however, nonlinear whenever y(0) ≠ 0.

Next, we consider time invariance. With y(0) = 0 V, a delayed input produces a correspondingly delayed output, and the system is time invariant. With y(0) = 2 V, it does not, and the system is time varying. Whether the system is time invariant thus also depends on the initial condition y(0), that is, on whether y(0) = 0 V or not. Note that y(0) is known as a state variable.
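The RC behavior above can be sketched with a forward-Euler discretization of the differential equation. The time constant RC = 1 s, the step size, and the constant (step-like) input levels are all assumed values for illustration:

```python
# Forward-Euler sketch of dy/dt + y/(RC) = x/(RC) for a constant input.

def rc_output(x_const, y0, rc=1.0, dt=0.001, steps=1000):
    """Capacitor voltage after steps*dt seconds, starting from y(0) = y0."""
    y = y0
    for _ in range(steps):
        y += dt * (x_const - y) / rc
    return y

# With zero initial condition the circuit is additive:
y1 = rc_output(1.0, y0=0.0)
y2 = rc_output(2.0, y0=0.0)
y12 = rc_output(3.0, y0=0.0)
assert abs(y12 - (y1 + y2)) < 1e-9

# With y(0) = 2 V additivity fails, just as argued above:
y1 = rc_output(1.0, y0=2.0)
y2 = rc_output(2.0, y0=2.0)
y12 = rc_output(3.0, y0=2.0)
print(round(y1 + y2, 3), round(y12, 3))  # 3.368 2.632: the two disagree
```
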
In order to understand the behavior of the circuit we need to know the state variable (the initial voltage across the capacitor). In this case, it actually determines whether the circuit is linear.

System State: Conceptual Introduction

State = memory. Conceptually, the system state is its memory: the information we need to know in order to figure out what the system will do in the future.

More mathematically speaking, the system state is the information at time t₀ such that, if we have this information and the input x(t) for t ≥ t₀, then we can compute the state for all t > t₀ and the output for all t > t₀.

If the system is described by a differential equation, the initial conditions will essentially be the state variables.

A question that might come up is how we determine which components or elements of the system are likely to be state variables. Again, think of state as memory. In an electrical system (for example, a circuit), the state variables are the capacitor voltages and the inductor currents. Recall from circuit theory that neither a capacitor voltage nor an inductor current can change instantaneously; in other words, they represent the memory elements of the circuit. In a mechanical system, the typical state variables are position (angular position, for a rotating system) and velocity (angular velocity). In a real physical system objects have mass, so these quantities cannot change instantaneously; in other words, they have memory.

Characterization of System Response

Let the input be a unit step function, u(t). If the system is stable, usually one of two things happens:
1) The output becomes more or less constant.
In this case, we define the steady-state error as the difference between the desired (input) value and the actual (output) value.

2) Another possibility is for the output to vary above and below the desired value. In this case, the steady-state error is the largest deviation between the value we want and the value we have.

If the system is unstable, the output just gets larger and larger (unbounded).

Next, we consider the settling time, rise time, and overshoot of the step response.

The settling time tells us how long it takes the system to get close to the steady state: the time from when the input steps up until the error between the actual output and the desired output stays below a pre-set threshold.

Another time that may be of interest is how long it takes the output signal to go from, say, 10% of its final value to, say, 90% of its final value. This is known as the rise time. Rise times are a particularly big issue in digital circuits, because the longer it takes an output signal to go up in response to a unit step input, the slower the circuit runs.

Finally, quite often the output signal reaches the desired value, continues to rise beyond it, and then comes back down. The distance the output goes above the desired value is called the overshoot, because we have overshot the value we wanted to reach.

Linear Time Invariant (LTI) Systems

If we know the impulse response h(t) of a linear time invariant (LTI) system, we can figure out what the output will be for any input. This is done through an operation called convolution:
y(t) = x(t) ∗ h(t)
h(t) is known as the impulse response because it is the response of the system when an impulse δ(t) is applied at the input.

Another way to characterize the system is through a transfer function (frequency response), H(ω).
(Recall that the radial frequency is ω = 2πf.) The transfer function allows us to compute the output as
Y(ω) = X(ω) · H(ω)
where we obtain H(ω) with the help of the Fourier transform: ℱ{h(t)} = H(ω).

The impulse response h(t) tells us everything we need to know about a linear time-invariant system.

Impulse Response and Convolution

Considerations: if the input is δ(t − α), then since the system is time invariant the output is h(t − α):
δ(t − α) → h(t − α)
The signal x(t) can be expressed as
x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ
by using the sifting property of the delta function. The output can then be written as
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = x(t) ∗ h(t)
which is the convolution of x(t) and h(t). Therefore, the output of an LTI system is given by the convolution of the input with the impulse response.

Convolution Properties

Commutative: x(t) ∗ h(t) = h(t) ∗ x(t)
Associative: [x(t) ∗ g(t)] ∗ h(t) = x(t) ∗ [g(t) ∗ h(t)]
Distributive: x(t) ∗ [g(t) + h(t)] = x(t) ∗ g(t) + x(t) ∗ h(t)

What happens if we convolve a function with a shifted delta function?
x(t) ∗ δ(t − T) = ∫_{−∞}^{∞} x(τ) δ(t − T − τ) dτ = x(t − T)
So convolving the signal x(t) with a shifted delta function shifts x(t) by the amount the delta function was shifted.

Convolution Example: Unit Step with Exponential

u(t) ∗ h(t) = h(t) ∗ u(t), since convolution is commutative. First we calculate
h(t) ∗ u(t) = ∫_{−∞}^{∞} h(τ) u(t − τ) dτ
and then we check the other ordering,
u(t) ∗ h(t) = ∫_{−∞}^{∞} u(τ) h(t − τ) dτ
[The worked evaluation, carried out graphically in the original, shows that both orderings give the same result.]

Convolution Example: Two Rectangular Pulses

Step 1. Form h(t − τ): mirror h(τ) about the vertical axis and then shift it to t.
Step 2. Multiply x(τ) and h(t − τ).
Step 3. Integrate.
Repeat steps 1, 2, and 3 for various values of t. [The graphical evaluation is shown in the original figures.] Note that the length of the flat top of the result is 2, which is the difference of the lengths of the two rectangular pulses that were convolved.

Convolution Example: Triangle with Rectangle

x(t) = −t + 1, and h(t) is a rectangle satisfying h(t) = h(−t).
[Figures: x(τ) is the triangle −τ + 1 of height 1 on 0 ≤ τ ≤ 1; h(τ) is a rectangle of height 1 on −1 ≤ τ ≤ 1, so h(t) = h(−t).]

x(t) ∗ h(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ

For t < −1, the flipped and shifted rectangle h(t − τ), which occupies t − 1 ≤ τ ≤ t + 1, lies entirely to the left of the triangle, so there is no overlap and x(t) ∗ h(t) = 0.
For t > 2, the rectangle lies entirely to the right of the triangle, again with no overlap, so x(t) ∗ h(t) = 0.
[The remaining cases were evaluated graphically in the original.]
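This final example can be checked with a direct numeric sketch of the convolution integral. The step size dt is an assumed discretization choice; for 0 ≤ t ≤ 1 the rectangle covers the whole triangle, so the convolution there equals the triangle's area, 1/2.

```python
# Numeric sketch: convolve the triangle x(t) = 1 - t on [0, 1) with the
# rectangle h(t) of height 1 on [-1, 1).

dt = 0.001
xs = [1.0 - k * dt for k in range(int(1 / dt))]   # triangle samples on [0, 1)

def conv_at(t):
    """y(t) = dt * sum_k x[tau_k] * h(t - tau_k), with h supported on [-1, 1)."""
    total = 0.0
    for k, xv in enumerate(xs):
        tau = k * dt
        if -1.0 <= t - tau < 1.0:                 # inside the rectangle
            total += xv
    return total * dt

print(round(conv_at(-1.5), 2), round(conv_at(0.5), 2), round(conv_at(3.0), 2))
# 0.0 0.5 0.0 — no overlap for t < -1 or t > 2; full overlap gives area 1/2
```
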