EEE2047S 2024 Notes Chapter 7 PDF


Tags

Laplace transform, signal processing, mathematics, engineering

Summary

These notes provide a summary of the Laplace transform, including its properties and applications within an engineering context. The document develops the Laplace transform as a generalisation of the Fourier transform, covers properties such as time shift, frequency shift, and time differentiation, and applies the transform to differential equations, circuit analysis, and feedback control. The document is aimed at undergraduate engineering students who are studying signals and systems.

Full Transcript


7 Laplace transform

The Laplace transform is a generalised Fourier transform that can handle a larger class of signals. Instead of a real-valued frequency variable ω indexing the exponential component e^{jωt}, it uses a complex-valued variable s and the generalised exponential e^{st}. If all signals of interest are right-sided (zero for negative t) then a unilateral variant can be defined that is simple to use in practice.

7.1 Development

The Fourier transform of a signal x(t) exists if it is absolutely integrable:

    ∫_{-∞}^{∞} |x(t)| dt < ∞.

While the transform might exist even if this condition isn't satisfied, there is a whole class of signals of interest that do not have a Fourier transform, and we still need to be able to work with them. Consider the signal x(t) = e^{2t} u(t). For positive values of t this signal grows exponentially without bound, and the Fourier integral does not converge. However, the modified signal φ(t) = x(t) e^{-σt} does have a Fourier transform if we choose σ > 2. Thus φ(t) can be expressed in terms of frequency components e^{jωt} for -∞ < ω < ∞.

The bilateral Laplace transform of a signal x(t) is defined to be

    X(s) = ∫_{-∞}^{∞} x(t) e^{-st} dt,

where s is a complex variable. The set of values of s for which this transform exists is called the region of convergence, or ROC.

Suppose the imaginary axis s = jω lies in the ROC. The values of the Laplace transform along this line are

    X(jω) = ∫_{-∞}^{∞} x(t) e^{-jωt} dt,

which are precisely the values of the Fourier transform. Writing s = σ + jω this integral can be expanded as

    X(σ + jω) = ∫_{-∞}^{∞} x(t) e^{-σt} e^{-jωt} dt,

which is the Fourier transform of x(t) e^{-σt}. Since F{x(t) e^{-σt}} = X(σ + jω), the Fourier reconstruction formula gives

    x(t) e^{-σt} = (1/2π) ∫_{-∞}^{∞} X(σ + jω) e^{jωt} dω.

Multiplying both sides by e^{σt} yields

    x(t) = (1/2π) ∫_{-∞}^{∞} X(σ + jω) e^{(σ+jω)t} dω.

With the change of variables s = σ + jω we have ds = j dω, and the inverse transform is

    x(t) = (1/2πj) ∫_{σ-j∞}^{σ+j∞} X(s) e^{st} ds.

This is a line integral in the complex plane. In practice any value of σ can be used that corresponds to a line in the region of convergence. This integration in the complex plane requires knowledge of the theory of functions of a complex variable, but in practice we can avoid explicit integration by using tables of Laplace transform pairs.

The Fourier synthesis formula reconstructs a signal using a set of scaled complex exponentials of the form e^{jωt} for various values of ω. For a real-valued signal the positive and negative frequency components can be combined into sinusoids. The first plot below shows two such reconstruction basis functions for the Fourier transform. The basis functions for the Laplace transform include an exponential envelope that either increases or decreases with time, and two of these functions are shown in the second plot. Since these functions grow with time they can represent signals that grow with time. This is the reason why the Laplace transform can represent a larger class of signals than the Fourier transform.

[Figure: sinusoidal Fourier basis functions (top) and exponentially-weighted Laplace basis functions (bottom), plotted against t.]

The remainder of this section demonstrates how some bilateral Laplace transforms can be calculated. To continue we will need the following result. With z = α + jβ we have

    lim_{t→∞} e^{-zt} = lim_{t→∞} e^{-(α+jβ)t} = lim_{t→∞} e^{-αt} e^{jβt}.

The quantity e^{-αt} e^{jβt} is just a complex number expressed in polar form, with magnitude e^{-αt}. For α > 0 this magnitude tends to zero as t → ∞, so in general

    lim_{t→∞} e^{-zt} = 0 if Re(z) > 0, and |e^{-zt}| → ∞ if Re(z) < 0.
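As a quick check of the motivating example above, the following sketch uses sympy (not part of the original notes) to compute the transform of x(t) = e^{2t} u(t). Sympy's laplace_transform is the unilateral version, which coincides with the bilateral transform here because the signal is zero for negative t; the second return value is the abscissa of convergence, giving the ROC Re(s) > 2.

    import sympy as sp

    t, s = sp.symbols('t s')

    # x(t) = e^{2t} u(t): no Fourier transform, but the Laplace integral
    # converges in the half-plane Re(s) > 2.
    X, abscissa, cond = sp.laplace_transform(sp.exp(2*t), t, s)
    print(X)         # 1/(s - 2)
    print(abscissa)  # 2, i.e. the ROC is Re(s) > 2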
The Laplace transform of the signal x(t) = e^{-at} u(t) for a possibly complex value a can now be found:

    L{x(t)} = X(s) = ∫_{-∞}^{∞} e^{-at} u(t) e^{-st} dt = ∫_{0}^{∞} e^{-(s+a)t} dt = [-1/(s+a)] e^{-(s+a)t} |_{t=0}^{∞}.

Using the above limit result the transform is therefore

    X(s) = -1/(s+a) (0 - 1) = 1/(s+a)  with ROC Re(s+a) > 0,

leading to the Laplace transform pair

    e^{-at} u(t) ⇐⇒ 1/(s+a),  Re(s) > -Re(a).

Note that this holds for all a. However, if Re(a) is negative then the transform does not converge on the line s = jω, and the Fourier transform does not exist. The region of convergence is a critical component of the Laplace transform. Using a derivation analogous to the one above we determine that the following is also a valid Laplace transform pair:

    -e^{-at} u(-t) ⇐⇒ 1/(s+a),  Re(s) < -Re(a).

The signals e^{-at} u(t) and -e^{-at} u(-t) therefore have the same Laplace transform; only the ROCs are different. Thus there is no one-to-one correspondence between x(t) and X(s) unless the ROC is specified, which makes the transform tricky to use. This ambiguity vanishes if we restrict all our signals to be causal or right-sided. For example, the Laplace transform X(s) = 1/(s+a) then has only one inverse, namely e^{-at} u(t), and we are not required to know the ROC. This leads to the unilateral Laplace transform.

7.2 Unilateral Laplace transform

When people talk about the Laplace transform they usually mean the unilateral variant

    X(s) = ∫_{0^-}^{∞} x(t) e^{-st} dt.

This is essentially just the bilateral Laplace transform applied to a signal that is known to be zero for negative time, also called a right-sided signal. The lower integration limit is taken to be 0^-, infinitesimally before t = 0. This allows for impulses at the origin, and permits a natural specification of initial conditions when using the transform to solve differential equations. The unilateral Laplace transform cannot handle signals that are not right-sided, or systems that are not causal. There is essentially no difference between the unilateral and the bilateral Laplace transform, except that the former deals with the subclass of signals that start at t = 0. The inverse transform therefore remains unchanged.

By way of example consider firstly the transform of δ(t):

    L{δ(t)} = ∫_{0^-}^{∞} δ(t) e^{-st} dt = ∫_{0^-}^{∞} δ(t) e^{-s(0)} dt = e^{-s(0)} ∫_{0^-}^{∞} δ(t) dt = 1.

Thus the following Laplace transform pair is obtained:

    δ(t) ⇐⇒ 1  for all s.

Similarly

    L{cos(ω_0 t) u(t)} = L{(1/2) e^{jω_0 t} u(t) + (1/2) e^{-jω_0 t} u(t)} = (1/2)/(s - jω_0) + (1/2)/(s + jω_0)

for Re(s) > -Re(-jω_0) and Re(s) > -Re(jω_0), or just Re(s) > 0. Multiplying out yields the Laplace transform pair

    cos(ω_0 t) u(t) ⇐⇒ s/(s² + ω_0²)  for Re(s) > 0.
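As a quick symbolic check of the cosine pair just derived, here is a minimal sympy sketch (not from the notes); ω_0 is declared positive purely to keep the output simple.

    import sympy as sp

    t, s = sp.symbols('t s')
    w0 = sp.symbols('omega_0', positive=True)

    X, abscissa, cond = sp.laplace_transform(sp.cos(w0*t), t, s)
    print(sp.simplify(X))   # s/(s**2 + omega_0**2)
    print(abscissa)         # 0, i.e. the ROC is Re(s) > 0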
Since for unilateral Laplace transforms any F(s) has a unique inverse, we generally ignore any reference to the ROC. Note however that finding a Fourier transform by evaluating the Laplace transform at s = jω is only valid if the imaginary axis lies in the ROC.

Finding the inverse Laplace transform using the synthesis formula requires integration in the complex plane, which is a subject in its own right. In practice for the signals of interest we can simply find inverse transforms using tables. This is because most transforms F(s) that we care about are rational functions of the form F(s) = P(s)/Q(s), where P(s) and Q(s) are polynomials in s. The values of s for which F(s) = 0 are called the zeros of F(s), and the values where F(s) → ∞ are the poles of F(s). For rational functions the zeros satisfy P(s) = 0 and the poles satisfy Q(s) = 0.

The table below contains some common unilateral Laplace transform pairs:

    x(t) = (1/2πj) ∫_{c-j∞}^{c+j∞} X(s) e^{st} ds        X(s) = ∫_{0^-}^{∞} x(t) e^{-st} dt

    δ(t)                            1
    u(t)                            1/s
    t u(t)                          1/s²
    t^n u(t)                        n!/s^{n+1}
    e^{λt} u(t)                     1/(s-λ)
    t e^{λt} u(t)                   1/(s-λ)²
    t^n e^{λt} u(t)                 n!/(s-λ)^{n+1}
    cos(bt) u(t)                    s/(s²+b²)
    sin(bt) u(t)                    b/(s²+b²)
    e^{-at} cos(bt) u(t)            (s+a)/((s+a)²+b²)
    e^{-at} sin(bt) u(t)            b/((s+a)²+b²)
    r e^{-at} cos(bt+θ) u(t)        [(r cos θ)s + (ar cos θ - br sin θ)] / (s² + 2as + (a²+b²))
    r e^{-at} cos(bt+θ) u(t)        (0.5 r e^{jθ})/(s+a-jb) + (0.5 r e^{-jθ})/(s+a+jb)
    r e^{-at} cos(bt+θ) u(t)        (As+B)/(s² + 2as + c),  with
                                    r = √((A²c + B² - 2ABa)/(c - a²)),  b = √(c - a²),  θ = tan^{-1}((Aa - B)/(A√(c - a²)))

Suppose for example we wanted to find the right-sided inverse transform of

    F(s) = (8s + 10)/((s+1)(s+2)³).

This has a zero at s = -10/8 = -5/4, a pole at s = -1, and a pole of order three (or three poles) at s = -2. Using partial fractions we can write

    (8s + 10)/((s+1)(s+2)³) = 2/(s+1) + 6/(s+2)³ - 2/(s+2)² - 2/(s+2).

From tables the Laplace transform pair

    t^n e^{λt} u(t) ⇐⇒ n!/(s-λ)^{n+1}

can then be used to obtain the inverse transform

    f(t) = [2e^{-t} + (3t² - 2t - 2)e^{-2t}] u(t).

Exercise: Convince yourself that the form of the partial fraction expansion above is appropriate, derive the values for the coefficients, and verify the inverse transform.
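The partial fraction expansion and the inverse transform in the example above can also be checked mechanically. A possible sympy sketch (not part of the notes):

    import sympy as sp

    s, t = sp.symbols('s t')

    F = (8*s + 10) / ((s + 1)*(s + 2)**3)
    Fpf = sp.apart(F, s)
    print(Fpf)   # should give 2/(s+1) + 6/(s+2)**3 - 2/(s+2)**2 - 2/(s+2)

    f = sp.inverse_laplace_transform(Fpf, s, t)
    print(sp.simplify(f))
    # should reduce to (2*exp(-t) + (3*t**2 - 2*t - 2)*exp(-2*t))*Heaviside(t)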
7.3 Laplace transform properties

Since the bilateral Laplace transform is a generalised Fourier transform we would expect many of the properties to be similar, and this is indeed the case. However, the properties of the unilateral Laplace transform are slightly different and require explanation. A summary of the Laplace transform properties appears below. In all cases we assume that x(t) ⇐⇒ X(s) and v(t) ⇐⇒ V(s) are valid transform pairs.

    Property                      Transform pair/property
    Linearity                     a x(t) + b v(t) ⇐⇒ a X(s) + b V(s)
    Time shift                    x(t-a) u(t-a) ⇐⇒ e^{-as} X(s),  a ≥ 0
    Time scaling                  x(at) ⇐⇒ (1/a) X(s/a),  a > 0
    Frequency differentiation     t^n x(t) ⇐⇒ (-1)^n X^{(n)}(s)
    Frequency shift               e^{at} x(t) ⇐⇒ X(s-a)
    Differentiation               x'(t) ⇐⇒ s X(s) - x(0^-)
                                  x''(t) ⇐⇒ s² X(s) - s x(0^-) - x'(0^-)
                                  x^{(n)}(t) ⇐⇒ s^n X(s) - s^{n-1} x(0^-) - ... - x^{(n-1)}(0^-)
    Integration                   ∫_{0^-}^{t} x(λ) dλ ⇐⇒ (1/s) X(s)
                                  ∫_{-∞}^{t} x(λ) dλ ⇐⇒ (1/s) X(s) + (1/s) ∫_{-∞}^{0^-} x(λ) dλ
    Time convolution              x(t) * v(t) ⇐⇒ X(s) V(s)
    Frequency convolution         x(t) v(t) ⇐⇒ (1/2πj) [X(s) * V(s)]

The properties are explained in more detail in the remainder of this section.

7.3.1 Time shift

Suppose we have a valid unilateral Laplace transform pair x(t)u(t) ⇐⇒ X(s). The transform of the signal shifted to the right is then

    L{x(t-t_0) u(t-t_0)} = ∫_{0^-}^{∞} x(t-t_0) u(t-t_0) e^{-st} dt = ∫_{(-t_0)^-}^{∞} x(p) u(p) e^{-s(p+t_0)} dp,

where the change of variables p = t - t_0 has been made. If t_0 < 0 then the shifted signal is no longer right-sided and the unilateral Laplace transform is inappropriate, so we require t_0 > 0. Under these conditions, since u(p) = 0 for p < 0 the lower integration limit can be changed from (-t_0)^- to 0^-, yielding

    L{x(t-t_0) u(t-t_0)} = ∫_{0^-}^{∞} x(p) u(p) e^{-s(p+t_0)} dp = e^{-st_0} ∫_{0^-}^{∞} x(p) u(p) e^{-sp} dp = e^{-st_0} X(s).

The resulting property is as follows.

Time shift: if x(t) ⇐⇒ X(s) then x(t-t_0) u(t-t_0) ⇐⇒ e^{-st_0} X(s) for t_0 > 0.

7.3.2 Scaling

If x(t) ⇐⇒ X(s) then for a > 0 we have

    L{x(at)} = ∫_{0^-}^{∞} x(at) e^{-st} dt = (1/a) ∫_{0^-}^{∞} x(p) e^{-(s/a)p} dp = (1/a) X(s/a),

with p = at. The property is summarised as follows.

Scaling: if x(t) ⇐⇒ X(s) then x(at) ⇐⇒ (1/a) X(s/a) for a > 0.

7.3.3 Frequency shift

If x(t) ⇐⇒ X(s) then

    L{x(t) e^{s_0 t}} = ∫_{0^-}^{∞} x(t) e^{s_0 t} e^{-st} dt = ∫_{0^-}^{∞} x(t) e^{-(s-s_0)t} dt = X(s - s_0).

The property is summarised as follows.

Frequency shift: if x(t) ⇐⇒ X(s) then x(t) e^{s_0 t} ⇐⇒ X(s - s_0).

Note the symmetry or duality between this and the time shift property.

7.3.4 Time differentiation

Consider the Laplace transform of the derivative dx(t)/dt of a signal x(t) for which x(t) ⇐⇒ X(s):

    L{dx/dt} = ∫_{0^-}^{∞} (dx/dt) e^{-st} dt.

Using integration by parts gives

    L{dx/dt} = [x(t) e^{-st}]_{t=0^-}^{∞} + s ∫_{0^-}^{∞} x(t) e^{-st} dt.

For the Laplace transform to converge we must have x(t) e^{-st} → 0 as t → ∞ for values of s in the ROC of X(s), in which case

    L{dx/dt} = -x(0^-) + s X(s).

This property can be repeatedly applied to give Laplace transforms of higher-order derivatives. A summary of the property follows. The quantity ẋ(t) indicates the first derivative of x(t) with respect to time, and x^{(n)}(t) indicates the nth derivative.

Time differentiation: if x(t) ⇐⇒ X(s) then

    dx/dt ⇐⇒ s X(s) - x(0^-)  and  d²x/dt² ⇐⇒ s² X(s) - s x(0^-) - ẋ(0^-).

In general

    d^n x/dt^n ⇐⇒ s^n X(s) - s^{n-1} x(0^-) - s^{n-2} ẋ(0^-) - ... - x^{(n-1)}(0^-).

7.3.5 Time integration

Suppose f(t) ⇐⇒ F(s) and g(t) ⇐⇒ G(s). If we define

    g(t) = ∫_{0^-}^{t} f(τ) dτ

then f(t) = dg(t)/dt and g(0^-) = 0. Now

    F(s) = L{dg(t)/dt} = s G(s) - g(0^-) = s G(s),

so G(s) = F(s)/s. Thus

    ∫_{0^-}^{t} f(τ) dτ ⇐⇒ F(s)/s.

The property is as follows.

Time integration: if x(t) ⇐⇒ X(s) then

    ∫_{0^-}^{t} x(τ) dτ ⇐⇒ X(s)/s  and  ∫_{-∞}^{t} x(τ) dτ ⇐⇒ X(s)/s + (1/s) ∫_{-∞}^{0^-} x(τ) dτ.

7.3.6 Convolution

Suppose f_1(t) ⇐⇒ F_1(s) and f_2(t) ⇐⇒ F_2(s). Then

    L{f_1(t) * f_2(t)} = L{∫_{0^-}^{∞} f_1(λ) f_2(t-λ) dλ} = ∫_{0^-}^{∞} [∫_{0^-}^{∞} f_1(λ) f_2(t-λ) dλ] e^{-st} dt
                       = ∫_{0^-}^{∞} f_1(λ) [∫_{0^-}^{∞} f_2(t-λ) e^{-st} dt] dλ = ∫_{0^-}^{∞} f_1(λ) [∫_{0^-}^{∞} f_2(u) e^{-s(u+λ)} du] dλ
                       = ∫_{0^-}^{∞} f_1(λ) e^{-sλ} [∫_{0^-}^{∞} f_2(u) e^{-su} du] dλ = [∫_{0^-}^{∞} f_1(λ) e^{-sλ} dλ] [∫_{0^-}^{∞} f_2(u) e^{-su} du]
                       = F_1(s) F_2(s).

Thus convolution in time is equivalent to multiplication of Laplace transforms. Additionally it can be shown that multiplication in time corresponds to convolution in the Laplace domain. These properties can be summarised as follows.

Convolution: if x(t) ⇐⇒ X(s) and v(t) ⇐⇒ V(s) then

    x(t) * v(t) ⇐⇒ X(s) V(s)  and  x(t) v(t) ⇐⇒ (1/2πj) [X(s) * V(s)].

7.3.7 Initial and final values

One often needs to know the value of f(t) as t → 0 and t → ∞, the initial and final values. If you have the Laplace transform then it is not necessary to obtain the inverse transform to find these values.

Since df/dt ⇐⇒ s F(s) - f(0^-) we have

    s F(s) - f(0^-) = ∫_{0^-}^{∞} (df/dt) e^{-st} dt = ∫_{0^-}^{0^+} (df/dt) e^{-st} dt + ∫_{0^+}^{∞} (df/dt) e^{-st} dt
                    = ∫_{0^-}^{0^+} (df/dt) e^{0} dt + ∫_{0^+}^{∞} (df/dt) e^{-st} dt = f(0^+) - f(0^-) + ∫_{0^+}^{∞} (df/dt) e^{-st} dt.

Thus

    lim_{s→∞} s F(s) = f(0^+) + lim_{s→∞} ∫_{0^+}^{∞} (df/dt) e^{-st} dt = f(0^+) + ∫_{0^+}^{∞} (df/dt) [lim_{s→∞} e^{-st}] dt = f(0^+).

The following is a statement of the property.

Initial value: if f(t) and its derivative df/dt are both Laplace transformable then

    f(0^+) = lim_{s→∞} s F(s)  if the limit exists.

Also, since

    lim_{s→0} [s F(s) - f(0^-)] = lim_{s→0} ∫_{0^-}^{∞} (df/dt) e^{-st} dt = ∫_{0^-}^{∞} (df/dt) dt = f(t)|_{0^-}^{∞} = lim_{t→∞} f(t) - f(0^-),

we have the following property.

Final value: if f(t) and its derivative df/dt are both Laplace transformable then

    lim_{t→∞} f(t) = lim_{s→0} s F(s)  as long as all poles of sF(s) are in the left half plane.
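As a small illustration of the initial and final value properties, consider the arbitrary transform F(s) = (s + 2)/(s(s + 5)), which is not taken from the notes but has its poles at s = 0 and s = -5 so the final value theorem applies. A sympy sketch:

    import sympy as sp

    s = sp.symbols('s')

    # f(t) = (2/5 + 3/5*exp(-5*t))*u(t), so f(0+) = 1 and f(t) -> 2/5.
    F = (s + 2) / (s*(s + 5))
    print(sp.limit(s*F, s, sp.oo))   # initial value: 1
    print(sp.limit(s*F, s, 0))       # final value: 2/5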
7.4 Solutions of differential equations

The time-differentiation property d^k y/dt^k ⇐⇒ s^k Y(s) (together with its initial-condition terms) can be used to solve linear constant-coefficient differential equations with arbitrary input conditions. Essentially, the transform turns the differential equation into an algebraic equation that is readily solved for Y(s), and the inverse transform is then the solution y(t).

Suppose we want to find the solution to the differential equation

    d²y(t)/dt² - 4y(t) = x(t)

for x(t) = sin(2t) u(t) and under the initial conditions y(0^-) = 1 and y'(0^-) = -2. This is called an initial value problem, or IVP. Using the derivative property we have

    d²y(t)/dt² ⇐⇒ s² Y(s) - s y(0^-) - y'(0^-).

The differential equation in the transform domain therefore becomes

    s² Y(s) - s y(0^-) - y'(0^-) - 4 Y(s) = X(s),

or

    Y(s) = X(s)/(s² - 4) + (s y(0^-) + y'(0^-))/(s² - 4).

For the particular problem under consideration we can find the transform of x(t) as

    sin(2t) u(t) ⇐⇒ 2/(s² + 4),

and substitute for X(s) and the initial values. The required solution in the Laplace transform domain is then found to be

    Y(s) = 2/((s² + 4)(s² - 4)) + (s - 2)/(s² - 4) = 2/((s² + 4)(s + 2)(s - 2)) + 1/(s + 2).

The time-domain solution is obtained by finding the inverse transform. Using partial fractions we can write

    2/((s² + 4)(s + 2)(s - 2)) = (-1/4)/(s² + 4) + (-1/16)/(s + 2) + (1/16)/(s - 2),

so

    y(t) = [-(1/8) sin(2t) + (15/16) e^{-2t} + (1/16) e^{2t}] u(t).

The expression obtained earlier, namely

    Y(s) = X(s)/(s² - 4) + (s y(0^-) + y'(0^-))/(s² - 4),

is interesting. We can distinguish between two cases. When x(t) (or X(s)) is zero the solution satisfies

    Y(s) = (s y(0^-) + y'(0^-))/(s² - 4).

This is called the zero-input component, and its solution is called the zero-input response y_zi(t). This response is independent of the forcing function or input x(t), and essentially characterises how the system responds to the initial conditions. On the other hand, when the initial conditions y(0^-) and y'(0^-) are zero the resulting solution satisfies

    Y(s) = X(s)/(s² - 4)

for the given X(s). This is called the zero-state component and its solution is the zero-state response y_zs(t). Saying that the initial value and derivatives are all zero prior to the application of the input is exactly the initial rest condition that was discussed when dealing with the Fourier transform. Since we can write this last equation as

    Y(s) = [1/(s² - 4)] X(s) = H(s) X(s)

for H(s) = 1/(s² - 4), we see that this component of the solution satisfies y_zs(t) = h(t) * x(t), where h(t) is the impulse response of the system under initial rest conditions. The total solution to the differential equation is the sum of these two responses: y(t) = y_zs(t) + y_zi(t). The decomposition of the solution into these two components is similar to, but not directly related to, the decomposition into homogeneous and particular solutions you may have encountered before.

Applying this decomposition directly to the problem described before, the two components of the solution are

    y_zs(t) = [-(1/8) sin(2t) - (1/16) e^{-2t} + (1/16) e^{2t}] u(t)  and  y_zi(t) = e^{-2t} u(t).

[Figure: the zero-state response y_zs(t), the zero-input response y_zi(t), and the total response y(t) plotted against t.]

We can note that the zero-state solution (and therefore the total solution) grows without bound, even though the input is bounded. Any system corresponding to this differential equation is therefore unstable, but the Laplace transform still handles it correctly. We could also evaluate the stability of the system by noting that it has two poles, one at s = -2 and one at s = +2. All the poles of a stable system lie in the left half-plane, so the presence of this latter pole indicates instability.
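The worked example above is easy to check symbolically. The sketch below (sympy, not part of the notes) builds Y(s) from the transformed equation with the given initial conditions and then inverts it:

    import sympy as sp

    s, t = sp.symbols('s t')

    X = 2 / (s**2 + 4)      # L{sin(2t) u(t)}
    y0, dy0 = 1, -2         # y(0-) and y'(0-)

    # y'' - 4y = x  transforms to  (s**2 - 4) Y(s) = X(s) + s*y0 + dy0
    Y = (X + s*y0 + dy0) / (s**2 - 4)
    y = sp.inverse_laplace_transform(sp.apart(sp.cancel(Y), s), s, t)
    print(sp.simplify(y))
    # should reduce to (-sin(2*t)/8 + 15*exp(-2*t)/16 + exp(2*t)/16)*Heaviside(t)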
7.5 Circuit response

The Laplace transform can be used for circuit analysis under general initial conditions. Consider the circuit below.

[Figure: a 5 V source V_a in series with a resistor and a switch, connected to a loop containing R, L and C; the loop current is i(t), and v_R(t), v_L(t) and v_C(t) are the element voltages.]

Here L = 1 H, R = 800 Ω, and C = 1/(4.1 × 10^5) F. Suppose that the switch has been open for a very long time and is then closed at t = 0, and we want to determine a time-domain expression for the subsequent current i(t). Prior to closing the switch there is no current through the capacitor, so i(0^-) = 0 A. Also, since the current is zero and unchanging there is no voltage drop across either of the resistors or the inductor, so v_C(0^-) = 5 V. These values specify the initial conditions. When the switch is closed the voltage source and the resistor on the left have no further effect, and the current that we calculate will show how the capacitor discharges through the resistor-inductor combination.

Kirchhoff's voltage law in the time domain requires that

    v_R(t) + v_L(t) + v_C(t) = 0.

To continue we need to consider the voltage-current relationships for each device. For the resistor and the inductor we have

    v_R(t) = R i(t)  and  v_L(t) = L di(t)/dt.

The capacitor is a little more complicated. It satisfies

    i(t) = C dv_C(t)/dt,  or  dv_C(t)/dt = v̇_C(t) = (1/C) i(t).

In general if we know v_C(0^-) then for positive t we can express v_C(t) using the fundamental theorem of calculus:

    v_C(t) = v_C(0^-) + ∫_{0^-}^{t} v̇_C(τ) dτ = v_C(0^-) + (1/C) ∫_{0^-}^{t} i(τ) dτ.

Substituting into the voltage constraint relationship gives the integro-differential equation

    R i(t) + L di(t)/dt + v_C(0^-) + (1/C) ∫_{0^-}^{t} i(τ) dτ = 0.

To solve this equation we apply the Laplace transform, with i(t) ⇐⇒ I(s). The only complicated term involves the inductor, and introduces an initial condition on the current:

    L{L di(t)/dt} = L[s I(s) - i(0^-)].

Noting that v_C(0^-) is constant, the full transformation is given by

    R I(s) + L[s I(s) - i(0^-)] + v_C(0^-)/s + I(s)/(sC) = 0,

which can be solved for I(s):

    I(s) = (s i(0^-) - v_C(0^-)/L)/(s² + (R/L)s + 1/(LC)).

Substituting the values of the components and the initial conditions and completing the square gives

    I(s) = -5/(s² + 800s + 410000) = -5/((s + 400)² + 500²).

The inverse transform can be obtained by applying the frequency shift property e^{at} x(t) ⇐⇒ X(s-a) to the Laplace transform pair for the signal sin(bt) u(t), yielding

    e^{-at} sin(bt) u(t) ⇐⇒ b/((s+a)² + b²).

Thus the desired result is

    i(t) = -0.01 e^{-400t} sin(500t) u(t).

For fixed L, if R is increased to a sufficiently large value or if C is decreased then the solution will not be oscillatory: the damping effect of the resistor dominates the response. In particular, from the general properties of second-order systems presented later, the solution in this case (with L = 1 H) only oscillates if R < 2/√C, or C < 4/R². If for example we change the capacitance to C = (16/3)/R² ≈ 8.333 × 10^{-6} F then the current is

    I(s) = -5/(s² + Rs + 3R²/16) = -5/(s² + 800s + 120000) = -5/((s + 600)(s + 200)).

Using partial fractions this becomes

    I(s) = (1/80)/(s + 600) - (1/80)/(s + 200),

and the time-domain response is seen to be

    i(t) = [(1/80) e^{-600t} - (1/80) e^{-200t}] u(t).

The solutions presented above for the two different cases have very different characteristics. In the first case the resistance was low enough that the energy could "slosh around" between the capacitor and the inductor before decaying to the steady state, hence the sinusoidal solution, decaying as energy is dissipated in the resistor. In the second case the resistance is high enough that the energy in the capacitor is dissipated before the current reverses direction. No oscillation occurs and the response is two simple real exponential functions.
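The first of these results can also be checked numerically: treating I(s) as a transfer function, its impulse response is exactly the time signal i(t). A scipy sketch (not part of the notes):

    import numpy as np
    from scipy import signal

    # I(s) = -5/(s^2 + 800 s + 410000), interpreted as a transfer function.
    sys = signal.TransferFunction([-5.0], [1.0, 800.0, 410000.0])
    t = np.linspace(0, 0.02, 2000)
    t, i = signal.impulse(sys, T=t)

    # Compare against the analytical result -0.01 e^{-400t} sin(500t).
    i_exact = -0.01 * np.exp(-400*t) * np.sin(500*t)
    print(np.max(np.abs(i - i_exact)))   # should be very small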
The differences in behaviour can be related to the locations of the system poles. In the first instance the poles must satisfy (s + 400)² + 500² = 0, which has the two solutions s = -400 ± j500. These poles are complex conjugates of one another; for a real-valued signal poles always occur in conjugate pairs. The time-domain solution is oscillatory because the poles are complex. We observe that the real part of this pole pair leads to the exponential factor e^{-400t}, and the imaginary part leads to the oscillation component sin(500t). As the two poles move towards the imaginary axis the rate of decay will decrease and the duration of the response in the time domain will increase. As the poles move away from the real axis the frequency of the oscillation increases. In contrast, in the second case the poles are at the real values s = -600 and s = -200. These have zero imaginary component and there is no oscillation.

In the example given above the capacitor had an initial charge, and the circuit differential equation had to be reformulated accordingly. In practice we usually formulate our circuit equations directly in the Laplace transform domain with complex impedances. One can deal with the initial conditions by inserting current or voltage generators at appropriate points in the circuit, and then assuming that the active elements are initially at rest. Take for example the capacitor with governing equation

    i_C(t) = C dv_C(t)/dt,

which in the transform domain becomes

    I_C(s) = C[s V_C(s) - v_C(0^-)].

This can be written in two different ways, leading to two different formulations:

    V_C(s) = (1/(Cs)) I_C(s) + v_C(0^-)/s  or  V_C(s) = (1/(Cs)) [I_C(s) + C v_C(0^-)].

[Figure: the capacitor with initial voltage v(0) in the time domain (left); its s-domain equivalent as an impedance 1/(Cs) in series with a voltage source v(0)/s (centre); and as an impedance 1/(Cs) driven by an additional current source Cv(0) (right).]

On the left is the circuit element in the time domain, with an initial charge on the capacitor. According to the first equation above, the voltage-current relationship can be expressed as an initially-discharged capacitor in series with a voltage source with potential v_C(0). This is demonstrated in the Laplace domain by the equivalent circuit in the centre of the figure above. Alternatively, using the second equation, the equivalent circuit looks like an initially-discharged capacitor being driven by a current I_C(s) + C v_C(0^-), depicted on the right-hand side of the figure. In practice, any time we have a nonzero initial charge on the capacitor we can replace it with an equivalent initial-rest configuration by adding either a constant voltage or current source, which can be thought of as initial condition generators.

Exercise: The same approach can be used for inductors, which satisfy the time-domain relation v_L(t) = L di_L(t)/dt, under the condition where the initial current is nonzero. Find the corresponding equivalent circuits using both voltage and current sources.
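The series voltage-source model above reproduces the earlier circuit result directly. A sympy sketch of the s-domain KVL (not part of the notes; the symbol I_s stands for I(s)):

    import sympy as sp

    s, Is = sp.symbols('s I_s')
    R, L, C, v0 = 800, 1, sp.Rational(1, 410000), 5   # component values and v_C(0-)

    # KVL with the charged capacitor replaced by an impedance 1/(Cs) in series
    # with a source v_C(0-)/s; i(0-) = 0 so the inductor needs no extra source.
    eq = sp.Eq(Is*(R + L*s + 1/(C*s)) + v0/s, 0)
    print(sp.simplify(sp.solve(eq, Is)[0]))   # expected: -5/(s**2 + 800*s + 410000)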
7.6 Feedback and control

We often want to drive a physical system so that it reaches a desired state. For example, the cruise control on a car must adjust the engine power until a desired speed is achieved. This cannot be done by simply modelling the relationship between engine power and car speed, since this relationship will change, for example, when we go up a hill or when there are more people in the car. A better approach is to monitor the current speed of the car, and develop a system that will constantly determine the engine power needed to get the car to this speed and then keep it there. The input to the system is therefore constantly adjusted to obtain the desired response. The difference between the actual speed and the desired speed is used as feedback, and when used appropriately this leads to a closed-loop system.

The remainder of this section uses an example of position control of a rotating system. This could for example represent the problem of rotating a directional antenna so that it points in a desired direction, or any other problem where a servo-motor is required. We assume that the object to be rotated has significant rotational inertia J, and that there is a viscous damping component B that dissipates the applied torque. The figure below depicts the system.

[Figure: a motor with armature current f(t) rotating a load with moment of inertia J and viscous damping B; the shaft angle is θ(t).]

We need a nominal physical model of how the system behaves. The angle θ(t) of the rotor is considered to be the output, and can be varied by changing the current f(t) in the motor armature. Details are as follows. The motor current f(t) generates a torque τ(t) = K_T f(t), where K_T is a constant for the motor. The equivalent of Newton's second law for rotating systems says that applied torque and rotational acceleration are related by τ(t) = J θ̈(t). The damping component is proportional to angular velocity and reduces the torque by B θ̇(t). The input f(t) and the output θ(t) therefore obey the differential equation

    J θ̈(t) = K_T f(t) - B θ̇(t),  or  J θ̈(t) + B θ̇(t) = K_T f(t).

The Laplace transform of this differential equation is

    J s² Θ(s) + B s Θ(s) = K_T F(s),  or  s² Θ(s) + (B/J) s Θ(s) = (K_T/J) F(s).

For clarity we assume now that B/J = 8 and K_T/J = 1. Since s² Θ(s) + 8s Θ(s) = F(s), the transfer function of the system is found to be

    G(s) = Θ(s)/F(s) = 1/(s(s + 8)).

Once we have this mathematical representation of the system we no longer need to be concerned about the actual physical details. This system transfer function could for example also be obtained from a mass-spring-damper system with linear forces being applied, or from an RLC circuit with an applied voltage. In practice one could get an expert to generate a transfer function model for some physical system, and as an engineer you could then just use it. Alternatively you could estimate a system model by driving it with a known input and observing the output, a process called system identification.
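As a quick check that the stated transfer function follows from the transformed equation of motion, here is a sympy sketch (not part of the notes), using the assumed ratios B/J = 8 and K_T/J = 1:

    import sympy as sp

    s = sp.symbols('s')
    Theta, F = sp.symbols('Theta F')

    # s^2 Theta + 8 s Theta = F  (zero initial conditions)
    eq = sp.Eq(s**2*Theta + 8*s*Theta, F)
    G = sp.solve(eq, Theta)[0] / F
    print(sp.factor(G))   # expected: 1/(s*(s + 8))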
Our task is to control the motor current so that the angle θ (the angle of the antenna in the example above) matches some desired value θ_i. Since the desired angle will usually vary with time, in general we want to control the current so that the resulting angle θ(t) matches some desired θ_i(t). The target signal θ_i(t) is called the setpoint.

We can envisage a simple strategy for control in this case. We measure the actual angle θ(t) at any instant, using for example a rotary encoder. If the current observed angle is smaller than the target setpoint, then we drive the current in the motor in the direction that increases the angle. If the observed angle is larger than the setpoint then we drive the current in the reverse direction. We could also drive the motor with a larger current if the difference is large, to try to decrease the time it takes to reach the setpoint and improve the response rate of the system. This simple strategy leads to a proportional controller, represented in the block diagram below.

[Figure: block diagram of a unity-feedback loop in which the error e(t) = θ_i(t) - θ(t) is multiplied by the gain K to produce f(t), which drives the plant G(s) to give the output θ(t).]

The difference between the setpoint and the measured angle is the error signal e(t) = θ_i(t) - θ(t). The controller in this instance just multiplies this error signal by a constant value K to produce the motor current control signal f(t):

    f(t) = K e(t) = K(θ_i(t) - θ(t)).

In the transform domain this is F(s) = K(Θ_i(s) - Θ(s)). Since Θ(s) = G(s) F(s) we have

    Θ(s) = K G(s) (Θ_i(s) - Θ(s)).

The closed-loop transfer function that incorporates the feedback relates the output Θ(s) to the input Θ_i(s). From this last equation we have

    Θ(s)/Θ_i(s) = K G(s)/(1 + K G(s)) = [K/(s(s+8))] / [1 + K/(s(s+8))] = K/(s² + 8s + K).

The value K determines how much the error signal e(t) gets amplified to produce the motor armature current f(t), and it has an effect on the dynamics of the closed-loop system. To visualise the resulting behaviour one often considers the step response of the overall system, that is, the output θ(t) when the input is θ_i(t) = u(t). Since L{u(t)} = 1/s this output will be

    Θ(s) = K/(s(s² + 8s + K)).

Consider the case of K = 7. Then using partial fractions we find

    Θ(s) = 7/(s(s² + 8s + 7)) = 7/(s(s + 1)(s + 7)) = 1/s - (7/6)/(s + 1) + (1/6)/(s + 7).

The time-domain step response is therefore

    θ(t) = [1 - (7/6) e^{-t} + (1/6) e^{-7t}] u(t).

For K = 16 we have

    Θ(s) = 16/(s(s² + 8s + 16)) = 16/(s(s + 4)²) = 1/s - 4/(s + 4)² - 1/(s + 4),

and using tables along with the frequency shift property one finds

    θ(t) = [1 - (1 + 4t) e^{-4t}] u(t).

For a much larger value of K = 80 the response is

    Θ(s) = 80/(s(s² + 8s + 80)) = 80/(s(s + 4 - j8)(s + 4 + j8)) = 1/s - (s + 8)/(s² + 8s + 80).

Using tables of transforms this leads to the time-domain step response

    θ(t) = [1 - (√5/2) e^{-4t} cos(8t + tan^{-1}(-1/2))] u(t).

The three step responses calculated are shown below.

[Figure: step responses θ(t) of the closed-loop system for K = 7, K = 16 and K = 80.]

We see that the response for K = 7 is quite slow, and the system takes a long time to reach the setpoint. The response for K = 80 is much faster, but this comes at the cost of overshoot and ringing. The case of K = 16 yields the fastest response without oscillations.

The three cases can be distinguished by looking at the poles of the closed-loop transfer function. These satisfy s² + 8s + K = 0, or s = -4 ± √(16 - K). For K = 7 the two poles are real and occur at s = -7 and s = -1. The pole at s = -7 leads to the term e^{-7t} in the step response, which decays to zero very quickly. The response is therefore dominated by the "slow" pole at s = -1, which leads to the slowly-decaying exponential term e^{-t}. The system is said to be overdamped. For K = 16 the poles are both at s = -4. This is the case where the real poles are as far from the origin as possible, leading to the fastest response without oscillation. The system is critically damped. For K = 80 the poles are complex, and occur as a conjugate pair at s = -4 ± j8. The real part leads to the exponential damping term e^{-4t} in the response, and the imaginary part determines the frequency of oscillation or ringing in the response. The system is said to be underdamped.
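The three closed-loop step responses can be reproduced numerically. A scipy sketch (not part of the notes):

    import numpy as np
    from scipy import signal

    # Closed-loop system K/(s^2 + 8s + K) for the three gains discussed above.
    t = np.linspace(0, 5, 1000)
    for K in (7, 16, 80):
        sys = signal.TransferFunction([K], [1, 8, K])
        _, theta = signal.step(sys, T=t)
        print(f"K={K}: final value ~ {theta[-1]:.3f}, peak ~ {theta.max():.3f}")
    # K = 80 should show a peak noticeably above 1 (overshoot); K = 7 and 16 should not.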
In the analysis above we assumed fixed and known values for all the system components that determine G(s). In practice these might not be completely known, or may change with time. We can consider changes in the model to be disturbances, represented by the signals d_i(t) and d_o(t) below.

[Figure: the feedback loop with an input disturbance d_i(t) added to the controller output before G(s), and an output disturbance d_o(t) added after G(s).]

In the transform domain this system satisfies

    Θ(s) = G(s)[K(Θ_i(s) - Θ(s)) + D_i(s)] + D_o(s).

This can be rearranged to

    [1 + K G(s)] Θ(s) = K G(s) Θ_i(s) + G(s) D_i(s) + D_o(s),

so

    Θ(s) = [K G(s)/(1 + K G(s))] Θ_i(s) + [G(s)/(1 + K G(s))] D_i(s) + [1/(1 + K G(s))] D_o(s).

We observe that if K is large then the last two terms will be small, and the disturbances are effectively rejected. Thus closed-loop control can reduce the sensitivity to mismatch between the system model and the actual physical system.

This section has discussed the very simple case of a proportional controller with unity feedback. Real control systems can be considerably more complicated. For example, a more general controller K(s) can be used, and a system element could also be included in the feedback path. The use of Laplace transforms makes it possible to deal with all of these.

7.7 Second-order system

Consider a second-order transfer function of the form

    H(s) = ω_n²/(s² + 2ζω_n s + ω_n²).

The quantity ζ ≥ 0 is called the damping ratio, and ω_n is the natural frequency. Using the formula for quadratic roots you can see that the poles of the system H(s) satisfy s = -ζω_n ± ω_n √(ζ² - 1). The behaviour of the system is very different depending on whether the term ζ² - 1 is positive or negative.

The complicated but more interesting case occurs when ζ² - 1 < 0, which since ζ is positive corresponds to ζ < 1. In this case the system is said to be underdamped, and the poles can be written as s = -ζω_n ± jω_n √(1 - ζ²).

[Figure: pole locations of the underdamped system in the complex plane, at distance ω_n from the origin, with real part -ζω_n, imaginary part ±ω_d = ±ω_n √(1 - ζ²), and angle cos^{-1} ζ measured from the negative real axis.]

The terminology regarding damping can be justified by considering the step response of the system, the output when the input is x(t) = u(t). In this case X(s) = 1/s and the output is given by

    G(s) = H(s) X(s) = ω_n²/(s(s² + 2ζω_n s + ω_n²)) = 1/s - (s + 2ζω_n)/(s² + 2ζω_n s + ω_n²).

This last step follows from a partial fraction expansion. We would like to find the corresponding signal in the time domain. One might find this inverse in a table of transforms, but it is informative to derive the response explicitly. By completing the square the denominator can be reformulated as

    s² + 2ζω_n s + ω_n² = (s + ζω_n)² - (ζω_n)² + ω_n² = (s + ζω_n)² + ω_n²(1 - ζ²).

For convenience we define the quantity ω_d = ω_n √(1 - ζ²), often called the damped natural frequency. The transfer function can then be written as

    G(s) = 1/s - (s + 2ζω_n)/((s + ζω_n)² + ω_d²) = 1/s - (s + ζω_n)/((s + ζω_n)² + ω_d²) - ζω_n/((s + ζω_n)² + ω_d²).

We split the last term into two because it is easier to handle using standard Laplace tables. Firstly, applying the frequency shift property to the pair cos(bt) u(t) ⇐⇒ s/(s² + b²) gives

    e^{-at} cos(bt) u(t) ⇐⇒ (s + a)/((s + a)² + b²),

so

    e^{-ζω_n t} cos(ω_d t) u(t) ⇐⇒ (s + ζω_n)/((s + ζω_n)² + ω_d²).

Similarly, frequency shift applied to sin(bt) u(t) ⇐⇒ b/(s² + b²) can be used to find

    (ζω_n/ω_d) e^{-ζω_n t} sin(ω_d t) u(t) ⇐⇒ ζω_n/((s + ζω_n)² + ω_d²).

The step response can then be found as the inverse Laplace transform:

    g(t) = [1 - e^{-ζω_n t} cos(ω_d t) - (ζω_n/ω_d) e^{-ζω_n t} sin(ω_d t)] u(t)
         = [1 - e^{-ζω_n t} cos(ω_d t) - (ζ/√(1 - ζ²)) e^{-ζω_n t} sin(ω_d t)] u(t)
         = [1 - (e^{-ζω_n t}/√(1 - ζ²)) (√(1 - ζ²) cos(ω_d t) + ζ sin(ω_d t))] u(t).

If we let ζ = cos(θ) and note that √(1 - ζ²) = sin(θ), this can be written as

    g(t) = [1 - (e^{-ζω_n t}/√(1 - ζ²)) (sin(θ) cos(ω_d t) + cos(θ) sin(ω_d t))] u(t).

When θ = 0 the inner term in brackets is just sin(ω_d t), when θ = π/2 it is cos(ω_d t), and for θ in between it is a combination of the two. Since sin(a + b) = sin a cos b + cos a sin b, the step response can finally be written as

    g(t) = [1 - (e^{-ζω_n t}/√(1 - ζ²)) sin(ω_d t + θ)] u(t),

where θ = cos^{-1}(ζ).
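The closed-form step response can be checked against a numerical simulation. A scipy sketch (not part of the notes) for ω_n = 1 and ζ = 0.5:

    import numpy as np
    from scipy import signal

    wn, zeta = 1.0, 0.5
    wd = wn*np.sqrt(1 - zeta**2)       # damped natural frequency
    theta = np.arccos(zeta)

    sys = signal.TransferFunction([wn**2], [1, 2*zeta*wn, wn**2])
    t = np.linspace(0, 10, 500)
    _, g_num = signal.step(sys, T=t)

    # Closed-form underdamped step response derived above.
    g_formula = 1 - np.exp(-zeta*wn*t)/np.sqrt(1 - zeta**2)*np.sin(wd*t + theta)
    print(np.max(np.abs(g_num - g_formula)))   # should be very small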
Plots of the step response for ω_n = 1 and three different values of ζ are shown below.

[Figure: step responses of the second-order system for ω_n = 1 and ζ = 0.99, ζ = 0.5 and ζ = 0.1.]

As the damping factor tends towards zero the response rises more steeply but exhibits overshoot and becomes more oscillatory. This is sometimes known as ringing. The system is overdamped when ζ² - 1 > 0, or ζ > 1. In this case there are two real poles at s = -ζω_n ± ω_n √(ζ² - 1), and the step response consists of two exponential terms with no oscillation. If one of the poles is much closer to the origin than the other then the response is dominated by this slower pole.

Exercise: Find and plot the step response of an overdamped second-order system for ω_n = 1 and ζ = 2, 4, 8.

7.8 Stability

Lathi section 6.7-6.
