Document Details


Jack Rogers

Tags

complex numbers, mathematics, complex analysis, calculus

Summary

This document covers complex numbers, including their use in calculating real integrals and in representing oscillations via phasors, and then introduces ordinary differential equations: their classification (linear, autonomous, separable, exact) and the solution of simple ODEs with initial conditions.

Full Transcript


5 - COMPLEX NUMBERS

the difference between the different values of $\frac{2k\pi i}{3}$ is not $2\pi i$, a full rotation, but $\frac{2\pi i}{3}$, a rotation by one third of a circle. Let's evaluate this quantity for different values of $k$:

$$k=0:\quad e^{2k\pi i/3} = e^{0i} = 1,$$
$$k=1:\quad e^{2\pi i/3} = -\frac{1}{2} + \frac{i\sqrt{3}}{2},$$
$$k=2:\quad e^{4\pi i/3} = -\frac{1}{2} - \frac{i\sqrt{3}}{2},$$
$$k=3:\quad e^{6\pi i/3} = e^{2\pi i} = 1.$$

Here we see that for $k = 0, 1, 2$ we get distinct results, and for $k = 3$ we have 'looped back' around to the $k = 0$ value. Each time we increase $k$ by 1 we rotate by $\frac{2\pi}{3}$ in the Argand diagram, and so at $k = 3$ we have completed a full circle.

Figure 4. The three complex cube roots of 1. Each is a rotation of $\frac{2\pi}{3}$ away from the other two.

Example 8. Find all 4 complex fourth roots of $-4$.

Solution: Write $-4 = 4e^{i\pi}$ in exponential form. We need to solve:

$$z^4 = 4e^{i\pi} = 4e^{i\pi + 2k\pi i}$$

for an integer $k$. Hence:

$$z = \sqrt{2}\,e^{i\left(\frac{\pi}{4} + \frac{k\pi}{2}\right)} = \sqrt{2}\,e^{\frac{i\pi}{4}(1+2k)}.$$

We want to find four roots and these will correspond to $k = 0, 1, 2, 3$. Once we hit $k = 4$ we will complete a full circle and have looped back around to the $k = 0$ solution. We have:

$$z_0 = \sqrt{2}\,e^{i\pi/4} = 1 + i,$$
$$z_1 = \sqrt{2}\,e^{3i\pi/4} = -1 + i,$$
$$z_2 = \sqrt{2}\,e^{5i\pi/4} = -1 - i,$$
$$z_3 = \sqrt{2}\,e^{7i\pi/4} = 1 - i.$$

On an Argand diagram, we can see that these are all at right angles to each other, i.e. separated by an angle of $\frac{\pi}{2}$:

Figure 5. The four complex fourth roots of $-4$. Each is a rotation of $\frac{\pi}{2}$ away from the other two. Note that all four lie on the circle of radius $\sqrt{2} = 4^{1/4}$.

4. Other applications

Here we discuss some miscellaneous other applications of complex numbers which are of use in various contexts.

4.1. Calculating real integrals. Using complex numbers can make it easier to solve certain real integrals. We saw a similar idea earlier when we used complex numbers to prove identities about $\sin$ and $\cos$ even though their outputs are real.
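Example 8 can be checked numerically. The sketch below (a quick verification, not part of the original notes) uses Python's `cmath` to build the four candidate roots $\sqrt{2}\,e^{i\pi(1+2k)/4}$ and confirms that each one raised to the fourth power recovers $-4$.

```python
import cmath

# Example 8 as a numerical sketch: the four complex fourth roots of -4.
# Write -4 = 4*exp(i*pi); the roots are 4**(1/4) * exp(i*(pi + 2*k*pi)/4).
r, theta = 4.0, cmath.pi
roots = [r ** 0.25 * cmath.exp(1j * (theta + 2 * k * cmath.pi) / 4)
         for k in range(4)]

for k, z in enumerate(roots):
    # z**4 should recover -4 for every k (up to floating-point error).
    print(k, z, z ** 4)
```

Running this shows the roots landing at $1+i$, $-1+i$, $-1-i$ and $1-i$, matching the Argand-diagram picture of four points at right angles on the circle of radius $\sqrt{2}$.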
Euler's formula is very helpful here, as is the fact that if two complex numbers $z$ and $w$ are equal, then both their real parts and their imaginary parts must be equal (separately!), i.e.:

If $z = w$ then $\operatorname{Re}(z) = \operatorname{Re}(w)$ and $\operatorname{Im}(z) = \operatorname{Im}(w)$.

This sometimes allows us to calculate two real quantities at the same time, as we did with $\cos(3\theta)$ and $\sin(3\theta)$ above. Suppose we want to calculate the integrals:

$$I_1 = \int e^x \cos x \, dx, \qquad I_2 = \int e^x \sin x \, dx.$$

We know already how to do this - but it involves doing integration by parts four times in total to calculate both integrals. However, we can use complex numbers to do both integrals at once! Consider the integral:

$$J = I_1 + iI_2 = \int e^x(\cos x + i\sin x)\,dx.$$

We haven't discussed integrals of complex functions properly, but they work just the same as integrals of real functions in this type of case, so we don't need to worry. The above integral can be simplified using Euler's formula:

$$J = \int e^x e^{ix}\,dx = \int e^{x(1+i)}\,dx.$$

This is now a very easy integral of a single exponential function - the rule for integrating $e^{kx}$ where $k$ is complex is the same as when $k$ is real, i.e. $\int e^{kx}\,dx = \frac{1}{k}e^{kx} + C$. Hence:

$$J = \int e^{x(1+i)}\,dx = \frac{1}{1+i}e^{x(1+i)} + C.$$

Beware! The constant $C$ is now also complex, i.e. $C = C_1 + iC_2$ where $C_1$ and $C_2$ are real. We can calculate $\frac{1}{1+i}$ as normal:

$$\frac{1}{1+i} = \frac{1-i}{(1+i)(1-i)} = \frac{1-i}{2}.$$

Hence, again using Euler's formula:

$$J = \frac{1-i}{2}e^{x(1+i)} + C = \frac{1-i}{2}e^x(\cos x + i\sin x) + C.$$

Expanding a bit more:

$$J = \frac{1}{2}(1-i)e^x(\cos x + i\sin x) + C = \frac{1}{2}e^x\big(\sin x + \cos x + i(\sin x - \cos x)\big) + C.$$

Hence in terms of real and imaginary parts:

$$J = \frac{1}{2}e^x(\sin x + \cos x) + C_1 + i\left(\frac{1}{2}e^x(\sin x - \cos x) + C_2\right).$$

However, since $J = I_1 + iI_2$ we can now equate $\operatorname{Re}(J) = I_1$ and $\operatorname{Im}(J) = I_2$, giving:

$$I_1 = \frac{e^x}{2}(\sin x + \cos x) + C_1, \qquad I_2 = \frac{e^x}{2}(\sin x - \cos x) + C_2.$$

We can do this with similar types of integrals involving exponentials and/or trigonometric functions:

Example 9.
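A quick numerical sanity check (not in the original notes) confirms these antiderivatives: differentiating $\frac{e^x}{2}(\sin x + \cos x)$ should give $e^x\cos x$, and differentiating $\frac{e^x}{2}(\sin x - \cos x)$ should give $e^x\sin x$.

```python
import math

# Check the antiderivatives by numerical differentiation:
# d/dx [e^x(sin x + cos x)/2] == e^x cos x,
# d/dx [e^x(sin x - cos x)/2] == e^x sin x.
def F1(x):
    return math.exp(x) * (math.sin(x) + math.cos(x)) / 2

def F2(x):
    return math.exp(x) * (math.sin(x) - math.cos(x)) / 2

def deriv(f, x, h=1e-6):
    # Central-difference approximation to f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7  # an arbitrary test point
err1 = abs(deriv(F1, x) - math.exp(x) * math.cos(x))
err2 = abs(deriv(F2, x) - math.exp(x) * math.sin(x))
print(err1, err2)  # both errors are tiny
```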
Calculate the integrals:

$$I_1 = \int e^{3x}\cos(4x)\,dx, \qquad I_2 = \int e^{3x}\sin(4x)\,dx.$$

Solution: We calculate instead:

$$J = I_1 + iI_2 = \int e^{3x}(\cos(4x) + i\sin(4x))\,dx = \int e^{3x}e^{4ix}\,dx = \int e^{x(3+4i)}\,dx$$
$$= \frac{1}{3+4i}e^{x(3+4i)} + C = \frac{3-4i}{3^2+4^2}e^{x(3+4i)} + C = \frac{3-4i}{25}e^{3x}(\cos(4x) + i\sin(4x)) + C$$
$$= \frac{e^{3x}}{25}\big[3\cos(4x) + 4\sin(4x) + i(3\sin(4x) - 4\cos(4x))\big] + C_1 + iC_2.$$

Hence:

$$I_1 = \frac{e^{3x}}{25}(3\cos(4x) + 4\sin(4x)) + C_1, \qquad I_2 = \frac{e^{3x}}{25}(3\sin(4x) - 4\cos(4x)) + C_2.$$

4.2. Phasors. Due to the relationship between complex multiplication, exponentiation and rotations, complex numbers are also very useful tools for discussing rotational or oscillatory motion of physical systems. This includes various physical phenomena involving any kind of periodic motion, e.g. vibrations, springs, electromagnetic radiation, sound waves, etc. Generally speaking we describe such motion with sine and cosine functions. Consider the function:

$$y = A\cos(\omega t + \phi).$$

This is a prototypical 'sinusoidal' function which occurs in all of the areas mentioned above. We can interpret the variables $A$, $\omega$ and $\phi$ in terms of a general wave:

$A$ is the amplitude. It is generally positive and tells us how high the 'peaks' and how low the 'troughs' are for our function.

$\omega$ is the angular frequency (in radians/second) and tells us how quickly the system undergoes a full cycle of rotation/oscillation/etc. The period of the system will be $T = \frac{2\pi}{\omega}$.

$\phi$ is the phase of the system. It lies in the interval $(-\pi, \pi]$ and tells us how much the function is 'offset' from a standard $\cos(\omega t)$ function to the left or right. A positive phase $\phi$ means the function looks like $\cos(\omega t)$ shifted by $\frac{\phi}{\omega}$ to the left, and a negative phase gives a shift to the right.

Figure 6. A sinusoidal function $y = A\cos(\omega t + \phi)$.
The amplitude $A$ gives the height of the waves, the angular frequency $\omega$ tells us how many waves pass per unit time (and is inversely proportional to the period $T$, the duration of a single full wave), and the phase $\phi$ (scaled by $\omega$) tells us how far to the left of a standard $\cos(\omega t)$ function our wave is located.

Figure 7. Left in blue: the sinusoidal function $y = 2\cos\left(3t - \frac{\pi}{2}\right)$. Right in red: the sinusoidal function $y = 3\cos\left(2t + \frac{\pi}{6}\right)$. We can see for example that the blue wave completes a full cycle in time $\frac{2\pi}{3}$ instead of $2\pi$ due to having angular frequency 3, and that the red wave is offset by $\frac{\pi}{12} = \frac{\pi/6}{2} = \frac{\phi}{\omega}$ to the left compared to a standard $\cos(2t)$ function.

Note that we can make any sinusoidal function by varying $A$, $\omega$ and/or $\phi$, including $\sin(t) = \cos\left(t - \frac{\pi}{2}\right)$. We can use complex numbers to make manipulating different sinusoidal functions easier. If we are interested in the sinusoidal function:

$$y = A\cos(\omega t + \phi),$$

then consider the complex function:

$$X(t) = Ae^{i(\omega t + \phi)} = A(\cos(\omega t + \phi) + i\sin(\omega t + \phi)).$$

Now $y(t) = \operatorname{Re}(X(t))$. We can rewrite $X(t)$:

$$X(t) = Ae^{i(\omega t + \phi)} = (Ae^{i\phi})e^{i\omega t}.$$

If we are interested in different sinusoidal functions of a fixed frequency $\omega$, we may write:

$$X = (Ae^{i\phi})e^{i\omega t} = Ze^{i\omega t}, \quad \text{where } Z = Ae^{i\phi}.$$

This is now a complex constant with all the time-dependence shifted into the other term. We call this constant $Z$ the phasor of the sinusoidal function $y = A\cos(\omega t + \phi)$. It contains all the information about the amplitude and phase of $y$. In particular, the amplitude is $|Z|$ and the phase is $\arg(Z)$.

Example 10. Find the phasors for the sinusoidal functions $y_1 = 2\cos\left(3t + \frac{\pi}{3}\right)$ and $y_2 = 3\sin\left(2t - \frac{\pi}{4}\right)$.

Solution: For $y_1$ we are already in the form $y = A\cos(\omega t + \phi)$ and we can read off $A = 2$, $\omega = 3$ and $\phi = \frac{\pi}{3}$. We have:

$$y_1 = \operatorname{Re}(Ae^{i(\omega t + \phi)}) = \operatorname{Re}(Ae^{i\phi}e^{i\omega t}) = \operatorname{Re}(2e^{i\pi/3}e^{3it}),$$

so the phasor is the time-independent part of the above, i.e. $Z_1 = 2e^{i\pi/3}$.
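Before moving to $y_2$, the phasor of $y_1$ can be checked numerically. The sketch below (a verification, not part of the original notes) confirms that $y(t) = \operatorname{Re}(Z e^{i\omega t})$ reproduces $A\cos(\omega t + \phi)$ at arbitrary times, and that $|Z|$ and $\arg(Z)$ recover the amplitude and phase.

```python
import cmath
import math

# Sketch: the phasor Z = A*exp(i*phi) of y(t) = A*cos(w*t + phi)
# satisfies y(t) = Re(Z * exp(i*w*t)) for every t.
A, w, phi = 2.0, 3.0, math.pi / 3  # Example 10's y1 = 2*cos(3t + pi/3)
Z = A * cmath.exp(1j * phi)        # the phasor

for t in [0.0, 0.4, 1.3, 2.9]:
    direct = A * math.cos(w * t + phi)
    via_phasor = (Z * cmath.exp(1j * w * t)).real
    print(t, direct, via_phasor)   # the two columns agree

# Amplitude and phase are recovered as |Z| and arg(Z).
print(abs(Z), cmath.phase(Z))
```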
For the second example, since our function is presented to us as a sine, we need to rewrite it into a cosine in order to properly find the phasor. Using $\sin(x) = \cos\left(x - \frac{\pi}{2}\right)$ we write:

$$y_2 = 3\sin\left(2t - \frac{\pi}{4}\right) = 3\cos\left(2t - \frac{\pi}{4} - \frac{\pi}{2}\right) = 3\cos\left(2t - \frac{3\pi}{4}\right).$$

Hence $y_2 = \operatorname{Re}(3e^{-3i\pi/4}e^{2it})$ and so the phasor of $y_2$ is $Z_2 = 3e^{-3i\pi/4}$.

Alternatively we could find the phasor of $y_2$ as follows: since we want the sinusoidal function we are interested in to be the real part of a complex number, we can multiply by $-i$ to switch whether the real part is the sine or the cosine, since:

$$\operatorname{Re}(-ie^{i(\omega t + \phi)}) = \operatorname{Re}(-i(\cos(\omega t + \phi) + i\sin(\omega t + \phi))) = \operatorname{Re}(-i\cos(\omega t + \phi) + \sin(\omega t + \phi)) = \sin(\omega t + \phi).$$

Writing $-i = e^{-i\pi/2}$ in exponential form, we see that given a function $y = A\sin(\omega t + \phi)$ we can find its phasor by writing:

$$y = \operatorname{Re}(Ae^{-i\pi/2}e^{i\phi}e^{i\omega t}) = \operatorname{Re}(Ae^{i(\phi - \pi/2)}e^{i\omega t}).$$

Hence the phasor is:

$$Z = Ae^{i(\phi - \pi/2)}.$$

This is the same as if we used $\sin(t) = \cos\left(t - \frac{\pi}{2}\right)$.

Now any sinusoid with frequency $\omega$ has a phasor which uniquely specifies it among such functions, and we can manipulate such functions via their phasors. For example, we often add different sinusoidal functions together (imagine constructive/destructive interference of waves, for example) to get a third function. If $y_1 = A\cos(\omega t + \phi)$ and $y_2 = B\cos(\omega t + \psi)$ then:

$$y_1 + y_2 = A\cos(\omega t + \phi) + B\cos(\omega t + \psi) = \operatorname{Re}(Ae^{i\phi}e^{i\omega t}) + \operatorname{Re}(Be^{i\psi}e^{i\omega t}) = \operatorname{Re}\big((Ae^{i\phi} + Be^{i\psi})e^{i\omega t}\big).$$

Hence the phasor of $y_1 + y_2$ is $Ae^{i\phi} + Be^{i\psi}$, i.e. the sum of the phasors of $y_1$ and $y_2$. The phasor of a sum is the sum of the phasors! Note: it is essential here that the two functions have the same angular frequency $\omega$.

Example 11. Find the phasor for the sum of the waves $y_1 = 4\cos\left(2t - \frac{\pi}{3}\right)$ and $y_2 = 2\sin\left(2t + \frac{\pi}{4}\right)$.

Solution: Since the angular frequency is the same, the phasor of the sum is the sum of the phasors. The phasor of $y_1$ is a complex number with modulus 4 and argument $-\frac{\pi}{3}$, hence:

$$Z_1 = 4e^{-i\pi/3}.$$
For $y_2$ we convert to a cosine, which subtracts $\frac{\pi}{2}$ from the argument, so the phasor has modulus 2 and argument $\frac{\pi}{4} - \frac{\pi}{2} = -\frac{\pi}{4}$. Hence:

$$Z_2 = 2e^{-i\pi/4}.$$

It follows that the phasor of $y_1 + y_2$ is $Z_1 + Z_2$, i.e.:

$$Z = Z_1 + Z_2 = 4e^{-i\pi/3} + 2e^{-i\pi/4}.$$

Hence:

$$y_1 + y_2 = \operatorname{Re}\left(\left(4e^{-i\pi/3} + 2e^{-i\pi/4}\right)e^{2it}\right).$$

6 - ORDINARY DIFFERENTIAL EQUATIONS

In this chapter we will discuss an extremely important topic in applications of calculus to real-world problems in physics and engineering (among many other fields), namely differential equations. An ordinary differential equation (or "ODE") is a relationship that we want to exist between a variable, a function of that variable, and the derivatives of that function. When presented with a differential equation, we want to find a solution, i.e. a function which satisfies the equation.

There are also partial differential equations ("PDEs") which relate the partial derivatives of multivariable functions together. These are also incredibly important (arguably more important!) in applications, but their theory is considerably more complicated and will not be covered in this course.

One very common and widely applied differential equation is Newton's second law, although you may not currently think of it in that way. Since acceleration is the second derivative of position, and force often depends on both position and time, we can write Newton's second law as:

$$F(x, t) = m\frac{d^2x}{dt^2}.$$

This is a differential equation because it relates the independent variable $t$, the function $x$ (which is a function of $t$), and the second derivative of $x$ with respect to $t$ to each other. Newtonian mechanics mostly amounts to two steps: (i) correctly identifying what the net forces acting on a body actually are, and (ii) solving the above differential equation to find the equation of motion $x(t)$.

Hence we will discuss here some general theory of ODEs, and then describe various special types of ODEs which can be directly solved in more or less complicated ways.
It is important to note that many ODEs (most of them, in fact) cannot be solved exactly, as we have previously discussed with integrals, and they must instead be approached using numerical methods. We will not discuss these methods in detail here, instead focussing on the types of ODEs which can be solved directly.

1. Introduction

Suppose we have some variable $x$ which we call the independent variable, and a function $y(x)$ which we call the dependent variable (because it depends on the value of $x$). We can form the derivatives $y'(x)$, $y''(x)$, etc. of $y(x)$ with respect to $x$. An ordinary differential equation is any equation involving at least one of the derivatives of $y$, as well as possibly including other functions of $y$ or $x$. Some examples include:

$$y'' + y = e^x,$$
$$(y')^2 - y = 2yx^2,$$
$$2y''' + \cos x = yy',$$
$$\sqrt{\ln(y'')} + e^{xy} = 4.$$

In general an ordinary differential equation is of the form:

$$f(x, y, y', y'', \ldots) = 0$$

for some function $f$ which determines the overall form of the equation. To get the above equations into that form we can simply subtract the right-hand side from the left. For example, the first of the above four examples is given by:

$$f(x, y, y'') = y'' + y - e^x = 0.$$

A function $y(x)$ satisfying $f(x, y, y', y'', \ldots) = 0$ is called a solution to the ODE, and the ultimate goal of studying an ODE is to find all of its possible solutions.

The highest derivative which appears explicitly in a differential equation is the order of the equation. So the above four examples are (respectively) second order, first order, third order and second order. ODEs of order 3 and higher are rare in applications and difficult to solve, so we will mostly concentrate in this course on first and second order ODEs. A first order ODE has the form:

$$f(x, y, y') = 0,$$

and a second order ODE has the form:

$$f(x, y, y', y'') = 0.$$
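The idea of a function "satisfying" an ODE can be checked numerically. The sketch below (an illustration, not part of the original notes) verifies that $y(x) = \frac{e^x}{2}$ is one solution of the first example, $y'' + y = e^x$, by approximating $y''$ with a central second difference and confirming the residual is near zero.

```python
import math

# Sketch: checking numerically that y(x) = e^x / 2 solves y'' + y = e^x.
def y(x):
    return math.exp(x) / 2

def second_deriv(f, x, h=1e-4):
    # Central second-difference approximation to f''(x).
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

for x in [0.0, 0.5, 1.3]:
    residual = second_deriv(y, x) + y(x) - math.exp(x)
    print(x, residual)  # residuals are close to zero
```

Since $y'' = \frac{e^x}{2}$ here, the left-hand side is $\frac{e^x}{2} + \frac{e^x}{2} = e^x$ exactly; the tiny nonzero residuals come only from the finite-difference approximation.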
Note that we are labelling the independent variable as $x$ and the dependent variable as $y$ here out of convention, but they can of course be labelled anything else - what matters is not their name but their relationship to each other. We could write an equation:

$$\frac{dx}{dy} - xy = 2y^2,$$

and this is a perfectly valid first order ODE with independent variable $y$ and dependent variable $x$. Often the independent variable may be time $t$ instead, and the dependent variable will be position $x$, but we can use any letters we want.

2. Classifying ODEs

There is no one-size-fits-all approach to solving ODEs. There are too many different things we are allowed to do when writing one down, and as has been mentioned, most are essentially impossible to solve directly. What we do is identify different sub-types of ODEs by placing further restrictions on their overall forms which make them easier to solve for one reason or another. There will be different methods that we can use to solve different classes of ODE, so it is important to be able to correctly classify a given ODE into one of the classes that we can solve.

2.1. Linear ODEs. One of the most important types of ODEs are the linear ODEs. A second order ODE is linear if it has the form:

$$a_2(x)\frac{d^2y}{dx^2} + a_1(x)\frac{dy}{dx} + a_0(x)y = f(x)$$

for some functions $f, a_0, a_1, a_2$ of $x$. The crucial element is that these functions depend only on the independent variable $x$ and not at all on the dependent variable $y(x)$ or its derivatives. Note that $y$ and its derivatives only appear in the equation multiplied by functions of $x$ - there are no $y^2$ or $\cos(y')$ terms or anything similar to this.

We can obtain an arbitrary first order linear ODE from the above by simply setting $a_2(x) = 0$ to remove the dependence on the second derivative $y''$. Hence an arbitrary first order linear ODE looks like:

$$a_1(x)\frac{dy}{dx} + a_0(x)y = f(x).$$

Example 1. The following ODEs are linear:

$$\frac{dy}{dt} + y = \sin t,$$
$$\frac{d^2y}{dx^2} - \frac{dy}{dx} - x^2y = 0.$$
This is because $y$ and its derivatives only appear in these equations on their own or multiplied by functions of the independent variable ($t$ in the first case, $x$ in the second), i.e. they are not themselves the subjects of any functions. Comparing to the above general forms, the first of these linear ODEs corresponds to the case $a_2(t) = 0$, $a_1(t) = 1$, $a_0(t) = 1$ and $f(t) = \sin t$. The second corresponds to $a_2(x) = 1$, $a_1(x) = -1$, $a_0(x) = -x^2$ and $f(x) = 0$.

The following ODEs are nonlinear:

$$\frac{dy}{dx} + e^y = \sin x,$$
$$\sqrt{x''} + xt = 2.$$

In the first case, the $e^y$ term prevents the equation from being linear since we are applying a function directly to the dependent variable $y$. In the second case, the dependent variable is $x$ and we have non-linearity because we take the square root of one of its derivatives.

In general, non-linear ODEs are considerably more difficult to solve than linear ODEs, so we want to look out for linear ODEs in particular. Many numerical methods for approximating solutions to ODEs begin by 'linearising' a nonlinear ODE - which essentially amounts to replacing the nonlinear ODE with a linear one whose solutions in a certain region are similar.

We call a linear ODE homogeneous if $f(x) = 0$ in the above general form, i.e. if all terms explicitly depend on the dependent variable. Hence:

$$\frac{dy}{dt} + yt = 0,$$
$$\frac{d^2x}{dt^2} - t\frac{dx}{dt} + 2t^2x = 0$$

are homogeneous linear ODEs, while:

$$x\frac{dy}{dx} - 2y - \cos(x) = 0,$$
$$3\frac{d^2y}{dt^2} + 2e^t\frac{dy}{dt} - y\sqrt{t} - 3 = 0$$

are linear but inhomogeneous.

Warning: above we are discussing homogeneous linear ODEs. Shortly we will use the word 'homogeneous' (or 'homogeneous type') to refer to a different class of (usually) non-linear ODEs. This is an unfortunate but firmly established limitation of terminology, so be careful!

2.2. Autonomous ODEs. An autonomous ODE is one in which the independent variable does not appear explicitly. Examples include:

$$y' + y = 0,$$
$$\frac{dy}{dx} = y^2 - 2,$$
$$\sqrt{x}\,\frac{d^2x}{dt^2} - x = -1.$$
Non-autonomous ODEs include:

$$ty' + y = 0,$$
$$\frac{dy}{dx} = x^2 - 2.$$

A general first-order autonomous ODE can be written in the form:

$$y' = f(y).$$

Note that the solution, e.g. $y(t)$, will depend on the independent variable; it just does not appear explicitly in the ODE itself. Given a solution $y(t)$ of an autonomous ODE, $y(t - t_0)$ is also a solution for any $t_0$. Essentially, if we interpret the independent variable as $t =$ time, then an autonomous ODE describes a time-independent system, since the solutions can be moved forward or backward in time without ceasing to be valid solutions. For this reason many laws of physics are described by autonomous ODEs - since we expect that the laws of physics do not change over time, the equations that govern the universe generally do not explicitly depend on time themselves.

2.3. Separable ODEs. A separable ODE is one in which we can 'separate' the independent and dependent variables from each other. This means being able to write the ODE in the form (say for first order):

$$f(y)\frac{dy}{dx} = g(x).$$

Here we have moved all $y$-dependence, including the derivative of $y$, onto the left-hand side of the equation, and all $x$-dependence onto the right-hand side, hence "separating" $x$ and $y$. Often it will take some rearrangement to show that an ODE is separable.

Example 2. Demonstrate that the ODE:

$$x^2\frac{dy}{dx} = \frac{1-x}{y}$$

is separable.

Solution: Assuming that $x \neq 0$ we can divide both sides by $x^2$ to get:

$$\frac{dy}{dx} = \frac{1-x}{x^2y}$$

and then multiply through by $y$ to get:

$$y\frac{dy}{dx} = \frac{1-x}{x^2}.$$

First order separable ODEs can often be solved fairly straightforwardly despite generally being non-linear, so they are a very important class of ODEs to look out for. Sometimes we can take a non-separable ODE and perform a change of variables which transforms the equation into a separable one.
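Separability of a first order ODE $\frac{dy}{dx} = f(x, y)$ means $f$ factors as $g(x)h(y)$, and a standard consequence of such a factorisation is that $f(x_1, y_1)f(x_2, y_2) = f(x_1, y_2)f(x_2, y_1)$ at any sample points. The sketch below (an illustration, not part of the original notes; the helper name `looks_separable` is invented here) uses that product test heuristically on a few sample points, with Example 2's right-hand side as the separable case.

```python
# Sketch: dy/dx = f(x, y) is separable when f factors as g(x)*h(y);
# then f(x1,y1)*f(x2,y2) == f(x1,y2)*f(x2,y1) for any sample points.
def looks_separable(f, pts=((0.5, 1.0), (2.0, -1.5), (-1.0, 0.3)), tol=1e-9):
    # Heuristic check on a handful of points (all chosen away from zeros).
    return all(abs(f(x1, y1) * f(x2, y2) - f(x1, y2) * f(x2, y1)) < tol
               for (x1, y1) in pts for (x2, y2) in pts)

f_sep = lambda x, y: (1 - x) / (x ** 2 * y)  # Example 2, rearranged
f_not = lambda x, y: x + y                   # not separable

print(looks_separable(f_sep), looks_separable(f_not))
```

This only demonstrates the criterion numerically; showing separability properly still means exhibiting the factorisation by rearrangement, as in Example 2.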
This is most notably the case for the ODEs of homogeneous type (sometimes just called "homogeneous ODEs", but not to be confused with linear homogeneous ODEs!). If we can write our ODE in the form:

$$\frac{dy}{dx} = F\left(\frac{y}{x}\right)$$

then the equation is of homogeneous type. We will see that such equations can be rewritten into separable ODEs by introducing a new variable $v = \frac{y}{x}$.

Example 3. The ODE:

$$\frac{dy}{dx} = \frac{x - 2y}{x}$$

is of homogeneous type. We can divide the top and bottom of the right-hand side fraction by $x$ to get:

$$\frac{dy}{dx} = \frac{1 - \frac{2y}{x}}{1} = 1 - \frac{2y}{x} = F\left(\frac{y}{x}\right).$$

Another ODE of homogeneous type is:

$$y^2\frac{dy}{dx} = y^2 - 3xy.$$

Dividing both sides by $y^2$ gives:

$$\frac{dy}{dx} = \frac{y^2 - 3xy}{y^2} = \frac{\left(\frac{y}{x}\right)^2 - 3\frac{y}{x}}{\left(\frac{y}{x}\right)^2} = F\left(\frac{y}{x}\right),$$

where in the second step we divided the top and bottom by $x^2$.

Generally speaking, an ODE of homogeneous type can be spotted if we can write $\frac{dy}{dx}$ on the left-hand side, leaving the right-hand side as a ratio of two polynomial functions of $x$ and $y$ where all terms have the same total degree. For example:

$$\frac{x^2y - 3y^3}{2xy^2}$$

is such a function - all of the terms have degree 3 (obtained by adding up the power of $x$ and the power of $y$ in a given term). Then dividing the top and bottom by $x^3$ will return a function of $\frac{y}{x}$. A non-example might be:

$$\frac{x^4 - 3y^2x + y^2x^2}{y^4}.$$

Now the terms mostly have degree 4, but the middle term in the numerator, $-3y^2x$, only has degree 3. If we divide the top and bottom by $x^4$ we will get:

$$\frac{1 - \frac{3}{x}\left(\frac{y}{x}\right)^2 + \left(\frac{y}{x}\right)^2}{\left(\frac{y}{x}\right)^4}.$$

This is not a function $F\left(\frac{y}{x}\right)$ because of the $\frac{1}{x}$ in the numerator.

2.4. Exact ODEs. The final subtype of ODEs we will discuss are called exact ODEs. If we can write our ODE in the form:

$$P(x, y) + Q(x, y)\frac{dy}{dx} = 0$$

with the additional condition that:

$$\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x},$$

then the ODE is exact. These ODEs arise in the following manner: suppose we start with a function $\Psi(x, y)$ called the potential function. We can take partial derivatives and rename them:

$$\frac{\partial \Psi}{\partial x} = P(x, y), \qquad \frac{\partial \Psi}{\partial y} = Q(x, y).$$

Then by interchangeability of mixed partial derivatives we have:

$$\frac{\partial P}{\partial y} = \frac{\partial^2 \Psi}{\partial y\,\partial x} = \frac{\partial^2 \Psi}{\partial x\,\partial y} = \frac{\partial Q}{\partial x}.$$

It follows that $\frac{\partial P}{\partial y} - \frac{\partial Q}{\partial x} = 0$.
But then if $D$ is any region of the $x$-$y$ plane where $P$ and $Q$ are defined, we have:

$$\iint_D \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA = \iint_D 0\,dA = 0.$$

However, Green's theorem then tells us that if $C$ is the boundary of this region $D$, then:

$$0 = \iint_D \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA = \oint_C (P\,dx + Q\,dy).$$

The area integral is 0 for any region $D$ and hence the line integral is 0 for any closed curve $C$, which can only mean that the integrand itself is equal to 0. Hence:

$$P(x, y)\,dx + Q(x, y)\,dy = 0,$$

or, dividing by $dx$:

$$P(x, y) + Q(x, y)\frac{dy}{dx} = 0.$$

Here we have derived the exact differential equation directly from the potential function $\Psi(x, y)$, and any example of such a potential function will give rise to some exact ODE.

Example 4. Show that the ODE:

$$\frac{dy}{dx} = -\frac{4xy + \cos y}{2x^2 - x\sin y}$$

is exact.

Solution: We can multiply out the denominator to obtain:

$$(2x^2 - x\sin y)\frac{dy}{dx} = -4xy - \cos y.$$

Then moving the RHS over to the left gives:

$$4xy + \cos y + (2x^2 - x\sin y)\frac{dy}{dx} = 0.$$

If we want this to be:

$$P(x, y) + Q(x, y)\frac{dy}{dx} = 0$$

then we set $P(x, y) = 4xy + \cos y$ and $Q(x, y) = 2x^2 - x\sin y$. The equation is now exact if $\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}$. We have:

$$\frac{\partial P}{\partial y} = 4x - \sin y = \frac{\partial Q}{\partial x}$$

as required, so the equation is indeed exact.

3. Solving ODEs

3.1. Simple ODEs, initial conditions. We will now discuss the various methods which exist to solve some of the subtypes of ODEs which we discussed in the previous section. Starting with the simplest possible example, consider the ODE:

$$\frac{dy}{dx} = f(x).$$

We can solve this by simply integrating both sides with respect to $x$ and using the Fundamental Theorem of Calculus - indeed, if:

$$\int \frac{dy}{dx}\,dx = \int f(x)\,dx$$

then the FTC tells us that the left-hand side is simply $y(x)$, so we have:

$$y(x) = \int f(x)\,dx + C$$

for some constant $C$. As long as we know how to perform the integral of $f(x)$, we will have fully solved the ODE. This gives a general solution, which covers every possible solution of the ODE.
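The exactness condition from Example 4 can also be checked numerically. The sketch below (a verification, not part of the original notes) approximates $\frac{\partial P}{\partial y}$ and $\frac{\partial Q}{\partial x}$ with central finite differences, taking $P = 4xy + \cos y$ and $Q = 2x^2 - x\sin y$ as in the example (note $P$ carries $+\cos y$, since moving $-4xy - \cos y$ across the equals sign flips both signs).

```python
import math

# Numerically check the exactness condition dP/dy == dQ/dx for Example 4.
def P(x, y):
    return 4 * x * y + math.cos(y)

def Q(x, y):
    return 2 * x ** 2 - x * math.sin(y)

def dP_dy(x, y, h=1e-6):
    return (P(x, y + h) - P(x, y - h)) / (2 * h)

def dQ_dx(x, y, h=1e-6):
    return (Q(x + h, y) - Q(x - h, y)) / (2 * h)

for x, y in [(1.0, 0.5), (-2.0, 1.2), (0.3, -0.8)]:
    # Both partial derivatives equal 4x - sin(y) analytically.
    print(x, y, dP_dy(x, y), dQ_dx(x, y))
```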
Note that the integration constant $C$ means that the ODE in fact has a whole family of solutions depending on the (arbitrary) value of $C$. This will usually be the case for first-order ODEs. If we want our ODEs to have unique solutions we need to add extra conditions to help narrow down the value of $C$. Most commonly we apply an initial condition, e.g. if $y$ is a function of time we might specify the value we want $y$ to have at the beginning of our experiment ($t = 0$), i.e. we require $y(0) = y_0$ for some predetermined constant $y_0$. Then once we find our general solution, any arbitrary constant $C$ which arises may be found by plugging $t = 0$ into $y(t)$ and solving for $C$ in terms of the (known) value $y_0$.

Example 5. Find the specific solution to the ODE:

$$\frac{dy}{dt} = 3t^2 + \cos t$$

subject to the initial condition $y(0) = 1$.

Solution: First, by the above, we can solve this ODE by integration, since the RHS is entirely a function of $t$ and the LHS is simply $\frac{dy}{dt}$. This gives a general solution:

$$y(t) = \int (3t^2 + \cos t)\,dt = t^3 + \sin t + C.$$
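Although Example 5 solves by direct integration, it also makes a clean test case for a numerical method. The sketch below (an illustration, not from the notes) applies Euler's method with the initial condition $y(0) = 1$; for comparison, plugging $t = 0$ into the general solution gives $C = 1$, so the exact solution is $y = t^3 + \sin t + 1$.

```python
import math

# Sketch: solve dy/dt = 3t^2 + cos(t), y(0) = 1, with Euler's method
# and compare against the exact solution y = t^3 + sin(t) + 1.
def f(t, y):
    return 3 * t ** 2 + math.cos(t)

def euler(t_end, n_steps):
    t, y = 0.0, 1.0          # initial condition y(0) = 1
    h = t_end / n_steps
    for _ in range(n_steps):
        y += h * f(t, y)     # one Euler step: y_{n+1} = y_n + h*f(t_n, y_n)
        t += h
    return y

exact = 2.0 ** 3 + math.sin(2.0) + 1.0
approx = euler(2.0, 100_000)
print(abs(approx - exact))   # small discretisation error
```

Halving the step size roughly halves the error, reflecting Euler's method's first-order accuracy.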
