EE 240: Introduction to Linear Systems

Summary

This document is a course outline and set of lecture notes for EE 240: Introduction to Linear Systems. It covers vectors and matrices, linear equations, vector spaces, orthogonality, determinants, eigenvalues and eigenvectors, and complex matrices. The course appears to be at an undergraduate level.

Full Transcript


Mohammad M. Banat – EE 240: Introduction to Linear Systems

TABLE OF CONTENTS

I. Vectors and Matrices
   I.1. Vectors and Linear Combinations
   I.2. Operations on Vectors
   I.3. Dot Products and Lengths
      I.3.A. Dot Product
      I.3.B. Length
      I.3.C. Perpendicular Vectors
   I.4. Matrices
      I.4.A. Linear Equations
      I.4.B. The Inverse Matrix
      I.4.C. Independence and Dependence
II. Solving Linear Equations
   II.1. Vectors and Linear Equations
      II.1.A. Row Picture
      II.1.B. Column Picture
      II.1.C. Matrix Picture
      II.1.D. Three Equations in Three Unknowns
   II.2. Elimination
      II.2.A. Procedure
   II.3. Elimination Using Matrices
   II.4. Matrix Operations
      II.4.A. Block Matrices
   II.5. Inverse Matrices
      II.5.A. Gauss-Jordan Elimination
   II.6. Elimination = Factorization
   II.7. Transposes and Permutations
      II.7.A. Symmetric Matrices
      II.7.B. Permutation Matrices
III. Vector Spaces and Subspaces
      III.1.A. Subspaces
      III.1.B. Column Space
   III.2. The Nullspace
      III.2.A. Echelon Matrices
      III.2.B. Row-Reduced Echelon Matrices
   III.3. The Rank and the Row Reduced Form
      III.3.A. The Pivot Columns
      III.3.B. The Special Solutions
   III.4. The Complete Solution
   III.5. Independence, Basis and Dimension
IV. Orthogonality
   IV.1. Projections
   IV.2. Projection Onto a Line
   IV.3. Projection Onto a Subspace
   IV.4. Least Squares Approximations
   IV.5. Fitting by a Parabola
   IV.6. Orthogonal Bases and Gram-Schmidt
   IV.7. The Gram-Schmidt Process
V. Determinants
   V.1. Determinant by Cofactors
   V.2. Cramer's Rule
      V.2.A. Finding Inverse
      V.2.B. Volume
VI. Eigenvalues and Eigenvectors
   VI.1. Diagonalizing a Matrix
   VI.2. Applications to Differential Equations
   VI.3. Second Order Equations
   VI.4. Symmetric Matrices
   VI.5. Positive Semidefinite Matrices
   VI.6. Similar Matrices
VII. Complex Matrices
EE 240 Introduction to Linear Systems (3C, 3H)

Gaussian elimination. Theory of simultaneous linear equations. Orthogonal projections and least squares. Determinants. Complex-valued vectors and matrices. Eigenvalues and eigenvectors. Singular value decomposition. Introduction to state-space modeling. Computer applications.

TOPICS

- Vectors and Matrices
- Linear Equations
- Vector Spaces
- Orthogonality
- Determinants
- Eigenvalues and Eigenvectors
- Complex Vectors and Matrices

I. VECTORS AND MATRICES

I.1. Vectors and Linear Combinations

$$\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}^T \qquad (I.1)$$

I.2. Operations on Vectors

Let $\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$ and $\mathbf{w} = \begin{bmatrix} w_1 \\ w_2 \end{bmatrix}$. Then

$$\mathbf{v} + \mathbf{w} = \begin{bmatrix} v_1 + w_1 \\ v_2 + w_2 \end{bmatrix} = \mathbf{w} + \mathbf{v} \qquad (I.2)$$

$$c\,\mathbf{v} = \begin{bmatrix} c v_1 \\ c v_2 \end{bmatrix} \qquad (I.3)$$

Zero vector:

$$\mathbf{0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad (I.4)$$

Linear Combinations

If $c$ and $d$ are constants, then a linear combination of $\mathbf{v}$ and $\mathbf{w}$ is given by:

$$\mathbf{x} = c\,\mathbf{v} + d\,\mathbf{w} \qquad (I.5)$$

When $\mathbf{v}$ and $\mathbf{w}$ are not along the same line, their linear combinations span the whole two-dimensional plane. For one vector $\mathbf{u}$, the only linear combinations are the multiples $c\mathbf{u}$. For two vectors, the combinations are $c\mathbf{u} + d\mathbf{v}$. For three vectors, the combinations are $c\mathbf{u} + d\mathbf{v} + e\mathbf{w}$.

Spaces:

$$\mathbb{R}^1 \;(\text{line}), \qquad \mathbb{R}^2 \;(\text{plane}), \qquad \mathbb{R}^3 \;(\text{3D space}) \qquad (I.6)$$
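To make the span idea concrete, here is a minimal NumPy sketch (the vectors, weights and variable names are illustrative choices, not part of the notes):

```python
import numpy as np

# Two vectors in R^2 that are not on the same line
v = np.array([1.0, 3.0])
w = np.array([-2.0, 2.0])

# A linear combination x = c*v + d*w  (equation I.5)
c, d = 3.0, 1.0
x = c * v + d * w
print(x)  # [ 1. 11.]

# Because v and w are independent, any b in R^2 is reachable:
# solve [v w] [c d]^T = b for the weights.
b = np.array([5.0, -7.0])
weights = np.linalg.solve(np.column_stack([v, w]), b)
print(weights)
```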
Example

Let $\mathbf{a}_1 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$, $\mathbf{a}_2 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$. Then

$$c\,\mathbf{a}_1 + d\,\mathbf{a}_2 = \begin{bmatrix} d \\ c + d \\ c \end{bmatrix} = \text{a plane in } \mathbb{R}^3.$$

Find a vector that is not a linear combination of $\mathbf{a}_1, \mathbf{a}_2$. Answer: $\mathbf{g} = \begin{bmatrix} 1 \\ 7 \\ 2 \end{bmatrix}$. So $\mathbf{a}_1, \mathbf{a}_2$ do not span the whole of $\mathbb{R}^3$.

The vector $\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}$ is perpendicular to the plane. Note that for all $c$ and $d$ we have

$$\begin{bmatrix} 1 & -1 & 1 \end{bmatrix} \begin{bmatrix} d \\ c+d \\ c \end{bmatrix} = d - (c+d) + c = 0.$$

The zero vector is a linear combination and it belongs to the plane.

I.3. Dot Products and Lengths

I.3.A. DOT PRODUCT

$$\mathbf{a} = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix} \quad \Rightarrow \quad \mathbf{a} \cdot \mathbf{b} = \mathbf{a}^T \mathbf{b} = \sum_{i=1}^{n} a_i b_i \qquad (I.7)$$

Example

Let $\mathbf{a}_1 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$, $\mathbf{a}_2 = \begin{bmatrix} -1 \\ 2 \\ 1 \end{bmatrix}$. Then $\mathbf{a}_1 \cdot \mathbf{a}_2 = (1)(-1) + (2)(2) + (3)(1) = 6$.

Example

Let $\mathbf{a}_1 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$, $\mathbf{a}_2 = \begin{bmatrix} -1 \\ 2 \\ -1 \end{bmatrix}$. Then $\mathbf{a}_1 \cdot \mathbf{a}_2 = (1)(-1) + (2)(2) + (3)(-1) = 0$: $\mathbf{a}_1$ and $\mathbf{a}_2$ are perpendicular, $\mathbf{a}_1 \perp \mathbf{a}_2$.

$$\mathbf{b} \cdot \mathbf{a} = \mathbf{a} \cdot \mathbf{b} \qquad (I.8)$$

I.3.B. LENGTH

$$\mathbf{a} = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} \quad \Rightarrow \quad \mathbf{a} \cdot \mathbf{a} = \mathbf{a}^T \mathbf{a} = \sum_{i=1}^{n} a_i^2 = \|\mathbf{a}\|^2 \qquad (I.9)$$

Squared length:

$$\|\mathbf{a}\|^2 = \mathbf{a} \cdot \mathbf{a} = \mathbf{a}^T \mathbf{a} = \sum_{i=1}^{n} a_i^2 \qquad (I.10)$$

Length:

$$\|\mathbf{a}\| = \sqrt{\mathbf{a} \cdot \mathbf{a}} = \sqrt{\mathbf{a}^T \mathbf{a}} = \sqrt{\sum_{i=1}^{n} a_i^2} \qquad (I.11)$$

$\|\mathbf{a}\|$ is also called the norm of $\mathbf{a}$: $\mathrm{norm}(\mathbf{a}) = \|\mathbf{a}\|$.

Unit Vectors

A unit vector $\mathbf{u}$ has a length of unity:

$$\|\mathbf{u}\| = \|\mathbf{u}\|^2 = 1 \qquad (I.12)$$

We can generate a unit vector by dividing an arbitrary vector by its own length, unless the vector length is zero.

Example

Let $\mathbf{u} = \begin{bmatrix} 2 \\ 5 \\ -1 \\ 3 \end{bmatrix}$. Then $\|\mathbf{u}\|^2 = 39$ and $\|\mathbf{u}\| = \sqrt{39}$. The vector

$$\mathbf{a} = \frac{\mathbf{u}}{\|\mathbf{u}\|} = \frac{1}{\sqrt{39}} \begin{bmatrix} 2 \\ 5 \\ -1 \\ 3 \end{bmatrix}$$

is a unit vector. We can use this method to generate unit vectors as long as $\mathbf{u} \neq \mathbf{0}$.

Angle Between Two Vectors

$$\mathbf{v} \cdot \mathbf{w} = \|\mathbf{v}\| \, \|\mathbf{w}\| \cos\theta \qquad (I.13)$$

I.3.C. PERPENDICULAR VECTORS

Vectors $\mathbf{a}$ and $\mathbf{b}$ are perpendicular when

$$\mathbf{a} \cdot \mathbf{b} = 0 \qquad (I.14)$$

This is written as $\mathbf{a} \perp \mathbf{b}$. When $\mathbf{a} \perp \mathbf{b}$, they form the two perpendicular sides of a right triangle, and $\mathbf{a} + \mathbf{b}$ forms the hypotenuse. From the Pythagorean law,

$$\|\mathbf{a}\|^2 + \|\mathbf{b}\|^2 = \|\mathbf{a} + \mathbf{b}\|^2 \iff \mathbf{a} \cdot \mathbf{b} = 0 \qquad (I.15)$$

Schwarz Inequality

$$\mathbf{a} \cdot \mathbf{b} = \|\mathbf{a}\| \, \|\mathbf{b}\| \cos\theta \quad \Rightarrow \quad |\mathbf{a} \cdot \mathbf{b}| \leq \|\mathbf{a}\| \, \|\mathbf{b}\| \qquad (I.16)$$

Triangle Inequality

$$\|\mathbf{a} + \mathbf{b}\| \leq \|\mathbf{a}\| + \|\mathbf{b}\| \qquad (I.17)$$

I.4. Matrices

A matrix is a set of vectors grouped together in one mathematical entity. A matrix consists of a number ($n$) of columns:

$$A = \begin{bmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_n \end{bmatrix} \qquad (I.18)$$

A matrix can also be considered to consist of a number ($m$) of rows:

$$A = \begin{bmatrix} \mathbf{q}_1^T \\ \vdots \\ \mathbf{q}_m^T \end{bmatrix} \qquad (I.19)$$

As an example, consider the difference matrix (its entries follow from its inverse in (I.21) below):

$$A = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix}$$

Note that $A$ is lower triangular. This can be put as a description of a linear system with an input vector $\mathbf{x} = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}^T$ and an output vector $\mathbf{b} = A\mathbf{x} = \begin{bmatrix} x_1 & x_2 - x_1 & x_3 - x_2 \end{bmatrix}^T$, as follows.

I.4.A. LINEAR EQUATIONS

The input-output relationship of a linear system with an input $\mathbf{x}$, output $\mathbf{b}$ and coefficient matrix $A$ can be written in the form

$$A\mathbf{x} = \mathbf{b} \qquad (I.20)$$

Usually, it is required to find an input vector $\mathbf{x}$, given that an output vector $\mathbf{b}$ has been observed. Being able to find a unique solution of $A\mathbf{x} = \mathbf{b}$ for every $\mathbf{b}$ means that $A$ is invertible.

When $\mathbf{b} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$, $\mathbf{x} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$. When $\mathbf{b} = \begin{bmatrix} 1 \\ 3 \\ 5 \end{bmatrix}$, $\mathbf{x} = \begin{bmatrix} 1 \\ 4 \\ 9 \end{bmatrix}$.

We can have $C\mathbf{x} = \mathbf{0}$ even if $C \neq 0$ and $\mathbf{x} \neq \mathbf{0}$.
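The running-sum behaviour of this example is easy to check numerically; a minimal sketch, assuming the reconstructed difference matrix above:

```python
import numpy as np

# Lower triangular difference matrix: (Ax)_i = x_i - x_{i-1}
A = np.array([[ 1, 0, 0],
              [-1, 1, 0],
              [ 0,-1, 1]], dtype=float)

b = np.array([1.0, 3.0, 5.0])
x = np.linalg.solve(A, b)      # solve Ax = b
print(x)                       # [1. 4. 9.] -- running sums of b

print(np.linalg.inv(A))        # the sum matrix of equation (I.21)
```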
I.4.B. THE INVERSE MATRIX

The solution to $A\mathbf{x} = \mathbf{b}$ is

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_1 + b_2 \\ b_1 + b_2 + b_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = A^{-1}\mathbf{b} \qquad (I.21)$$

I.4.C. INDEPENDENCE AND DEPENDENCE

Two vectors $\mathbf{a}$ and $\mathbf{b}$ that are not on the same line ($\mathbf{a} \neq c\mathbf{b}$) span a plane. Consider a third vector $\mathbf{d}$. If $\mathbf{d}$ is in the plane, then it is dependent on (a linear combination of) $\mathbf{a}$ and $\mathbf{b}$. If $\mathbf{d}$ is not in the plane, then it is independent of $\mathbf{a}$ and $\mathbf{b}$. We cannot have three independent vectors in $\mathbb{R}^2$.

For independent vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$, $c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$ is true only when $c_1 = \cdots = c_n = 0$.

II. SOLVING LINEAR EQUATIONS

II.1. Vectors and Linear Equations

II.1.A. ROW PICTURE

We begin a row at a time, with the example system (recoverable from the column picture in (I.22)):

$$\begin{aligned} x - 2y &= 1 \\ 3x + 2y &= 11 \end{aligned}$$

The first equation produces a straight line in the xy plane. The point $(1,0)$ is on the line because it solves that equation. The point $(3,1)$ is also on the line. If we choose $x = 101$ we find $y = 50$. The slope of this particular line is $\tfrac{1}{2}$, because $y$ increases by 1 when $x$ changes by 2. But slopes are important in calculus, and this is linear algebra!

The second line in this "row picture" comes from the second equation. Observe the intersection point where the two lines meet. The point $(3,1)$ lies on both lines. That point solves both equations at once. This is the solution to our system of linear equations. Row picture: two lines meeting at a single point (the solution).

II.1.B. COLUMN PICTURE

We recognize the same linear system as a "vector equation". Instead of numbers we need to see vectors. If you separate the original system into its columns instead of its rows, you get a vector equation:

$$x \begin{bmatrix} 1 \\ 3 \end{bmatrix} + y \begin{bmatrix} -2 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 \\ 11 \end{bmatrix} = \mathbf{b} \qquad (I.22)$$

This has two column vectors on the left side. The problem is to find the combination of those vectors that equals the vector on the right. The column picture combines the column vectors on the left side to produce the vector $\mathbf{b}$ on the right side. We are multiplying the first column by $x$ and the second column by $y$, and adding. With the right choices $x = 3$ and $y = 1$ (the same numbers as before), this produces $3 \cdot (\text{column 1}) + 1 \cdot (\text{column 2}) = \mathbf{b}$:

$$3 \begin{bmatrix} 1 \\ 3 \end{bmatrix} + 1 \begin{bmatrix} -2 \\ 2 \end{bmatrix} = \begin{bmatrix} 3 \\ 9 \end{bmatrix} + \begin{bmatrix} -2 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 \\ 11 \end{bmatrix} \qquad (I.23)$$

Note that if the components of a vector $\mathbf{v}$ are $v_1$ and $v_2$, then

$$\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}, \qquad c\,\mathbf{v} = \begin{bmatrix} c v_1 \\ c v_2 \end{bmatrix} \qquad (I.24)$$

Note also that vector addition means adding the vector components separately:

$$\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} \qquad (I.25)$$

$$\mathbf{w} = \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} \qquad (I.26)$$

$$\mathbf{v} + \mathbf{w} = \begin{bmatrix} v_1 + w_1 \\ v_2 + w_2 \end{bmatrix} \qquad (I.27)$$

II.1.C. MATRIX PICTURE

In the matrix picture, the coefficients form one matrix and the same system becomes $A\mathbf{x} = \mathbf{b}$:

$$\begin{bmatrix} 1 & -2 \\ 3 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 \\ 11 \end{bmatrix}$$
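All three pictures describe one computation. A short NumPy check of the 2-by-2 example (an illustrative sketch, not part of the original notes):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  2.0]])
b = np.array([1.0, 11.0])

# Matrix picture: solve Ax = b directly.
x, y = np.linalg.solve(A, b)
print(x, y)                       # 3.0 1.0

# Column picture: the same numbers combine the columns into b.
print(x * A[:, 0] + y * A[:, 1])  # [ 1. 11.]

# Row picture: each row equation is satisfied at (3, 1).
print(A @ np.array([x, y]) - b)   # [0. 0.]
```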
II.1.D. THREE EQUATIONS IN THREE UNKNOWNS

Row Picture

Consider the system

$$\begin{aligned} x + 2y + 3z &= 6 \\ 2x + 5y + 2z &= 4 \\ 6x - 3y + z &= 2 \end{aligned}$$

Each equation produces a plane in three-dimensional space. The first plane crosses the x, y and z axes at the points $(6,0,0)$, $(0,3,0)$ and $(0,0,2)$. Those three points solve the equation and they determine the whole plane. Note that a line is determined by specifying two points on it, while a plane is determined by specifying three points on it.

Note that the intersection of two planes (e.g., the first two equations) is a line (call it L). Any point on the line satisfies the first two equations. The plane of the third equation intersects L at one point. This point is the solution of the system.

Column Picture

Correct combination (solution):

$$x = 0, \quad y = 0, \quad z = 2 \qquad (I.28)$$

Matrix Picture

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 5 & 2 \\ 6 & -3 & 1 \end{bmatrix} \qquad (I.29)$$

$$A\mathbf{x} = \mathbf{b} \qquad (I.30)$$

We can perform the multiplication $A\mathbf{x}$ by rows (a dot product of each row with $\mathbf{x}$) or by columns (a combination of the columns with weights $x$, $y$, $z$).

Identity Matrix

$$I = \begin{bmatrix} 1 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{bmatrix} \qquad (I.31)$$

Matrix Notation

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \qquad (I.32)$$

System with No Solution

If the three planes have no common intersection point (for example, when two of them are parallel), the system has no solution.

II.2. Elimination

II.2.A. PROCEDURE

For the 2-by-2 example of Section II.1.A, elimination subtracts 3 times the first equation from the second, leaving $8y = 8$. Pivots: 1, 8.

For three equations, let

$$\begin{aligned} 2x + 4y - 2z &= 2 \\ 4x + 9y - 3z &= 8 \\ -2x - 3y + 7z &= 10 \end{aligned} \quad \Rightarrow \quad \begin{aligned} 2x + 4y - 2z &= 2 \\ y + z &= 4 \\ 4z &= 8 \end{aligned}$$

$$A\mathbf{x} = \mathbf{b} \;\Rightarrow\; U\mathbf{x} = \mathbf{c}$$

Solution: $\mathbf{x} = \begin{bmatrix} -1 \\ 2 \\ 2 \end{bmatrix}$.

II.3. Elimination Using Matrices

Subtracting twice the first row from the second row is equivalent to multiplying $A$ by

$$E_{21} = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \;\Rightarrow\; E_{21}A = \begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ -2 & -3 & 7 \end{bmatrix}, \quad E_{21}\mathbf{b} = \begin{bmatrix} 2 \\ 4 \\ 10 \end{bmatrix}$$

$$E_{21}^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Note that

$$E_{31} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \;\Rightarrow\; E_{31}E_{21}A = \begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 1 & 5 \end{bmatrix}, \quad E_{31}E_{21}\mathbf{b} = \begin{bmatrix} 2 \\ 4 \\ 12 \end{bmatrix}$$

$$E_{31}^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}$$

$$E_{32} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix} \;\Rightarrow\; E_{32}E_{31}E_{21}A = \begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{bmatrix}, \quad E_{32}E_{31}E_{21}\mathbf{b} = \begin{bmatrix} 2 \\ 4 \\ 8 \end{bmatrix}$$

$$E_{32}^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}$$

$$E = E_{32}E_{31}E_{21} = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 3 & -1 & 1 \end{bmatrix}$$

$$EA = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 3 & -1 & 1 \end{bmatrix} \begin{bmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{bmatrix} = \begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{bmatrix}$$

$$E\mathbf{b} = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 3 & -1 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 8 \\ 10 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 8 \end{bmatrix}$$

Note the inverse of $E$ is

$$E^{-1} = E_{21}^{-1}E_{31}^{-1}E_{32}^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 1 & 1 \end{bmatrix}$$

Note that $EE^{-1} = E^{-1}E = I$.

Permutation Matrices

$$P_{21} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad P_{23} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$$

The Augmented Matrix

Matrix products can be formed column by column or row by row:

$$AB = A \begin{bmatrix} \mathbf{b}_1 & \mathbf{b}_2 & \cdots & \mathbf{b}_m \end{bmatrix} = \begin{bmatrix} A\mathbf{b}_1 & A\mathbf{b}_2 & \cdots & A\mathbf{b}_m \end{bmatrix}, \qquad AB = \begin{bmatrix} \mathbf{a}_1^T \\ \mathbf{a}_2^T \\ \vdots \\ \mathbf{a}_n^T \end{bmatrix} B = \begin{bmatrix} \mathbf{a}_1^T B \\ \mathbf{a}_2^T B \\ \vdots \\ \mathbf{a}_n^T B \end{bmatrix}$$

In particular, elimination can be applied to the augmented matrix $[A \;\; \mathbf{b}]$, which carries $A$ and $\mathbf{b}$ through the same row operations.
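The elimination matrices can be multiplied out in code; a sketch using the matrices of Section II.3:

```python
import numpy as np

A = np.array([[ 2,  4, -2],
              [ 4,  9, -3],
              [-2, -3,  7]], dtype=float)
b = np.array([2.0, 8.0, 10.0])

# Elementary elimination matrices from Section II.3
E21 = np.array([[1, 0, 0], [-2, 1, 0], [0,  0, 1]], dtype=float)
E31 = np.array([[1, 0, 0], [ 0, 1, 0], [1,  0, 1]], dtype=float)
E32 = np.array([[1, 0, 0], [ 0, 1, 0], [0, -1, 1]], dtype=float)

E = E32 @ E31 @ E21
U = E @ A                     # upper triangular
c = E @ b                     # [2. 4. 8.]
print(U)
print(np.linalg.inv(E))       # lower triangular: this is L in A = LU
print(np.linalg.solve(A, b))  # [-1.  2.  2.]
```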
II.4. Matrix Operations

II.4.A. BLOCK MATRICES

A matrix can be partitioned into blocks; when the block shapes match, block matrices multiply block by block, just like numbers.

II.5. Inverse Matrices

$$(ABC)^{-1} = C^{-1}B^{-1}A^{-1}$$

II.5.A. GAUSS-JORDAN ELIMINATION

Gauss-Jordan elimination applies row operations to the augmented matrix $[A \;\; I]$ until it becomes $[I \;\; A^{-1}]$.

A triangular matrix is invertible if and only if no diagonal entries are zero. $A$ is invertible if and only if it has $n$ pivots (row exchanges allowed). If $A\mathbf{x} = \mathbf{0}$ for a nonzero vector $\mathbf{x}$, then $A$ has no inverse.

II.6. Elimination = Factorization

The elimination matrix $E$ is lower triangular, and after elimination $U$ is upper triangular:

$$EA = U \;\Rightarrow\; A = E^{-1}U$$

But $E^{-1}$ is also lower triangular, so $A = LU$ with $L = E^{-1}$.

$$A\mathbf{x} = \mathbf{b} \;\Rightarrow\; U\mathbf{x} = \mathbf{c}, \quad \text{where } L\mathbf{c} = LU\mathbf{x} = A\mathbf{x} = \mathbf{b}$$

Therefore, solve $L\mathbf{c} = \mathbf{b}$ to find $\mathbf{c}$, then solve $U\mathbf{x} = \mathbf{c}$ to find $\mathbf{x}$.

II.7. Transposes and Permutations

II.7.A. SYMMETRIC MATRICES

The inverse of a symmetric matrix is also symmetric. $A^TA$ is always symmetric. The symmetric factorization of a symmetric matrix is $A = LDL^T$.

II.7.B. PERMUTATION MATRICES

$$P^{-1} = P^T$$

III. VECTOR SPACES AND SUBSPACES

The space $\mathbb{R}^n$ consists of all column vectors with $n$ components. Complex vectors such as $\begin{bmatrix} 1+i \\ 2-3i \end{bmatrix}$ belong to $\mathbb{C}^2$.

M: the vector space of all real $2 \times 2$ matrices.
Z: the vector space that consists only of a zero vector.

III.1.A. SUBSPACES

A plane through the origin $(0,0,0)$ is a subspace of $\mathbb{R}^3$. Every subspace contains the zero vector (the point at the origin). Lines through the origin are subspaces. $\mathbb{R}^n$ is a subspace of $\mathbb{R}^n$.

III.1.B. COLUMN SPACE

$C(A)$ contains all columns of $A$ and all their linear combinations $A\mathbf{x}$. The system $A\mathbf{x} = \mathbf{b}$ is solvable if and only if $\mathbf{b}$ is in the column space of $A$.

Let $A = \begin{bmatrix} 1 & 0 \\ 4 & 3 \\ 2 & 3 \end{bmatrix}$. Then $A\mathbf{x} = \mathbf{b}$ means $\mathbf{b} = x_1 \begin{bmatrix} 1 \\ 4 \\ 2 \end{bmatrix} + x_2 \begin{bmatrix} 0 \\ 3 \\ 3 \end{bmatrix}$: a plane in $\mathbb{R}^3$.

Find $C(A)$ for

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 4 \end{bmatrix}$$

III.2. The Nullspace

Vectors $\mathbf{x}$ that are solutions to $A\mathbf{x} = \mathbf{0}$ are in the nullspace of $A$, written $N(A)$.

What is the nullspace of $A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$? Since $A$ is invertible, $A\mathbf{x} = \mathbf{0}$ forces $\mathbf{x} = \mathbf{0}$: $N(A) = Z$. The same holds for any matrix $B$ with independent columns: $N(B) = Z$.

Let $A = \begin{bmatrix} 1 & 1 & 2 & 3 \\ 2 & 2 & 8 & 10 \\ 3 & 3 & 10 & 13 \end{bmatrix}$.

Pivot columns: 1, 3. Free columns: 2, 4. The special solutions are in the nullspace, and their combinations fill out the whole nullspace.
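The pivot columns and special solutions can be found with a computer algebra package; a sketch using SymPy (the library choice is mine, not the notes'):

```python
from sympy import Matrix

A = Matrix([[1, 1,  2,  3],
            [2, 2,  8, 10],
            [3, 3, 10, 13]])

R, pivots = A.rref()      # row-reduced echelon form and pivot columns
print(pivots)             # (0, 2) -> columns 1 and 3 in 1-based counting
print(R)

# Special solutions: one per free column; they span N(A)
for s in A.nullspace():
    print(s.T, (A * s).T)  # A*s is the zero vector
```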
III.2.A. ECHELON MATRICES

In an echelon matrix $U$, each pivot lies to the right of the pivot in the row above, with zeros below every pivot.

III.2.B. ROW-REDUCED ECHELON MATRICES

The row-reduced echelon matrix $R$ goes further: every pivot equals 1 and is the only nonzero entry in its column.

III.3. The Rank and the Row Reduced Form

Rank $r$ = number of pivots.

$$r(A) = r(U) = r(R)$$

For a rank-one matrix $A = \mathbf{u}\mathbf{v}^T$, note that $A\mathbf{x} = \mathbf{0} \Rightarrow \mathbf{u}\mathbf{v}^T\mathbf{x} = \mathbf{0}$. Note that $\mathbf{v}^T\mathbf{x}$ is a scalar, so $\mathbf{v}^T\mathbf{x} = 0$. This means that vectors in the nullspace of $A$ are orthogonal to $\mathbf{v}$. Note that $\mathbf{v}$ is in the row space of $A$.

Rank = number of independent columns = number of independent rows. Rank = dimension of the column space = dimension of the row space.

III.3.A. THE PIVOT COLUMNS

The pivot columns of $A$, $U$, $R$ are in the same positions, but their column spaces are different. The pivot columns of $R$ form an $r \times r$ identity matrix, sitting above $m - r$ rows of zeros.

III.3.B. THE SPECIAL SOLUTIONS

The columns of the nullspace matrix $N$ are the special solutions: $AN = 0$. $N$ is $n \times (n - r)$.

III.4. The Complete Solution

$$A\mathbf{x} = \mathbf{b} \;\Leftrightarrow\; R\mathbf{x} = \mathbf{d}$$

Let

$$[A \;\; \mathbf{b}] = \begin{bmatrix} 1 & 3 & 0 & 2 & b_1 \\ 0 & 0 & 1 & 4 & b_2 \\ 1 & 3 & 1 & 6 & b_3 \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}$$

Elimination gives

$$[R \;\; \mathbf{d}] = \begin{bmatrix} 1 & 3 & 0 & 2 & b_1 \\ 0 & 0 & 1 & 4 & b_2 \\ 0 & 0 & 0 & 0 & b_3 - b_1 - b_2 \end{bmatrix}$$

Then from the last row, the equations are consistent if $0 = b_3 - b_1 - b_2$, i.e. $b_3 = b_1 + b_2$.

Let $\mathbf{b} = \begin{bmatrix} 1 \\ 6 \\ 7 \end{bmatrix}$. Then

$$[R \;\; \mathbf{d}] = \begin{bmatrix} 1 & 3 & 0 & 2 & 1 \\ 0 & 0 & 1 & 4 & 6 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \;\Rightarrow\; \mathbf{d} = \begin{bmatrix} 1 \\ 6 \\ 0 \end{bmatrix}$$

Pivot variables: $x_1, x_3$. Free variables: $x_2, x_4$.

Particular solution: $x_2 = x_4 = 0 \Rightarrow x_1 = 1, x_3 = 6$. Free variables are zeros and pivot variables come from $\mathbf{d}$:

$$\mathbf{x}_p = \begin{bmatrix} 1 \\ 0 \\ 6 \\ 0 \end{bmatrix}$$

Note that $R\mathbf{x}_p = \mathbf{d}$ is satisfied, so $A\mathbf{x}_p = \mathbf{b}$, and $A(\mathbf{x}_p + \mathbf{x}_n) = \mathbf{b}$ whenever $A\mathbf{x}_n = \mathbf{0}$. The complete solution is

$$\mathbf{x} = \mathbf{x}_p + \mathbf{x}_n = \begin{bmatrix} 1 \\ 0 \\ 6 \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} -3 \\ 1 \\ 0 \\ 0 \end{bmatrix} + x_4 \begin{bmatrix} -2 \\ 0 \\ -4 \\ 1 \end{bmatrix}$$

Now let

$$A = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ -2 & -3 \end{bmatrix}$$

$$\begin{bmatrix} 1 & 1 & b_1 \\ 1 & 2 & b_2 \\ -2 & -3 & b_3 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 1 & b_1 \\ 0 & 1 & b_2 - b_1 \\ 0 & -1 & b_3 + 2b_1 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 2b_1 - b_2 \\ 0 & 1 & b_2 - b_1 \\ 0 & 0 & b_3 + b_2 + b_1 \end{bmatrix}$$

The system is solvable when $b_3 + b_2 + b_1 = 0$. Two pivot variables, no free variables, so $r = 2$. No special solutions.

$$\mathbf{x} = \mathbf{x}_p = \begin{bmatrix} 2b_1 - b_2 \\ b_2 - b_1 \end{bmatrix}$$

This is the only solution. $A$ has full column rank, and

$$R = \begin{bmatrix} I \\ m - n \text{ rows of zeros} \end{bmatrix}, \qquad N(A) = Z$$

The row space of $A$ is $C(A^T)$.
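A quick numerical check that every choice of the free variables solves $A\mathbf{x} = \mathbf{b}$ (a sketch based on the example above):

```python
import numpy as np

A = np.array([[1, 3, 0, 2],
              [0, 0, 1, 4],
              [1, 3, 1, 6]], dtype=float)
b = np.array([1.0, 6.0, 7.0])

xp = np.array([1.0, 0.0, 6.0, 0.0])       # particular solution
s1 = np.array([-3.0, 1.0, 0.0, 0.0])      # special solution (x2 free)
s2 = np.array([-2.0, 0.0, -4.0, 1.0])     # special solution (x4 free)

# Every choice of the free variables gives a solution of Ax = b
for x2, x4 in [(0, 0), (1, 0), (0, 1), (2.5, -4)]:
    x = xp + x2 * s1 + x4 * s2
    assert np.allclose(A @ x, b)
print("all candidates solve Ax = b")
```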
III.5. Independence, Basis and Dimension

Basis vectors:
1. Independent
2. Span the space

There is one and only one way to write a vector as a combination of the basis vectors. The basis is not unique. The columns of every invertible $n \times n$ matrix give a basis for $\mathbb{R}^n$. The pivot columns of $A$ are a basis for its column space. The pivot rows of $A$ are a basis for its row space. So are the pivot rows of its echelon form $R$.

Let $A = \begin{bmatrix} 2 & 4 \\ 3 & 6 \end{bmatrix} \;\Rightarrow\; R = \begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix}$.

Basis of column space: $\begin{bmatrix} 2 \\ 3 \end{bmatrix}$. Basis of row space: $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$.

The set of basis vectors is not unique. If $\{\mathbf{v}_i\}_{i=1}^m$ and $\{\mathbf{w}_i\}_{i=1}^n$ are both sets of basis vectors, then $n = m$. The dimension of a space is the number of basis vectors. The dimension of the column space equals the rank of the matrix.

The space M contains all $2 \times 2$ matrices. One basis is:

$$A_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad A_3 = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \quad A_4 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$

Note that $c_1A_1 + c_2A_2 + c_3A_3 + c_4A_4 = \begin{bmatrix} c_1 & c_2 \\ c_3 & c_4 \end{bmatrix} = A$, an arbitrary matrix in M. Note that $A = 0$ only if $c_1 = c_2 = c_3 = c_4 = 0$. M is 4-dimensional. $A_1, A_2, A_4$ are a basis of the 3-dimensional space of upper triangular matrices.

Let $A$ be $m \times n$.

Left Nullspace $= N(A^T)$: solve $A^T\mathbf{y} = \mathbf{0}$. $A^T$ is $n \times m$.

Fundamental Theorem of Linear Algebra, Part I:
- $N(A^T)$ is a subspace of $\mathbb{R}^m$, with dimension $m - r$.
- $N(A)$ is a subspace of $\mathbb{R}^n$, with dimension $n - r$.
- The row space and column space have the same dimension $r$.

Let

$$A = \begin{bmatrix} 1 & 3 & 5 & 0 & 7 \\ 0 & 0 & 0 & 1 & 2 \\ 1 & 3 & 5 & 1 & 9 \end{bmatrix} \;\Rightarrow\; R = \begin{bmatrix} 1 & 3 & 5 & 0 & 7 \\ 0 & 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

Pivot rows: 1, 2. Pivot columns: 1, 4. $r = 2$.

Row space dimension = 2; the pivot rows are a basis. Column space dimension = 2; the pivot columns are a basis. Nullspace dimension = 3; the special solutions are a basis. Left nullspace dimension = 1.

$R^T\mathbf{y} = \mathbf{0} \Leftrightarrow \mathbf{y}^TR = \mathbf{0}$: this means that $\mathbf{y}^T$ gives a linear combination of the rows that is zero. The pivot rows of $R$ are independent, so their coefficients must be zero: $\mathbf{y}^T = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}$. $\mathbf{y}$ is a basis for the left nullspace (the last $m - r$ rows of $I$).

The number of independent columns equals the number of independent rows. $A$ and $R$ have the same row space, but $C(A) \neq C(R)$. The pivot columns of $A$ are a basis for $C(A)$. $A$ and $R$ have the same nullspace.

IV. ORTHOGONALITY

Right triangle: $a^2 + b^2 = c^2$. If $\mathbf{v}^T\mathbf{w} = 0$:

$$\|\mathbf{v} + \mathbf{w}\|^2 = (\mathbf{v} + \mathbf{w})^T(\mathbf{v} + \mathbf{w}) = \|\mathbf{v}\|^2 + 2\mathbf{v}^T\mathbf{w} + \|\mathbf{w}\|^2 = \|\mathbf{v}\|^2 + \|\mathbf{w}\|^2$$

Every row of $A$ is perpendicular to every solution of $A\mathbf{x} = \mathbf{0}$: the row space is perpendicular to the nullspace. Every column of $A$ is perpendicular to every solution of $\mathbf{y}^TA = \mathbf{0}$: the column space is perpendicular to the left nullspace.

Spaces V and W are orthogonal ($V \perp W$) if for every $\mathbf{v} \in V$ and every $\mathbf{w} \in W$, $\mathbf{v}^T\mathbf{w} = 0$.

Let $A = \begin{bmatrix} 1 & 3 & 4 \\ 5 & 2 & 7 \end{bmatrix}$, $\mathbf{x} = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix}$. Then $A\mathbf{x} = \mathbf{0}$, so $\mathbf{x} \in N(A)$: the rows of $A$ are perpendicular to $\mathbf{x}$.

The row space and nullspace are orthogonal complements because their dimensions add to $n$.

Fundamental Theorem of Linear Algebra, Part II: $N(A)$ is the orthogonal complement of $C(A^T)$. $N(A^T)$ is the orthogonal complement of $C(A)$.

Let $\mathbf{x}$ be split into a row space component and a nullspace component, $\mathbf{x} = \mathbf{x}_r + \mathbf{x}_n$. Then

$$A\mathbf{x} = A\mathbf{x}_r + A\mathbf{x}_n = A\mathbf{x}_r$$

Vectors in the row space are transformed into vectors in the column space by $A\mathbf{x}$.

Let

$$A = \begin{bmatrix} A_{11} & 0 & 0 & 0 & 0 \\ 0 & A_{22} & 0 & 0 & 0 \\ 0 & 0 & A_{33} & 0 & 0 \end{bmatrix}$$

a diagonal matrix. If $A_{11}, A_{22}, A_{33} \neq 0$ then $r = 3$ and $A$ includes the $3 \times 3$ invertible diagonal submatrix $\mathrm{diag}(A_{11}, A_{22}, A_{33})$.

Let

$$A = \begin{bmatrix} 3 & 0 & 0 & 0 & 0 \\ 0 & 5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \;\Rightarrow\; r = 2, \qquad \text{submatrix} = \begin{bmatrix} 3 & 0 \\ 0 & 5 \end{bmatrix}$$

Let

$$B = \begin{bmatrix} 1 & 2 & 3 & 4 & 5 \\ 1 & 2 & 4 & 5 & 6 \\ 1 & 2 & 4 & 5 & 6 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 2 & 3 & 4 & 5 \\ 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 2 & 3 & 4 & 5 \\ 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

Pivot columns: 1, 3. Submatrix $= \begin{bmatrix} 1 & 3 \\ 1 & 4 \end{bmatrix}$.
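The four fundamental subspaces of the 3-by-5 example above can be computed numerically; a sketch using SciPy's null_space (the library choice is mine):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 3, 5, 0, 7],
              [0, 0, 0, 1, 2],
              [1, 3, 5, 1, 9]], dtype=float)

r = np.linalg.matrix_rank(A)
N = null_space(A)            # orthonormal basis of N(A),   shape (5, 5 - r)
LN = null_space(A.T)         # orthonormal basis of N(A^T), shape (3, 3 - r)
print(r, N.shape, LN.shape)  # 2 (5, 3) (3, 1)

# Row space _|_ nullspace, column space _|_ left nullspace
print(np.allclose(A @ N, 0))     # rows of A are orthogonal to N(A)
print(np.allclose(A.T @ LN, 0))  # columns of A are orthogonal to N(A^T)
```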
3 6  3 2 2 Since x r    is in the row space, then x n  x  x r    is in the null space. 4  1 Note that x r  x n. III.5-Independence, Basis and Dimension 47 Mohammad M. Banat – EE 240: Introduction to Linear Systems IV: Orthogonality IV.1. Projections p  Pb is a projection of b. 2 Note that b   3  corresponds to a points in the xyz space.    4  When b is projected onto a line, the projection is the part of b along that line. When b is projected onto a plane, the projection is the part of b along that plane. p 1 : projection onto z axis. IV.1-Projections 48 Mohammad M. Banat – EE 240: Introduction to Linear Systems IV: Orthogonality p 2 : projection onto xy plane. p1  p 2  b Note that p 1  p 2. Note that p 1 has only the third row of b , while p 2 has the first two rows. 0 0 0 1 0 0  P1  0 0 0 , P2   0 1 0 .        0 0 1   0 0 0  P1  P2  I. IV.2. Projection Onto a Line Determine the point p on a that is closest to b. The line from b to p is perpendicular to the vector a. This is e. ˆ. e  b  xa Note that we have a vector p that is a multiple of a. Let p  xa ˆ. ea. 2 ab aTb a  ( b  xa ˆ )  0  xˆ a  a  b  xˆ  2 . a aTa aTb a b cos  p T a p  2 a  b cos . a a a IV.2-Projection Onto a Line 49 Mohammad M. Banat – EE 240: Introduction to Linear Systems IV: Orthogonality aTb aTb aa T p aa  b  Pb  aTa aTa aTa aa T P aTa P 2  P : Projecting a second time doesn't change anything. ( I  P)b  b  Pb  I  P is a projection matrix too. IV.3. Projection Onto a Subspace Let a1, a 2 ,, a n   m. Let a1 , a 2 ,  , a n be independent. a1 , a 2 ,  , a n span an n -dimensional SS of  m. Let p  xˆ 1a 1  xˆ 2 a 2    xˆ n a n ; this is a vector in the SS. Find p that is closest to b.  xˆ 1   xˆ  Note that p  Axˆ , where A   a 1  a n  , xˆ   . 2 a2      xˆ n  Let e  b  Axˆ. e  SS  e  a1 , a 2 , , a n. The line and plane are orthogonal complements. Note that the z axis is C ( P1 ) , while the xy plane is C ( P2 ).  a1T     a 2T     b  Axˆ   0  A ( b  Axˆ )  0  A Axˆ  A b. T T T    a T   n xˆ  ( A T A) 1 A T b  p  A( A T A) 1 A T b  Pb  P  A( A T A) 1 A T. SS is the column space of A. The error vector b  Axˆ is perpendicular to that column space. IV.3-Projection Onto a Subspace 50 Mohammad M. Banat – EE 240: Introduction to Linear Systems IV: Orthogonality Therefore b  Axˆ is in the null space of A T. This means that A T ( b  Axˆ )  0. The vector b is being split into the projection p and the error b  p  e. p  e. A T A is invertible if A has linearly independent columns. In this case A T A is symmetric, square ( n  n )and invertible. IV.4. Least Squares Approximations When A has more rows than columns, Ax  b can have no solution. When there is no solution, b  C ( A) , e  b  Ax  0. Some equations may be inconsistent because of measurement noise. We try to make e as small as possible (in terms of its squared length) using a least squares solution x̂. b can be split into a part p  C ( A) and a part e  N  A T . Note that even if Ax  b cannot be solved, Axˆ  p can be solved. Ax  b  Ax  p  e.  Ax  p   C ( A)  e   Ax  p   2 2 2 Ax  b  Ax  p  e. A T Axˆ  A T b. Fitting a straight line to m points: Let’s fit the points ( t , b ): (0,6), (1,1), (2,3) to a straight line. Let the line equation be b  tx1  x 2. Unknowns: x1, x 2. x2  6 x1  x 2  1. 2 x1  x 2  3  0 1 6   x1  A  1 1 , x    , b  1  , Ax  b has no solution. 
IV.4. Least Squares Approximations

When $A$ has more rows than columns, $A\mathbf{x} = \mathbf{b}$ can have no solution. When there is no solution, $\mathbf{b} \notin C(A)$ and $\mathbf{e} = \mathbf{b} - A\mathbf{x} \neq \mathbf{0}$. Some equations may be inconsistent because of measurement noise. We try to make $\mathbf{e}$ as small as possible (in terms of its squared length) using a least squares solution $\hat{\mathbf{x}}$.

$\mathbf{b}$ can be split into a part $\mathbf{p} \in C(A)$ and a part $\mathbf{e} \in N(A^T)$. Note that even if $A\mathbf{x} = \mathbf{b}$ cannot be solved, $A\hat{\mathbf{x}} = \mathbf{p}$ can be solved. Since $(A\mathbf{x} - \mathbf{p}) \in C(A)$ is perpendicular to $\mathbf{e}$:

$$\|A\mathbf{x} - \mathbf{b}\|^2 = \|A\mathbf{x} - \mathbf{p}\|^2 + \|\mathbf{e}\|^2$$

The minimum is at $A\hat{\mathbf{x}} = \mathbf{p}$, which gives the normal equations:

$$A^TA\hat{\mathbf{x}} = A^T\mathbf{b}$$

Fitting a straight line to m points

Let's fit the points $(t, b)$: $(0,6)$, $(1,1)$, $(2,3)$ to a straight line. Let the line equation be $b = tx_1 + x_2$. Unknowns: $x_1, x_2$.

$$\begin{aligned} x_2 &= 6 \\ x_1 + x_2 &= 1 \\ 2x_1 + x_2 &= 3 \end{aligned}$$

$$A = \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} 6 \\ 1 \\ 3 \end{bmatrix}, \quad A\mathbf{x} = \mathbf{b} \text{ has no solution.}$$

$$\hat{\mathbf{x}} = (A^TA)^{-1}A^T\mathbf{b} = \begin{bmatrix} 5 & 3 \\ 3 & 3 \end{bmatrix}^{-1}\begin{bmatrix} 7 \\ 10 \end{bmatrix} = \frac{1}{6}\begin{bmatrix} 3 & -3 \\ -3 & 5 \end{bmatrix}\begin{bmatrix} 7 \\ 10 \end{bmatrix} = \frac{1}{6}\begin{bmatrix} -9 \\ 29 \end{bmatrix}$$

$$\mathbf{e} = \mathbf{b} - A\hat{\mathbf{x}} = \begin{bmatrix} 6 \\ 1 \\ 3 \end{bmatrix} - \frac{1}{6}\begin{bmatrix} 29 \\ 20 \\ 11 \end{bmatrix} = \frac{7}{6}\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}$$

Note that $\mathbf{e} \perp \mathbf{a}_1$ and $\mathbf{e} \perp \mathbf{a}_2$:

$$(0)e_1 + (1)e_2 + (2)e_3 = 0, \qquad e_1 + e_2 + e_3 = 0$$

In general, for points $(a_{i1}, b_i)$, $i = 1, \ldots, m$:

$$A = \begin{bmatrix} a_{11} & 1 \\ a_{21} & 1 \\ \vdots & \vdots \\ a_{m1} & 1 \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}, \quad A\mathbf{x} = \mathbf{b} \text{ has no solution in general.}$$

$$A^TA = \begin{bmatrix} \sum_{i=1}^m a_{i1}^2 & \sum_{i=1}^m a_{i1} \\ \sum_{i=1}^m a_{i1} & m \end{bmatrix}, \qquad A^T\mathbf{b} = \begin{bmatrix} \sum_{i=1}^m a_{i1}b_i \\ \sum_{i=1}^m b_i \end{bmatrix}$$

$$\begin{bmatrix} \sum_{i=1}^m a_{i1}^2 & \sum_{i=1}^m a_{i1} \\ \sum_{i=1}^m a_{i1} & m \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^m a_{i1}b_i \\ \sum_{i=1}^m b_i \end{bmatrix}$$

When the points $(t, b)$ are such that $\sum_{i=1}^m t_i = 0$, the columns of $A$ are orthogonal. In the above, $a_{i1} = t_i$, so $A^TA$ is diagonal:

$$A^TA = \begin{bmatrix} \sum_{i=1}^m t_i^2 & 0 \\ 0 & m \end{bmatrix}$$

IV.5. Fitting by a Parabola

$$b = t^2x_1 + tx_2 + x_3$$

$$\begin{aligned} t_1^2x_1 + t_1x_2 + x_3 &= b_1 \\ t_2^2x_1 + t_2x_2 + x_3 &= b_2 \\ &\;\;\vdots \\ t_m^2x_1 + t_mx_2 + x_3 &= b_m \end{aligned}$$

$$A = \begin{bmatrix} t_1^2 & t_1 & 1 \\ t_2^2 & t_2 & 1 \\ \vdots & \vdots & \vdots \\ t_m^2 & t_m & 1 \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}$$

$A\mathbf{x} = \mathbf{b}$ has no solution in general. Solve $A^TA\hat{\mathbf{x}} = A^T\mathbf{b}$.
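The straight-line fit above can be reproduced in a few lines (a sketch; numpy.linalg.lstsq is used only as a cross-check):

```python
import numpy as np

# Fit b = x1 * t + x2 to the points (0, 6), (1, 1), (2, 3)
t = np.array([0.0, 1.0, 2.0])
b = np.array([6.0, 1.0, 3.0])
A = np.column_stack([t, np.ones_like(t)])

# Normal equations: A^T A xhat = A^T b
xhat = np.linalg.solve(A.T @ A, A.T @ b)
print(xhat)                 # [-1.5  4.8333...] = (-9/6, 29/6)

# Same answer from the library least squares routine
xhat2, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(xhat, xhat2))   # True

e = b - A @ xhat
print(A.T @ e)              # ~[0. 0.]: error is orthogonal to both columns
```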
IV.6. Orthogonal Bases and Gram-Schmidt

$\{\mathbf{q}_i\}_{i=1}^n$ are orthonormal if

$$\mathbf{q}_i^T\mathbf{q}_j = \begin{cases} 0, & i \neq j \\ 1, & i = j \end{cases}$$

Let $Q = \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \cdots & \mathbf{q}_n \end{bmatrix}$. Then $Q^TQ = I$. When $Q$ is square, $Q^{-1} = Q^T$. When the $\{\mathbf{q}_i\}_{i=1}^n$ are only orthogonal, $Q^TQ$ is diagonal. $Q^TQ = I$: when $Q$ is rectangular, $Q^T$ is the left inverse.

The rows of a square $Q$ are orthonormal. A square $Q$ is called orthogonal.

$$Q = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

rotates every vector in the plane by an angle $\theta$: $Q$ is a rotation matrix.

$$Q^{-1} = Q^T = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}$$

The columns of $Q$ are an orthonormal basis of $\mathbb{R}^2$.

$$P = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}$$

is a permutation matrix. $P$ is orthogonal: $P^{-1} = P^T$.

Let $\mathbf{u}$ be a unit vector and $Q = I - 2\mathbf{u}\mathbf{u}^T$. Then $Q^TQ = I$: $Q$ is symmetric and orthogonal (a reflection). See examples in the book.

Rotation, reflection and permutation matrices $Q$ preserve vector length: $\|Q\mathbf{x}\| = \|\mathbf{x}\|$. $Q$ preserves dot products: $(Q\mathbf{x}) \cdot (Q\mathbf{y}) = \mathbf{x} \cdot \mathbf{y}$.

Suppose we have the system $Q\mathbf{x} = \mathbf{b}$ with rectangular $Q$. If this has no solution, we have to solve

$$Q^TQ\hat{\mathbf{x}} = Q^T\mathbf{b} \;\Rightarrow\; \hat{\mathbf{x}} = Q^T\mathbf{b} \;\Rightarrow\; \mathbf{p} = QQ^T\mathbf{b} \;\Rightarrow\; P = QQ^T$$

$$\mathbf{p} = \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \cdots & \mathbf{q}_n \end{bmatrix}\begin{bmatrix} \mathbf{q}_1^T\mathbf{b} \\ \mathbf{q}_2^T\mathbf{b} \\ \vdots \\ \mathbf{q}_n^T\mathbf{b} \end{bmatrix} = (\mathbf{q}_1^T\mathbf{b})\mathbf{q}_1 + (\mathbf{q}_2^T\mathbf{b})\mathbf{q}_2 + \cdots + (\mathbf{q}_n^T\mathbf{b})\mathbf{q}_n$$

The above is a sum of $n$ one-dimensional projections. When $Q$ is square,

$$\mathbf{b} = \mathbf{p} = (\mathbf{q}_1^T\mathbf{b})\mathbf{q}_1 + (\mathbf{q}_2^T\mathbf{b})\mathbf{q}_2 + \cdots + (\mathbf{q}_n^T\mathbf{b})\mathbf{q}_n$$

IV.7. The Gram-Schmidt Process

Start with three independent vectors $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3$.

Let $\mathbf{u}_1 = \mathbf{a}_1$ and $\mathbf{q}_1 = \dfrac{\mathbf{u}_1}{\|\mathbf{u}_1\|} = \dfrac{\mathbf{a}_1}{\|\mathbf{a}_1\|}$. We must have $\mathbf{u}_2 \perp \mathbf{u}_1$.

Project $\mathbf{a}_2$ onto $\mathbf{u}_1$ to get $p_{21} = \dfrac{\mathbf{u}_1^T\mathbf{a}_2}{\mathbf{u}_1^T\mathbf{u}_1}$, then set

$$\mathbf{u}_2 = \mathbf{a}_2 - p_{21}\mathbf{u}_1, \qquad \mathbf{u}_2 \perp \mathbf{u}_1, \qquad \mathbf{q}_2 = \frac{\mathbf{u}_2}{\|\mathbf{u}_2\|}, \qquad \mathbf{q}_2 \perp \mathbf{q}_1$$

Project $\mathbf{a}_3$ onto $\mathbf{u}_1$ to get $p_{31} = \dfrac{\mathbf{u}_1^T\mathbf{a}_3}{\mathbf{u}_1^T\mathbf{u}_1}$, and $\mathbf{a}_3$ onto $\mathbf{u}_2$ to get $p_{32} = \dfrac{\mathbf{u}_2^T\mathbf{a}_3}{\mathbf{u}_2^T\mathbf{u}_2}$, then set

$$\mathbf{u}_3 = \mathbf{a}_3 - p_{31}\mathbf{u}_1 - p_{32}\mathbf{u}_2, \qquad \mathbf{u}_3 \perp \mathbf{u}_1, \; \mathbf{u}_3 \perp \mathbf{u}_2, \qquad \mathbf{q}_3 = \frac{\mathbf{u}_3}{\|\mathbf{u}_3\|}$$

Let $\mathbf{a}_1 = \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}$, $\mathbf{a}_2 = \begin{bmatrix} 2 \\ 0 \\ -2 \end{bmatrix}$, $\mathbf{a}_3 = \begin{bmatrix} 3 \\ -3 \\ 3 \end{bmatrix}$. Then

$$\mathbf{u}_1 = \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}, \qquad \mathbf{q}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}$$

$$p_{21} = 1, \qquad \mathbf{u}_2 = \begin{bmatrix} 2 \\ 0 \\ -2 \end{bmatrix} - \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ -2 \end{bmatrix}, \qquad \mathbf{q}_2 = \frac{1}{\sqrt{6}}\begin{bmatrix} 1 \\ 1 \\ -2 \end{bmatrix}$$

$$p_{31} = 3, \quad p_{32} = -1, \qquad \mathbf{u}_3 = \begin{bmatrix} 3 \\ -3 \\ 3 \end{bmatrix} - 3\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \\ -2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \qquad \mathbf{q}_3 = \frac{1}{\sqrt{3}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$

Note that $A = \begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \mathbf{a}_3 \end{bmatrix}$, $Q = \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \mathbf{q}_3 \end{bmatrix}$ and $A = QR$.

Let $A = QR$. Then

$$R = Q^TA = \begin{bmatrix} \mathbf{q}_1^T \\ \mathbf{q}_2^T \\ \mathbf{q}_3^T \end{bmatrix}\begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \mathbf{a}_3 \end{bmatrix} = \begin{bmatrix} \mathbf{q}_1^T\mathbf{a}_1 & \mathbf{q}_1^T\mathbf{a}_2 & \mathbf{q}_1^T\mathbf{a}_3 \\ \mathbf{q}_2^T\mathbf{a}_1 & \mathbf{q}_2^T\mathbf{a}_2 & \mathbf{q}_2^T\mathbf{a}_3 \\ \mathbf{q}_3^T\mathbf{a}_1 & \mathbf{q}_3^T\mathbf{a}_2 & \mathbf{q}_3^T\mathbf{a}_3 \end{bmatrix} = \begin{bmatrix} \mathbf{q}_1^T\mathbf{a}_1 & \mathbf{q}_1^T\mathbf{a}_2 & \mathbf{q}_1^T\mathbf{a}_3 \\ 0 & \mathbf{q}_2^T\mathbf{a}_2 & \mathbf{q}_2^T\mathbf{a}_3 \\ 0 & 0 & \mathbf{q}_3^T\mathbf{a}_3 \end{bmatrix}$$

($R$ is upper triangular because later $\mathbf{q}$'s are orthogonal to earlier $\mathbf{a}$'s.)

$$A = QR = \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \mathbf{q}_3 \end{bmatrix}\begin{bmatrix} \mathbf{q}_1^T\mathbf{a}_1 & \mathbf{q}_1^T\mathbf{a}_2 & \mathbf{q}_1^T\mathbf{a}_3 \\ 0 & \mathbf{q}_2^T\mathbf{a}_2 & \mathbf{q}_2^T\mathbf{a}_3 \\ 0 & 0 & \mathbf{q}_3^T\mathbf{a}_3 \end{bmatrix}$$

The least squares setup becomes:

$$A^TA\hat{\mathbf{x}} = A^T\mathbf{b} \;\Rightarrow\; R^TQ^TQR\hat{\mathbf{x}} = R^TQ^T\mathbf{b} \;\Rightarrow\; R^TR\hat{\mathbf{x}} = R^TQ^T\mathbf{b} \;\Rightarrow\; R\hat{\mathbf{x}} = Q^T\mathbf{b}$$

V. DETERMINANTS

When $A^{-1}$ exists, $\det(A^{-1}) = 1/\det(A)$. Determinant = product of pivots.

$$\det(A) = |A| = \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix}$$

$$\det(AB) = \det(A)\det(B), \qquad \det(A^T) = \det(A), \qquad \det(A + B) \neq \det(A) + \det(B) \text{ in general}$$

1. The determinant of the $n \times n$ identity matrix is 1.
2. The determinant changes sign when two rows (or two columns) are exchanged. For permutation matrices, $\det P = 1$ for an even number of exchanges and $\det P = -1$ for an odd number of exchanges.
3. The determinant is linear in each row. If a row is multiplied by a number, while other rows are not changed, the determinant is multiplied by the same number:

$$\begin{vmatrix} ta & tb \\ c & d \end{vmatrix} = t\begin{vmatrix} a & b \\ c & d \end{vmatrix}$$

If a row of the first matrix is added to the corresponding row of another matrix, while other rows are not changed, determinants add:

$$\begin{vmatrix} a & b \\ c+g & d+h \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \begin{vmatrix} a & b \\ g & h \end{vmatrix}$$

Note that $\det(tA) = t^n\det(A)$.

4. If two rows are equal, the determinant is zero.
5. Subtracting a multiple of one row from another row leaves the determinant unchanged: $\det(A) = \det(U)$.
6. A matrix with a row of zeros has zero determinant.
7. The determinant of a triangular matrix equals the product of its diagonal elements.
8. A matrix with zero determinant is singular.

The first $k$ pivots come from a $k \times k$ matrix $A_k$ in the top left corner of $A$:

$$k\text{th pivot} = \frac{|A_k|}{|A_{k-1}|}$$

V.1. Determinant by Cofactors

Expanding along row $i$: $\det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in}$, where the cofactor $C_{ij}$ is $(-1)^{i+j}$ times the determinant of the submatrix with row $i$ and column $j$ removed.

V.2. Cramer's Rule

$$A\mathbf{x} = \mathbf{b} \;\Rightarrow\; A\begin{bmatrix} \mathbf{x} & \mathbf{e}_2 & \mathbf{e}_3 \end{bmatrix} = \begin{bmatrix} \mathbf{b} & \mathbf{a}_2 & \mathbf{a}_3 \end{bmatrix} = B_1$$

$$A\begin{bmatrix} x_1 & 0 & 0 \\ x_2 & 1 & 0 \\ x_3 & 0 & 1 \end{bmatrix} = \begin{bmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{bmatrix} = B_1$$

Taking determinants, $\det(A)\,x_1 = \det(B_1)$, so

$$x_1 = \frac{\det(B_1)}{\det(A)}$$

To get $x_2$, put $\mathbf{x}$ in the second column of $I$:

$$x_2 = \frac{\det(B_2)}{\det(A)}$$

Similarly,

$$x_j = \frac{\det(B_j)}{\det(A)}$$

Example

$$\begin{bmatrix} 4 & 3 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 6 \\ 2 \end{bmatrix} \;\Rightarrow\; x_1 = \frac{\begin{vmatrix} 6 & 3 \\ 2 & 2 \end{vmatrix}}{\begin{vmatrix} 4 & 3 \\ 1 & 2 \end{vmatrix}} = \frac{6}{5}, \qquad x_2 = \frac{\begin{vmatrix} 4 & 6 \\ 1 & 2 \end{vmatrix}}{\begin{vmatrix} 4 & 3 \\ 1 & 2 \end{vmatrix}} = \frac{2}{5}$$
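Cramer's rule is easy to state in code, though elimination is cheaper in practice; a sketch using the 2-by-2 example above:

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [1.0, 2.0]])
b = np.array([6.0, 2.0])

detA = np.linalg.det(A)
x = np.empty(2)
for j in range(2):
    Bj = A.copy()
    Bj[:, j] = b                  # replace column j by b
    x[j] = np.linalg.det(Bj) / detA

print(x)                          # [1.2 0.4] = (6/5, 2/5)
print(np.linalg.solve(A, b))      # same answer
```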
V.2.A. FINDING INVERSE

Let $A^{-1} = G$ with entries $g_{ij}$:

$$AG = A\begin{bmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

The first column satisfies $A\begin{bmatrix} g_{11} \\ g_{21} \\ g_{31} \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$. By Cramer's rule,

$$g_{11} = \frac{\begin{vmatrix} 1 & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & a_{32} & a_{33} \end{vmatrix}}{\det(A)} = \frac{C_{11}}{\det(A)}$$

In general,

$$g_{ij} = \frac{C_{ji}}{\det(A)} \;\Rightarrow\; A^{-1} = \frac{C^T}{\det(A)}$$

Cross Product

Let $\mathbf{u} = \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix}$, $\mathbf{v} = \begin{bmatrix} 1 \\ 3 \\ 2 \end{bmatrix}$. Then

$$\mathbf{u} \times \mathbf{v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 3 & 2 & 1 \\ 1 & 3 & 2 \end{vmatrix} = \mathbf{i} - 5\mathbf{j} + 7\mathbf{k} \;\Rightarrow\; \mathbf{u} \times \mathbf{v} = \begin{bmatrix} 1 \\ -5 \\ 7 \end{bmatrix}$$

$\mathbf{u} \times \mathbf{v} \perp \mathbf{u}, \mathbf{v}$. $\mathbf{u} \times \mathbf{v} = -\mathbf{v} \times \mathbf{u}$. $\mathbf{u} \times \mathbf{u} = \mathbf{0}$.

$$\|\mathbf{u} \times \mathbf{v}\| = \|\mathbf{u}\|\,\|\mathbf{v}\|\,|\sin\theta| = \text{area of the parallelogram with sides } \mathbf{u} \text{ and } \mathbf{v}.$$

V.2.B. VOLUME

$$(\mathbf{u} \times \mathbf{v}) \cdot \mathbf{w} = \begin{vmatrix} \mathbf{u}^T \\ \mathbf{v}^T \\ \mathbf{w}^T \end{vmatrix} = \text{volume of the } \mathbf{u}, \mathbf{v}, \mathbf{w} \text{ box}$$

VI. EIGENVALUES AND EIGENVECTORS

Generally, vectors change direction when left-multiplied by a matrix. $A\mathbf{x} = \lambda\mathbf{x}$ means that the vector does not change direction when multiplied by the matrix. $\lambda$ is an eigenvalue of the matrix; $\mathbf{x}$ is an eigenvector.

$$A\mathbf{x} - \lambda\mathbf{x} = \mathbf{0} \;\Rightarrow\; (A - \lambda I)\mathbf{x} = \mathbf{0} \;\Rightarrow\; \det(A - \lambda I) = 0, \qquad \mathbf{x} \in N(A - \lambda I)$$

Let $A = \begin{bmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{bmatrix}$:

$$\det\begin{bmatrix} 0.8 - \lambda & 0.3 \\ 0.2 & 0.7 - \lambda \end{bmatrix} = (0.8 - \lambda)(0.7 - \lambda) - 0.06 = \lambda^2 - 1.5\lambda + 0.5 = (1 - \lambda)(0.5 - \lambda) = 0$$

so $\lambda_{1,2} = 1, 0.5$.

To find the eigenvectors, note that $A - \lambda_1I$ and $A - \lambda_2I$ are singular:

$$\begin{bmatrix} -0.2 & 0.3 \\ 0.2 & -0.3 \end{bmatrix}\begin{bmatrix} x_{11} \\ x_{12} \end{bmatrix} = \mathbf{0} \;\Rightarrow\; -2x_{11} + 3x_{12} = 0 \;\Rightarrow\; \mathbf{x}_1 = \begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix}$$

$$\begin{bmatrix} 0.3 & 0.3 \\ 0.2 & 0.2 \end{bmatrix}\begin{bmatrix} x_{21} \\ x_{22} \end{bmatrix} = \mathbf{0} \;\Rightarrow\; x_{21} = -x_{22} \;\Rightarrow\; \mathbf{x}_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

Eigenvectors are not unique.

$$A\mathbf{x}_i = \lambda_i\mathbf{x}_i \;\Rightarrow\; A(A\mathbf{x}_i) = \lambda_iA\mathbf{x}_i = \lambda_i^2\mathbf{x}_i \;\Rightarrow\; A^2\mathbf{x}_i = \lambda_i^2\mathbf{x}_i \;\Rightarrow\; A^n\mathbf{x}_i = \lambda_i^n\mathbf{x}_i$$

$A^n$ has the same eigenvectors as $A$; $A^n$ has eigenvalues $\lambda_i^n$.

$A\mathbf{x}_i = \lambda_i\mathbf{x}_i$: $\lambda_i\mathbf{x}_i$ is a linear combination of the columns of $A$. Each column of $A$ is a linear combination of the eigenvectors. In the above, $\mathbf{a}_1 = \mathbf{x}_1 + 0.2\mathbf{x}_2$ and $\mathbf{a}_2 = \mathbf{x}_1 - 0.3\mathbf{x}_2$.

Note that

$$A\mathbf{a}_1 = A(\mathbf{x}_1 + 0.2\mathbf{x}_2) = \lambda_1\mathbf{x}_1 + 0.2\lambda_2\mathbf{x}_2 = \mathbf{x}_1 + 0.1\mathbf{x}_2 = \begin{bmatrix} 0.7 \\ 0.3 \end{bmatrix}$$

$$A^{99}\mathbf{a}_1 = \mathbf{x}_1 + 0.2(0.5)^{99}\mathbf{x}_2 \;\Rightarrow\; \text{column 1 of } A^{100} \approx \mathbf{x}_1$$

$\mathbf{x}_1$ is a steady state, while $\mathbf{x}_2$ is a decaying mode. All entries of $A$ are positive, and the sum of every column is 1. This is a Markov matrix.

When the columns add to 1, $\lambda = 1$ is an eigenvalue. When the matrix is singular, $\lambda = 0$ is an eigenvalue. When the matrix is symmetric, the eigenvectors are orthogonal.

$A = LU$: the eigenvalues of $U$ are the pivots, and they are not the eigenvalues of $A$. The product of the eigenvalues equals the determinant. The sum of the eigenvalues equals the sum of the diagonal entries (the trace).

Example: rotation matrix

$$Q = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \;\Rightarrow\; \lambda_{1,2} = \pm i, \qquad \mathbf{x}_{1,2} = \begin{bmatrix} 1 \\ -i \end{bmatrix}, \begin{bmatrix} 1 \\ i \end{bmatrix}$$

$Q$ is an orthogonal matrix, so $|\lambda_i| = 1$. $Q$ is also a skew-symmetric matrix, so $\lambda_i$ is pure imaginary.
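A numerical check of the Markov example (a sketch; NumPy may scale or flip the eigenvectors, and the eigenvalue order is not guaranteed):

```python
import numpy as np

A = np.array([[0.8, 0.3],
              [0.2, 0.7]])

lam, X = np.linalg.eig(A)
print(lam)     # eigenvalues 1.0 and 0.5 (order may vary)
print(X)       # columns are eigenvectors, each scaled to unit length

# Powers of A approach the rank-one steady-state projection:
# both columns tend to the steady-state eigenvector scaled so
# its entries sum to 1, i.e. (0.6, 0.4).
print(np.linalg.matrix_power(A, 100))
```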
VI.1. Diagonalizing a Matrix

Let the $n \times n$ matrix $A$ have $n$ linearly independent eigenvectors. Let $S$ have its columns as the eigenvectors of $A$: $S = \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_n \end{bmatrix}$. Then

$$AS = A\begin{bmatrix} \mathbf{x}_1 & \cdots & \mathbf{x}_n \end{bmatrix} = \begin{bmatrix} \lambda_1\mathbf{x}_1 & \cdots & \lambda_n\mathbf{x}_n \end{bmatrix} = \begin{bmatrix} \mathbf{x}_1 & \cdots & \mathbf{x}_n \end{bmatrix}\begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix} = S\Lambda$$

$$AS = S\Lambda \;\Rightarrow\; A = S\Lambda S^{-1} \;\Rightarrow\; A^n = S\Lambda^nS^{-1}$$

Let $A = \begin{bmatrix} 1 & 5 \\ 0 & 6 \end{bmatrix}$. Then $\lambda_{1,2} = 1, 6$ and $\mathbf{x}_{1,2} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. These are independent.

$$S = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \qquad S^{-1} = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}, \qquad \Lambda = \begin{bmatrix} 1 & 0 \\ 0 & 6 \end{bmatrix}$$

$$A^k = S\Lambda^kS^{-1} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 6^k \end{bmatrix}\begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 6^k - 1 \\ 0 & 6^k \end{bmatrix}$$

Any matrix that has no repeated eigenvalues has independent eigenvectors, and it can be diagonalized.

Let $\mathbf{u}_{k+1} = A\mathbf{u}_k$. Then $\mathbf{u}_k = A^k\mathbf{u}_0 = S\Lambda^kS^{-1}\mathbf{u}_0$.

Let $\mathbf{c} = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}$ with $\mathbf{u}_0 = c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + \cdots + c_n\mathbf{x}_n = S\mathbf{c}$, i.e. $\mathbf{c} = S^{-1}\mathbf{u}_0$. Then

$$\mathbf{u}_k = A^k\mathbf{u}_0 = S\Lambda^kS^{-1}\mathbf{u}_0 = c_1\lambda_1^k\mathbf{x}_1 + c_2\lambda_2^k\mathbf{x}_2 + \cdots + c_n\lambda_n^k\mathbf{x}_n$$

If $AB = BA$, then $A$ and $B$ share the same $S$.

VI.2. Applications to Differential Equations

Note that $\dfrac{d}{dt}e^{\lambda t} = \lambda e^{\lambda t}$.

$$\frac{du(t)}{dt} = \lambda u(t) \;\Rightarrow\; u(t) = Ce^{\lambda t}; \quad u(0) = C \;\Rightarrow\; u(t) = u(0)e^{\lambda t}$$

Consider $\dfrac{d}{dt}\mathbf{u}(t) = A\mathbf{u}(t)$. Let $\mathbf{u}(t) = e^{\lambda t}\mathbf{x}$, where $\lambda$ is an eigenvalue of $A$ and $\mathbf{x}$ is an eigenvector of $A$: $A\mathbf{x} = \lambda\mathbf{x}$. Then

$$\frac{d}{dt}\mathbf{u}(t) = \lambda e^{\lambda t}\mathbf{x} = Ae^{\lambda t}\mathbf{x} = A\mathbf{u}(t)$$

Solve

$$\frac{d}{dt}\mathbf{u}(t) = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\mathbf{u}(t), \qquad \mathbf{u}(0) = \begin{bmatrix} 4 \\ 2 \end{bmatrix}$$

$$\lambda_{1,2} = 1, -1, \quad \mathbf{x}_{1,2} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ -1 \end{bmatrix} \;\Rightarrow\; \mathbf{u}_1(t) = e^t\begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad \mathbf{u}_2(t) = e^{-t}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

$$\mathbf{u}(t) = C\mathbf{u}_1(t) + D\mathbf{u}_2(t) = \begin{bmatrix} Ce^t + De^{-t} \\ Ce^t - De^{-t} \end{bmatrix}, \qquad \begin{aligned} C + D &= 4 \\ C - D &= 2 \end{aligned} \;\Rightarrow\; C = 3, \; D = 1$$

VI.3. Second Order Equations

$$my'' + by' + ky = 0$$

Let $y(t) = e^{\lambda t}$: then $m\lambda^2 + b\lambda + k = 0$. If $\lambda_1 \neq \lambda_2$:

$$y(t) = c_1e^{\lambda_1t} + c_2e^{\lambda_2t}$$

Let $m = 1$ and use the state vector $\mathbf{u} = \begin{bmatrix} y \\ y' \end{bmatrix}$:

$$\frac{d\mathbf{u}}{dt} = \begin{bmatrix} y' \\ y'' \end{bmatrix} = \begin{bmatrix} y' \\ -ky - by' \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -k & -b \end{bmatrix}\begin{bmatrix} y \\ y' \end{bmatrix} = A\mathbf{u}$$

$$\det(A - \lambda I) = \lambda^2 + b\lambda + k = 0, \qquad \mathbf{x}_1 = \begin{bmatrix} 1 \\ \lambda_1 \end{bmatrix}, \quad \mathbf{x}_2 = \begin{bmatrix} 1 \\ \lambda_2 \end{bmatrix}$$

$$\mathbf{u}(t) = c_1e^{\lambda_1t}\mathbf{x}_1 + c_2e^{\lambda_2t}\mathbf{x}_2$$

Consider the equation $y'' + y = 0$, $y(0) = 1$, $y'(0) = 0$, so $\mathbf{u}(0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$:

$$\frac{d}{dt}\begin{bmatrix} y \\ y' \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} y \\ y' \end{bmatrix}, \qquad \lambda_1 = i, \; \lambda_2 = -i, \qquad \mathbf{x}_1 = \begin{bmatrix} 1 \\ i \end{bmatrix}, \quad \mathbf{x}_2 = \begin{bmatrix} 1 \\ -i \end{bmatrix}$$

$$\mathbf{u}(t) = c_1e^{it}\begin{bmatrix} 1 \\ i \end{bmatrix} + c_2e^{-it}\begin{bmatrix} 1 \\ -i \end{bmatrix}, \quad \mathbf{u}(0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \;\Rightarrow\; \mathbf{u}(t) = \frac{1}{2}e^{it}\begin{bmatrix} 1 \\ i \end{bmatrix} + \frac{1}{2}e^{-it}\begin{bmatrix} 1 \\ -i \end{bmatrix} = \begin{bmatrix} \cos t \\ -\sin t \end{bmatrix}$$

The eigenvalues of $f(A)$ are $f(\lambda_i)$. The eigenvalues of $e^{At}$ are $e^{\lambda_it}$:

$$e^{\Lambda t} = \begin{bmatrix} e^{\lambda_1t} & & \\ & \ddots & \\ & & e^{\lambda_nt} \end{bmatrix}, \qquad e^{At} = Se^{\Lambda t}S^{-1}$$

Let $A = \begin{bmatrix} 1 & 1 \\ 0 & 3 \end{bmatrix}$. Then

$$e^{At} = \begin{bmatrix} e^t & 0.5e^{3t} - 0.5e^t \\ 0 & e^{3t} \end{bmatrix}$$

VI.4. Symmetric Matrices

The eigenvalues of $A^T$ and $A$ are the same. The eigenvectors of a real symmetric matrix (when the eigenvalues are different) are always perpendicular. For a symmetric matrix with real number entries, the eigenvalues are real numbers and it is possible to choose a complete set of eigenvectors that are perpendicular (or even orthonormal).

Spectral Theorem: If $A = A^T$, then $S = Q$ is orthogonal and

$$A = Q\Lambda Q^{-1} = Q\Lambda Q^T$$

The eigenvalues of $A + bI$ are just $b$ more than the eigenvalues of $A$. This is true for all matrices.

For $A = \begin{bmatrix} a & b \\ b & c \end{bmatrix}$: $\mathbf{x}_1 = \begin{bmatrix} b \\ \lambda_1 - a \end{bmatrix}$, $\mathbf{x}_2 = \begin{bmatrix} \lambda_2 - c \\ b \end{bmatrix}$.

With unit eigenvectors,

$$A = Q\Lambda Q^T = \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 \end{bmatrix}\begin{bmatrix} \lambda_1 & \\ & \lambda_2 \end{bmatrix}\begin{bmatrix} \mathbf{x}_1^T \\ \mathbf{x}_2^T \end{bmatrix} = \lambda_1\mathbf{x}_1\mathbf{x}_1^T + \lambda_2\mathbf{x}_2\mathbf{x}_2^T$$

This generalizes: each $\mathbf{x}_i\mathbf{x}_i^T$ is a projection matrix.

Product of pivots = determinant = product of eigenvalues. The number of positive eigenvalues of a symmetric matrix equals the number of positive pivots. Symmetric matrices are always diagonalizable.
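The spectral theorem can be verified numerically; a sketch with an illustrative symmetric matrix of my own choosing:

```python
import numpy as np

# Symmetric matrix: real eigenvalues, orthonormal eigenvectors
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, Q = np.linalg.eigh(A)        # eigh is specialized for symmetric matrices
print(lam)                        # [1. 3.]: real
print(np.allclose(Q.T @ Q, np.eye(2)))         # True: Q is orthogonal
print(np.allclose(Q @ np.diag(lam) @ Q.T, A))  # True: A = Q Lambda Q^T

# Spectral theorem as a sum of rank-one projections
B = sum(l * np.outer(q, q) for l, q in zip(lam, Q.T))
print(np.allclose(B, A))          # True
```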
Positive Definite Matrices

A positive definite matrix is a symmetric matrix $A$ for which all eigenvalues are positive. A good way to tell if a matrix is positive definite is to check that all its pivots are positive (for example, a $2 \times 2$ symmetric matrix with positive pivots).

$$A\mathbf{x} = \lambda\mathbf{x} \;\Rightarrow\; \mathbf{x}^TA\mathbf{x} = \lambda\mathbf{x}^T\mathbf{x} = \text{positive scalar}$$

When $A$ is positive definite, the eigenvectors are independent and $\mathbf{x}^TA\mathbf{x} > 0$ for all $\mathbf{x} \neq \mathbf{0}$. If $A$ and $B$ are positive definite then so is $A + B$. If $R$ (possibly rectangular) has independent columns, then $A = R^TR$ is positive definite because:

$$\mathbf{x}^TA\mathbf{x} = \mathbf{x}^TR^TR\mathbf{x} = (R\mathbf{x})^T(R\mathbf{x}) > 0$$

VI.5. Positive Semidefinite Matrices

$$\mathbf{x}^TA\mathbf{x} \geq 0 \text{ for all } \mathbf{x} \quad (\text{with equality for the eigenvectors with } \lambda = 0)$$

VI.6. Similar Matrices

Let $B = M^{-1}AM$, so $AM = MB$. Then

$$B\mathbf{x}_B = \lambda_B\mathbf{x}_B \;\Rightarrow\; AM\mathbf{x}_B = MB\mathbf{x}_B = \lambda_BM\mathbf{x}_B$$

so $\lambda_B$ is an eigenvalue of $A$ as well. We say that $A$ and $B$ are similar. The eigenvectors of $A$ are $M\mathbf{x}_B$.

Singular Value Decomposition (SVD)

Let the eigenvectors of $A^TA$ be $\{\mathbf{v}_i\}$, and the eigenvectors of $AA^T$ be $\{\mathbf{u}_i\}$. Both sets of eigenvectors can be chosen to be orthonormal.

$$U = \begin{bmatrix} \mathbf{u}_1 & \mathbf{u}_2 & \cdots & \mathbf{u}_r \end{bmatrix}, \qquad V = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_r \end{bmatrix}$$

Let $A = \begin{bmatrix} 2 & 2 \\ -1 & 1 \end{bmatrix}$. Then

$$A^TA = \begin{bmatrix} 5 & 3 \\ 3 & 5 \end{bmatrix}, \qquad \mathbf{v}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad \mathbf{v}_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} -1 \\ 1 \end{bmatrix}, \qquad \sigma_1^2 = 8, \quad \sigma_2^2 = 2$$

$$A\mathbf{v}_1 = \begin{bmatrix} 2\sqrt{2} \\ 0 \end{bmatrix} \;\Rightarrow\; \mathbf{u}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \; \sigma_1 = 2\sqrt{2}; \qquad A\mathbf{v}_2 = \begin{bmatrix} 0 \\ \sqrt{2} \end{bmatrix} \;\Rightarrow\; \mathbf{u}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \; \sigma_2 = \sqrt{2}$$

$$A = U\Sigma V^T = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 2\sqrt{2} & 0 \\ 0 & \sqrt{2} \end{bmatrix}\begin{bmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \end{bmatrix}$$

VII. COMPLEX MATRICES

$$A^H = (A^*)^T$$

$$\mathbf{x}^H\mathbf{x} = \sum_{i=1}^n x_i^*x_i = \sum_{i=1}^n |x_i|^2 = \|\mathbf{x}\|^2$$

This is the inner product. If $A^H = A$, then $A$ is Hermitian and $\mathbf{z}^HA\mathbf{z}$ is real. Every eigenvalue of a Hermitian matrix is real. The eigenvectors of a Hermitian matrix are orthogonal.

A unitary matrix $U$ is a (complex) square matrix that has orthonormal columns: $U^HU = I$. For example,

$$U = \frac{1}{\sqrt{3}}\begin{bmatrix} 1 & 1-i \\ 1+i & -1 \end{bmatrix}$$

is unitary.
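A quick check of the unitary example and of the Hermitian eigenvalue facts (the Hermitian matrix here is my own illustrative choice, not from the notes):

```python
import numpy as np

U = np.array([[1, 1 - 1j],
              [1 + 1j, -1]]) / np.sqrt(3)

print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U^H U = I, U is unitary

# Hermitian matrix: real eigenvalues, orthogonal eigenvectors
A = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])
print(np.allclose(A, A.conj().T))              # True: A = A^H
lam, X = np.linalg.eigh(A)
print(lam)                                     # [1. 4.]: real eigenvalues
```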
