Lecture 2: Matrix Algebra CEN1004 PDF

Document Details

Uploaded by LightHeartedHarmonica

Maastricht University

Martijn Boussé

Tags

matrix algebra, linear algebra, mathematics, lectures

Summary

This document is a lecture on matrix algebra, part of a broader course on linear algebra. It covers basic concepts and operations, including systems of linear equations and matrix-vector products.

Full Transcript

CEN1004 Linear Algebra, Lecture 2: Matrix Algebra. Dr. ir. Martijn Boussé, Maastricht University.

Engineers use science, technology, and math to solve problems.

Solving linear systems of equations is fundamental. Key idea of lecture 1.

Refining your matrix algebra skills is crucial. Key idea of lecture 2. Today!

A system of linear equations (or a linear system) is a collection of one or more linear equations involving the same variables, for example
$$\begin{cases} 2x_1 - x_2 + \tfrac{3}{2}x_3 = 8 \\ x_1 - 4x_3 = -7 \end{cases}$$

A linear system of equations can be written as a linear combination:
$$\begin{cases} a_{11}x_1 + a_{12}x_2 = b_1 \\ a_{21}x_1 + a_{22}x_2 = b_2 \end{cases}
\;\Leftrightarrow\;
x_1\begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix} + x_2\begin{bmatrix} a_{12} \\ a_{22} \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}
\;\Leftrightarrow\;
\mathbf{a}_1 x_1 + \mathbf{a}_2 x_2 = \mathbf{b},$$
i.e., they are basically the same thing, see the next slide.

A linear combination of vectors can also be written as a matrix-vector product:
$$A\mathbf{x} = \begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n.$$

If $A$ is an $m \times n$ matrix, with columns $\mathbf{a}_1, \dots, \mathbf{a}_n$, and if $\mathbf{b} \in \mathbb{R}^m$, the matrix equation $A\mathbf{x} = \mathbf{b}$ has the same solution set as the vector equation $x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n = \mathbf{b}$, which, in turn, has the same solution set as the system of linear equations whose augmented matrix is $\begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n & \mathbf{b} \end{bmatrix}$. (Theorem 3, p. 62.)

Terminology: An $m \times n$ matrix has $m$ rows and $n$ columns,
$$A = \begin{bmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{i1} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & & \vdots & & \vdots \\ a_{m1} & \cdots & a_{mj} & \cdots & a_{mn} \end{bmatrix} = \begin{bmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_j & \cdots & \mathbf{a}_n \end{bmatrix}.$$
The $(i,j)$th element is $a_{ij}$, where $1 \le i \le m$ and $1 \le j \le n$.

Before we dive into matrix algebra, let's have a look at some common matrices: the zero matrix, the identity matrix, diagonal matrices, and triangular matrices (Hessenberg matrices).

The zero matrix is a (rectangular) matrix that contains all zeros,
$$0_{m \times n} = \begin{bmatrix} 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \end{bmatrix}.$$
The zero matrix encodes the linear transformation that maps all vectors to the zero vector: $T(\mathbf{x}) = \mathbf{0}$.

The identity matrix is a square matrix that contains all zeros except for ones on the diagonal,
$$I_m = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}.$$
The identity matrix encodes the linear transformation that maps all vectors onto themselves: $T(\mathbf{x}) = \mathbf{x}$.

A diagonal matrix is a square matrix that contains all zeros except for non-zeros on the diagonal,
$$D = \begin{bmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{bmatrix}.$$
A diagonal matrix encodes the linear transformation that multiplies each element of a vector with a scalar.

An upper-triangular matrix is a square matrix that contains zeros below the main diagonal,
$$U = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ 0 & a_{22} & a_{23} & \cdots & a_{2n} \\ 0 & 0 & a_{33} & \cdots & a_{3n} \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & a_{nn} \end{bmatrix},$$
and likewise for a lower-triangular matrix.

A Hessenberg matrix is a square matrix that contains zeros below the first off-diagonal,
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1,n-1} & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2,n-1} & a_{2n} \\ 0 & a_{32} & a_{33} & \cdots & a_{3,n-1} & a_{3n} \\ 0 & 0 & a_{43} & \cdots & a_{4,n-1} & a_{4n} \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & \cdots & 0 & a_{n,n-1} & a_{nn} \end{bmatrix}.$$

... and many more! Symmetric matrix, orthogonal matrix, tri-diagonal matrix, Toeplitz matrix, Hankel matrix, Löwner matrix... You can forget about the gray ones. Yay!

Outline: matrix algebra, invertible matrix theorem, LU factorization.

In order to work/compute with matrices, we need some operations. Enter, matrix algebra: sum and scalar multiplication, matrix-vector product, matrix-matrix product, power, transpose, inverse.
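To make the column picture of $A\mathbf{x}$ from the slides above concrete, here is a minimal Matlab sketch (the matrix and vector are made-up values, not from the slides): it checks that $A\mathbf{x}$ equals the linear combination $x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + x_3\mathbf{a}_3$ of the columns of $A$.

    % Hypothetical 3x3 example (values chosen arbitrarily).
    A = [2 -1 1.5; 1 0 -4; 0 3 2];   % columns a1, a2, a3
    x = [1; 2; 3];

    % Matrix-vector product vs. linear combination of the columns of A
    lhs = A*x;
    rhs = x(1)*A(:,1) + x(2)*A(:,2) + x(3)*A(:,3);
    disp(norm(lhs - rhs))            % essentially 0 (up to rounding): same vector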
Recall: A (column) vector is a matrix with only one column,
$$\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}, \qquad \mathbf{w} = \begin{bmatrix} w_1 \\ w_2 \end{bmatrix}.$$
Two vectors are equal if and only if their corresponding elements are equal, for example, when $v_1 = w_1$ and $v_2 = w_2$. For now, a vector is an ordered list of numbers. In lecture 3, we will study vectors more profoundly.

Recall: Two principal vector operations are addition and scalar multiplication. Suppose $\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$, and $c \in \mathbb{R}$, then
$$\mathbf{u} + \mathbf{v} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} + \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} u_1 + v_1 \\ u_2 + v_2 \end{bmatrix}
\qquad\text{and}\qquad
c\mathbf{u} = c\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} cu_1 \\ cu_2 \end{bmatrix}.$$

Two matrices are equal (if they have the same size and) if their corresponding elements are equal:
$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}
\quad\text{if and only if}\quad
a_{11} = b_{11},\; a_{12} = b_{12},\; a_{21} = b_{21},\; a_{22} = b_{22}.$$

Two basic operations are addition and scalar multiplication. Suppose $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ and $B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$, and $c \in \mathbb{R}$, then
$$A + B = \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12} \\ a_{21} + b_{21} & a_{22} + b_{22} \end{bmatrix}
\qquad\text{and}\qquad
cA = \begin{bmatrix} ca_{11} & ca_{12} \\ ca_{21} & ca_{22} \end{bmatrix}.$$

Let $A$, $B$, and $C$ be matrices of the same size, and let $r$ and $s$ be scalars, then
$$\begin{aligned}
A + B &= B + A && \text{commutativity} \\
(A + B) + C &= A + (B + C) && \text{associativity} \\
A + 0 &= A && \text{identity element} \\
r(A + B) &= rA + rB && \text{distributivity} \\
(r + s)A &= rA + sA && \text{distributivity} \\
r(sA) &= (rs)A && \text{associativity}
\end{aligned}$$
Verify each equality by showing that the matrix on the left side is equal to the one on the right side.

Let's check $A + B = B + A$. The $(i,j)$th element of $A + B$ is $a_{ij} + b_{ij}$. The $(i,j)$th element of $B + A$ is $b_{ij} + a_{ij}$. Note that $a_{ij} + b_{ij} = b_{ij} + a_{ij}$ because of the commutative property of the real numbers. This is relatively easy at this point, but it is crucial you develop this mindset/approach for later, when the properties become more involved/harder to check.

Matrix algebra: sum and scalar multiplication, matrix-vector product, matrix-matrix product, power, transpose, inverse.

The matrix-vector product $A\mathbf{x}$ can be computed via the row-vector rule:
$$A\mathbf{x} = \begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n
= \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{bmatrix}.$$

Matrix algebra: sum and scalar multiplication, matrix-vector product, matrix-matrix product, power, transpose, inverse.
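As a quick check of the row-vector rule above, here is a small Matlab sketch on made-up values: each entry of $A\mathbf{x}$ is the product of one row of $A$ with $\mathbf{x}$.

    % Row-vector rule: entry i of A*x is row i of A times x (hypothetical values).
    A = [1 2 3; 4 5 6];              % a 2x3 matrix
    x = [1; 0; -1];

    y = zeros(2,1);
    for i = 1:2
        y(i) = A(i,:) * x;           % (1x3 row) * (3x1 column) = scalar
    end
    disp([y, A*x])                   % the two columns agree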
First, let's think of matrix-matrix multiplication using the matrix-vector product. Matrix-vector multiplication can be written as a linear combination:
$$B\mathbf{x} = x_1\mathbf{b}_1 + \cdots + x_p\mathbf{b}_p.$$
By the linearity of multiplication by $A$:
$$A(B\mathbf{x}) = A(x_1\mathbf{b}_1) + \cdots + A(x_p\mathbf{b}_p) = x_1 A\mathbf{b}_1 + \cdots + x_p A\mathbf{b}_p.$$
Hence the vector $A(B\mathbf{x})$ is a linear combination of the vectors $A\mathbf{b}_1, \dots, A\mathbf{b}_p$. In other words, $(AB)\mathbf{x} = \begin{bmatrix} A\mathbf{b}_1 & \cdots & A\mathbf{b}_p \end{bmatrix}\mathbf{x}$. (Assuming $A$ is $m \times n$, $B$ is $n \times p$, and $\mathbf{x} \in \mathbb{R}^p$.)

Interpretation: Each column of $AB$ is a linear combination of the columns of $A$ using weights from the corresponding column of $B$:
$$AB = A\begin{bmatrix} \mathbf{b}_1 & \mathbf{b}_2 & \cdots & \mathbf{b}_p \end{bmatrix} = \begin{bmatrix} A\mathbf{b}_1 & A\mathbf{b}_2 & \cdots & A\mathbf{b}_p \end{bmatrix}.$$
This type of interpretation is really important for the exam!

Next, let's think of matrix multiplication as a linear transformation. (Figure: $\mathbf{x} \mapsto A\mathbf{x}$ and $\mathbf{u} \mapsto A\mathbf{u}$, mapping the domain to the range.) See sections 1.8 and 1.9.

Recall: A function $f$ from a set $D$ to a set $Y$ is a rule that assigns a unique value $f(x)$ in $Y$ to each $x$ in $D$. (Figure: $x_1 \mapsto f(x_1)$, $x_2 \mapsto f(x_2)$, from the domain to the range; here $D \subset \mathbb{R}$ and $Y \subset \mathbb{R}$.) See Calculus, Section 1.1.

Matrix-matrix multiplication can be interpreted as a composition of linear transformations: $A(B\mathbf{x}) = (AB)\mathbf{x}$.

Sanity check: Keep track of matrix dimensions! $AB$: $(m \times n)(n \times p)$.

Recall: The matrix-vector product $A\mathbf{x}$ can be computed via the row-vector rule (see above).

The matrix-matrix product $AB$ can be computed via the row-column rule:
$$AB = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}
\begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1p} \\ b_{21} & b_{22} & \cdots & b_{2p} \\ \vdots & & & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{np} \end{bmatrix}
= \begin{bmatrix} a_{11}b_{11} + \cdots + a_{1n}b_{n1} & \cdots & a_{11}b_{1p} + \cdots + a_{1n}b_{np} \\ \vdots & & \vdots \\ a_{m1}b_{11} + \cdots + a_{mn}b_{n1} & \cdots & a_{m1}b_{1p} + \cdots + a_{mn}b_{np} \end{bmatrix}.$$
Or, more compactly: $(AB)_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj} = \sum_{k=1}^{n} a_{ik}b_{kj}$.

Let $A \in \mathbb{R}^{m \times n}$, and let $B$ and $C$ have appropriate sizes:
$$\begin{aligned}
A(BC) &= (AB)C && \text{associativity} \\
A(B + C) &= AB + AC && \text{left distributivity} \\
(B + C)A &= BA + CA && \text{right distributivity} \\
r(AB) &= (rA)B = A(rB) && \\
I_m A &= A = A I_n && \text{identity element}
\end{aligned}$$
Verify each property!
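A short Matlab sketch (arbitrary small matrices, purely illustrative) comparing the two views of the product discussed above: the column interpretation $AB = [A\mathbf{b}_1 \cdots A\mathbf{b}_p]$ and the row-column rule for $(AB)_{ij}$.

    % Two views of the matrix-matrix product (hypothetical small matrices).
    A = [1 2; 3 4; 5 6];             % 3x2
    B = [1 0 2; -1 1 0];             % 2x3

    % Column view: the j-th column of A*B is A times the j-th column of B.
    C_cols = [A*B(:,1), A*B(:,2), A*B(:,3)];

    % Row-column rule: (AB)_ij = sum_k a_ik * b_kj.
    C_rc = zeros(3,3);
    for i = 1:3
        for j = 1:3
            C_rc(i,j) = A(i,:) * B(:,j);
        end
    end

    disp(norm(C_cols - A*B))         % essentially 0: same as A*B
    disp(norm(C_rc - A*B))           % essentially 0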
Let's check $AI_n$:
$$AI_n = A\begin{bmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \cdots & \mathbf{e}_n \end{bmatrix} = \begin{bmatrix} A\mathbf{e}_1 & A\mathbf{e}_2 & \cdots & A\mathbf{e}_n \end{bmatrix} = \begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \end{bmatrix} = A.$$

Watch out! I get really pissed if you do this! In general, $AB \ne BA$. The cancellation laws do NOT hold for matrix multiplication, i.e., if $AB = AC$, then it is not true in general that $B = C$. If $AB = 0$, you can NOT conclude in general that either $A = 0$ or $B = 0$.

But why do you get pissed if I do this? We can think of matrix-matrix multiplication as a composition of linear transformations. Does $f(g(x)) = g(f(x))$ hold in general? No, consider $f(x) = x^2$ and $g(x) = \sin(x)$, then $\sin^2(x) \ne \sin(x^2)$. Does $f(g(x)) = f(h(x))$ hold in general? No, consider $f(x) = x^2$, $g(x) = \sin(x)$, and $h(x) = \cos(x)$, then $\sin^2(x) \ne \cos^2(x)$.

Now you try! The multiplication of two triangular matrices is always triangular. T/F? This is a (hard) exam question. First, let's check a simple case:
$$\begin{bmatrix} a & b \\ 0 & c \end{bmatrix}\begin{bmatrix} d & e \\ 0 & f \end{bmatrix} = \begin{bmatrix} ad & ae + bf \\ 0 & cf \end{bmatrix}.$$

The statement said "always", so one example is not enough; we need to be sure in general! What does it mean for a matrix $A$ to be triangular? A matrix $A$ is (upper) triangular if and only if $a_{ij} = 0$ for $i > j$.

Consider two matrices $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times p}$: $A$ with $a_{ij} = 0$ for $i > j$, and $B$ with $b_{ij} = 0$ for $i > j$. The $(i,j)$th element of the product is given by $(AB)_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj}$. Suppose $i > j$, then we can write:
$$(AB)_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} = \sum_{k=1}^{i-1} a_{ik}b_{kj} + \sum_{k=i}^{n} a_{ik}b_{kj} = 0 + 0 = 0,$$
because in the first sum $k \le i - 1 \Leftrightarrow k < i$, so $a_{ik} = 0$, and in the second sum $k \ge i > j$, so $b_{kj} = 0$.

Learning objectives: 1. Solve linear systems of equations. 2. Compute matrix factorizations. 3. Solve and compute various linear algebra problems using Matlab. 4. Communicate and reason about linear algebra problems algebraically and geometrically by interleaving various definitions, theorems, and properties about vectors and matrices, linear systems and factorizations, eigenvalues and eigenvectors, singular values and singular vectors, linear transformations, orthogonality, determinants, etc. 5. Understand that many, seemingly disconnected, problems from various disciplines reduce to linear algebra problems. 6. Perform calculations with complex numbers.
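Returning to the triangular-product question and the non-commutativity warning above: not a proof, but a quick numerical sanity check in Matlab on made-up random matrices.

    % Hypothetical random examples (not a proof, just a sanity check).
    A = triu(rand(4));               % random upper-triangular 4x4
    B = triu(rand(4));
    disp(istriu(A*B))                % 1: the product is again upper-triangular

    C = rand(4);
    D = rand(4);
    disp(norm(C*D - D*C))            % generally nonzero: C*D ~= D*C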
Matrix algebra: sum and scalar multiplication, matrix-vector product, matrix-matrix product, power, transpose, inverse.

Matrix powers are useful in both theory and applications (see later):
$$A^k = \underbrace{AA\cdots A}_{k \text{ factors}}.$$

Matrix algebra: sum and scalar multiplication, matrix-vector product, matrix-matrix product, power, transpose, inverse.

The transpose of a matrix flips the matrix over its diagonal:
$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad A^T = \begin{bmatrix} a & c \\ b & d \end{bmatrix}.$$
In other words, switch the row and column indices, i.e., $[A^T]_{ij} = [A]_{ji}$.

Assuming appropriate dimensions for all matrices, we have:
$$(A^T)^T = A, \qquad (A + B)^T = A^T + B^T, \qquad (rA)^T = rA^T, \qquad (AB)^T = B^T A^T.$$
The transpose of a product of matrices equals the product of their transposes in the reverse order.

Don't just take these properties for granted, but play with the properties to see if they make sense to you! Do the dimensions make sense?
$$(AB)^T = B^T A^T: \quad ((m \times n)(n \times k))^T = (n \times k)^T (m \times n)^T, \quad (m \times k)^T = (k \times n)(n \times m), \quad k \times m = k \times m.$$
You should always do this while studying. Yes, this takes time, but algebra builds concepts upon concepts. A bad foundation will lead to a wobbly house. Moreover, algebra is useful for nearly all other courses.

Now you try! The previous slide is not a proof, but a sanity check. For the proof, you can approach the problem from two sides. $(AB)^T$: the $(i,j)$th entry of $(AB)^T$ is the $(j,i)$th entry of $AB$ by definition of the transpose, hence $a_{j1}b_{1i} + \cdots + a_{jn}b_{ni}$. $B^T A^T$: how would the $(i,j)$th entry in $B^T A^T$ look? Think about it and post your answer on the discussion forum. I will give the correct answer after a serious attempt has been made and/or some discussion has occurred. This is an exam question.

A symmetric matrix $A$ has the property that $A = A^T$, for example
$$\begin{bmatrix} a & b \\ b & c \end{bmatrix}, \qquad \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.$$

Now you try! The matrix $AB + B^T A^T$ is always symmetric. T/F? This is an exam question. A matrix $C$ is symmetric if and only if $C = C^T$. Let's check:
$$(AB + B^T A^T)^T = (AB)^T + (B^T A^T)^T = B^T A^T + AB = AB + B^T A^T.$$

Learning objectives: 1. Solve linear systems of equations. 2. Compute matrix factorizations. 3. Solve and compute various linear algebra problems using Matlab. 4. Communicate and reason about linear algebra problems algebraically and geometrically by interleaving various definitions, theorems, and properties about vectors and matrices, linear systems and factorizations, eigenvalues and eigenvectors, singular values and singular vectors, linear transformations, orthogonality, determinants, etc. 5. Understand that many, seemingly disconnected, problems from various disciplines reduce to linear algebra problems. 6. Perform calculations with complex numbers.

Matrix algebra: sum and scalar multiplication, matrix-vector product, matrix-matrix product, power, transpose, inverse.

Recall: The inverse function $f^{-1}$ undoes, or inverts, the effect of a function $f$: $f^{-1}(f(x)) = x$ for all $x$ in the domain of $f$, and $f(f^{-1}(y)) = y$ for all $y$ in the domain of $f^{-1}$ (i.e., the range of $f$). See Calculus, Section 1.5.
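Tying back to the transpose rules and the symmetry exercise above, here is a small Matlab sanity check on made-up random matrices (a numerical check, not a proof); note that .' is the plain transpose, matching the remark later about ' versus .' .

    % Hypothetical random matrices.
    A = rand(3);
    B = rand(3);

    disp(norm((A*B).' - B.'*A.'))    % essentially 0: (AB)^T = B^T A^T
    M = A*B + B.'*A.';
    disp(norm(M - M.'))              % essentially 0: AB + B^T A^T is symmetric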
The inverse of a matrix is analogous to the reciprocal, or multiplicative inverse, of a nonzero number:
$$5^{-1} \cdot 5 = 1, \qquad 5 \cdot 5^{-1} = 1.$$
We need both equations because matrix multiplication is not commutative:
$$A^{-1}A = I, \qquad AA^{-1} = I.$$
The inverse only makes sense for square matrices; however, we will see the concept of a pseudo-inverse for rectangular matrices in Lecture 6.

Watch out! Yet another thing that really pisses me off! We do not divide by a matrix! We multiply by the left or right inverse. The left and right inverse can be different, and matrix multiplication is not commutative!

Terminology: Invertible and singular. An invertible matrix is also called a nonsingular matrix. A non-invertible matrix is also called a singular matrix.

How do we compute the inverse?

Example: $2 \times 2$ matrices. Let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. If $ad - bc \ne 0$, then $A$ is invertible and
$$A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$
The value $ad - bc$ is the determinant of the matrix, see Chapter 3. The above formula can be extended to $n \times n$ matrices, see later. We omit the proof.

... but what about $n \times n$ matrices? An $n \times n$ matrix $A$ is invertible if and only if $A$ is row equivalent to $I_n$, and in this case, any sequence of elementary row operations that reduces $A$ to $I_n$ also transforms $I_n$ into $A^{-1}$. Read p. 138–140 for full details.

Procedure: Computing $A^{-1}$. Row reduce the augmented matrix $\begin{bmatrix} A & I \end{bmatrix}$:
$$\begin{bmatrix} A & I \end{bmatrix} \rightarrow \begin{bmatrix} I & A^{-1} \end{bmatrix}.$$

Example: Compute the inverse of a $2 \times 2$ matrix using row reduction.
$$\begin{bmatrix} 3 & 4 & 1 & 0 \\ 5 & 6 & 0 & 1 \end{bmatrix}
\xrightarrow{r_1/3}
\begin{bmatrix} 1 & 4/3 & 1/3 & 0 \\ 5 & 6 & 0 & 1 \end{bmatrix}
\xrightarrow{r_2 - 5r_1}
\begin{bmatrix} 1 & 4/3 & 1/3 & 0 \\ 0 & -2/3 & -5/3 & 1 \end{bmatrix}
\xrightarrow{-\tfrac{3}{2}r_2}
\begin{bmatrix} 1 & 4/3 & 1/3 & 0 \\ 0 & 1 & 5/2 & -3/2 \end{bmatrix}
\xrightarrow{r_1 - \tfrac{4}{3}r_2}
\begin{bmatrix} 1 & 0 & -3 & 2 \\ 0 & 1 & 5/2 & -3/2 \end{bmatrix}$$

Sanity check: Check that the product with $A$ gives $I$:
$$\begin{bmatrix} 3 & 4 \\ 5 & 6 \end{bmatrix}\begin{bmatrix} -3 & 2 \\ 5/2 & -3/2 \end{bmatrix}
= \begin{bmatrix} 3 \cdot (-3) + 4 \cdot 5/2 & 3 \cdot 2 + 4 \cdot (-3/2) \\ 5 \cdot (-3) + 6 \cdot 5/2 & 5 \cdot 2 + 6 \cdot (-3/2) \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
To illustrate the importance of this trick, consider the following anecdote. I once had to review a paper for an internationally renowned journal, as is customary in high-end research. At some point the authors of the paper computed the inverse of a complicated matrix, and it was so that the above sanity check failed. As a researcher, this is quite a painful incident (and an unprofessional one).

Now you try! The matrix $\begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{bmatrix}$ is not invertible. T/F? This is an exam question. True. The matrix is not row equivalent to the identity matrix:
$$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{bmatrix}
\xrightarrow{\substack{r_2 - 2r_1 \\ r_3 - 3r_1}}
\begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

Verify the following properties of the inverse!
$$(A^{-1})^{-1} = A, \qquad (AB)^{-1} = B^{-1}A^{-1}, \qquad (A^T)^{-1} = (A^{-1})^T.$$
Again, verifying properties is an excellent exercise to nail the T/F statements.
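The row-reduction procedure for the inverse can be reproduced in Matlab with rref; a minimal sketch for the $2 \times 2$ example above (the variable names are mine):

    % Inverse of the 2x2 example via row reduction of [A I], and via inv.
    A = [3 4; 5 6];
    R = rref([A eye(2)]);            % row reduce the augmented matrix [A I]
    Ainv = R(:, 3:4);                % the right half is A^{-1}
    disp(Ainv)                       % [-3 2; 5/2 -3/2]
    disp(A*Ainv)                     % the sanity check: should be the identity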
If $A$ is an invertible $n \times n$ matrix, then for each $\mathbf{b} \in \mathbb{R}^n$, the equation $A\mathbf{x} = \mathbf{b}$ has the unique solution $\mathbf{x} = A^{-1}\mathbf{b}$. Consider the linear system and its matrix representation:
$$\begin{cases} 3x_1 + 4x_2 = 3 \\ 5x_1 + 6x_2 = 7 \end{cases}
\;\Leftrightarrow\;
\underbrace{\begin{bmatrix} 3 & 4 \\ 5 & 6 \end{bmatrix}}_{=A}\mathbf{x} = \begin{bmatrix} 3 \\ 7 \end{bmatrix}.$$
The inverse is given by
$$A^{-1} = \frac{1}{-2}\begin{bmatrix} 6 & -4 \\ -5 & 3 \end{bmatrix} = \begin{bmatrix} -3 & 2 \\ 5/2 & -3/2 \end{bmatrix}.$$
Hence, the solution is $\mathbf{x} = A^{-1}\begin{bmatrix} 3 \\ 7 \end{bmatrix} = \begin{bmatrix} 5 \\ -3 \end{bmatrix}$. Check if you get the same result via row reduction.

Procedure: Solving a linear system using the inverse. Write the system as a matrix equation $A\mathbf{x} = \mathbf{b}$. Compute the inverse of $A$. Compute the matrix-vector product $\mathbf{x} = A^{-1}\mathbf{b}$.

Learning objectives: 1. Solve linear systems of equations using various methods such as Gaussian elimination, LU factorization, the least-squares method, QR factorization, (pseudo-)inverse matrix, and Cramer's rule. 2. Compute matrix factorizations. 3. Solve and compute various linear algebra problems using Matlab. 4. Communicate and reason about linear algebra problems algebraically and geometrically. 5. Understand that many, seemingly disconnected, problems from various disciplines reduce to linear algebra problems. 6. Perform calculations with complex numbers. In practice, we seldom compute the inverse to solve linear systems because it takes about three times as many arithmetic operations and it is less accurate.

Outline: matrix algebra, invertible matrix theorem, LU factorization.

The Invertible Matrix Theorem (IMT). The following statements are logically equivalent, assuming $A \in \mathbb{R}^{n \times n}$:
(a) $A$ is invertible.
(b) $A$ is row equivalent to the $n \times n$ identity matrix.
(c) $A$ has $n$ pivot positions.
(d) $A\mathbf{x} = \mathbf{0}$ has only the trivial solution.
(e) The columns of $A$ form a linearly independent set.
(f) The linear transformation encoded by $A$ is one-to-one.
(g) $A\mathbf{x} = \mathbf{b}$ has at least one solution for each $\mathbf{b} \in \mathbb{R}^n$.
(h) The columns of $A$ span $\mathbb{R}^n$.
(i) The linear transformation encoded by $A$ maps $\mathbb{R}^n$ onto $\mathbb{R}^n$.
(j) There is an $n \times n$ matrix $C$ such that $CA = I$.
(k) There is an $n \times n$ matrix $D$ such that $AD = I$.
(l) $A^T$ is invertible.
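A short Matlab sketch of the "solve with the inverse" procedure for the system above, alongside the backslash solver that the learning objectives recommend for practice (values taken from the example):

    % Solve the slide's 2x2 system with the inverse and with backslash.
    A = [3 4; 5 6];
    b = [3; 7];

    x1 = inv(A)*b;                   % x = A^{-1} b, gives [5; -3]
    x2 = A\b;                        % preferred in practice
    disp([x1, x2])                   % both columns are [5; -3]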
Let's check (a) ⇒ (j) ⇒ (d) ⇒ (c) ⇒ (b) ⇒ (a).
a ⇒ j: If (a) is true, then $A^{-1}$ works in (j).
j ⇒ d: Consider $A\mathbf{x} = \mathbf{0}$; multiplying by the matrix $C$ from (j) gives $CA\mathbf{x} = C\mathbf{0}$, i.e., $I\mathbf{x} = \mathbf{0}$.
d ⇒ c: If $A\mathbf{x} = \mathbf{0}$ has only the trivial solution, then there are no free variables, hence, $A$ has $n$ pivot positions.
c ⇒ b: If a matrix has $n$ pivot positions, then it is row equivalent to an identity matrix.
b ⇒ a: If a matrix is row equivalent to an identity matrix, then the matrix has an inverse, see before.
Hence, all these statements are logically equivalent.

The 8-step approach to IMTing the hell out of things: 1. Print the IMT. 2. Fix it to your bedroom ceiling. 3. Look at it every night. 4. Dream about it. 5. Hate it. 6. Love it. 7. Transcend it. 8. Know it inside-out.

Now you try! The following set of equations is singular for all $k \in \mathbb{R}$:
$$\begin{cases} x - y = 3 \\ 2x - 2y = k \end{cases}$$
True/False? This is an exam question.

True! Row reduce the augmented matrix:
$$\begin{bmatrix} 1 & -1 & 3 \\ 2 & -2 & k \end{bmatrix}
\xrightarrow{r_2 - 2r_1}
\begin{bmatrix} 1 & -1 & 3 \\ 0 & 0 & k - 6 \end{bmatrix}.$$
We can see that for $k = 6$ the system has a free variable, hence it's not invertible (i.e., singular); for $k \ne 6$ the system is inconsistent, hence it's not invertible (i.e., singular). Hence, the matrix is singular for all $k$. (1 point for True/False and 1 point for the motivation.)

Learning objectives: 1. Solve linear systems of equations. 2. Compute matrix factorizations. 3. Solve and compute various linear algebra problems using Matlab. 4. Communicate and reason about linear algebra problems algebraically and geometrically by interleaving various definitions, theorems, and properties about vectors and matrices, linear systems and factorizations, eigenvalues and eigenvectors, singular values and singular vectors, linear transformations, orthogonality, determinants, etc. 5. Understand that many, seemingly disconnected, problems from various disciplines reduce to linear algebra problems. 6. Perform calculations with complex numbers.

Outline: matrix algebra, invertible matrix theorem, LU factorization.

An LU factorization decomposes a matrix into a unit lower triangular matrix and an upper triangular (echelon) matrix:
$$A = \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0 \\ \star & 1 & 0 & 0 \\ \star & \star & 1 & 0 \\ \star & \star & \star & 1 \end{bmatrix}}_{L}
\underbrace{\begin{bmatrix} \square & \star & \star & \star \\ 0 & \square & \star & \star \\ 0 & 0 & \square & \star \\ 0 & 0 & 0 & \square \end{bmatrix}}_{U}$$

If we have the LU factorization of a matrix, then we can solve a linear system of equations more efficiently. Consider a linear system $A\mathbf{x} = \mathbf{b}$ and an LU factorization $A = LU$, then we can write
$$L(\underbrace{U\mathbf{x}}_{\mathbf{y}}) = \mathbf{b}.$$
Hence, we can solve the linear system in two steps: 1. First, solve $L\mathbf{y} = \mathbf{b}$. 2. Then, solve $U\mathbf{x} = \mathbf{y}$. This is especially useful if you have multiple right-hand sides!

Learning objectives: 1. Solve linear systems of equations using various methods such as Gaussian elimination, LU factorization, the least-squares method, QR factorization, (pseudo-)inverse matrix, and Cramer's rule. 2. Compute matrix factorizations. 3. Solve and compute various linear algebra problems using Matlab. 4. Communicate and reason about linear algebra problems algebraically and geometrically. 5. Understand that many, seemingly disconnected, problems from various disciplines reduce to linear algebra problems. 6. Perform calculations with complex numbers.
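A minimal Matlab sketch of the two-step LU solve described above, on the same $2 \times 2$ system as before. Note that Matlab's lu (with three outputs) returns a permuted factorization $PA = LU$, so the permutation is carried through here.

    % Solve A x = b via an LU factorization (two triangular solves).
    A = [3 4; 5 6];
    b = [3; 7];

    [L, U, P] = lu(A);               % P*A = L*U, L unit lower-, U upper-triangular
    y = L \ (P*b);                   % step 1: solve L y = P b (forward substitution)
    x = U \ y;                       % step 2: solve U x = y (back substitution)
    disp(x)                          % [5; -3], the same solution as before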
Is this really used in practice? Exhibit A.

The LU factorization can be computed by row reduction! (What a surprise!) Apply row operations to $A$ to obtain an echelon form:
$$E_p E_{p-1} \cdots E_1 A = U.$$
Elementary matrices are invertible, hence,
$$A = \underbrace{(E_p E_{p-1} \cdots E_1)^{-1}}_{=L} U.$$

Procedure: LU factorization. 1. Reduce $A$ to echelon form $U$ by a sequence of row replacement operations, if possible. 2. Place entries in $L$ such that the same sequence of row operations reduces $L$ to $I$. Study Example 2 and try it yourself!

Learning objectives: 1. Solve linear systems of equations using various methods such as Gaussian elimination, LU factorization, the least-squares method, QR factorization, (pseudo-)inverse matrix, and Cramer's rule. 2. Compute matrix factorizations such as the LU factorization, QR factorization, eigenvalue decomposition, and singular value decomposition. 3. Solve and compute various linear algebra problems using Matlab. 4. Communicate and reason about linear algebra problems algebraically and geometrically. 5. Understand that many, seemingly disconnected, problems from various disciplines reduce to linear algebra problems. 6. Perform calculations with complex numbers.

Refining your matrix algebra skills is crucial.

Learning objectives: 1. Solve linear systems of equations using various methods such as Gaussian elimination, LU factorization, the least-squares method, QR factorization, (pseudo-)inverse matrix, and Cramer's rule. 2. Compute matrix factorizations such as the LU factorization, QR factorization, eigenvalue decomposition, and singular value decomposition. 3. Solve and compute various linear algebra problems using Matlab. 4. Communicate and reason about linear algebra problems algebraically and geometrically by interleaving various definitions, theorems, and properties about vectors and matrices, linear systems and factorizations, eigenvalues and eigenvectors, singular values and singular vectors, linear transformations, orthogonality, determinants, etc. 5. Understand that many, seemingly disconnected, problems from various disciplines reduce to linear algebra problems. 6. Perform calculations with complex numbers.

Useful Matlab commands:
Zero matrix: zeros
Identity matrix: eye
Diagonal matrix: diag
Upper triangular matrix: triu
Lower triangular matrix: tril
Compute inverse: inv
Transpose: .' (watch out: ' and .' are different things!)
LU factorization: lu
In Matlab we always use x = A\b instead of x = inv(A)*b.
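A quick tour of the commands from the table above, on made-up values; purely illustrative.

    % Illustrating the commands from the table (hypothetical values).
    Z = zeros(2,3);                  % zero matrix
    I = eye(3);                      % identity matrix
    D = diag([1 2 3]);               % diagonal matrix
    A = magic(3);                    % some invertible 3x3 matrix
    Up = triu(A);                    % upper-triangular part
    Lo = tril(A);                    % lower-triangular part
    At = A.';                        % transpose (use .' rather than ')
    [L, U, P] = lu(A);               % LU factorization (with row pivoting)
    b = [1; 2; 3];
    x = A \ b;                       % always preferred over x = inv(A)*b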
