Linear Algebra Lecture Notes PDF

Summary

This document is a set of lecture notes on Linear Algebra, covering topics such as matrices, vectors, matrix operations, determinants, and solving linear systems of equations. The notes include definitions, examples, and explanations of key concepts in linear algebra.

Full Transcript


Linear Algebra, Topic 1: Matrices and Vectors

Slide outline: Matrices; Matrix, Example 1; General Concepts and Notations; Vectors; Column Vectors; Addition and Scalar Multiplication of Matrices and Vectors (definitions, examples, additional rules); Matrix Multiplication (definitions, examples); Motivation of Multiplication by Linear Transformations; Transposition; Special Matrices; Diagonal Matrices; Some Applications of Matrix Multiplication; Linear Systems of Equations (linear systems, coefficient matrix, augmented matrix; examples); Gauss Elimination and Back Substitution; Elementary Row Operations; Row-Equivalent Systems (Theorem 1); Gauss Elimination: The Possible Cases; Row Echelon Form; Determinants.

DETERMINANT
The determinant of a matrix is a scalar representation of the matrix. Only square matrices have determinants. Determinants are also useful because they tell us whether or not a matrix can be inverted (next): not every square matrix can be inverted, only full-rank (non-singular) matrices.

The determinant of a matrix represents a single number. We obtain this value by multiplying and adding its elements in a special way, and we can use the determinant of a matrix to solve a system of simultaneous equations. For a (square) 2 x 2 matrix, the determinant is written within vertical lines, |A|. More generally, a determinant is a square array of numbers (written within a pair of vertical lines) which represents a certain sum of products; a 3 x 3 determinant has 3 rows and 3 columns. The result of multiplying out, then simplifying, the elements of a determinant is a single number (a scalar quantity).

Calculating a 2 x 2 determinant: in general, for elements a, b, c, d (matrices are written with rows separated by semicolons throughout these notes),

|a b; c d| = ad - bc.

We multiply along the diagonals (top left x bottom right first), then subtract. Example 1: the final result is a single number.

PROPERTIES OF DETERMINANTS
1. Let A be a square matrix. The determinants of A and its transpose are equal, that is, |A| = |A^T|. Example: A = [2 1 3; 1 1 1; 4 2 0], A^T = [2 1 4; 1 1 2; 3 1 0];
|A| = 0 + 4 + 6 - 12 - 4 = -6 and |A^T| = 0 + 6 + 4 - 12 - 4 = -6, so |A| = |A^T| = -6.
2. If each element of any row (or column) of A is zero, then the determinant of A is zero. Example: |2 1; 0 0| = 0.
3. If any two rows (or columns) of A are proportional, that is, identical or multiples of one another, then the determinant of A is zero. Example: |2 1 4; 1 2 0; 4 2 8| = 0, since the third row is the same as the first row when divided by 2.
4. If any two rows (or columns) of A are interchanged, then |A| simply changes sign. Example: A = [2 3; 1 4] has |A| = 5; interchanging the rows gives B = [1 4; 2 3] with |B| = -5.
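To make Properties 1 and 4 concrete, here is a minimal numerical check in Python. It assumes NumPy is available (the code, like all code in these notes, is an added illustration, not part of the original slides); the matrix is the one from the Property 1 example.

import numpy as np

# The matrix from the Property 1 example above.
A = np.array([[2, 1, 3],
              [1, 1, 1],
              [4, 2, 0]], dtype=float)

print(np.linalg.det(A))    # -6.0 (up to floating-point rounding)
print(np.linalg.det(A.T))  # -6.0, confirming |A| = |A^T|

# Property 4: swapping two rows flips the sign of the determinant.
B = A[[1, 0, 2], :]        # interchange rows 1 and 2
print(np.linalg.det(B))    # 6.0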
PROPERTIES OF DETERMINANTS (continued)
5. If the matrix B is obtained from A by multiplying any row (or column) of A by a non-zero scalar k, then |B| = k|A|. Example: A = [2 7; -1 4] and B = [4 7; -2 4], where the first column of A has been multiplied by 2; then |B| = 2|A| = 2(15) = 30.
6. If the matrix B is obtained from A by adding k times the r-th row to the s-th row to produce the new s-th row, with r not equal to s, then |B| = |A|. Example: A = [-1 2; 3 5]; applying the elementary row operation 2R1 + R2 -> R2 yields B = [-1 2; 1 9]. Hence it is evident that |A| = |B| = -11.
7. If A is non-singular, that is |A| is not 0, then the determinant of its inverse is the reciprocal of its determinant: |A^-1| = 1/|A|. Example: A = [4 3; 2 1] has |A| = -2, and its inverse is A^-1 = [-1/2 3/2; 1 -2]; indeed |A^-1| = -1/2 = 1/|A|.
8. If A is an upper (or lower) triangular matrix, then the determinant of A is the product of the diagonal elements. Example: for A = [2 3 1; 0 -2 1; 0 0 5], taking the product of the main diagonal of A gives |A| = (2)(-2)(5) = -20.

EVALUATION OF DETERMINANTS
There are five methods: (1) the basketweave method (applicable only for matrices of order 2 or 3); (2) cofactor expansion (expansion by minors); (3) the upper-triangular elimination method; (4) Chio's method; (5) the pivotal method.

1. Basketweave method (n = 2, 3): |a11 a12; a21 a22| = a11*a22 - a12*a21. Example: for A = [1 2 -3; 0 4 5; 2 3 8], the downward basketweave products are 32, 20, 0 and the upward products are -24, 15, 0, so |A| = (32 + 20 + 0) - (-24 + 15 + 0) = 61.

2. Cofactor expansion (expansion by minors), for n >= 3. A 3 x 3 determinant can be evaluated in various ways; here we use the method called "expansion by minors". But first, a definition. Cofactors: the 2 x 2 determinant formed from the elements that are not in the same row as a1 and not in the same column as a1 is called the cofactor of a1 for the 3 x 3 determinant. Similarly, the determinant formed from the elements not in the same row as a2 and not in the same column as a2 is called the cofactor of a2.

Expansion by minors: we evaluate a 3 x 3 determinant by multiplying the elements in the first column of the determinant by the cofactors of those elements. We add the first product, subtract the middle product, and add the final product. Note that we are working down the first column and multiplying by the cofactor of each element.

Example 3: Evaluate |-2 3 -1; 5 -1 4; 4 -8 2|.
Answer:
= -2[(-1)(2) - (-8)(4)] - 5[(3)(2) - (-8)(-1)] + 4[(3)(4) - (-1)(-1)]
= -2(30) - 5(-2) + 4(11)
= -60 + 10 + 44 = -6

Determinant Exercises
1. Evaluate by expansion by minors: |10 0 -3; -2 -4 1; 3 0 2|.
Answer:
= 10[(-4)(2) - (0)(1)] + 2[(0)(2) - (0)(-3)] + 3[(0)(1) - (-4)(-3)]
= 10(-8) + 2(0) + 3(-12)
= -80 - 36 = -116
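The expansion by minors described above is mechanical enough to express as a short recursive routine. A Python sketch follows (the function name det_by_minors is ours, not from the notes); it reproduces Example 3 and Exercise 1.

import numpy as np

def det_by_minors(M):
    """Cofactor (Laplace) expansion down the first column, as in the notes."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0.0
    for i in range(n):
        # Minor: delete row i and column 0; signs alternate +, -, +, ...
        minor = np.delete(np.delete(M, i, axis=0), 0, axis=1)
        total += (-1) ** i * M[i, 0] * det_by_minors(minor)
    return total

print(det_by_minors([[-2, 3, -1], [5, -1, 4], [4, -8, 2]]))   # -6.0  (Example 3)
print(det_by_minors([[10, 0, -3], [-2, -4, 1], [3, 0, 2]]))   # -116.0 (Exercise 1)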
LARGE DETERMINANTS — Expanding 4 x 4 Determinants
Find the cofactor of each element in the first column, then multiply each element by its cofactor and assign plus and minus signs in the order: plus, minus, plus, minus (first term positive, second negative, third positive, fourth negative). Next, expand out each of those 3 x 3 cofactors using the method we saw before, multiply, then simplify, and put it all together to obtain the value of the determinant.

3. Upper-triangular elimination method: when the size of a matrix is very large, cofactor expansion becomes inconvenient to use. The upper-triangular elimination method finds the determinant more easily, because the determinant of A is just the product of the diagonal elements once A is reduced to upper-triangular form. Example: A = [1 2 -1; 4 6 -2; -1 3 3] reduces to the upper-triangular matrix [1 2 -1; 0 -2 2; 0 0 7], so |A| = (1)(-2)(7) = -14.

4. Chio's method: let A be a square matrix of order n with a11 not equal to zero. Then

|A| = (1/a11^(n-2)) * det C,

where C is the (n-1) x (n-1) matrix whose (i, j) entry is the 2 x 2 determinant |a11 a1,j+1; a(i+1),1 a(i+1),j+1|.

Example of Chio's method: A = [1 2 -1; 4 6 -2; -1 3 3]. Applying Chio's method,
|A| = (1/1^(3-2)) * | |1 2; 4 6|  |1 -1; 4 -2| ; |1 2; -1 3|  |1 -1; -1 3| | = |-2 2; 5 2| = -4 - 10 = -14.

5. Pivotal method: this method reduces the order of the matrix by 1; if A is a matrix of order n, applying the pivotal method reduces it to order n - 1. Let one of the elements, say a21, be equal to 1. By the pivotal method, we obtain the determinant of A as

|A| = (-1)^(2+1) * |a12 - a11*a22, a13 - a11*a23; a32 - a31*a22, a33 - a31*a23|.

In general, with a pivot element a_rs = 1, the sign factor is (-1)^(r+s) and each remaining entry a_ij (i not r, j not s) is replaced by a_ij - a_is*a_rj.

Evaluate: |A| for A = [3 -4 0 4; -2 -1 8 0; 4 2 3 1; 1 0 5 -6]. We can see that there are two elements equal to 1; take the pivot a34 = 1 (third row, fourth column). Then

|A| = (-1)^(3+4) * |3 - (4)(4), -4 - (2)(4), 0 - (3)(4); -2 - (4)(0), -1 - (2)(0), 8 - (3)(0); 1 - (4)(-6), 0 - (2)(-6), 5 - (3)(-6)|
    = -|-13 -12 -12; -2 -1 8; 25 12 23|.

Factoring -1 from the second row,
|A| = (-1)(-1) * |-13 -12 -12; 2 1 -8; 25 12 23|.

Applying the pivotal method again, with the 1 in row 2, column 2 as the pivot:
|A| = (-1)(-1)(-1)^(2+2) * |-13 - (2)(-12), -12 - (-8)(-12); 25 - (2)(12), 23 - (-8)(12)|
    = |11 -108; 1 119| = 11(119) + 108 = 1417.
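Chio's condensation is also easy to automate. The following is a hedged Python sketch (the function name chio is ours; it assumes a11 is nonzero at every stage, as the method itself requires), verified against the 3 x 3 example above.

import numpy as np

def chio(M):
    """Chio's condensation: |A| = det(condensed) / a11**(n-2), assuming a11 != 0."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    if M[0, 0] == 0:
        raise ValueError("Chio's method needs a nonzero pivot a11 (swap rows first).")
    # Each entry of the condensed (n-1) x (n-1) matrix is a 2 x 2 determinant.
    C = M[0, 0] * M[1:, 1:] - np.outer(M[1:, 0], M[0, 1:])
    return chio(C) / M[0, 0] ** (n - 2)

print(chio([[1, 2, -1], [4, 6, -2], [-1, 3, 3]]))  # -14.0, matching the example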
 d  b Solution:  c a  1  d  b A  1  det A ad  bc  c a    7 3   4 2 1   7 3 1   7 3 A  1    det A (2)(7)  (3)(4)  4 2 2  4 2 7 / 3  3 / 2    2  1  Checking: Check the inverse of A by multiplying our inverse by the original matrix. If we get the identity matrix (I) for our answer, then we must have the correct answer Method 2 – By using Adjoint Matrix and Determinant (can be extended to any size) The inverse of a 3×3 matrix is given by: "adj A" is short for "the adjoint of A". We use cofactors to determine the adjoint of a matrix. Recall: The cofactor of an element in a matrix is the value obtained by evaluating the determinant formed by the elements not in that particular row or column. Formula 1 1 A  adjA det A where : adjA  (cofactorA) T We find the adjoint matrix by replacing each element in the matrix with its cofactor and applying a + or - sign as follows: and then finding the transpose of the resulting matrix. The transpose means the 1st column becomes the 1st row; 2nd column becomes 2nd row, etc. Example 1: Find the inverse of the following by using method 2: 5 6 1 A  0 3  3 4  7 2  Step 1: Replace elements with cofactors and apply + and - Step 2: Transpose the resulting matrix to get the adjoint.  15  19  21 adjA   12 6 15   12 59 15  Step 3: Before we can find the inverse of matrix A, we need det A Step 4: Now we have what we need to apply the formula 1 A 1  adjA det A   15  19  21 5 / 53 19 / 159 7 / 53  1  12 6   4 / 53  2 / 53  5 / 53 A 1  15  159      12 59 15  4 / 53  59 / 159  5 / 53   0.094 0.119 0.132   0.075  0.038  0.094 0.075  0.371  0.094 Example 2: Find the inverse of the following by using method 2: 1 2 3 A  0 4 5 1 0 6 Step 1: Replace elements with cofactors and apply + and -  24 5  4 cofactor of A   12 3 2    2  5 4  Step 2: Transpose the resulting matrix to get the adjoint. 
 24 12  2 adjA   5 3  5  4 2 4  Step 3: Before we can find the inverse of matrix A, we need det A 1 2 3 A  0 4 5  1(24)  0  1(2)  22 1 0 6 Step 4: Now we have what we need to apply the formula 1 1 A  adjA det A  24 12  2  12 / 11  6 / 11  1 / 11  1     5 / 22  A 1  5 3  5 3 / 22  5 / 22 22      4 2 4   2 / 11 1 / 11 2 / 11  Method 3: Gauss Method for Inversion  A I   a11 a12 a13  1 0 0 Note: the superscript a a a  0 1 0 “-1” denotes that  21 22 23  a31 a32 a33  0 0 1  the original values have been converted 1 0 0  a111 a121 a131  to the matrix inverse,   not 1/aij 0 1 0  a211 a221 a231   0 0 1  a311 a 32 1 a331  I   A1 1 2 1 0 1 2 1 0 3 4 0 1  0  2  3 1   R 2 3R1  R2new   1 2 1 0  R 1 2 R2  R1new 1 0  2 1     0 1 3 / 2  1 / 2  1 / 2 R2old  R2new 0 1 3 / 2  1 / 2     1 1 2 2 1  then :     3 4  3 / 2  1 / 2   2 1 1 1 0 0 1 / 2 R1old  R1new 1 1 / 2 1 / 2 1 / 2 0 0 A I   1 2 1 0 1 0 1 2  1 0 1 0 1 1 2 0 0 1 1 1 2 0 0 1 R1new 1 1 / 2 1 / 2 1 / 2 0 0 R2old  R1new  R2new 0 3 / 2 1 / 2  1 / 2 1 0  2 / 3R2   R3old  R1new  R3new 0 1 / 2 3 / 2  1 / 2 0 1 1 1 / 2 1 / 2 1 / 2 0 0 R2new 0 1 1 / 3  1 / 3 2 / 3 0    0 1 / 2 3 / 2  1 / 2 0 1 R1old  R1new  R1new 1 1 / 2 1 / 2 1 / 2 0 0 R new 2 0 1 1 / 3  1 / 3 2 / 3 0    R3old  1 / 2 R2new  R3new 0 0 4 / 3  1 / 3  1 / 3 1 3 / 4 R3 R1old  1 / 2 R2new  R1new 1 1 / 2 0 5 / 8 1 / 8  3 / 8 R2old  1 / 2 R3new  R2new 0 1 0  1 / 4 3 / 4  1 / 4    R3new 0 0 1  1 / 4  1 / 4 3 / 4  R1old  1 / 2 R2new  R1new 1 0 0 3 / 4  1 / 4  1 / 4 0 1 0  1 / 4 3 / 4  1 / 4    0 0 1  1 / 4  1 / 4 3 / 4   3 / 4  1 / 4  1 / 4 A 1   1 / 4 3 / 4  1 / 4  1 / 4  1 / 4 3 / 4  LU Factorization LU Factorization We present three related methods that are modifications of the Gauss elimination, which require fewer arithmetic operations. They are named after Doolittle, Crout, and Cholesky and use the idea of the LU- factorization of A. LU Factorization LU Factorization LU Factorization Example – Doolittle’s Method Example – Doolittle’s Method Example – Doolittle’s Method Compact Version of Doolittle’s Method Cholesky’s Method 300 Cholesky’s Method Example – Cholesky’s Method Example – Cholesky’s Method Seatwork Simplex method Simplex method is the method to solve ( LPP ) models which contain two or more decision variables. Basic variables: Are the variables which coefficients One in the equations and Zero in the other equations. Non-Basic variables: Are the variables which coefficients are taking any of the values, whether positive or negative or zero. Slack, surplus & artificial variables: a) If the inequality be ≤ (less than or equal, then we add a slack variable + S to change ≤ to =. b) If the inequality be ≥ (greater than or equal, then we subtract a surplus variable - S to change ≥ to =. c) If we have = we use artificial variables. The steps of the simplex method: Step 1: Determine a starting basic feasible solution. Step 2: Select an entering variable using the optimality condition. Stop if there is no entering variable. Step 3: Select a leaving variable using the feasibility condition. Optimality condition: The entering variable in a maximization (minimization) problem is the non-basic variable having the most negative (positive) coefficient in the Z-row. 
LU FACTORIZATION
We present three related methods that are modifications of Gauss elimination and require fewer arithmetic operations. They are named after Doolittle, Crout, and Cholesky, and they use the idea of the LU-factorization of A. (Slides: LU Factorization; Example, Doolittle's Method; Compact Version of Doolittle's Method; Cholesky's Method; Example, Cholesky's Method; Seatwork.)

SIMPLEX METHOD
The simplex method is the method used to solve linear programming (LP) models which contain two or more decision variables.
Basic variables: the variables whose coefficient is 1 in one equation and 0 in the other equations.
Non-basic variables: the variables whose coefficients can take any value, whether positive, negative, or zero.
Slack, surplus and artificial variables: (a) if the inequality is <= (less than or equal), we add a slack variable +S to change <= to =; (b) if the inequality is >= (greater than or equal), we subtract a surplus variable -S to change >= to =; (c) if we have =, we use artificial variables.
The steps of the simplex method:
Step 1: Determine a starting basic feasible solution.
Step 2: Select an entering variable using the optimality condition; stop if there is no entering variable.
Step 3: Select a leaving variable using the feasibility condition.
Optimality condition: the entering variable in a maximization (minimization) problem is the non-basic variable having the most negative (positive) coefficient in the Z-row. The optimum is reached at the iteration where all Z-row coefficients of the non-basic variables are non-negative (non-positive).
Feasibility condition: for both maximization and minimization problems, the leaving variable is the basic variable associated with the smallest non-negative ratio (with strictly positive denominator).
Pivot row: (a) replace the leaving variable in the Basic column with the entering variable; (b) new pivot row = current pivot row divided by the pivot element; (c) all other rows: new row = current row - (its pivot-column coefficient) x new pivot row.

Example 1: Use the simplex method to solve the LP model:
max Z = 5x1 + 4x2
subject to 6x1 + 4x2 <= 24; x1 + 2x2 <= 6; -x1 + x2 <= 1; x2 <= 2; x1, x2 >= 0.
Solution: max Z - 5x1 - 4x2 = 0, subject to
6x1 + 4x2 + S1 = 24; x1 + 2x2 + S2 = 6; -x1 + x2 + S3 = 1; x2 + S4 = 2.

Table 1:
Basic | x1 x2 S1 S2 S3 S4 | Sol.
S1 | 6 4 1 0 0 0 | 24
S2 | 1 2 0 1 0 0 | 6
S3 | -1 1 0 0 1 0 | 1
S4 | 0 1 0 0 0 1 | 2
Z | -5 -4 0 0 0 0 | 0

Ratios: 24/6 = 4; 6/1 = 6; 1/(-1) = -1 (ignore); 2/0 (ignore). The entering variable is x1 and S1 is the leaving variable.
Pivot row, new x1-row = (1/6)[6 4 1 0 0 0 | 24] = [1 2/3 1/6 0 0 0 | 4].
New S2-row = [1 2 0 1 0 0 | 6] - (1)[1 2/3 1/6 0 0 0 | 4] = [0 4/3 -1/6 1 0 0 | 2].
New S3-row = [-1 1 0 0 1 0 | 1] - (-1)[1 2/3 1/6 0 0 0 | 4] = [0 5/3 1/6 0 1 0 | 5].
New S4-row = [0 1 0 0 0 1 | 2] - (0)[new x1-row] = [0 1 0 0 0 1 | 2].
New Z-row = [-5 -4 0 0 0 0 | 0] - (-5)[1 2/3 1/6 0 0 0 | 4] = [0 -2/3 5/6 0 0 0 | 20].

Table 2:
Basic | x1 x2 S1 S2 S3 S4 | Sol.
x1 | 1 2/3 1/6 0 0 0 | 4
S2 | 0 4/3 -1/6 1 0 0 | 2
S3 | 0 5/3 1/6 0 1 0 | 5
S4 | 0 1 0 0 0 1 | 2
Z | 0 -2/3 5/6 0 0 0 | 20

Ratios: 4/(2/3) = 6; 2/(4/3) = 3/2; 5/(5/3) = 3; 2/1 = 2. The entering variable is x2 and S2 is the leaving variable.
Pivot row, new x2-row = (3/4)[0 4/3 -1/6 1 0 0 | 2] = [0 1 -1/8 3/4 0 0 | 3/2].
New x1-row = [1 2/3 1/6 0 0 0 | 4] - (2/3)[0 1 -1/8 3/4 0 0 | 3/2] = [1 0 1/4 -1/2 0 0 | 3].
New S3-row = [0 5/3 1/6 0 1 0 | 5] - (5/3)[0 1 -1/8 3/4 0 0 | 3/2] = [0 0 3/8 -5/4 1 0 | 5/2].
New S4-row = [0 1 0 0 0 1 | 2] - (1)[0 1 -1/8 3/4 0 0 | 3/2] = [0 0 1/8 -3/4 0 1 | 1/2].
New Z-row = [0 -2/3 5/6 0 0 0 | 20] - (-2/3)[0 1 -1/8 3/4 0 0 | 3/2] = [0 0 3/4 1/2 0 0 | 21].

Table 3 (optimal solution):
Basic | x1 x2 S1 S2 S3 S4 | Sol.
x1 | 1 0 1/4 -1/2 0 0 | 3
x2 | 0 1 -1/8 3/4 0 0 | 3/2
S3 | 0 0 3/8 -5/4 1 0 | 5/2
S4 | 0 0 1/8 -3/4 0 1 | 1/2
Z | 0 0 3/4 1/2 0 0 | 21

All Z-row coefficients are non-negative, so the solution is optimal:
x1 = 3, x2 = 3/2, S3 = 5/2, S4 = 1/2, S1 = S2 = 0, Z = 21.
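As a sanity check on Example 1, an off-the-shelf LP solver reaches the same optimum. This sketch assumes SciPy is installed; scipy.optimize.linprog minimizes, so we negate the objective to maximize Z = 5x1 + 4x2.

from scipy.optimize import linprog

c = [-5, -4]                                   # negated objective coefficients
A_ub = [[6, 4], [1, 2], [-1, 1], [0, 1]]       # the four <= constraints
b_ub = [24, 6, 1, 2]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # approximately [3.  1.5] 21.0, matching Table 3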
Example 2: Use the simplex method to solve the LP model:
max Z = 2x1 + 3x2
subject to 0.25x1 + 0.5x2 <= 40; 0.4x1 + 0.2x2 <= 40; 0.8x2 <= 40; x1, x2 >= 0.
Solution: max Z - 2x1 - 3x2 = 0, subject to
0.25x1 + 0.5x2 + S1 = 40; 0.4x1 + 0.2x2 + S2 = 40; 0.8x2 + S3 = 40; x1, x2, S1, S2, S3 >= 0.

Table 1:
Basic | x1 x2 S1 S2 S3 | Sol.
S1 | 0.25 0.5 1 0 0 | 40
S2 | 0.4 0.2 0 1 0 | 40
S3 | 0 0.8 0 0 1 | 40
Z | -2 -3 0 0 0 | 0

Entering x2; ratios: 40/0.5 = 80; 40/0.2 = 200; 40/0.8 = 50. S3 leaves.
Pivot row, new x2-row = (1/0.8)[0 0.8 0 0 1 | 40] = [0 1 0 0 1.25 | 50].
New S1-row = [0.25 0.5 1 0 0 | 40] - (0.5)[0 1 0 0 1.25 | 50] = [0.25 0 1 0 -0.625 | 15].
New S2-row = [0.4 0.2 0 1 0 | 40] - (0.2)[0 1 0 0 1.25 | 50] = [0.4 0 0 1 -0.25 | 30].
New Z-row = [-2 -3 0 0 0 | 0] - (-3)[0 1 0 0 1.25 | 50] = [-2 0 0 0 3.75 | 150].

Table 2:
Basic | x1 x2 S1 S2 S3 | Sol.
S1 | 0.25 0 1 0 -0.625 | 15
S2 | 0.4 0 0 1 -0.25 | 30
x2 | 0 1 0 0 1.25 | 50
Z | -2 0 0 0 3.75 | 150

Entering x1; ratios: 15/0.25 = 60; 30/0.4 = 75; 50/0 (ignore). S1 leaves.
Pivot row, new x1-row = (1/0.25)[0.25 0 1 0 -0.625 | 15] = [1 0 4 0 -2.5 | 60].
New S2-row = [0.4 0 0 1 -0.25 | 30] - (0.4)[1 0 4 0 -2.5 | 60] = [0 0 -1.6 1 0.75 | 6].
New x2-row = [0 1 0 0 1.25 | 50] - (0)[new x1-row] = [0 1 0 0 1.25 | 50].
New Z-row = [-2 0 0 0 3.75 | 150] - (-2)[1 0 4 0 -2.5 | 60] = [0 0 8 0 -1.25 | 270].

Table 3:
Basic | x1 x2 S1 S2 S3 | Sol.
x1 | 1 0 4 0 -2.5 | 60
S2 | 0 0 -1.6 1 0.75 | 6
x2 | 0 1 0 0 1.25 | 50
Z | 0 0 8 0 -1.25 | 270

Entering S3 (coefficient -1.25); ratios: 60/(-2.5) = -24 (ignore); 6/0.75 = 8; 50/1.25 = 40. S2 leaves.
Pivot row, new S3-row = (1/0.75)[0 0 -1.6 1 0.75 | 6] = [0 0 -2.133 1.333 1 | 8].
New x1-row = [1 0 4 0 -2.5 | 60] - (-2.5)[0 0 -2.133 1.333 1 | 8] = [1 0 -1.333 3.333 0 | 80].
New x2-row = [0 1 0 0 1.25 | 50] - (1.25)[0 0 -2.133 1.333 1 | 8] = [0 1 2.667 -1.667 0 | 40].
New Z-row = [0 0 8 0 -1.25 | 270] - (-1.25)[0 0 -2.133 1.333 1 | 8] = [0 0 5.333 1.667 0 | 280].

Table 4 (optimal solution):
Basic | x1 x2 S1 S2 S3 | Sol.
x1 | 1 0 -1.333 3.333 0 | 80
S3 | 0 0 -2.133 1.333 1 | 8
x2 | 0 1 2.667 -1.667 0 | 40
Z | 0 0 5.333 1.667 0 | 280

The optimal solution: x1 = 80, x2 = 40, S3 = 8, S1 = S2 = 0, Z = 280.

Example 3: Use the simplex method to solve the LP model:
min Z = -6x1 - 10x2 - 4x3
subject to x1 + x2 + x3 <= 1000; x1 + x2 <= 500; x1 + 2x2 <= 700; x1, x2, x3 >= 0.
Solution: min Z + 6x1 + 10x2 + 4x3 = 0, subject to
x1 + x2 + x3 + S1 = 1000; x1 + x2 + S2 = 500; x1 + 2x2 + S3 = 700; x1, x2, x3, S1, S2, S3 >= 0.

Table 1:
Basic | x1 x2 x3 S1 S2 S3 | Sol.
S1 | 1 1 1 1 0 0 | 1000
S2 | 1 1 0 0 1 0 | 500
S3 | 1 2 0 0 0 1 | 700
Z | 6 10 4 0 0 0 | 0

For this minimization problem the entering variable is x2 (most positive Z-row coefficient). Ratios: 1000/1 = 1000; 500/1 = 500; 700/2 = 350. S3 leaves.
Pivot row, new x2-row = (1/2)[1 2 0 0 0 1 | 700] = [1/2 1 0 0 0 1/2 | 350].
New S1-row = [1 1 1 1 0 0 | 1000] - (1)[1/2 1 0 0 0 1/2 | 350] = [1/2 0 1 1 0 -1/2 | 650].
New S2-row = [1 1 0 0 1 0 | 500] - (1)[1/2 1 0 0 0 1/2 | 350] = [1/2 0 0 0 1 -1/2 | 150].
New Z-row = [6 10 4 0 0 0 | 0] - (10)[1/2 1 0 0 0 1/2 | 350] = [1 0 4 0 0 -5 | -3500].

Table 2:
Basic | x1 x2 x3 S1 S2 S3 | Sol.
S1 | 1/2 0 1 1 0 -1/2 | 650
S2 | 1/2 0 0 0 1 -1/2 | 150
x2 | 1/2 1 0 0 0 1/2 | 350
Z | 1 0 4 0 0 -5 | -3500

Entering x3 (coefficient 4); ratios: 650/1 = 650; 150/0 (ignore); 350/0 (ignore). S1 leaves.
Pivot row, new x3-row = (1/1)[1/2 0 1 1 0 -1/2 | 650].
The S2-row and x2-row are unchanged, since their x3 coefficients are 0.
New Z-row = [1 0 4 0 0 -5 | -3500] - (4)[1/2 0 1 1 0 -1/2 | 650] = [-1 0 0 -4 0 -3 | -6100].

Table 3 (optimal solution):
Basic | x1 x2 x3 S1 S2 S3 | Sol.
x3 | 1/2 0 1 1 0 -1/2 | 650
S2 | 1/2 0 0 0 1 -1/2 | 150
x2 | 1/2 1 0 0 0 1/2 | 350
Z | -1 0 0 -4 0 -3 | -6100

All Z-row coefficients are now non-positive, so for this minimization problem the solution is optimal: x3 = 650, x2 = 350, S2 = 150, x1 = 0, S1 = S3 = 0, Z = -6100.

Example 4: Use the simplex method to solve the LP model:
max Z = 4x1 - x2
subject to x1 + 2x2 <= 4; 2x1 + 3x2 <= 12; x1 - x2 <= 3; x1, x2 >= 0.
Solution: max Z - 4x1 + x2 = 0, subject to
x1 + 2x2 + S1 = 4; 2x1 + 3x2 + S2 = 12; x1 - x2 + S3 = 3; x1, x2, S1, S2, S3 >= 0.
Table 1:
Basic | x1 x2 S1 S2 S3 | Sol.
S1 | 1 2 1 0 0 | 4
S2 | 2 3 0 1 0 | 12
S3 | 1 -1 0 0 1 | 3
Z | -4 1 0 0 0 | 0

Entering x1; ratios: 4/1 = 4; 12/2 = 6; 3/1 = 3. S3 leaves.
Pivot row, new x1-row = (1/1)[1 -1 0 0 1 | 3] = [1 -1 0 0 1 | 3].
New S1-row = [1 2 1 0 0 | 4] - (1)[1 -1 0 0 1 | 3] = [0 3 1 0 -1 | 1].
New S2-row = [2 3 0 1 0 | 12] - (2)[1 -1 0 0 1 | 3] = [0 5 0 1 -2 | 6].
New Z-row = [-4 1 0 0 0 | 0] - (-4)[1 -1 0 0 1 | 3] = [0 -3 0 0 4 | 12].

Table 2:
Basic | x1 x2 S1 S2 S3 | Sol.
S1 | 0 3 1 0 -1 | 1
S2 | 0 5 0 1 -2 | 6
x1 | 1 -1 0 0 1 | 3
Z | 0 -3 0 0 4 | 12

Entering x2; ratios: 1/3; 6/5; 3/(-1) = -3 (ignore). S1 leaves.
Pivot row, new x2-row = (1/3)[0 3 1 0 -1 | 1] = [0 1 1/3 0 -1/3 | 1/3].
New S2-row = [0 5 0 1 -2 | 6] - (5)[0 1 1/3 0 -1/3 | 1/3] = [0 0 -5/3 1 -1/3 | 13/3].
New x1-row = [1 -1 0 0 1 | 3] - (-1)[0 1 1/3 0 -1/3 | 1/3] = [1 0 1/3 0 2/3 | 10/3].
New Z-row = [0 -3 0 0 4 | 12] - (-3)[0 1 1/3 0 -1/3 | 1/3] = [0 0 1 0 3 | 13].

Table 3 (optimal solution):
Basic | x1 x2 S1 S2 S3 | Sol.
x2 | 0 1 1/3 0 -1/3 | 1/3
S2 | 0 0 -5/3 1 -1/3 | 13/3
x1 | 1 0 1/3 0 2/3 | 10/3
Z | 0 0 1 0 3 | 13

The optimal solution: x1 = 10/3, x2 = 1/3, S2 = 13/3, S1 = S3 = 0, Z = 13.

Example 5: Use the simplex method to solve the LP model:
max Z = 16x1 + 17x2 + 10x3
subject to x1 + 2x2 + 4x3 <= 2000; 2x1 + x2 + x3 <= 3600; x1 + 2x2 + 2x3 <= 2400; x1 <= 30; x1, x2, x3 >= 0.
Solution: max Z - 16x1 - 17x2 - 10x3 = 0, subject to
x1 + 2x2 + 4x3 + S1 = 2000; 2x1 + x2 + x3 + S2 = 3600; x1 + 2x2 + 2x3 + S3 = 2400; x1 + S4 = 30; x1, x2, x3, S1, S2, S3, S4 >= 0.

Table 1:
Basic | x1 x2 x3 S1 S2 S3 S4 | Sol.
S1 | 1 2 4 1 0 0 0 | 2000
S2 | 2 1 1 0 1 0 0 | 3600
S3 | 1 2 2 0 0 1 0 | 2400
S4 | 1 0 0 0 0 0 1 | 30
Z | -16 -17 -10 0 0 0 0 | 0

Entering x2 (-17); ratios: 2000/2 = 1000; 3600/1 = 3600; 2400/2 = 1200; 30/0 (ignore). S1 leaves.
Pivot row, new x2-row = (1/2)[1 2 4 1 0 0 0 | 2000] = [1/2 1 2 1/2 0 0 0 | 1000].
New S2-row = [2 1 1 0 1 0 0 | 3600] - (1)[1/2 1 2 1/2 0 0 0 | 1000] = [3/2 0 -1 -1/2 1 0 0 | 2600].
New S3-row = [1 2 2 0 0 1 0 | 2400] - (2)[1/2 1 2 1/2 0 0 0 | 1000] = [0 0 -2 -1 0 1 0 | 400].
New S4-row = [1 0 0 0 0 0 1 | 30] - (0)[new x2-row] = [1 0 0 0 0 0 1 | 30].
New Z-row = [-16 -17 -10 0 0 0 0 | 0] - (-17)[1/2 1 2 1/2 0 0 0 | 1000] = [-15/2 0 24 17/2 0 0 0 | 17000].

Table 2:
Basic | x1 x2 x3 S1 S2 S3 S4 | Sol.
x2 | 1/2 1 2 1/2 0 0 0 | 1000
S2 | 3/2 0 -1 -1/2 1 0 0 | 2600
S3 | 0 0 -2 -1 0 1 0 | 400
S4 | 1 0 0 0 0 0 1 | 30
Z | -15/2 0 24 17/2 0 0 0 | 17000

The x1 coefficient in the Z-row is still negative, so x1 enters. Ratios: 1000/(1/2) = 2000; 2600/(3/2) = 1733.3; 400/0 (ignore); 30/1 = 30. S4 leaves.
Pivot row, new x1-row = (1/1)[1 0 0 0 0 0 1 | 30].
New x2-row = [1/2 1 2 1/2 0 0 0 | 1000] - (1/2)[1 0 0 0 0 0 1 | 30] = [0 1 2 1/2 0 0 -1/2 | 985].
New S2-row = [3/2 0 -1 -1/2 1 0 0 | 2600] - (3/2)[1 0 0 0 0 0 1 | 30] = [0 0 -1 -1/2 1 0 -3/2 | 2555].
The S3-row is unchanged.
New Z-row = [-15/2 0 24 17/2 0 0 0 | 17000] - (-15/2)[1 0 0 0 0 0 1 | 30] = [0 0 24 17/2 0 0 15/2 | 17225].

Table 3 (optimal solution):
Basic | x1 x2 x3 S1 S2 S3 S4 | Sol.
x2 | 0 1 2 1/2 0 0 -1/2 | 985
S2 | 0 0 -1 -1/2 1 0 -3/2 | 2555
S3 | 0 0 -2 -1 0 1 0 | 400
x1 | 1 0 0 0 0 0 1 | 30
Z | 0 0 24 17/2 0 0 15/2 | 17225

All Z-row coefficients are now non-negative, so the solution is optimal: x1 = 30, x2 = 985, x3 = 0, S2 = 2555, S3 = 400, S1 = S4 = 0, Z = 17225.

Example 6: Use the simplex method to solve the LP model:
max Z = 3x1 + 5x2 + 4x3
subject to 2x1 + 3x2 <= 8; 2x1 + 5x2 <= 10; 3x1 + 2x2 + 4x3 <= 15; x1, x2, x3 >= 0.
Solution: max Z - 3x1 - 5x2 - 4x3 = 0, subject to
2x1 + 3x2 + S1 = 8; 2x1 + 5x2 + S2 = 10; 3x1 + 2x2 + 4x3 + S3 = 15; x1, x2, x3, S1, S2, S3 >= 0.

Table 1:
Basic | x1 x2 x3 S1 S2 S3 | Sol.
S1 | 2 3 0 1 0 0 | 8
S2 | 2 5 0 0 1 0 | 10
S3 | 3 2 4 0 0 1 | 15
Z | -3 -5 -4 0 0 0 | 0

Entering x2 (-5); ratios: 8/3 = 2.7; 10/5 = 2; 15/2 = 7.5. S2 leaves.
Pivot row, new x2-row = (1/5)[2 5 0 0 1 0 | 10] = [2/5 1 0 0 1/5 0 | 2].
New S1-row = [2 3 0 1 0 0 | 8] - (3)[2/5 1 0 0 1/5 0 | 2] = [4/5 0 0 1 -3/5 0 | 2].
New S3-row = [3 2 4 0 0 1 | 15] - (2)[2/5 1 0 0 1/5 0 | 2] = [11/5 0 4 0 -2/5 1 | 11].
New Z-row = [-3 -5 -4 0 0 0 | 0] - (-5)[2/5 1 0 0 1/5 0 | 2] = [-1 0 -4 0 1 0 | 10].

Table 2:
Basic | x1 x2 x3 S1 S2 S3 | Sol.
S1 | 4/5 0 0 1 -3/5 0 | 2
x2 | 2/5 1 0 0 1/5 0 | 2
S3 | 11/5 0 4 0 -2/5 1 | 11
Z | -1 0 -4 0 1 0 | 10

Entering x3 (-4); ratios: 2/0 (ignore); 2/0 (ignore); 11/4. S3 leaves.
Pivot row, new x3-row = (1/4)[11/5 0 4 0 -2/5 1 | 11] = [11/20 0 1 0 -1/10 1/4 | 11/4].
New S1-row = [4/5 0 0 1 -3/5 0 | 2] - (0)[new x3-row] = [4/5 0 0 1 -3/5 0 | 2].
The x2-row is likewise unchanged: [2/5 1 0 0 1/5 0 | 2].
New Z-row = [-1 0 -4 0 1 0 | 10] - (-4)[11/20 0 1 0 -1/10 1/4 | 11/4] = [6/5 0 0 0 3/5 1 | 21].
Table 3 (optimal solution):
Basic | x1 x2 x3 S1 S2 S3 | Sol.
S1 | 4/5 0 0 1 -3/5 0 | 2
x2 | 2/5 1 0 0 1/5 0 | 2
x3 | 11/20 0 1 0 -1/10 1/4 | 11/4
Z | 6/5 0 0 0 3/5 1 | 21

The optimal solution: x2 = 2, x3 = 11/4, S1 = 2, x1 = 0, S2 = S3 = 0, Z = 21.
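All of the row arithmetic in Examples 1 to 6 follows the same pivot rules, which can be captured in a few lines of Python (a sketch of ours, not part of the original notes; the tableau layout matches the tables above).

import numpy as np

def pivot(T, r, c):
    """One simplex pivot on tableau T at row r, column c, per the rules above:
    new pivot row = current pivot row / pivot element; every other row subtracts
    its pivot-column coefficient times the new pivot row."""
    T = T.astype(float)
    T[r] /= T[r, c]
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, c] * T[r]
    return T

# Table 1 of Example 1 (rows S1, S2, S3, S4, Z; columns x1, x2, S1..S4, Sol).
T = np.array([[ 6,  4, 1, 0, 0, 0, 24],
              [ 1,  2, 0, 1, 0, 0,  6],
              [-1,  1, 0, 0, 1, 0,  1],
              [ 0,  1, 0, 0, 0, 1,  2],
              [-5, -4, 0, 0, 0, 0,  0]])
T = pivot(T, 0, 0)   # x1 enters, S1 leaves -> reproduces Table 2
T = pivot(T, 1, 1)   # x2 enters, S2 leaves -> reproduces Table 3
print(T[-1, -1])     # 21.0, the optimal Z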
ADVANCED ENGINEERING MATHEMATICS
PART B: Linear Algebra. Vector Calculus
CHAPTER 7 Linear Algebra: Matrices, Vectors, Determinants. Linear Systems
CHAPTER 8 Linear Algebra: Matrix Eigenvalue Problems
CHAPTER 9 Vector Differential Calculus. Grad, Div, Curl
CHAPTER 10 Vector Integral Calculus. Integral Theorems

Matrices and vectors, which underlie linear algebra (Chaps. 7 and 8), allow us to represent numbers or functions in an ordered and compact form. (Margin note: linear algebra is the branch of mathematics concerning the study of vector spaces, lines and planes, and some mappings that are required to perform the linear transformations.)
Matrices can hold enormous amounts of data (think of a network of millions of computer connections or cell phone connections) in a form that can be rapidly processed by computers. The main topic of Chap. 7 is how to solve systems of linear equations using matrices. Concepts of rank, basis, linear transformations, and vector spaces are closely related. Chapter 8 deals with eigenvalue problems. Linear algebra is an active field that has many applications in engineering physics, numerics (see Chaps. 20-22), economics, and others. Chapters 9 and 10 extend calculus to vector calculus. We start with vectors from linear algebra and develop vector differential calculus. We differentiate functions of several variables and discuss vector differential operations such as grad, div, and curl. Chapter 10 extends regular integration to integration over curves, surfaces, and solids, thereby obtaining new types of integrals. Ingenious theorems by Gauss, Green, and Stokes allow us to transform these integrals into one another. Software suitable for linear algebra (Lapack, Maple, Mathematica, Matlab) can be found in the list at the opening of Part E of the book if needed. Numeric linear algebra (Chap. 20) can be studied directly after Chap. 7 or 8 because Chap. 20 is independent of the other chapters in Part E on numerics.

CHAPTER 7 Linear Algebra: Matrices, Vectors, Determinants. Linear Systems
Linear algebra is a fairly extensive subject that covers vectors and matrices, determinants, systems of linear equations, vector spaces and linear transformations, eigenvalue problems, and other topics. As an area of study it has a broad appeal in that it has many applications in engineering, physics, geometry, computer science, economics, and other areas. It also contributes to a deeper understanding of mathematics itself. Matrices, which are rectangular arrays of numbers or functions, and vectors are the main tools of linear algebra. Matrices are important because they let us express large amounts of data and functions in an organized and concise form. Furthermore, since matrices are single objects, we denote them by single letters and calculate with them directly. All these features have made matrices and vectors very popular for expressing scientific and mathematical ideas. The chapter keeps a good mix between applications (electric networks, Markov processes, traffic flow, etc.) and theory. Chapter 7 is structured as follows: Sections 7.1 and 7.2 provide an intuitive introduction to matrices and vectors and their operations, including matrix multiplication. The next block of sections, that is, Secs. 7.3-7.5, provides the most important method for solving systems of linear equations, the Gauss elimination method. This method is a cornerstone of linear algebra, and the method itself and variants of it appear in different areas of mathematics and in many applications. It leads to a consideration of the behavior of solutions and concepts such as rank of a matrix, linear independence, and bases. We shift to determinants, a topic that has declined in importance, in Secs. 7.6 and 7.7. Section 7.8 covers inverses of matrices. The chapter ends with vector spaces, inner product spaces, linear transformations, and composition of linear transformations. Eigenvalue problems follow in Chap. 8. COMMENT. Numeric linear algebra (Secs. 20.1-20.5) can be studied immediately after this chapter.
Prerequisite: None. Sections that may be omitted in a short course: 7.5, 7.9. References and Answers to Problems: App. 1 Part B, and App. 2.

7.1 Matrices, Vectors: Addition and Scalar Multiplication
The basic concepts and rules of matrix and vector algebra are introduced in Secs. 7.1 and 7.2 and are followed by linear systems (systems of linear equations), a main application, in Sec. 7.3. Let us first take a leisurely look at matrices before we formalize our discussion. A matrix is a rectangular array of numbers or functions which we will enclose in brackets. For example (writing rows separated by semicolons, as before),

(1)  [0.3 1 -5; 0 -0.2 16],  [a11 a12 a13; a21 a22 a23; a31 a32 a33],  [e^(-x) 2x^2; e^(6x) 4x],  [a1 a2 a3],  [4; 1/2]

are matrices. The numbers (or functions) are called entries or, less commonly, elements of the matrix. The first matrix in (1) has two rows, which are the horizontal lines of entries. Furthermore, it has three columns, which are the vertical lines of entries. The second and third matrices are square matrices, which means that each has as many rows as columns (3 and 2, respectively). The entries of the second matrix have two indices, signifying their location within the matrix. The first index is the number of the row and the second is the number of the column, so that together the entry's position is uniquely identified. For example, a23 (read a two three) is in Row 2 and Column 3, etc. The notation is standard and applies to all matrices, including those that are not square.

Matrices having just a single row or column are called vectors. Thus, the fourth matrix in (1) has just one row and is called a row vector. The last matrix in (1) has just one column and is called a column vector. Because the goal of the indexing of entries was to uniquely identify the position of an element within a matrix, one index suffices for vectors, whether they are row or column vectors. Thus, the third entry of the row vector in (1) is denoted by a3. Matrices are handy for storing and processing data in applications. Consider the following two common examples.

EXAMPLE 1 Linear Systems, a Major Application of Matrices
We are given a system of linear equations, briefly a linear system, such as

4x1 + 6x2 + 9x3 = 6
6x1 - 2x3 = 20
5x1 - 8x2 + x3 = 10

where x1, x2, x3 are the unknowns. We form the coefficient matrix, call it A, by listing the coefficients of the unknowns in the position in which they appear in the linear equations. In the second equation, there is no unknown x2, which means that the coefficient of x2 is 0 and hence in matrix A, a22 = 0. Thus

A = [4 6 9; 6 0 -2; 5 -8 1].  We form another matrix  A~ = [4 6 9 6; 6 0 -2 20; 5 -8 1 10]

by augmenting A with the right sides of the linear system and call it the augmented matrix of the system. Since we can go back and recapture the system of linear equations directly from the augmented matrix A~, A~ contains all the information of the system and can thus be used to solve the linear system. This means that we can just use the augmented matrix to do the calculations needed to solve the system. We shall explain this in detail in Sec. 7.3. Meanwhile you may verify by substitution that the solution is x1 = 3, x2 = 1/2, x3 = -1. The notation x1, x2, x3 for the unknowns is practical but not essential; we could choose x, y, z or some other letters.
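For readers who want to verify the solution by machine rather than by substitution, a minimal NumPy sketch (ours, not the book's) solves the system of Example 1 directly:

import numpy as np

# Coefficient matrix and right-hand side of the system in Example 1.
A = np.array([[4, 6, 9],
              [6, 0, -2],
              [5, -8, 1]], dtype=float)
b = np.array([6, 20, 10], dtype=float)

print(np.linalg.solve(A, b))              # [ 3.   0.5 -1. ]
print(np.hstack([A, b.reshape(-1, 1)]))   # the augmented matrix A~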
EXAMPLE 2 Sales Figures in Matrix Form
Sales figures for three products I, II, III in a store on Monday (Mon), Tuesday (Tues), ... may for each week be arranged in a matrix whose columns are Mon, Tues, Wed, Thur, Fri, Sat, Sun and whose rows are products I, II, III:

A = [40 33 81 0 21 47 33; 0 12 78 50 50 96 90; 10 0 0 27 43 78 56].

If the company has 10 stores, we can set up 10 such matrices, one for each store. Then, by adding corresponding entries of these matrices, we can get a matrix showing the total sales of each product on each day. Can you think of other data which can be stored in matrix form? For instance, in transportation or storage problems? Or in listing distances in a network of roads?

General Concepts and Notations
Let us formalize what we have just discussed. We shall denote matrices by capital boldface letters A, B, C, ..., or by writing the general entry in brackets; thus A = [ajk], and so on. By an m x n matrix (read m by n matrix) we mean a matrix with m rows and n columns; rows always come first! m x n is called the size of the matrix. Thus an m x n matrix is of the form

(2)  A = [ajk] = [a11 a12 ... a1n; a21 a22 ... a2n; ... ; am1 am2 ... amn].

The matrices in (1) are of sizes 2 x 3, 3 x 3, 2 x 2, 1 x 3, and 2 x 1, respectively. Each entry in (2) has two subscripts. The first is the row number and the second is the column number. Thus a21 is the entry in Row 2 and Column 1. If m = n, we call A an n x n square matrix. Then its diagonal containing the entries a11, a22, ..., ann is called the main diagonal of A. Thus the main diagonals of the two square matrices in (1) are a11, a22, a33 and e^(-x), 4x, respectively. Square matrices are particularly important, as we shall see. A matrix of any size m x n is called a rectangular matrix; this includes square matrices as a special case.

Vectors
A vector is a matrix with only one row or column. Its entries are called the components of the vector. We shall denote vectors by lowercase boldface letters a, b, ... or by the general component in brackets, a = [aj], and so on. Our special vectors in (1) suggest that a (general) row vector is of the form a = [a1 a2 ... an]; for instance, a = [-2 5 0.8 0 1]. A column vector is of the form b = [b1; b2; ...; bm]; for instance, b = [4; 0; -7].

Addition and Scalar Multiplication of Matrices and Vectors
What makes matrices and vectors really useful and particularly suitable for computers is the fact that we can calculate with them almost as easily as with numbers. Indeed, we now introduce rules for addition and for scalar multiplication (multiplication by numbers) that were suggested by practical applications. (Multiplication of matrices by matrices follows in the next section.) We first need the concept of equality.

DEFINITION Equality of Matrices
Two matrices A = [ajk] and B = [bjk] are equal, written A = B, if and only if they have the same size and the corresponding entries are equal, that is, a11 = b11, a12 = b12, and so on. Matrices that are not equal are called different. Thus, matrices of different sizes are always different.

EXAMPLE 3 Equality of Matrices
Let A = [a11 a12; a21 a22] and B = [4 0; 3 -1]. Then A = B if and only if a11 = 4, a12 = 0, a21 = 3, a22 = -1. The following matrices are all different. Explain!

[1 3; 4 2],  [4 2; 1 3],  [4 1; 2 3],  [1 3 0; 4 2 0],  [0 1 3; 0 4 2]
DEFINITION Addition of Matrices
The sum of two matrices A = [ajk] and B = [bjk] of the same size is written A + B and has the entries ajk + bjk obtained by adding the corresponding entries of A and B. Matrices of different sizes cannot be added. As a special case, the sum a + b of two row vectors or two column vectors, which must have the same number of components, is obtained by adding the corresponding components.

EXAMPLE 4 Addition of Matrices and Vectors
If A = [-4 6 3; 0 1 2] and B = [5 -1 0; 3 1 0], then A + B = [1 5 3; 3 2 2]. A in Example 3 and our present A cannot be added. If a = [5 7 2] and b = [-6 2 0], then a + b = [-1 9 2]. An application of matrix addition was suggested in Example 2. Many others will follow.

DEFINITION Scalar Multiplication (Multiplication by a Number)
The product of any m x n matrix A = [ajk] and any scalar c (number c) is written cA and is the m x n matrix cA = [cajk] obtained by multiplying each entry of A by c. Here (-1)A is simply written -A and is called the negative of A. Similarly, (-k)A is written -kA. Also, A + (-B) is written A - B and is called the difference of A and B (which must have the same size!).

EXAMPLE 5 Scalar Multiplication
If A = [2.7 -1.8; 0 0.9; 9.0 -4.5], then -A = [-2.7 1.8; 0 -0.9; -9.0 4.5], (10/9)A = [3 -2; 0 1; 10 -5], and 0A = [0 0; 0 0; 0 0]. If a matrix B shows the distances between some cities in miles, 1.609B gives these distances in kilometers.

Rules for Matrix Addition and Scalar Multiplication. From the familiar laws for the addition of numbers we obtain similar laws for the addition of matrices of the same size m x n, namely,

(3)  (a) A + B = B + A;  (b) (A + B) + C = A + (B + C) (written A + B + C);  (c) A + 0 = A;  (d) A + (-A) = 0.

Here 0 denotes the zero matrix (of size m x n), that is, the m x n matrix with all entries zero. If m = 1 or n = 1, this is a vector, called a zero vector. Hence matrix addition is commutative and associative [by (3a) and (3b)]. Similarly, for scalar multiplication we obtain the rules

(4)  (a) c(A + B) = cA + cB;  (b) (c + k)A = cA + kA;  (c) c(kA) = (ck)A (written ckA);  (d) 1A = A.

PROBLEM SET 7.1

1-7 GENERAL QUESTIONS
1. Equality. Give reasons why the five matrices in Example 3 are all different.
2. Double subscript notation. If you write the matrix in Example 2 in the form A = [ajk], what is a31? a13? a26? a33?
3. Sizes. What sizes do the matrices in Examples 1, 2, 3, and 5 have?
4. Main diagonal. What is the main diagonal of A in Example 1? Of A and B in Example 3?
5. Scalar multiplication. If A in Example 2 shows the number of items sold, what is the matrix B of units sold if a unit consists of (a) 5 items and (b) 10 items?
6. If a 12 x 12 matrix A shows the distances between 12 cities in kilometers, how can you obtain from A the matrix B showing these distances in miles?
7. Addition of vectors. Can you add: A row and a column vector with different numbers of components? With the same number of components? Two row vectors with the same number of components but different numbers of zeros? A vector and a scalar? A vector with four components and a 2 x 2 matrix?

8-16 ADDITION AND SCALAR MULTIPLICATION OF MATRICES AND VECTORS
Let
A = [0 2 4; 6 5 5; 1 0 -3],  B = [0 5 2; 5 3 4; -2 4 -2],
C = [5 2; -2 4; 1 0],  D = [-4 1; 5 0; 2 -1],  E = [0 2; 3 4; 3 -1],
u = [1.5; 0; -3.0],  v = [-1; 3; 2],  w = [-5; -30; 10].
Find the following expressions, indicating which of the rules in (3) or (4) they illustrate, or give reasons why they are not defined.
8. 2A + 4B, 4B + 2A, 0A + B, 0.4B - 4.2A
9. 3A, 0.5B, 3A + 0.5B, 3A + 0.5B + C
10. (4 - 3)A, 4(3A), 14B - 3B, 11B
11. 8C + 10D, 2(5D + 4C), 0.6C - 0.6D, 0.6(C - D)
12. (C + D) + E, (D + E) + C, 0(C - E) + 4D, A - 0C
13. (2 - 7)C, 2(7C), -D + 0E, E - D + C + u
14. (5u + 5v) - (1/2)w, -20(u + v) + 2w, E - (u + v), 10(u + v) + w
15. (u + v) - w, u + (v - w), C + 0w, 0E + u - v
16. 15v - 3w - 0u, -3w + 15v, D - u + 3C, 8.5w - 11.1u + 0.4v
17. Resultant of forces. If the above vectors u, v, w represent forces in space, their sum is called their resultant. Calculate it.
18. Equilibrium. By definition, forces are in equilibrium if their resultant is the zero vector. Find a force p such that the above u, v, w, and p are in equilibrium.
19. General rules. Prove (3) and (4) for general 2 x 3 matrices and scalars c and k.
20. TEAM PROJECT. Matrices for Networks. Matrices have various engineering applications, as we shall see. For instance, they can be used to characterize connections in electrical networks, in nets of roads, in production processes, etc., as follows.
(a) Nodal Incidence Matrix. The network in Fig. 155 consists of six branches (connections) and four nodes (points where two or more branches come together). One node is the reference node (grounded node, whose voltage is zero). We number the other nodes and number and direct the branches. This we do arbitrarily. The network can now be described by a matrix A = [ajk], where
ajk = +1 if branch k leaves node j; -1 if branch k enters node j; 0 if branch k does not touch node j.
A is called the nodal incidence matrix of the network. Show that for the network in Fig. 155 the matrix A has the given form:

Branch:  1  2  3  4  5  6
Node 1 | 1 -1 -1  0  0  0
Node 2 | 0  1  0  1  1  0
Node 3 | 0  0  1  0 -1 -1

(Fig. 155. Network and nodal incidence matrix in Team Project 20(a).)
(b) Find the nodal incidence matrices of the networks in Fig. 156. (Fig. 156. Electrical networks in Team Project 20(b).)
(c) Sketch the three networks corresponding to the nodal incidence matrices
[1 0 0 1; -1 1 0 0; 0 -1 1 0],  [1 -1 0 0 1; -1 1 -1 1 0; 0 0 1 -1 0],  [1 0 1 0 0; -1 1 0 1 0; 0 -1 -1 0 1].
(d) Mesh Incidence Matrix. A network can also be characterized by the mesh incidence matrix M = [mjk], where
mjk = +1 if branch k is in mesh j and has the same orientation; -1 if branch k is in mesh j and has the opposite orientation; 0 if branch k is not in mesh j,
and a mesh is a loop with no branch in its interior (or in its exterior). Here, the meshes are numbered and directed (oriented) in an arbitrary fashion. Show that for the network in Fig. 157, the matrix M has the given form, where Row 1 corresponds to mesh 1, etc.:

M = [1 0 -1 0 0 0; 0 0 0 1 -1 1; 0 -1 1 0 1 0; 1 0 1 0 0 1]

(Fig. 157. Network and matrix M in Team Project 20(d).)

7.2 Matrix Multiplication
Matrix multiplication means that one multiplies matrices by matrices. Its definition is standard but it looks artificial. Thus you have to study matrix multiplication carefully and multiply a few matrices together for practice until you understand how to do it. Here then is the definition. (Motivation follows later.)

DEFINITION Multiplication of a Matrix by a Matrix
The product C = AB (in this order) of an m x n matrix A = [ajk] times an r x p matrix B = [bjk] is defined if and only if r = n and is then the m x p matrix C = [cjk] with entries

(1)  cjk = sum over l = 1, ..., n of ajl*blk = aj1*b1k + aj2*b2k + ... + ajn*bnk,  for j = 1, ..., m and k = 1, ..., p.
The condition r = n means that the second factor, B, must have as many rows as the first factor has columns, namely n. A diagram of sizes that shows when matrix multiplication is possible is as follows:

A [m x n] times B [n x p] = C [m x p].

The entry cjk in (1) is obtained by multiplying each entry in the jth row of A by the corresponding entry in the kth column of B and then adding these n products. For instance, c21 = a21*b11 + a22*b21 + ... + a2n*bn1, and so on. One calls this briefly a multiplication of rows into columns. Matrix multiplication will be motivated by its use in linear transformations in this section and more fully in Sec. 7.9. Let us illustrate the main points of matrix multiplication by some examples. Note that matrix multiplication also includes multiplying a matrix by a vector, since, after all, a vector is a special matrix.

EXAMPLE 1 Matrix Multiplication
AB = [3 5 -1; 4 0 2; -6 -3 2] * [2 -2 3 1; 5 0 7 8; 9 -4 1 1] = [22 -2 43 42; 26 -16 14 6; -9 4 -37 -28].
Here c11 = 3*2 + 5*5 + (-1)*9 = 22, and so on. The entry highlighted in the original is c23 = 4*3 + 0*7 + 2*1 = 14. The product BA is not defined.

EXAMPLE 2 Multiplication of a Matrix and a Vector
[4 2; 1 8] * [3; 5] = [4*3 + 2*5; 1*3 + 8*5] = [22; 43], whereas [3; 5] * [4 2; 1 8] is undefined.

EXAMPLE 3 Products of Row and Column Vectors
[3 6 1] * [1; 2; 4] = [19];  [1; 2; 4] * [3 6 1] = [3 6 1; 6 12 2; 12 24 4].

EXAMPLE 4 CAUTION! Matrix Multiplication Is Not Commutative, AB is not BA in General
This is illustrated by Examples 1 and 2, where one of the two products is not even defined, and by Example 3, where the two products have different sizes. But it also holds for square matrices. For instance,

[1 1; 100 100] * [-1 1; 1 -1] = [0 0; 0 0], but [-1 1; 1 -1] * [1 1; 100 100] = [99 99; -99 -99].
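Formula (1), "rows into columns", translates directly into code. Here is a plain Python/NumPy sketch (our own helper; NumPy's built-in A @ B computes the same thing), reproducing Example 1:

import numpy as np

def matmul(A, B):
    """C = AB by multiplication of rows into columns, formula (1)."""
    m, n = A.shape
    r, p = B.shape
    assert n == r, "AB is defined only when A has as many columns as B has rows"
    C = np.zeros((m, p))
    for j in range(m):
        for k in range(p):
            C[j, k] = sum(A[j, l] * B[l, k] for l in range(n))
    return C

A = np.array([[3, 5, -1], [4, 0, 2], [-6, -3, 2]])
B = np.array([[2, -2, 3, 1], [5, 0, 7, 8], [9, -4, 1, 1]])
print(matmul(A, B))   # matches Example 1; the entry c23 is 14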
