Linear Algebra PDF Notes
Summary
These notes cover vector spaces, bases, dimension, and concepts related to determinants, including calculations for determinants of 2x2 and 3x3 matrices. The notes include theorems, definitions, and examples.
# 4.5 The dimension of a vector space

## Theorem 10
If a vector space *V* has a basis *B = {b<sub>1</sub>,..., b<sub>n</sub>}* (with *n* vectors), then any set in *V* containing more than *n* vectors is linearly dependent. (We already know this for *R<sup>n</sup>*, but now we are talking about a general vector space.)

## Proof
Let us reduce the problem to a problem in *R<sup>n</sup>*, where we know the answer. Suppose *{u<sub>1</sub>,..., u<sub>m</sub>}* is the set of "vectors" in question, with *m > n*. Each vector *u<sub>i</sub>* has a coordinate vector *(u<sub>i</sub>)<sub>B</sub>* in the basis *B*. The number of coordinates is *n*, so the coordinate vectors *(u<sub>i</sub>)<sub>B</sub> = (u<sub>i</sub><sup>1</sup>, u<sub>i</sub><sup>2</sup>,..., u<sub>i</sub><sup>n</sup>)* all lie in *R<sup>n</sup>*. But in *R<sup>n</sup>* any set of *m* vectors (with *m > n*) is linearly dependent. So there exist scalars *c<sub>1</sub>,..., c<sub>m</sub>*, not all zero, such that:

*c<sub>1</sub>(u<sub>1</sub>)<sub>B</sub> + c<sub>2</sub>(u<sub>2</sub>)<sub>B</sub> +... + c<sub>m</sub>(u<sub>m</sub>)<sub>B</sub> = 0*

Using linearity of the coordinate transformation: *(c<sub>1</sub>u<sub>1</sub> + c<sub>2</sub>u<sub>2</sub> +... + c<sub>m</sub>u<sub>m</sub>)<sub>B</sub> = 0*. Only the zero vector has all coordinates equal to zero, so the vector in parentheses is zero: *c<sub>1</sub>u<sub>1</sub> + c<sub>2</sub>u<sub>2</sub> +... + c<sub>m</sub>u<sub>m</sub> = 0* with *c<sub>1</sub>,..., c<sub>m</sub>* not all zero *⇒* *{u<sub>1</sub>,..., u<sub>m</sub>}* is linearly dependent.

## Theorem 11
If a vector space *V* has a basis consisting of *n* "vectors", then every other basis of *V* must have exactly *n* "vectors".

## Proof
Suppose *B<sub>1</sub>* is a basis with *n* "vectors", and *B<sub>2</sub>* is any other basis. By Theorem 10 (applied to the basis *B<sub>1</sub>*), *B<sub>2</sub>* cannot have more than *n* "vectors". On the other hand, applying Theorem 10 to the basis *B<sub>2</sub>* shows that *B<sub>1</sub>* cannot have more "vectors" than *B<sub>2</sub>*, so *B<sub>2</sub>* cannot have fewer than *n* "vectors" either. So *B<sub>2</sub>* has exactly the same number of "vectors" - *n*.

## Def
If a vector space *V* is spanned by a finite number of "vectors", then *V* is called *finite-dimensional*. In this case *dim V* (i.e. the dimension of *V*) is the number of vectors in some basis (the basis can be obtained by reducing, if necessary, the spanning set).

## Def
If a vector space *V* cannot be spanned by a finite number of vectors, such a space is called *infinite-dimensional*.

## Example 1
* The space *R<sup>n</sup>* is spanned by the standard basis *e<sub>1</sub>, e<sub>2</sub>,..., e<sub>n</sub>*, where *e<sub>1</sub> = (1, 0,..., 0)*, *e<sub>2</sub> = (0, 1,..., 0)*, etc. There are *n* vectors in this basis, so *dim R<sup>n</sup> = n*.
* The space *P<sub>2</sub>* of polynomials of degree ≤ 2 is spanned by the basis {1, *t*, *t<sup>2</sup>*}. Therefore, *dim P<sub>2</sub> = 3*.
* Similarly, *dim P<sub>n</sub> = n + 1*.
* The space *P* of polynomials of arbitrary degree is infinite-dimensional.

## Example 2
Let *H = span{v<sub>1</sub>, v<sub>2</sub>}* with *v<sub>1</sub> = (1, 3)<sup>T</sup>*, *v<sub>2</sub> = (-1, 1)<sup>T</sup>*. Since *{v<sub>1</sub>, v<sub>2</sub>}* is a linearly independent set, and it spans *H*, this set is a basis for *H*. Therefore *dim H = 2*.
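This dimension is easy to confirm numerically: the dimension of a span equals the rank of the matrix whose columns are the spanning vectors. Below is a minimal NumPy sketch (an illustration, not part of the notes) for Example 2:

```python
# Dimension of H = span{v1, v2} as the rank of the matrix [v1 v2].
import numpy as np

v1 = np.array([1, 3])
v2 = np.array([-1, 1])

A = np.column_stack([v1, v2])        # columns are the spanning vectors
print(np.linalg.matrix_rank(A))      # 2: {v1, v2} is independent, so dim H = 2
```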
## Example 3
Suppose *H = span{v<sub>1</sub>, v<sub>2</sub>, v<sub>3</sub>}*, where *v<sub>1</sub> = (1, 0)<sup>T</sup>*, *v<sub>2</sub> = (0, 1)<sup>T</sup>* and *v<sub>3</sub> = (1, -1)<sup>T</sup>*. What is *dim H*? Is the set *{v<sub>1</sub>, v<sub>2</sub>, v<sub>3</sub>}* linearly independent?

Note that *v<sub>3</sub> = v<sub>1</sub> - v<sub>2</sub>*, so the set is *not* linearly independent: *v<sub>3</sub>* is a combination of *v<sub>1</sub>* and *v<sub>2</sub>* and can be removed from the spanning set, yet *H = span{v<sub>1</sub>, v<sub>2</sub>}* nevertheless. The set *{v<sub>1</sub>, v<sub>2</sub>}* is an independent spanning set → a basis → *dim H = 2*.

## Example 4
Subspaces of *R<sup>3</sup>*:
* *dim = 0*: {0}
* *dim = 1*: all lines through 0
* *dim = 2*: all planes through 0
* *dim = 3*: *R<sup>3</sup>* itself

## Theorem 12
Let *H* be a subspace of a finite-dimensional vector space *V*. Then any linearly independent set of "vectors" in *H* can be expanded (if it is not a basis yet) to a basis of *H*. Also, *dim H ≤ dim V*.

## Proof
Let *S* be a linearly independent set in *H*, *S = {u<sub>1</sub>,..., u<sub>k</sub>}*. If *S* spans *H*, then *S* is a basis for *H*. If *S* does not span *H*, there is a vector *u<sub>k+1</sub>* in *H* such that *u<sub>k+1</sub>* cannot be expressed through *u<sub>1</sub>,..., u<sub>k</sub>*. Then the set *S<sub>1</sub> = {u<sub>1</sub>,..., u<sub>k</sub>, u<sub>k+1</sub>}* is linearly independent. If *S<sub>1</sub>* spans *H*, then stop: *S<sub>1</sub>* is a basis. Otherwise, find a vector *u<sub>k+2</sub>* in *H* that is not in *span S<sub>1</sub>*, and add it, making *S<sub>2</sub> = {u<sub>1</sub>,..., u<sub>k</sub>, u<sub>k+1</sub>, u<sub>k+2</sub>}*. And so on. This process has to stop, since by Theorem 10 the number of vectors in a linearly independent set cannot exceed *dim V*. So one of the sets *S<sub>j</sub>* will span *H*, and the number of vectors in it will not exceed *dim V*. Thus, *dim H ≤ dim V*.

## Theorem 13 (The basis theorem)
Let *V* be an *n*-dimensional vector space. Then any linearly independent set of *n* "vectors" is a basis of *V*. And any set of *n* "vectors" that spans *V* is automatically linearly independent.

## Proof
* **a)** Suppose *S* is a linearly independent set of *n* "vectors". By Theorem 12, it can be extended to a basis of *V*. But it cannot have any more "vectors" without becoming dependent; therefore, *S* is already a basis of *V*.
* **b)** Suppose *S* has *n* "vectors" and spans *V*. Assume *S* is dependent. Then a vector can be removed from *S*, and the new set will still span *V*, with *n* - 1 vectors. One can continue until the new *S* is independent and still spans *V*. But then it would be a basis of *V* with fewer than *n* vectors. This cannot be, since *dim V = n*. Therefore the assumption is false: the set *S* is independent and it spans *V*, so *S* is a basis for *V*.

## Def
The *rank* of an (*m × n*) matrix *A* is the dimension of the column space *Col A*, and the *nullity* of *A* is the dimension of the null space *Nul A*.

## Theorem 14 (The rank theorem)
The dimension of the column space and the dimension of the null space of a matrix *A* satisfy the equation

*rank A + nullity A = number of columns of A*.

## Proof
*dim Col A* equals the number of pivot columns of *A*, and *dim Nul A* equals the number of free variables, i.e. the number of non-pivot columns. Since

*number of pivot columns + number of non-pivot columns = number of columns*,

we get *dim Col A + dim Nul A = number of columns*, i.e. *rank A + nullity A = number of columns*.

## Example 6
* **a)** If *A* is an (*m × 9*) matrix and *nullity A = 2*, what is *rank A*?
* **b)** Can a (*6 × 9*) matrix have *nullity* equal to 2?

## Solutions
* **a)** *rank A + nullity A = number of columns*, so *rank A + 2 = 9* ⇒ *rank A = 7*.
* **b)** *rank A + nullity A = number of columns* would give *rank A + 2 = 9*. But there can be no more than 6 linearly independent columns, so *rank A ≤ 6*, and therefore *nullity A = 9 - rank A ≥ 9 - 6 = 3*. So a (*6 × 9*) matrix cannot have nullity 2.
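The rank theorem is easy to check numerically. Here is a minimal sketch (an illustration, not part of the notes), using NumPy for the rank and SciPy's `null_space` for a basis of *Nul A*, on a generic (6 × 9) matrix:

```python
# Numerical check of the rank theorem for a 6 x 9 matrix.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 9))      # a generic 6 x 9 matrix

rank = np.linalg.matrix_rank(A)      # dim Col A
nullity = null_space(A).shape[1]     # dim Nul A = number of null-space basis vectors

print(rank, nullity, A.shape[1])     # 6 3 9 -- rank + nullity = number of columns
# As in Example 6b: rank <= 6, so the nullity of a 6 x 9 matrix is at least 3.
```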
## Theorem (Addition to Theorem 8, Section 2.3)
The following statements about an (*n × n*) matrix *A* are equivalent:
* **a)** *A* is invertible
* **b)** The columns of *A* form a basis for *R<sup>n</sup>*
* **c)** *Col A = R<sup>n</sup>*
* **d)** *rank A = n*
* **e)** *nullity A = 0* (i.e. there is only the trivial solution to *Ax = 0*)
* **f)** *Nul A = {0}* (this is really the same as *e*)

## Chapter 3

### 3.1 Introduction to Determinants
We have already defined the determinant of a (*2 × 2*) matrix. Suppose

*A = ( a  b ; c  d )*   (rows separated by semicolons).

Then, if the quantity *ad - bc* ≠ 0, the inverse exists; this quantity is called *det(A)*:

*A<sup>-1</sup> = ( d  -b ; -c  a ) / det(A)*

If *det(A) = ad - bc = 0*, the matrix is not invertible.

Let us start with the geometrical meaning of *det(A)*. Suppose *x = (a, c)<sup>T</sup>* and *y = (b, d)<sup>T</sup>* ← these are the *columns* of the matrix *A*. What is the area of the parallelogram spanned by *x* and *y*?

[Insert image of the parallelogram spanned by x and y]

Let's put some angles in the figure: let *θ* be the angle between *x* and *y*. Also, I would like to have a vector *x̃* perpendicular to *x* and of the same length, so *|x̃| = |x|*. Such an *x̃* is easy to find: *x̃ = (-c, a)<sup>T</sup>*.

The area of the parallelogram is *H|x|*, where *H* is its height. But *H = |y| sin θ*, so *Area = |x||y| sin θ*.

We would like to use the *dot product* of two vectors, for example *b · c = |b||c| cos ψ*, where *ψ* is the angle between *b* and *c*. But in our formula we have *sin θ*, while the dot product involves a *cosine*. Note that *θ + α = 90° = π/2*, where *α* is the angle between *x̃* and *y*. But then *sin θ = sin(π/2 - α) = cos α*, so that

*x̃ · y = |x̃||y| cos α = |x||y| sin θ = Area*.

So, *Area = x̃ · y = (-c, a)<sup>T</sup> · (b, d)<sup>T</sup> = ad - bc = det(A)*.

So, in our example, *det(A)* represents the area of the parallelogram. One can notice, though, that if we interchange the columns *(a, c)<sup>T</sup>* and *(b, d)<sup>T</sup>*, the same formula gives *bc - ad = -det(A)*. In fact:

## Theorem
If *A* is a (*2 × 2*) matrix, then the absolute value of *det(A)* represents the area of the parallelogram spanned by the columns of *A*. In particular, if the columns are parallel, the parallelogram collapses into a line segment, the area = 0, the columns are linearly dependent, and the matrix is not invertible.

We would like to introduce the determinant of a (*3 × 3*) matrix. The derivation is too complicated, so here is the definition. Suppose

*A = ( a<sub>11</sub> a<sub>12</sub> a<sub>13</sub> ; a<sub>21</sub> a<sub>22</sub> a<sub>23</sub> ; a<sub>31</sub> a<sub>32</sub> a<sub>33</sub> )*

## Define (*2 × 2*) matrices
* *A<sub>11</sub> = ( a<sub>22</sub> a<sub>23</sub> ; a<sub>32</sub> a<sub>33</sub> )* ← obtained by removing column #1 and row #1 from *A*
* *A<sub>12</sub> = ( a<sub>21</sub> a<sub>23</sub> ; a<sub>31</sub> a<sub>33</sub> )* ← obtained by removing column #2 and row #1 from *A*
* *A<sub>13</sub> = ( a<sub>21</sub> a<sub>22</sub> ; a<sub>31</sub> a<sub>32</sub> )* ← obtained by removing column #3 and row #1 from *A*

## Define
* *det(A) = a<sub>11</sub>det(A<sub>11</sub>) - a<sub>12</sub>det(A<sub>12</sub>) + a<sub>13</sub>det(A<sub>13</sub>)*

## Example
*A = ( 1 2 3 ; 4 5 6 ; 7 8 9 )*. Calculate *det(A)*.

*det(A) = 1 · det( 5 6 ; 8 9 ) - 2 · det( 4 6 ; 7 9 ) + 3 · det( 4 5 ; 7 8 )*
*= 1(5 · 9 - 6 · 8) - 2(4 · 9 - 7 · 6) + 3(4 · 8 - 5 · 7)*
*= 1(45 - 48) - 2(36 - 42) + 3(32 - 35) = -3 + 12 - 9 = 0*

## Theorem
The absolute value of the determinant of a (*3 × 3*) matrix *A* represents the volume of the parallelepiped built on the columns of *A*.

[Insert an image of the parallelepiped]
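Both facts above can be verified numerically. Here is a minimal NumPy sketch (an illustration, not part of the notes): it checks that the perpendicular-vector dot product recovers *ad - bc* for a sample (2 × 2) matrix, and that the (3 × 3) example has determinant 0 because its columns are linearly dependent:

```python
# Checks of the determinant facts above:
# 1) for a 2x2 matrix, the perpendicular-vector trick gives area = det(A);
# 2) the 3x3 example matrix has determinant 0, so its columns are dependent
#    and the parallelepiped they span has zero volume.
import numpy as np

a, b, c, d = 2.0, 1.0, 1.0, 3.0
x = np.array([a, c])                 # first column of A
y = np.array([b, d])                 # second column of A
x_perp = np.array([-c, a])           # x rotated by 90 degrees, same length
print(x_perp @ y, a * d - b * c)     # both equal 5.0 = det(A) = area

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)
print(np.linalg.det(A))              # ~0.0: columns are linearly dependent
```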
## Determinant of an (*n × n*) matrix
Suppose *A<sub>11</sub>* is matrix *A* without its first column and first row, *A<sub>12</sub>* is matrix *A* without its second column and first row, *A<sub>13</sub>* is matrix *A* without its third column and first row, and so on.

## Definition
Then:
* *det(A) = a<sub>11</sub>det(A<sub>11</sub>) - a<sub>12</sub>det(A<sub>12</sub>) + a<sub>13</sub>det(A<sub>13</sub>) - ... + (-1)<sup>1+n</sup>a<sub>1n</sub>det(A<sub>1n</sub>)*

Note that this definition is *recursive*. We define the determinant of a (*3 × 3*) matrix through determinants of (*2 × 2*) matrices. Then, we define the determinant of a (*4 × 4*) matrix through determinants of (*3 × 3*) matrices. Then, we can define the determinant of a (*5 × 5*) matrix, and so on...

## Example
Calculate *det(A)* for *A = ( 1 2 3 ; 4 5 6 ; 7 8 0 )*, using expansion with respect to the first row.

*det(A) = 1 · det( 5 6 ; 8 0 ) - 2 · det( 4 6 ; 7 0 ) + 3 · det( 4 5 ; 7 8 )*
*= 1(0 - 48) - 2(0 - 42) + 3(32 - 35) = -48 + 84 - 9 = 27*

## Example
Calculate *det(A)* for the same matrix using expansion with respect to the second column.

*det(A) = -2 · det( 4 6 ; 7 0 ) + 5 · det( 1 3 ; 7 0 ) - 8 · det( 1 3 ; 4 6 )*
*= -2(0 - 42) + 5(0 - 21) - 8(6 - 12) = 84 - 105 + 48 = 27*

(Same result, as it should be.)
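To make the recursive definition concrete, here is a short sketch (an illustration, not part of the notes) of the expansion along the first row, checked against `numpy.linalg.det` on the example matrix:

```python
# Recursive determinant via cofactor expansion along the first row,
# mirroring the recursive definition above.
import numpy as np

def det_expansion(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        # A_1j: remove row 1 and column j+1 from A
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_expansion(minor)
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 0]]
print(det_expansion(A))                   # 27.0
print(np.linalg.det(np.array(A, float)))  # ~27.0, agrees with the expansion
```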