FY BSc Algebra NEP2-24-25 Theory PDF
Document Details
Fergusson College (Autonomous), Pune
Manish Agalave
Summary
These notes cover fundamental concepts in set theory, relations, and functions, including set operations, equivalence relations, and function types. They are geared towards first-year undergraduate students (FY BSc) at Fergusson College, and include examples and proofs.
Full Transcript
Fergusson College (Autonomous), Pune
Department of Mathematics
F. Y. B. Sc.: MTS 1001: Algebra Notes
Manish Agalave
Date: August 20, 2024

1 Sets, Relations and Functions

Sets, operations on sets, power set, Cartesian product of sets, graphical representation of sets. Relations, types of relations, equivalence relations, partition of a set and equivalence classes. Functions, composition of functions, types of functions (one-one, onto, bijective).

1.1 Sets

1. A set is a collection of objects, known as its elements or members.
2. Empty set: The set that has no element is called the empty set (or null set).
3. A set with a single element is called a singleton set.
4. Since a set is determined by its elements, one way to write a set is to list all its elements and enclose them within curly braces. This is called the roster method of writing a set, and the list is known as a roster.
5. The most commonly used way of writing sets is by describing a property that is satisfied by its elements and by no others. This form of a set is usually called set-builder form.
6. A set A is said to be nonempty if A has at least one element.
7. Subsets: Suppose A, B are two sets. We say that A is a subset of B if every element of A is also an element of B.
8. If A is a subset of B, then B is called a superset of A and we write B ⊃ A.
9. Equality of sets: Two sets A and B are said to be equal if they have the same elements.
10. If A ⊂ B and A ≠ B, then we say that A is a proper subset of B.

1.1.1 Operations on Sets

1. Let A, B be two sets. The union of A and B is the set defined by A ∪ B = {x : x ∈ A or x ∈ B}. That is, x ∈ A ∪ B if and only if x ∈ A or x ∈ B.
2. Prove or disprove the following for sets A, B and C: (a) A ∪ B = B ∪ A (b) (A ∪ B) ∪ C = A ∪ (B ∪ C)
3. Let A, B be two sets. The intersection of A and B is the set defined by A ∩ B = {x : x ∈ A and x ∈ B}. That is, x ∈ A ∩ B if and only if x ∈ A and x ∈ B.
4.
Prove or disprove the following for sets A, B and C: (a) A ∩ B = B ∩ A (b) (A ∩ B) ∩ C = A ∩ (B ∩ C)
5. Theorem: The intersection of sets distributes over the union of sets. In other words, for any sets A, B and C, A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C).
6. Theorem: The union of sets distributes over the intersection of sets. In other words, for any sets A, B and C, A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
7. Two sets A and B are said to be disjoint if A ∩ B = ∅, that is, if A and B do not have any element in common.
8. For sets A and B, the set difference of B from A is defined to be the set A − B = A\B = {x ∈ A : x ∉ B}.
9. For sets A and B, the symmetric difference of A and B is defined to be the set AΔB = (A\B) ∪ (B\A).
10. Prove or disprove the following for sets A, B and C: (a) AΔB = (A ∪ B)\(A ∩ B) (b) AΔB = BΔA (c) (AΔB)ΔC = AΔ(BΔC)
11. Whenever we consider a set, its elements are chosen from some set which we call the universe or the universal set. In general, the universal set is the totality of elements under consideration.
12. If A is a part of the universe U, that is, if A ⊂ U, then the complement of A (in U), written Aᶜ, is defined to be Aᶜ = U\A = {x ∈ U : x ∉ A}.
13. Theorem (De Morgan's Laws): Let A and B be subsets of a universal set U. Then (a) (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ (b) (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ
14. Let A, B be sets. Show that the following are equivalent: (a) A ⊆ B; (b) A ∪ B = B; (c) A ∩ B = A; (d) Bᶜ ⊆ Aᶜ.
15. Family of sets: Many a time in mathematics we deal with sets whose elements are themselves sets.
16. The sets A₁, A₂, ..., A₃₀ are labeled, or indexed, by the numbers from the set {1, 2, ..., 30}.
17. Suppose we consider intervals of the form Iₙ = [n, n + 1] for natural numbers n. Then the sets are labeled, or indexed, by the set of natural numbers.
18. We may consider intervals of the form Fᵣ = [r, r + 1] for real numbers r. The sets are labeled, or indexed, by the real numbers.
19.
Union of a family of sets: Let {Fα : α ∈ Γ} be a family of sets indexed by a nonempty set Γ, and assume that each set in the family is a subset of some universal set U. We define the union of the family by ∪α∈Γ Fα = {x ∈ U : ∃α ∈ Γ such that x ∈ Fα}.
20. Intersection of a family of sets: Let {Fα : α ∈ Γ} be a family of sets indexed by a nonempty set Γ, and assume that each set in the family is a subset of some universal set U. We define the intersection of the family by ∩α∈Γ Fα = {x ∈ U : ∀α ∈ Γ we have x ∈ Fα}.
21. Disjoint and pairwise disjoint families of sets: Let {Fα : α ∈ Γ} be a family of sets indexed by a nonempty set Γ, each a subset of some universal set U. The family is said to be disjoint if ∩α∈Γ Fα = ∅. The family is said to be pairwise disjoint if for every pair of elements α, β ∈ Γ with α ≠ β, we have Fα ∩ Fβ = ∅.
22. Archimedean Property: Given any real number x, there exists a natural number n such that n > x. In other words, the set N is not bounded above in R.
23. Pairwise disjoint family of sets.

1.1.2 Power Set

For a set S, the power set of S is defined to be the family of all subsets of S, and is denoted by 𝒫(S).

1.1.3 Cartesian Product

The Cartesian product of two sets A and B, denoted by A × B, is defined as the set {(a, b) : a ∈ A, b ∈ B}. Here (a, b) is called an ordered pair.

1.2 Relations

1. Relation: A relation R from a nonempty set A to a nonempty set B is a subset of A × B.
2. At times, a relation between X and Y is also called a binary relation between X and Y.
3. Relation on a set: A relation R on a nonempty set S is a subset of S × S.

1.2.1 Types of Relations

1. Reflexive relation: A relation R on a nonempty set S is called reflexive if for every x ∈ S, (x, x) ∈ R, i.e. ∀x ∈ S, xRx.
2. Symmetric relation: A relation R on a nonempty set S is called symmetric if (x, y) ∈ R implies (y, x) ∈ R, i.e. if xRy then yRx.
3.
Anti-symmetric relation: A relation R on a nonempty set S is called anti-symmetric if (x, y) ∈ R and (y, x) ∈ R imply x = y, i.e. if xRy and yRx then x = y.
4. Transitive relation: A relation R on a nonempty set S is called transitive if (x, y) ∈ R and (y, z) ∈ R imply (x, z) ∈ R, i.e. if xRy and yRz, then xRz.
5. Less than.
6. The relation ≤ is called the standard order on R.
7. The equality or identity relation on X.
8. The universal relation on X.
9. The lexicographic or dictionary relation on X × Y.
10. The relation induced by f.

1.2.2 Equivalence Relations, Equivalence Classes and Partitions

1. Equivalence relation: A relation R on a nonempty set S is called an equivalence relation if R is reflexive, symmetric and transitive.
2. Equivalence class: Let R be an equivalence relation on a nonempty set S and a ∈ S. The equivalence class of a is the set ā = [a] = Cl(a) = El(a) = {x ∈ S | aRx}.
3. Lemma: Let X be a nonempty set and ≡ an equivalence relation on X. If y ∈ [x], then [x] = [y].
4. Theorem: Let X be a nonempty set and ≡ an equivalence relation on X. Let x, y ∈ X. Then exactly one of the following is true: (a) [x] ∩ [y] = ∅; (b) [x] = [y].
5. Definition: A partition of a nonempty set X is a pairwise disjoint collection of nonempty subsets of X whose union is X.

1.3 Functions

1. Let X and Y be nonempty sets. A function (or map, or mapping) f from X to Y is a rule (or correspondence, association) that assigns to each element of X a unique element of Y.
2. Equivalently, a function f from a set A to a set B is a relation from A to B such that each a ∈ A belongs to precisely one pair (a, b) ∈ f.
3. Note that for a correspondence or association from X to Y to be a function, two things are important: (i) each element of X must be associated to an element of Y, and (ii) no element of X should be associated to more than one element of Y.
4. If a function f is from A to B, we write f : A → B.
5.
If f is a function from X to Y, then X is called the domain of f and Y is called the codomain of f.
6. If x ∈ X is associated to the element y in Y, we say that y is the image of x under f and write y = f(x). If y = f(x), then x is called a preimage of y.
7. The range of f : X → Y is defined to be the subset R(f) = {y ∈ Y | ∃x ∈ X such that y = f(x)}.
8. Instead of (x, y) ∈ f, we usually write y = f(x).
9. A function on X means a function f : X → X.
10. Constant function.
11. Zero function.
12. Identity function.
13. The inclusion function (also called embedding) of X in Y.
14. Polynomial function.
15.
16. (Equality of functions): Let X, Y, Z, W be nonempty sets, and f : X → Y and g : Z → W be two functions. We say that f is equal to g, and write f = g, if (i) X = Z and Y = W, that is, the domains and codomains of f and g are equal, and (ii) for each x ∈ X = Z, we have f(x) = g(x).
17. Definition (Graph of a function): Let f : X → Y be a function. Then the graph G(f) of f is the subset of X × Y defined by G(f) = {(x, y) ∈ X × Y : y = f(x)}.
18. Suppose we are given a subset S of R². If every vertical line in R² intersects S at most once, then S is indeed the graph of a function. This is called the vertical line test for functions.
19. Definition: Let X be a nonempty set and f, g : X → R. The maximum max{f, g} and the minimum min{f, g} of f and g are functions from X to R defined as follows: max{f, g}(x) = max{f(x), g(x)} and min{f, g}(x) = min{f(x), g(x)}.
20. Let f : A → B be a function. f is called a one-one function if f(a) = f(b) implies a = b (a, b ∈ A).
21. Equivalently, f : A → B is one-one if a ≠ b implies f(a) ≠ f(b).
22. A one-one function is also called an injective function or an injection.
23. Let f : A → B be a function. f is called an onto function if the range of f is equal to B.
24.
In other words, a function f : A → B is said to be an onto function if for every b ∈ B there exists an a ∈ A such that b = f(a).
25. An onto function is also called a surjective function or a surjection.
26. Let f : A → B be a function. f is called a bijection if f is both one-one (injective) and onto (surjective).
27. A bijection is also called a one-one correspondence.
28. f : R → R is onto if and only if every horizontal line intersects the graph of f at least once.
29. f : R → R is one-one if and only if every horizontal line intersects the graph of f at most once.
30. f : R → R is a bijection if and only if every horizontal line intersects the graph of f exactly once.
31. Increasing function: Let I ⊂ R be any nonempty subset. A function f : I → R is said to be increasing if for every x, y ∈ I, x < y implies f(x) ≤ f(y).
32. Decreasing function: Let I ⊂ R be any nonempty subset. A function f : I → R is said to be decreasing if for every x, y ∈ I, x < y implies f(x) ≥ f(y).
33. A function which is either increasing or decreasing is called a monotone function.
34. Theorem: Let f : X → Y and g : Y → Z be functions. (a) If f and g are injective (one-one), then g ∘ f is injective. (b) If f and g are surjective (onto), then g ∘ f is surjective. (c) If f and g are bijections, then g ∘ f is a bijection.
35. Theorem: Let f : X → Y and g : Y → Z be functions. (a) If g ∘ f is injective, then f is injective. (b) If g ∘ f is surjective, then g is surjective.
36. Theorem: For functions f : X → Y, g : Y → Z and h : Z → W, h ∘ (g ∘ f) = (h ∘ g) ∘ f.
37. Inverse of a function.
38. Graph of the inverse of a function.
39. Bijections between intervals.
40. Cantor's Theorem.
41. Image of subsets under functions.
42. Inverse image of subsets under functions.
43. Let f : A → B and g : B → C be functions. Then the composition of f with g, written g ∘ f, is a function from A to C defined by (g ∘ f)(x) = g(f(x)).
44.
Binary operation: A function F : A × A → A is called a binary operation on A.
45. A binary operation ∗ on A is said to be associative if ∀x, y, z ∈ A, x ∗ (y ∗ z) = (x ∗ y) ∗ z.
46. A binary operation ∗ on A is said to be commutative if ∀x, y ∈ A, x ∗ y = y ∗ x.
47. Let ∗ be a binary operation on A. An element e ∈ A is said to be an identity with respect to ∗ if ∀a ∈ A, a ∗ e = e ∗ a = a.

2 Real Numbers

Introduction of real numbers, well-ordering property, inductive property, absolute value and its properties, completeness property, density of rational numbers.

2.1 The Algebraic and Order Properties of R

1. Algebraic properties of R: On the set R of real numbers there are two binary operations, denoted by + and · and called addition and multiplication, respectively. These operations satisfy the following properties:
A1 a + b = b + a for all a, b ∈ R (commutativity);
A2 (a + b) + c = a + (b + c) for all a, b, c ∈ R (associativity);
A3 there exists an element 0 in R such that 0 + a = a for all a ∈ R (existence of a zero element);
A4 for each a in R there exists an element −a in R such that a + (−a) = 0 (existence of negatives);
M1 a · b = b · a for all a, b ∈ R (commutativity);
M2 (a · b) · c = a · (b · c) for all a, b, c ∈ R (associativity);
M3 there exists an element 1 in R, distinct from 0, such that 1 · a = a for all a ∈ R (existence of a unit element);
M4 for each a ≠ 0 in R there exists an element 1/a in R such that a · (1/a) = 1 (existence of reciprocals);
D a · (b + c) = (a · b) + (a · c) and (b + c) · a = (b · a) + (c · a) (distributivity of multiplication over addition).
2. Theorem: (a) If z and a are elements of R with z + a = a, then z = 0. (b) If u and b ≠ 0 are elements of R with u · b = b, then u = 1. (c) If a ∈ R, then a · 0 = 0.
3. Theorem: (a) If a ≠ 0 and b in R are such that a · b = 1, then b = 1/a. (b) If a · b = 0, then either a = 0 or b = 0.

2.2 Rational and Irrational Numbers

1. Theorem: There does not exist a rational number r such that r² = 2.
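The theorem above can be probed computationally. A bounded search over fractions is evidence, not a proof (the proof argues by parity of numerator and denominator in lowest terms), and the function name and bounds below are illustrative, not from the notes:

```python
from fractions import Fraction

def no_rational_sqrt2(max_den: int) -> bool:
    """Return True if no fraction p/q with 1 <= q <= max_den satisfies (p/q)^2 == 2.

    A finite search like this cannot prove the theorem, but it illustrates it:
    exact Fraction arithmetic rules out rounding artifacts."""
    for q in range(1, max_den + 1):
        # p/q <= 2 suffices as a search range, since sqrt(2) < 2.
        for p in range(1, 2 * q + 1):
            if Fraction(p, q) ** 2 == 2:
                return False
    return True

print(no_rational_sqrt2(200))  # True: no candidate squares to exactly 2
```

Using `Fraction` rather than floats matters here: `(p / q) ** 2 == 2` with floating-point division could give false positives or negatives, while rational arithmetic is exact.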
2.3 The Order Properties of R

1. There is a nonempty subset P of R, called the set of positive real numbers, that satisfies the following properties: (a) If a, b belong to P, then a + b belongs to P. (b) If a, b belong to P, then ab belongs to P. (c) Trichotomy property: If a belongs to R, then exactly one of the following holds: a ∈ P, a = 0, −a ∈ P.
2. Definition: Let a, b be elements of R. (a) If a − b ∈ P, then we write a > b or b < a. (b) If a − b ∈ P ∪ {0}, then we write a ≥ b or b ≤ a.
3. Theorem: Let a, b, c be any elements of R. (a) If a > b and b > c, then a > c. (b) If a > b, then a + c > b + c. (c) If a > b and c > 0, then ac > bc; if a > b and c < 0, then ac < bc.
4. Theorem: (a) If a ∈ R and a ≠ 0, then a² > 0. (b) 1 > 0. (c) If n ∈ N, then n > 0.
5. Theorem: If a ∈ R is such that 0 ≤ a < ε for every ε > 0, then a = 0.
6. Theorem: If ab > 0, then either (a) a > 0 and b > 0, or (b) a < 0 and b < 0.
7. Corollary: If ab < 0, then either (a) a < 0 and b > 0, or (b) a > 0 and b < 0.
8. Determine the set A of all real numbers x such that 2x + 3 ≤ 6.
9. Determine the set B = {x ∈ R | x² + x > 2}.
10. Determine the set C = {x ∈ R | (2x + 1)/(x + 2) < 1}.
11. Let a ≥ 0 and b ≥ 0. Then a < b ⟺ a² < b² ⟺ √a < √b.
12. If a, b are positive real numbers, then their arithmetic mean is (a + b)/2 and their geometric mean is √(ab). The arithmetic-geometric mean inequality for a, b is √(ab) ≤ (a + b)/2, with equality if and only if a = b.
13. Bernoulli's Inequality: If x > −1, then (1 + x)ⁿ ≥ 1 + nx for all n ∈ N.

2.4 Absolute Value and the Real Line

1. Definition: The absolute value of a real number a, denoted by |a|, is defined by |a| = a if a > 0, |a| = 0 if a = 0, and |a| = −a if a < 0.

2.5 Neighborhoods

1. Definition: Let a ∈ R and ε > 0. Then the ε-neighborhood of a is the set Vε(a) = {x ∈ R : |x − a| < ε}.
2. Theorem: Let a ∈ R. If x belongs to the neighborhood Vε(a) for every ε > 0, then x = a.

2.6 The Completeness Property of R

2.6.1 Suprema and Infima

1. Definition: Let S be a nonempty subset of R.
(a) The set S is said to be bounded above if there exists a number u ∈ R such that s ≤ u for all s ∈ S. Each such number u is called an upper bound of S. (b) The set S is said to be bounded below if there exists a number w ∈ R such that w ≤ s for all s ∈ S. Each such number w is called a lower bound of S. (c) The set S is said to be bounded if it is both bounded above and bounded below. A set is said to be unbounded if it is not bounded.
2. Definition: Let S be a nonempty subset of R. (a) If S is bounded above, then a number u is said to be a supremum (or a least upper bound, l.u.b.) of S if it satisfies the conditions: i. u is an upper bound of S, and ii. if v is any upper bound of S, then u ≤ v. (b) If S is bounded below, then a number w is said to be an infimum (or a greatest lower bound, g.l.b.) of S if it satisfies the conditions: i. w is a lower bound of S, and ii. if t is any lower bound of S, then t ≤ w.
3. The following two statements about a number u and a set S are equivalent: (a) u is an upper bound of S. (b) s ≤ u for all s ∈ S.
4. The following statements about an upper bound u of a set S are equivalent: (a) if v is any upper bound of S, then u ≤ v; (b) if z < u, then z is not an upper bound of S; (c) if z < u, then there exists s_z ∈ S such that z < s_z; (d) if ε > 0, then there exists s_ε ∈ S such that u − ε < s_ε.
5. Lemma: A number u is the supremum of a nonempty subset S of R if and only if u satisfies the conditions: (a) s ≤ u for all s ∈ S; (b) if v < u, then there exists s′ ∈ S such that v < s′.
6. Lemma: An upper bound u of a nonempty set S of R is the supremum of S if and only if for every ε > 0 there exists an s_ε ∈ S such that u − ε < s_ε.

2.6.2 The Completeness Property of R

1. Every nonempty set of real numbers that has an upper bound also has a supremum in R.

2.6.3 Applications of the Supremum Property

1. Let S be a nonempty subset of R that is bounded above, and let a be any number in R. Define the set a + S = {a + s : s ∈ S}.
Then prove that sup(a + S) = a + sup S.
2. Suppose that A and B are nonempty subsets of R that satisfy the property a ≤ b for all a ∈ A and all b ∈ B. Then prove that sup A ≤ inf B.
3. Discuss the relation between suprema, infima and functions.
4. The Archimedean Property: If x ∈ R, then there exists n_x ∈ N such that x < n_x.
5. Corollary: If S = {1/n | n ∈ N}, then inf S = 0.
6. Corollary: If t > 0, there exists n_t ∈ N such that 0 < 1/n_t < t.
7. Theorem: There exists a positive real number x such that x² = 2.
8. The Density Theorem for Rational Numbers: If x and y are any real numbers with x < y, then there exists a rational number r ∈ Q such that x < r < y.
9. The Density Theorem for Irrational Numbers: If x and y are any real numbers with x < y, then there exists an irrational number z such that x < z < y.

3 Row Echelon Form of Matrices and Applications

Systems of linear equations, row reduction and echelon forms, the rank of a matrix and applications, matrix operations, determinants, the inverse of a matrix, characterizations of invertible matrices, eigenvalues and eigenvectors, the characteristic equation and the Cayley-Hamilton theorem.

3.1 Linear Equations in Linear Algebra

3.1.1 System of Linear Equations

1. Linear equation: A linear equation in the variables x₁, x₂, ..., xₙ is an equation that can be written in the form a₁x₁ + a₂x₂ + · · · + aₙxₙ = b, where b and the coefficients a₁, a₂, ..., aₙ are real or complex numbers, usually known in advance. The subscript n may be any positive integer.
2. System of linear equations: A system of linear equations (or a linear system) is a collection of one or more linear equations involving the same variables, say x₁, x₂, ..., xₙ.
3. A solution of the system is a list (s₁, s₂, ..., sₙ) of numbers that makes each equation a true statement when the values s₁, s₂, ..., sₙ are substituted for x₁, x₂, ..., xₙ, respectively.
4.
The set of all possible solutions is called the solution set of the linear system.
5. Two linear systems are called equivalent if they have the same solution set.
6. A system of linear equations is said to be consistent if it has either one solution or infinitely many solutions; a system is inconsistent if it has no solution.
7. Discuss the solution set of a system of two linear equations in two variables.
8. It is easy because it amounts to finding the intersection of two lines.
9. Matrix notation.
10. The essential information of a linear system can be recorded compactly in a rectangular array called a matrix.
11. Given the system
    x₁ − 2x₂ + x₃ = 0     (1)
         2x₂ − 8x₃ = 8    (2)
    5x₁      − 5x₃ = 10   (3)
with the coefficients of each variable aligned in columns, the matrix
    [ 1 −2  1 ]
    [ 0  2 −8 ]
    [ 5  0 −5 ]
is called the coefficient matrix (or matrix of coefficients) of the given system, and the matrix
    [ 1 −2  1  0 ]
    [ 0  2 −8  8 ]
    [ 5  0 −5 10 ]
is called the augmented matrix of the given system.
12. Solving the linear system: The basic strategy is to replace one system with an equivalent system (that is, one with the same solution set) that is easier to solve.
13. Roughly speaking, use the x₁ term in the first equation of a system to eliminate the x₁ terms in the other equations. Then use the x₂ term in the second equation to eliminate the x₂ terms in the other equations, and so on, until you finally obtain a very simple equivalent system of equations.
14. Three basic operations are used to simplify a linear system: (a) replace one equation by the sum of itself and a multiple of another equation; (b) interchange two equations; (c) multiply all the terms in an equation by a nonzero constant.
15. Explain how to solve using an example.
16. Two matrices are called row equivalent if there is a sequence of elementary row operations that transforms one matrix into the other.
17. It is important to note that row operations are reversible.
18.
If the augmented matrices of two linear systems are row equivalent, then the two systems have the same solution set.
19. Determine whether the following system is consistent:
         x₂ − 4x₃ = 8      (4)
    2x₁ − 3x₂ + 2x₃ = 1    (5)
    4x₁ − 8x₂ + 12x₃ = 1   (6)

3.1.2 Row Reduction and Echelon Forms

1. The algorithm applies to any matrix.
2. In the definitions that follow, a nonzero row or column in a matrix means a row or column that contains at least one nonzero entry; a leading entry of a row refers to the leftmost nonzero entry (in a nonzero row).
3. A rectangular matrix is in echelon form (or row echelon form) if it has the following three properties: (a) All nonzero rows are above any rows of all zeros. (b) Each leading entry of a row is in a column to the right of the leading entry of the row above it. (c) All entries in a column below a leading entry are zeros.
4. If a matrix in echelon form satisfies the following additional conditions, then it is in reduced echelon form (or reduced row echelon form): (a) The leading entry in each nonzero row is 1. (b) Each leading 1 is the only nonzero entry in its column.
5. An echelon matrix (respectively, reduced echelon matrix) is one that is in echelon form (respectively, reduced echelon form). Property (b) of echelon form says that the leading entries form an echelon ("steplike") pattern that moves down and to the right through the matrix.
6. If a matrix A is row equivalent to an echelon matrix U, we call U an echelon form (or row echelon form) of A; if U is in reduced echelon form, we call U the reduced echelon form of A.
7. Any nonzero matrix may be row reduced (that is, transformed by elementary row operations) into more than one matrix in echelon form, using different sequences of row operations. However, the reduced echelon form one obtains from a matrix is unique.
8. A pivot position in a matrix A is a location in A that corresponds to a leading 1 in the reduced echelon form of A. A pivot column is a column of A that contains a pivot position.
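The elementary row operations and the consistency test can be sketched in code. The forward elimination below uses exact Fraction arithmetic and applies it to the system in equations (4)-(6); the function names are illustrative, not from the notes:

```python
from fractions import Fraction

def forward_eliminate(aug):
    """Reduce an augmented matrix to (row) echelon form using the three
    elementary row operations: interchange, replacement, and (implicitly)
    scaling. Returns a new matrix of Fractions; the input is not modified."""
    m = [[Fraction(x) for x in row] for row in aug]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols - 1):                 # last column is the right-hand side
        pick = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pick is None:
            continue                            # no pivot in this column
        m[pivot_row], m[pick] = m[pick], m[pivot_row]      # interchange rows
        for r in range(pivot_row + 1, rows):    # replacement: zero out below the pivot
            factor = m[r][col] / m[pivot_row][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

def is_consistent(echelon):
    """A system is inconsistent iff some row of an echelon form of its
    augmented matrix reads [0 ... 0 | b] with b nonzero."""
    return not any(all(x == 0 for x in row[:-1]) and row[-1] != 0
                   for row in echelon)

aug = [[0, 1, -4, 8], [2, -3, 2, 1], [4, -8, 12, 1]]   # equations (4)-(6)
print(is_consistent(forward_eliminate(aug)))            # False: the system is inconsistent
```

Elimination produces the row [0 0 0 | 15], so the system of equations (4)-(6) has no solution, which is what the consistency criterion detects.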
9. Row reduce the matrix A below to echelon form, and locate the pivot columns of A, where
    A = [  0 −3 −6  4 ]
        [ −1 −2 −1  3 ]
        [ −2 −3  0  3 ]
        [  1  4  5 −9 ]
10. The Row Reduction Algorithm: The algorithm that follows consists of four steps, and it produces a matrix in echelon form. A fifth step produces a matrix in reduced echelon form.
    Step 1: Begin with the leftmost nonzero column. This is a pivot column. The pivot position is at the top.
    Step 2: Select a nonzero entry in the pivot column as a pivot. If necessary, interchange rows to move this entry into the pivot position.
    Step 3: Use row replacement operations to create zeros in all positions below the pivot.
    Step 4: Cover (or ignore) the row containing the pivot position and cover all rows, if any, above it. Apply Steps 1-3 to the submatrix that remains. Repeat the process until there are no more nonzero rows to modify.
    Step 5: Beginning with the rightmost pivot and working upward and to the left, create zeros above each pivot. If a pivot is not 1, make it 1 by a scaling operation.
11. Apply elementary row operations to transform the following matrix first into echelon form and then into reduced echelon form:
    [ 0  3 −6  6  4 −5 ]
    [ 3 −7  8 −5  8  9 ]
    [ 3 −9 12 −9  6 15 ]
12. The combination of Steps 1-4 is called the forward phase of the row reduction algorithm. Step 5, which produces the unique reduced echelon form, is called the backward phase.
13. Solutions of linear systems.
14. The variables corresponding to pivot columns in the matrix are called basic variables. The other variables are called free variables.
15. Find the general solution of the linear system whose augmented matrix has been reduced to
    [ 1 6 2 −5 −2 −4 ]
    [ 0 0 2 −8 −1  3 ]
    [ 0 0 0  0  1  7 ]
16. Parametric descriptions of solution sets: A parametric description of a solution set is one in which the free variables act as parameters. Solving a system amounts to finding a parametric description of the solution set or determining that the solution set is empty.
17. Back-substitution.
18.
Existence and Uniqueness Theorem: A linear system is consistent if and only if the rightmost column of the augmented matrix is not a pivot column, that is, if and only if an echelon form of the augmented matrix has no row of the form [0 ... 0 b] with b nonzero. If a linear system is consistent, then the solution set contains either (a) a unique solution, when there are no free variables, or (b) infinitely many solutions, when there is at least one free variable.
19. Using row reduction to solve a linear system: (a) Write the augmented matrix of the system. (b) Use the row reduction algorithm to obtain an equivalent augmented matrix in echelon form. Decide whether the system is consistent. If there is no solution, stop; otherwise, go to the next step. (c) Continue row reduction to obtain the reduced echelon form. (d) Write the system of equations corresponding to the matrix obtained in step (c). (e) Rewrite each nonzero equation from step (d) so that its one basic variable is expressed in terms of any free variables appearing in the equation.

3.1.3 Vector Equations

1. Vector will mean an ordered list of numbers (for some time).
2. Vectors in R²:
3. A matrix with only one column is called a column vector or simply a vector.
4. An example of a vector with two entries is u = [1, 2]ᵀ.
5. The set of all vectors with two entries is denoted by R².
6. Two vectors are equal if and only if their corresponding entries are equal.
7. Given two vectors u and v in R², their sum is the vector u + v obtained by adding corresponding entries of u and v.
8. Given a vector u and a real number c, the scalar multiple of u by c is the vector cu obtained by multiplying each entry in u by c.
9. Geometric descriptions of R².
10. Parallelogram rule for addition: If u and v in R² are represented as points in the plane, then u + v corresponds to the fourth vertex of the parallelogram whose other vertices are u, 0, and v.
11. Vectors in Rⁿ:
12.
If n is a positive integer, Rⁿ denotes the collection of all lists (or ordered n-tuples) of n real numbers, usually written as n × 1 column matrices, such as u = [u₁, u₂, ..., uₙ]ᵀ.
13. The vector whose entries are all zero is called the zero vector and is denoted by 0.
14. Algebraic properties of Rⁿ: For all u, v, w in Rⁿ and all scalars c and d: (a) u + v = v + u; (b) (u + v) + w = u + (v + w); (c) u + 0 = u; (d) u + (−u) = 0, where −u = (−1)u; (e) c(u + v) = cu + cv; (f) (c + d)u = cu + du; (g) (cd)u = c(du) = d(cu); (h) 1u = u.
15. Linear combinations: Given vectors v₁, v₂, ..., v_p in Rⁿ and given scalars c₁, c₂, ..., c_p, the vector y defined by y = c₁v₁ + · · · + c_p v_p is called a linear combination of v₁, v₂, ..., v_p with weights c₁, c₂, ..., c_p.
16. Determine whether b can be generated (or written) as a linear combination of a₁ and a₂.
17. A vector equation x₁a₁ + · · · + xₙaₙ = b has the same solution set as the linear system whose augmented matrix is [a₁ a₂ · · · aₙ b]. (7) In particular, b can be generated by a linear combination of a₁, ..., aₙ if and only if there exists a solution to the linear system corresponding to the matrix (7).
18. If v₁, ..., v_p are in Rⁿ, then the set of all linear combinations of v₁, ..., v_p is denoted by Span{v₁, ..., v_p} and is called the subset of Rⁿ spanned (or generated) by v₁, ..., v_p. That is, Span{v₁, ..., v_p} is the collection of all vectors that can be written in the form c₁v₁ + · · · + c_p v_p with c₁, ..., c_p scalars.
19. A geometric description of Span{v} and Span{u, v}:
20. Let v be a nonzero vector in R³. Then Span{v} is the set of all scalar multiples of v, which is the set of points on the line in R³ through v and 0.
21. If u and v are nonzero vectors in R³, with v not a multiple of u, then Span{u, v} is the plane in R³ that contains u, v, and 0.

3.1.4 The Matrix Equation Ax = b

1.
A fundamental idea in linear algebra is to view a linear combination of vectors as the product of a matrix and a vector.
2. If A is an m × n matrix with columns a₁, ..., aₙ, and if x is in Rⁿ, then the product of A and x, denoted by Ax, is the linear combination of the columns of A using the corresponding entries in x as weights; that is, Ax = [a₁ a₂ · · · aₙ][x₁, x₂, ..., xₙ]ᵀ = x₁a₁ + x₂a₂ + · · · + xₙaₙ.
3. If A is an m × n matrix with columns a₁, ..., aₙ, and if b is in Rᵐ, the matrix equation Ax = b has the same solution set as the vector equation x₁a₁ + x₂a₂ + · · · + xₙaₙ = b, which, in turn, has the same solution set as the system of linear equations whose augmented matrix is [a₁ a₂ · · · aₙ b].
4. Existence of solutions:
5. The equation Ax = b has a solution if and only if b is a linear combination of the columns of A.
6. "The columns of A span Rᵐ" means that every b in Rᵐ is a linear combination of the columns of A.
7. In general, a set of vectors {v₁, ..., v_p} in Rᵐ spans (or generates) Rᵐ if every vector in Rᵐ is a linear combination of v₁, ..., v_p, that is, if Span{v₁, ..., v_p} = Rᵐ.
8. Let A be an m × n matrix. Then the following statements are logically equivalent; that is, for a particular A, either they are all true or they are all false. (a) For each b in Rᵐ, the equation Ax = b has a solution. (b) Each b in Rᵐ is a linear combination of the columns of A. (c) The columns of A span Rᵐ. (d) A has a pivot position in every row.
9. Computation of Ax.
10. Row-vector rule for computing Ax: If the product Ax is defined, then the i-th entry in Ax is the sum of the products of corresponding entries from row i of A and from the vector x.
11. Properties of the matrix-vector product Ax:
12. If A is an m × n matrix, u and v are vectors in Rⁿ, and c is a scalar, then: (a) A(u + v) = Au + Av; (b) A(cu) = c(Au).

3.1.5 Solution Sets of Linear Systems

1.
Homogeneous linear systems: A system of linear equations is said to be homogeneous if it can be written in the form Ax = 0, where A is an m × n matrix and 0 is the zero vector in Rᵐ.
2. Such a system Ax = 0 always has at least one solution, namely x = 0 (the zero vector in Rⁿ). This zero solution is usually called the trivial solution.
3. For a given equation Ax = 0, the important question is whether there exists a nontrivial solution, that is, a nonzero vector x that satisfies Ax = 0.
4. The homogeneous equation Ax = 0 has a nontrivial solution if and only if the equation has at least one free variable.
5. Parametric vector form: The equation ax + by + cz = d (a, b, c, d ∈ R) of a plane is an implicit description of the plane. Solving this equation amounts to finding an explicit description of the plane as the set spanned by vectors u and v. The equation x = su + tv (s, t ∈ R) is called a parametric vector equation of the plane.
6. The equation x = sv (with s in R) is a parametric vector equation of a line.
7. Whenever a solution set is described explicitly with vectors, we say that the solution is in parametric vector form.
8. Non-homogeneous linear systems: A system of linear equations is said to be non-homogeneous if it can be written in the form Ax = b, where A is an m × n matrix and b is a nonzero vector in Rᵐ.
9. Solutions of non-homogeneous systems: When a non-homogeneous linear system has many solutions, the general solution can be written in parametric vector form as one vector plus an arbitrary linear combination of vectors that satisfy the corresponding homogeneous system.
10. The equation x = p + tv, where t is a general parameter, describes the solution set of Ax = b in parametric vector form. The solution set of Ax = 0 has the parametric vector equation x = tv [with the same v]. Thus the solutions of Ax = b are obtained by adding the vector p to the solutions of Ax = 0. The vector p itself is just one particular solution of Ax = b.
11.
Suppose the equation 𝐴x = b is consistent for some given b, and let p be a solution. Then the solution set of 𝐴x = b is the set of all vectors of the form w = p + vℎ , where vℎ is any solution of the homogeneous equation 𝐴x = 0. 12. WRITING A SOLUTION SET (OF A CONSISTENT SYSTEM) IN PARAMETRIC VECTOR FORM (a) Row reduce the augmented matrix to reduced echelon form. (b) Express each basic variable in terms of any free variables appearing in an equation. (c) Write a typical solution x as a vector whose entries depend on the free variables, if any. (d) Decompose x into a linear combination of vectors (with numeric entries) using the free variables as parameters. 3.1.6 Applications of Linear Systems: 1. A Homogeneous System in Economics 2. Balancing Chemical Equations 3. Network Flow 3.1.7 Linear Independence: 1. An indexed set of vectors {v1 ,... , v 𝑝 } in R𝑛 is said to be linearly independent if the vector equation 𝑥 1 v1 + · · · + 𝑥 𝑝 v 𝑝 = 0 has only the trivial solution. The set {v1 ,... , v 𝑝 } is said to be linearly dependent if there exist weights {𝑐 1 ,... , 𝑐 𝑝 }, not all zero, such that 𝑐 1 v1 + · · · + 𝑐 𝑝 v 𝑝 = 0 2. Let v1 , v2 , v3 be vectors in R3. (a) Determine if the set {v1 , v2 , v3 } is linearly independent. (b) If possible, find a linear dependence relation among v1 , v2 , and v3. 3. Linear Independence of Matrix Columns: Suppose that we begin with a matrix 𝐴 = a1 · · · a𝑛 instead of a set of vectors. The matrix equation 𝐴x = 0 can be written as 𝑥 1 a1 + 𝑥 2 a2 + · · · + 𝑥 𝑛 a𝑛 = 0. 4. Each linear dependence relation among the columns of 𝐴 corresponds to a nontrivial solution of 𝐴x = 0. 5. The columns of a matrix 𝐴 are linearly independent if and only if the equation 𝐴x = 0 has only the trivial solution. 6. Sets of One or Two Vectors: 7. A set containing only one vector-say, v is linearly independent if and only if v is not the zero vector. 8. This is because the vector equation 𝑥 1 v = 0 has only the trivial solution when v ≠ 0. 
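The matrix-column criterion above (the columns of 𝐴 are linearly independent exactly when 𝐴x = 0 has only the trivial solution) can be checked numerically. A minimal sketch with NumPy; the vectors are illustrative values, not from the notes:

```python
import numpy as np

# Three vectors in R^3 (illustrative values).
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 1.0, 4.0])
v3 = np.array([1.0, 3.0, 7.0])   # v3 = v1 + v2, so the set is dependent

A = np.column_stack([v1, v2, v3])

# The columns are linearly independent iff Ax = 0 has only the trivial
# solution, i.e. iff rank A equals the number of columns.
print(np.linalg.matrix_rank(A) == A.shape[1])   # False

# A linear dependence relation: 1*v1 + 1*v2 - 1*v3 = 0.
print(v1 + v2 - v3)                              # [0. 0. 0.]
```

Finding the dependence relation itself corresponds to reading off a nontrivial solution of 𝐴x = 0, as in part (b) of the exercise above.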
The zero vector is linearly dependent: the set {0} is linearly dependent because 𝑥10 = 0 has many nontrivial solutions.
9. A set of two vectors {v1, v2} is linearly dependent if at least one of the vectors is a multiple of the other. The set is linearly independent if and only if neither of the vectors is a multiple of the other.
10. Sets of Two or More Vectors:
11. Characterization of Linearly Dependent Sets: An indexed set 𝑆 = {v1, ... , v𝑝} of two or more vectors is linearly dependent if and only if at least one of the vectors in 𝑆 is a linear combination of the others. In fact, if 𝑆 is linearly dependent and v1 ≠ 0, then some v𝑗 (with 𝑗 > 1) is a linear combination of the preceding vectors v1, ... , v𝑗−1.
12. If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set {v1, ... , v𝑝} in R𝑛 is linearly dependent if 𝑝 > 𝑛.
13. If a set 𝑆 = {v1, ... , v𝑝} in R𝑛 contains the zero vector, then the set is linearly dependent.
3.1.8 Introduction to Linear Transformations:
1. A transformation (or function or mapping) 𝑇 from R𝑛 to R𝑚 is a rule that assigns to each vector x in R𝑛 a vector 𝑇(x) in R𝑚.
2. Matrix Transformations:
3. A transformation (or mapping) 𝑇 is linear if (a) 𝑇(u + v) = 𝑇(u) + 𝑇(v) for all u, v in the domain of 𝑇 (b) 𝑇(𝑐u) = 𝑐𝑇(u) for all scalars 𝑐 and all u in the domain of 𝑇
4. If 𝑇 is a linear transformation, then 𝑇(0) = 0 and 𝑇(𝑐u + 𝑑v) = 𝑐𝑇(u) + 𝑑𝑇(v) for all vectors u, v in the domain of 𝑇 and all scalars 𝑐, 𝑑.
5. 6. 7.
3.1.9 The Matrix of a Linear Transformation:
1. Let 𝑇 : R𝑛 → R𝑚 be a linear transformation. Then there exists a unique matrix 𝐴 such that 𝑇(x) = 𝐴x for all x in R𝑛. In fact, 𝐴 is the 𝑚 × 𝑛 matrix whose 𝑗th column is the vector 𝑇(e𝑗), where e𝑗 is the 𝑗th column of the identity matrix in R𝑛: 𝐴 = [𝑇(e1) 𝑇(e2) · · · 𝑇(e𝑛)].
2. 3. 4. 5. 6. 7.
3.2 Matrix Algebra:
3.2.1 Matrix Operations:
1. Every 𝑚 × 𝑛 matrix 𝐴 determines a linear transformation x ↦ 𝐴x from R𝑛 to R𝑚.
2.
A matrix is a rectangular array of numbers with 𝑚 rows and 𝑛 columns. It is represented by 𝐴 = [𝑎𝑖𝑗]𝑚×𝑛, the array whose 𝑖th row is (𝑎𝑖1, 𝑎𝑖2, ... , 𝑎𝑖𝑛), where 𝑎𝑖𝑗 ∈ R (1 ≤ 𝑖 ≤ 𝑚, 1 ≤ 𝑗 ≤ 𝑛) are called entries. We say that matrix 𝐴 is of size 𝑚 × 𝑛 in this case. (In the examples below, a semicolon separates the rows of a matrix.)
3. Row Matrix: A matrix with only one row. For example: [1 2 3]
4. Column Matrix: A matrix with only one column. For example: [4 5 6]ᵀ
5. Null Matrix (Zero Matrix): All elements are zero. For example: [0 0; 0 0]
6. Square Matrix: The number of rows equals the number of columns. For example: [1 2; 3 4]
7. Diagonal Matrix: All non-diagonal elements are zero. For example: [2 0 0; 0 5 0; 0 0 3]
8. Identity Matrix: Diagonal elements are all 1, and non-diagonal elements are 0. For example: [1 0 0; 0 1 0; 0 0 1]
9. Triangular Matrix: (a) Upper Triangular: All elements below the diagonal are zero. (b) Lower Triangular: All elements above the diagonal are zero.
10. Singular Matrix: A matrix with a determinant of 0.
11. Non-Singular Matrix: A matrix with a non-zero determinant.
12. Matrix Dimensions: We can only add matrices if they have the same dimensions. In other words, both matrices must have the same number of rows and columns.
13. Sums and Scalar Multiples
14. Element-wise Addition: To add matrices, we simply add the corresponding entries (elements) together. For example, if 𝐴 = [𝑎 𝑐; 𝑏 𝑑] and 𝐵 = [𝑒 𝑔; 𝑓 ℎ], the sum 𝐴 + 𝐵 is obtained by adding the corresponding entries: 𝐴 + 𝐵 = [𝑎+𝑒 𝑐+𝑔; 𝑏+𝑓 𝑑+ℎ].
15. Scalar Multiplication: We can also multiply a matrix by a scalar (a real number). To do this, we multiply every element of the matrix by the scalar.
16. Let 𝐴, 𝐵 and 𝐶 be matrices of the same size, and let 𝑟 and 𝑠 be scalars. (a) 𝐴 + 𝐵 = 𝐵 + 𝐴 (b) (𝐴 + 𝐵) + 𝐶 = 𝐴 + (𝐵 + 𝐶) (c) 𝐴 + 0 = 𝐴 (d) 𝑟(𝐴 + 𝐵) = 𝑟𝐴 + 𝑟𝐵 (e) (𝑟 + 𝑠)𝐴 = 𝑟𝐴 + 𝑠𝐴 (f) (𝑟𝑠)𝐴 = 𝑟(𝑠𝐴)
17. Subtraction: Subtraction of matrices follows the same element-wise approach.
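The algebraic rules for sums and scalar multiples listed above can be spot-checked numerically. A small sketch with NumPy; the matrices and scalars are arbitrary illustrative values:

```python
import numpy as np

A = np.array([[1.0, -2.0], [0.0, 3.0]])
B = np.array([[4.0, 1.0], [2.0, -1.0]])
C = np.array([[0.0, 5.0], [7.0, 2.0]])
r, s = 2.0, -3.0

assert np.array_equal(A + B, B + A)                 # (a) commutativity
assert np.array_equal((A + B) + C, A + (B + C))     # (b) associativity
assert np.array_equal(r * (A + B), r * A + r * B)   # (d) scalar distributes over sum
assert np.array_equal((r + s) * A, r * A + s * A)   # (e)
assert np.array_equal((r * s) * A, r * (s * A))     # (f)
assert np.array_equal(A - B, A + (-1.0) * B)        # subtraction as addition of the opposite
print("all matrix-algebra identities hold")
```

With exact small values the identities hold to the bit; with general floating-point entries one would compare using `np.allclose` instead.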
For matrices 𝐶 = [2 8; 0 9] and 𝐷 = [5 6; 11 3], the difference 𝐶 − 𝐷 is calculated as 𝐶 − 𝐷 = [2−5 8−6; 0−11 9−3] = [−3 2; −11 6].
18. Scalar Multiplication as Repeated Addition: Scalar multiplication is like repeated matrix addition. For example, if 𝐴 = [4 8; 2 1], then 𝐴 + 𝐴 + 𝐴 = 3𝐴 = [3·4 3·8; 3·2 3·1].
19. Subtraction as the Addition of the Opposite: We can think of 𝐴 − 𝐵 as 𝐴 + (−𝐵), which is equivalent to 𝐴 + (−1) · 𝐵.
20. Matrix multiplication is a fundamental operation in linear algebra. When multiplying two matrices, the number of columns in the first matrix must match the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first matrix and the number of columns of the second matrix. Let us denote the matrices as follows: 𝐴 is an 𝑚 × 𝑛 matrix and 𝐵 is an 𝑛 × 𝑝 matrix. The matrix product 𝐶 = 𝐴𝐵 is defined as an 𝑚 × 𝑝 matrix, where each entry 𝑐𝑖𝑗 is obtained by multiplying the corresponding entries of the 𝑖th row of matrix 𝐴 and the 𝑗th column of matrix 𝐵, and then summing up these products. In other words: 𝑐𝑖𝑗 = ∑(𝑘=1 to 𝑛) 𝑎𝑖𝑘𝑏𝑘𝑗. Here, 𝑎𝑖𝑘 represents the entry in the 𝑖th row and 𝑘th column of matrix 𝐴, and 𝑏𝑘𝑗 represents the entry in the 𝑘th row and 𝑗th column of matrix 𝐵. The product 𝐴𝐵 is defined only if the number of columns in matrix 𝐴 equals the number of rows in matrix 𝐵 (namely, 𝑛).
21. If 𝐴 is an 𝑚 × 𝑛 matrix, and if 𝐵 is an 𝑛 × 𝑝 matrix with columns b1, ... , b𝑝, then the product 𝐴𝐵 is the 𝑚 × 𝑝 matrix whose columns are 𝐴b1, ... , 𝐴b𝑝. That is, 𝐴𝐵 = 𝐴[b1 b2 · · · b𝑝] = [𝐴b1 𝐴b2 · · · 𝐴b𝑝].
22. Each column of 𝐴𝐵 is a linear combination of the columns of 𝐴 using weights from the corresponding column of 𝐵.
23. Row–Column Rule for Computing 𝐴𝐵: If the product 𝐴𝐵 is defined, then the entry in row 𝑖 and column 𝑗 of 𝐴𝐵 is the sum of the products of corresponding entries from row 𝑖 of 𝐴 and column 𝑗 of 𝐵.
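The column definition of 𝐴𝐵 and the row–column rule above give the same product; a short NumPy check with illustrative matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])        # 3x2
B = np.array([[2.0, 0.0, 1.0],
              [1.0, 4.0, -2.0]])   # 2x3

C = A @ B                          # the 3x3 product AB

# Column j of AB equals A times column j of B.
for j in range(B.shape[1]):
    assert np.allclose(C[:, j], A @ B[:, j])

# Row-column rule: entry (i, j) is the dot product of row i of A
# with column j of B.
i, j = 2, 1
print(C[i, j], A[i, :] @ B[:, j])   # -4.0 -4.0
```

The sizes also behave as stated: a 3 × 2 matrix times a 2 × 3 matrix yields a 3 × 3 product, and `A @ B` raises an error when the inner dimensions disagree.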
If (𝐴𝐵)𝑖𝑗 denotes the (𝑖, 𝑗)-entry in 𝐴𝐵, and if 𝐴 is an 𝑚 × 𝑛 matrix, then (𝐴𝐵)𝑖𝑗 = 𝑎𝑖1𝑏1𝑗 + 𝑎𝑖2𝑏2𝑗 + · · · + 𝑎𝑖𝑛𝑏𝑛𝑗.
24. Properties of Matrix Multiplication: Let 𝐴 be an 𝑚 × 𝑛 matrix, and let 𝐵 and 𝐶 have sizes for which the indicated sums and products are defined. (a) 𝐴(𝐵𝐶) = (𝐴𝐵)𝐶 (associative law of multiplication) (b) 𝐴(𝐵 + 𝐶) = 𝐴𝐵 + 𝐴𝐶 (left distributive law) (c) (𝐵 + 𝐶)𝐴 = 𝐵𝐴 + 𝐶𝐴 (right distributive law) (d) 𝑟(𝐴𝐵) = (𝑟𝐴)𝐵 = 𝐴(𝑟𝐵) for any scalar 𝑟 (e) 𝐼𝑚𝐴 = 𝐴 = 𝐴𝐼𝑛 (identity for matrix multiplication)
25. The Transpose of a Matrix: Given an 𝑚 × 𝑛 matrix 𝐴, the transpose of 𝐴 is the 𝑛 × 𝑚 matrix, denoted by 𝐴𝑇, whose columns are formed from the corresponding rows of 𝐴.
26. Let 𝐴 and 𝐵 denote matrices whose sizes are appropriate for the following sums and products. (a) (𝐴𝑇)𝑇 = 𝐴 (b) (𝐴 + 𝐵)𝑇 = 𝐴𝑇 + 𝐵𝑇 (c) For any scalar 𝑟, (𝑟𝐴)𝑇 = 𝑟𝐴𝑇 (d) (𝐴𝐵)𝑇 = 𝐵𝑇𝐴𝑇
27. The transpose of a product of matrices equals the product of their transposes in the reverse order.
28. Symmetric Matrix: A square matrix where 𝐴 = 𝐴𝑇. For example: [2 3; 3 4]
3.2.2 The Inverse of a Matrix:
1. An 𝑛 × 𝑛 matrix 𝐴 is said to be invertible if there is an 𝑛 × 𝑛 matrix 𝐶 such that 𝐶𝐴 = 𝐼 and 𝐴𝐶 = 𝐼, where 𝐼 = 𝐼𝑛, the 𝑛 × 𝑛 identity matrix. In this case, 𝐶 is an inverse of 𝐴.
2. In fact, 𝐶 is uniquely determined by 𝐴, because if 𝐵 were another inverse of 𝐴, then 𝐵 = 𝐵𝐼 = 𝐵(𝐴𝐶) = (𝐵𝐴)𝐶 = 𝐼𝐶 = 𝐶. This unique inverse is denoted by 𝐴−1, so that 𝐴−1𝐴 = 𝐼 and 𝐴𝐴−1 = 𝐼.
3. A matrix that is not invertible is sometimes called a singular matrix, and an invertible matrix is called a nonsingular matrix.
4. Let 𝐴 = [𝑎 𝑏; 𝑐 𝑑]. If 𝑎𝑑 − 𝑏𝑐 ≠ 0, then 𝐴 is invertible and 𝐴−1 = (1/(𝑎𝑑 − 𝑏𝑐)) [𝑑 −𝑏; −𝑐 𝑎]. If 𝑎𝑑 − 𝑏𝑐 = 0, then 𝐴 is not invertible.
5. If 𝐴 is an invertible 𝑛 × 𝑛 matrix, then for each b in R𝑛, the equation 𝐴x = b has the unique solution x = 𝐴−1b.
6. (a) If 𝐴 is an invertible matrix, then 𝐴−1 is invertible and (𝐴−1)−1 = 𝐴.
(b) If 𝐴 and 𝐵 are 𝑛 × 𝑛 invertible matrices, then so is 𝐴𝐵, and the inverse of 𝐴𝐵 is the product of the inverses of 𝐴 and 𝐵 in the reverse order. That is, (𝐴𝐵)−1 = 𝐵−1𝐴−1. (c) If 𝐴 is an invertible matrix, then so is 𝐴𝑇, and the inverse of 𝐴𝑇 is the transpose of 𝐴−1. That is, (𝐴𝑇)−1 = (𝐴−1)𝑇.
7. The product of 𝑛 × 𝑛 invertible matrices is invertible, and the inverse is the product of their inverses in the reverse order.
8. Elementary Matrices: An elementary matrix is one that is obtained by performing a single elementary row operation on an identity matrix.
9. Compute 𝐸1𝐴, 𝐸2𝐴, and 𝐸3𝐴, and describe how these products can be obtained by elementary row operations on 𝐴, where 𝐴 = [𝑎 𝑏 𝑐; 𝑑 𝑒 𝑓; 𝑔 ℎ 𝑖], 𝐸1 = [1 0 0; 0 1 0; −4 0 1], 𝐸2 = [0 1 0; 1 0 0; 0 0 1], 𝐸3 = [1 0 0; 0 1 0; 0 0 5].
10. Left-multiplication (that is, multiplication on the left) by 𝐸𝑖 has the same effect on any 3 × 𝑛 matrix.
11. Since 𝐸𝑖𝐼 = 𝐸𝑖, we see that 𝐸𝑖 itself is produced by this same row operation on the identity.
12. If an elementary row operation is performed on an 𝑚 × 𝑛 matrix 𝐴, the resulting matrix can be written as 𝐸𝐴, where the 𝑚 × 𝑚 matrix 𝐸 is created by performing the same row operation on 𝐼𝑚.
13. Each elementary matrix 𝐸 is invertible. The inverse of 𝐸 is the elementary matrix of the same type that transforms 𝐸 back into 𝐼.
14. An 𝑛 × 𝑛 matrix 𝐴 is invertible if and only if 𝐴 is row equivalent to 𝐼𝑛, and in this case, any sequence of elementary row operations that reduces 𝐴 to 𝐼𝑛 also transforms 𝐼𝑛 into 𝐴−1.
15. An Algorithm for Finding 𝐴−1: If we place 𝐴 and 𝐼 side by side to form an augmented matrix [𝐴 𝐼], then row operations on this matrix produce identical operations on 𝐴 and on 𝐼.
16. Row reduce the augmented matrix [𝐴 𝐼]. If 𝐴 is row equivalent to 𝐼, then [𝐴 𝐼] is row equivalent to [𝐼 𝐴−1]. Otherwise, 𝐴 does not have an inverse.
17. Another View of Matrix Inversion
18. 19. 20.
3.2.3 Characterizations of Invertible Matrices:
1.
The Invertible Matrix Theorem: Let 𝐴 be a square 𝑛 × 𝑛 matrix. Then the following statements are equivalent. That is, for a given 𝐴, the statements are either all true or all false. (a) 𝐴 is an invertible matrix. (b) 𝐴 is row equivalent to the 𝑛 × 𝑛 identity matrix. (c) 𝐴 has 𝑛 pivot positions. (d) The equation 𝐴x = 0 has only the trivial solution. (e) The columns of 𝐴 form a linearly independent set. (f) The linear transformation x → 𝐴x is one-to-one. (g) The equation 𝐴x = b has at least one solution for each b in R𝑛. (h) The columns of 𝐴 span R𝑛. (i) The linear transformation x → 𝐴x maps R𝑛 onto R𝑛. (j) There is an 𝑛 × 𝑛 matrix 𝐶 such that 𝐶 𝐴 = 𝐼. (k) There is an 𝑛 × 𝑛 matrix 𝐷 such that 𝐴𝐷 = 𝐼. (l) 𝐴𝑇 is an invertible matrix. 2. Let 𝐴 and 𝐵 be square matrices. If 𝐴𝐵 = 𝐼, then 𝐴 and 𝐵 are both invertible, with 𝐵 = 𝐴−1 and 𝐴 = 𝐵−1. 3. Invertible Linear Transformations: A linear transformation 𝑇 : R𝑛 → R𝑛 is said to be invertible if there exists a function 𝑆 : R𝑛 → R𝑛 such that 𝑆(𝑇 (x)) = x for all x in R𝑛 and 𝑇 (𝑆(x)) = x for all x in R𝑛 4. If such an 𝑆 exists, it is unique and must be a linear transformation. We call 𝑆 the inverse of 𝑇 and write it as 𝑇 −1. 5. Let 𝑇 : R𝑛 → R𝑛 be a linear transformation and let 𝐴 be the standard matrix for 𝑇. Then 𝑇 is invertible if and only if 𝐴 is an invertible matrix. In that case, the linear transformation 𝑆 given by 𝑆(x) = 𝐴−1 x is the unique function satisfying equations 𝑆(𝑇 (x)) = x for all x in R𝑛 and 𝑇 (𝑆(x)) = x for all x in R𝑛. 6. 7. 8. 3.2.4 Partitioned Matrices: 1. A key feature of our work with matrices has been the ability to regard a matrix 𝐴 as a list of column vectors rather than just a rectangular array of numbers. This point of view has been so useful that we wish to consider other partitions of 𝐴, indicated by horizontal and vertical dividing rules. 2. 
The matrix 𝐴 = [3 0 −1 5 9 −2; −5 2 4 0 −3 1; −8 −6 3 1 7 −4] can also be written as the 2 × 3 partitioned (or block) matrix 𝐴 = [𝐴11 𝐴12 𝐴13; 𝐴21 𝐴22 𝐴23], whose entries are the blocks (or submatrices) 𝐴11 = [3 0 −1; −5 2 4], 𝐴12 = [5 9; 0 −3], 𝐴13 = [−2; 1], 𝐴21 = [−8 −6 3], 𝐴22 = [1 7], 𝐴23 = [−4].
3. Addition and Scalar Multiplication: If matrices 𝐴 and 𝐵 are the same size and are partitioned in exactly the same way, then it is natural to make the same partition of the ordinary matrix sum 𝐴 + 𝐵. Each block of 𝐴 + 𝐵 is the (matrix) sum of the corresponding blocks of 𝐴 and 𝐵. Multiplication of a partitioned matrix by a scalar is also computed block by block.
4. Multiplication of Partitioned Matrices: Partitioned matrices can be multiplied by the usual row–column rule as if the block entries were scalars, provided that for a product 𝐴𝐵, the column partition of 𝐴 matches the row partition of 𝐵.
5. Take an example of multiplication of partitioned matrices.
6. Column–Row Expansion of 𝐴𝐵: If 𝐴 is 𝑚 × 𝑛 and 𝐵 is 𝑛 × 𝑝, then 𝐴𝐵 = [𝑐𝑜𝑙1(𝐴) 𝑐𝑜𝑙2(𝐴) · · · 𝑐𝑜𝑙𝑛(𝐴)] [𝑟𝑜𝑤1(𝐵); 𝑟𝑜𝑤2(𝐵); ... ; 𝑟𝑜𝑤𝑛(𝐵)] = 𝑐𝑜𝑙1(𝐴)𝑟𝑜𝑤1(𝐵) + · · · + 𝑐𝑜𝑙𝑛(𝐴)𝑟𝑜𝑤𝑛(𝐵).
7. Inverses of Partitioned Matrices: A matrix of the form 𝐴 = [𝐴11 𝐴12; 0 𝐴22] is said to be block upper triangular. Assume that 𝐴11 is 𝑝 × 𝑝, 𝐴22 is 𝑞 × 𝑞, and 𝐴 is invertible. Find a formula for 𝐴−1.
8. A block diagonal matrix is a partitioned matrix with zero blocks off the main diagonal (of blocks). Such a matrix is invertible if and only if each block on the diagonal is invertible.
9.
3.2.5 Matrix Factorizations:
1. A factorization of a matrix 𝐴 is an equation that expresses 𝐴 as a product of two or more matrices.
2. The 𝐿𝑈 Factorization: At first, assume that 𝐴 is an 𝑚 × 𝑛 matrix that can be row reduced to echelon form, without row interchanges. (Later, we will treat the general case.) Then 𝐴 can be written in the form 𝐴 = 𝐿𝑈, where 𝐿 is an 𝑚 × 𝑚 lower triangular matrix with 1's on the diagonal and 𝑈 is an 𝑚 × 𝑛 echelon form of 𝐴.
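A minimal sketch of such a factorization, using only the row replacements described above (this assumes no zero pivot arises; the matrix and right-hand side are illustrative values):

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization by row replacements only (assumes nonzero pivots)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for col in range(n - 1):
        for row in range(col + 1, n):
            m = U[row, col] / U[col, col]   # multiplier in the row replacement
            U[row] -= m * U[col]
            L[row, col] = m                  # storing m makes the same ops reduce L to I
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)

# Solve Ax = b via Ly = b, then Ux = y.
b = np.array([1.0, 2.0, 5.0])
y = np.linalg.solve(L, b)   # a dedicated triangular solver would be cheaper
x = np.linalg.solve(U, y)
assert np.allclose(A @ x, b)
```

In practice, library routines (e.g. SciPy's LU with partial pivoting) handle the general case with row interchanges; the sketch above only covers the no-interchange setting assumed in this subsection.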
Such a factorization is called an 𝐿𝑈 factorization of 𝐴. The matrix 𝐿 is invertible and is called a unit lower triangular matrix. For example, for a 4 × 5 matrix (★ denotes an arbitrary entry), 𝐴 = 𝐿𝑈 with 𝐿 = [1 0 0 0; ★ 1 0 0; ★ ★ 1 0; ★ ★ ★ 1] and 𝑈 = [★ ★ ★ ★ ★; 0 ★ ★ ★ ★; 0 0 0 ★ ★; 0 0 0 0 0]. Before studying how to construct 𝐿 and 𝑈, we should look at why they are so useful. When 𝐴 = 𝐿𝑈, the equation 𝐴x = b can be written as 𝐿(𝑈x) = b. Writing y for 𝑈x, we can find x by solving the pair of equations 𝐿y = b, 𝑈x = y: first solve 𝐿y = b for y, and then solve 𝑈x = y for x.
3. Explain the above procedure using an example.
4. An 𝐿𝑈 Factorization Algorithm: Suppose 𝐴 can be reduced to an echelon form 𝑈 using only row replacements that add a multiple of one row to another row below it. In this case, there exist unit lower triangular elementary matrices 𝐸1, ... , 𝐸𝑝 such that 𝐸𝑝 · · · 𝐸1𝐴 = 𝑈. (8) Then 𝐴 = (𝐸𝑝 · · · 𝐸1)−1𝑈 = 𝐿𝑈, where 𝐿 = (𝐸𝑝 · · · 𝐸1)−1. (9) It can be shown that products and inverses of unit lower triangular matrices are also unit lower triangular. Thus 𝐿 is unit lower triangular. Note that the row operations in equation (8), which reduce 𝐴 to 𝑈, also reduce the 𝐿 in equation (9) to 𝐼, because 𝐸𝑝 · · · 𝐸1𝐿 = (𝐸𝑝 · · · 𝐸1)(𝐸𝑝 · · · 𝐸1)−1 = 𝐼. This observation is the key to constructing 𝐿.
5. ALGORITHM FOR AN LU FACTORIZATION: (a) Reduce 𝐴 to an echelon form 𝑈 by a sequence of row replacement operations, if possible. (b) Place entries in 𝐿 such that the same sequence of row operations reduces 𝐿 to 𝐼.
6. Explain the above procedure using an example.
7. A Matrix Factorization in Electrical Engineering
3.2.6 The Leontief Input Output Model:
1. 2. 3. 4. 5. 6. 7.
3.2.7 Applications to Computer Graphics:
1. Homogeneous Coordinates
2. Composite Transformations
3. Scale
4. Rotate
5. Translate
6. 3D Computer Graphics
7. Homogeneous 3D Coordinates
8. Perspective Projections
3.2.8 Subspaces of R𝑛:
1.
A subspace of R𝑛 is any set 𝐻 in R𝑛 that has three properties: (a) The zero vector is in 𝐻. (b) For each u and v in 𝐻, the sum u + v is in 𝐻. (c) For each u in 𝐻 and each scalar 𝑐, the vector 𝑐u is in 𝐻.
2. Column Space and Null Space of a Matrix
3. The column space of a matrix 𝐴 is the set Col 𝐴 of all linear combinations of the columns of 𝐴.
4. If 𝐴 = [a1 · · · a𝑛], with the columns in R𝑚, then Col 𝐴 is the same as Span{a1, ... , a𝑛}. The column space of an 𝑚 × 𝑛 matrix is a subspace of R𝑚. Note that Col 𝐴 equals R𝑚 only when the columns of 𝐴 span R𝑚. Otherwise, Col 𝐴 is only part of R𝑚.
5. The null space of a matrix 𝐴 is the set Nul 𝐴 of all solutions of the homogeneous equation 𝐴x = 0.
6. When 𝐴 has 𝑛 columns, the solutions of 𝐴x = 0 belong to R𝑛, and the null space of 𝐴 is a subset of R𝑛. In fact, Nul 𝐴 has the properties of a subspace of R𝑛.
7. Basis for a Subspace: A basis for a subspace 𝐻 of R𝑛 is a linearly independent set in 𝐻 that spans 𝐻.
8. The pivot columns of a matrix 𝐴 form a basis for the column space of 𝐴.
9.
3.2.9 Dimension and Rank:
1. Coordinate Systems:
2. Suppose the set B = {b1, ... , b𝑝} is a basis for a subspace 𝐻. For each x in 𝐻, the coordinates of x relative to the basis B are the weights 𝑐1, ... , 𝑐𝑝 such that x = 𝑐1b1 + · · · + 𝑐𝑝b𝑝, and the vector [x]B = (𝑐1, 𝑐2, ... , 𝑐𝑝)ᵀ in R𝑝 is called the coordinate vector of x (relative to B) or the B-coordinate vector of x.
3. The Dimension of a Subspace
4. The dimension of a nonzero subspace 𝐻, denoted by dim 𝐻, is the number of vectors in any basis for 𝐻. The dimension of the zero subspace {0} is defined to be zero.
5. The rank of a matrix 𝐴, denoted by rank 𝐴, is the dimension of the column space of 𝐴.
6. The Rank Theorem: If a matrix 𝐴 has 𝑛 columns, then rank 𝐴 + dim Nul 𝐴 = 𝑛.
7.
The Basis Theorem: Let 𝐻 be a 𝑝-dimensional subspace of R𝑛. Any linearly independent set of exactly 𝑝 elements in 𝐻 is automatically a basis for 𝐻. Also, any set of 𝑝 elements of 𝐻 that spans 𝐻 is automatically a basis for 𝐻.
8. Rank and the Invertible Matrix Theorem
9. The Invertible Matrix Theorem (continued): Let 𝐴 be an 𝑛 × 𝑛 matrix. Then the following statements are each equivalent to the statement that 𝐴 is an invertible matrix. (a) The columns of 𝐴 form a basis of R𝑛. (b) Col 𝐴 = R𝑛 (c) rank 𝐴 = 𝑛 (d) dim Nul 𝐴 = 0 (e) Nul 𝐴 = {0}
3.3 Determinant:
3.3.1 Introduction to Determinants:
1. Recall: A 2 × 2 matrix is invertible if and only if its determinant is nonzero.
2. Let's discover the definition for the 3 × 3 case by watching what happens when an invertible 3 × 3 matrix 𝐴 is row reduced. Consider 𝐴 = [𝑎𝑖𝑗]3×3 with 𝑎11 ≠ 0. If we multiply the second and third rows of 𝐴 by 𝑎11 and then subtract appropriate multiples of the first row from the other two rows, we find that 𝐴 is row equivalent to the following two matrices:
𝐴 ∼ [𝑎11 𝑎12 𝑎13; 𝑎21 𝑎22 𝑎23; 𝑎31 𝑎32 𝑎33] ∼ [𝑎11 𝑎12 𝑎13; 0 𝑎11𝑎22 − 𝑎12𝑎21 𝑎11𝑎23 − 𝑎13𝑎21; 0 𝑎11𝑎32 − 𝑎12𝑎31 𝑎11𝑎33 − 𝑎13𝑎31] (10)
Since 𝐴 is invertible, either the (2, 2)-entry or the (3, 2)-entry on the right in (10) is nonzero. Let us suppose that the (2, 2)-entry is nonzero. (Otherwise, we can make a row interchange before proceeding.) Multiply row 3 by 𝑎11𝑎22 − 𝑎12𝑎21, and then to the new row 3 add −(𝑎11𝑎32 − 𝑎12𝑎31) times row 2. This will show that
𝐴 ∼ [𝑎11 𝑎12 𝑎13; 0 𝑎11𝑎22 − 𝑎12𝑎21 𝑎11𝑎23 − 𝑎13𝑎21; 0 0 𝑎11Δ] (11)
where Δ = 𝑎11𝑎22𝑎33 + 𝑎12𝑎23𝑎31 + 𝑎13𝑎21𝑎32 − 𝑎11𝑎23𝑎32 − 𝑎12𝑎21𝑎33 − 𝑎13𝑎22𝑎31. (12)
Since 𝐴 is invertible, Δ must be nonzero; the converse is also true. We call Δ in (12) the determinant of the 3 × 3 matrix 𝐴.
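The six-term formula (12) can be checked against a library determinant; a quick sketch with random illustrative entries:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((3, 3))   # a random 3x3 matrix

# Delta as in (12), written with 0-based indices (a[0, 0] is a11, etc.).
delta = (a[0, 0]*a[1, 1]*a[2, 2] + a[0, 1]*a[1, 2]*a[2, 0]
         + a[0, 2]*a[1, 0]*a[2, 1]
         - a[0, 0]*a[1, 2]*a[2, 1] - a[0, 1]*a[1, 0]*a[2, 2]
         - a[0, 2]*a[1, 1]*a[2, 0])

assert np.isclose(delta, np.linalg.det(a))
print("formula (12) matches np.linalg.det")
```

Because a randomly drawn matrix is invertible with probability 1, `delta` here is also (almost surely) nonzero, consistent with the invertibility criterion above.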
To generalize the definition of the determinant to larger matrices, we'll use 2 × 2 determinants to rewrite the 3 × 3 determinant Δ described above. The terms in Δ can be grouped as Δ = (𝑎11𝑎22𝑎33 − 𝑎11𝑎23𝑎32) + (𝑎12𝑎23𝑎31 − 𝑎12𝑎21𝑎33) + (𝑎13𝑎21𝑎32 − 𝑎13𝑎22𝑎31), so
Δ = 𝑎11 det [𝑎22 𝑎23; 𝑎32 𝑎33] − 𝑎12 det [𝑎21 𝑎23; 𝑎31 𝑎33] + 𝑎13 det [𝑎21 𝑎22; 𝑎31 𝑎32]
Δ = 𝑎11 det 𝐴11 − 𝑎12 det 𝐴12 + 𝑎13 det 𝐴13
where 𝐴11, 𝐴12, and 𝐴13 are obtained from 𝐴 by deleting the first row and one of the three columns.
3. For any square matrix 𝐴, let 𝐴𝑖𝑗 denote the submatrix formed by deleting the 𝑖th row and 𝑗th column of 𝐴.
4. For 𝑛 ≥ 2, the determinant of an 𝑛 × 𝑛 matrix 𝐴 = [𝑎𝑖𝑗] is the sum of 𝑛 terms of the form ±𝑎1𝑗 det 𝐴1𝑗, with plus and minus signs alternating, where the entries 𝑎11, 𝑎12, ... , 𝑎1𝑛 are from the first row of 𝐴. In symbols, det 𝐴 = 𝑎11 det 𝐴11 − 𝑎12 det 𝐴12 + · · · + (−1)1+𝑛𝑎1𝑛 det 𝐴1𝑛 = ∑(𝑗=1 to 𝑛) (−1)1+𝑗𝑎1𝑗 det 𝐴1𝑗.
5. Definition of det 𝐴 in a slightly different form. Given 𝐴 = [𝑎𝑖𝑗], the (𝑖, 𝑗)-cofactor of 𝐴 is the number 𝐶𝑖𝑗 given by 𝐶𝑖𝑗 = (−1)𝑖+𝑗 det 𝐴𝑖𝑗. (13) Then det 𝐴 = 𝑎11𝐶11 + 𝑎12𝐶12 + · · · + 𝑎1𝑛𝐶1𝑛. This formula is called a cofactor expansion across the first row of 𝐴.
6. The determinant of an 𝑛 × 𝑛 matrix 𝐴 can be computed by a cofactor expansion across any row or down any column. The expansion across the 𝑖th row using the cofactors in (13) is det 𝐴 = 𝑎𝑖1𝐶𝑖1 + 𝑎𝑖2𝐶𝑖2 + · · · + 𝑎𝑖𝑛𝐶𝑖𝑛. The cofactor expansion down the 𝑗th column is det 𝐴 = 𝑎1𝑗𝐶1𝑗 + 𝑎2𝑗𝐶2𝑗 + · · · + 𝑎𝑛𝑗𝐶𝑛𝑗.
7. Use a cofactor expansion across the third row to compute det 𝐴, where 𝐴 = [1 5 0; 2 4 −1; 0 −2 0].
8. Use a cofactor expansion across the first column to compute det 𝐴, where 𝐴 = [3 −7 8 9 −6; 0 2 1 5 0; 0 0 8 9 −6; 0 0 2 4 −1; 0 0 0 −2 0].
9. If 𝐴 is a triangular matrix, then det 𝐴 is the product of the entries on the main diagonal of 𝐴.
10. 11.
3.3.2 Properties of Determinants:
1.
The secret of determinants lies in how they change when row operations are performed.
2. Row Operations: Let 𝐴 be a square matrix. (a) If a multiple of one row of 𝐴 is added to another row to produce a matrix 𝐵, then det 𝐵 = det 𝐴. (b) If two rows of 𝐴 are interchanged to produce 𝐵, then det 𝐵 = − det 𝐴. (c) If one row of 𝐴 is multiplied by 𝑘 to produce 𝐵, then det 𝐵 = 𝑘 det 𝐴.
3. Compute det 𝐴, where 𝐴 = [1 −4 2; −2 8 −9; −1 7 0]. The strategy is to reduce 𝐴 to echelon form and then to use the fact that the determinant of a triangular matrix is the product of the diagonal entries.
4. Suppose a square matrix 𝐴 has been reduced to an echelon form 𝑈 by row replacements and row interchanges. (This is always possible; see the row reduction algorithm.) If there are 𝑟 interchanges, then the theorem above shows that det 𝐴 = (−1)𝑟 det 𝑈.
5. A square matrix 𝐴 is invertible if and only if det 𝐴 ≠ 0.
6. A useful corollary is that det 𝐴 = 0 when the columns of 𝐴 are linearly dependent.
7. Also, det 𝐴 = 0 when the rows of 𝐴 are linearly dependent.
8. Column Operations: We can perform operations on the columns of a matrix in a way that is analogous to the row operations we have considered.
9. If 𝐴 is an 𝑛 × 𝑛 matrix, then det 𝐴𝑇 = det 𝐴.
10. Determinants and Matrix Products:
11. If 𝐴 and 𝐵 are 𝑛 × 𝑛 matrices, then det(𝐴𝐵) = (det 𝐴)(det 𝐵).
12. A Linearity Property of the Determinant Function:
13. For an 𝑛 × 𝑛 matrix 𝐴, we can consider det 𝐴 as a function of the 𝑛 column vectors in 𝐴. We will show that if all columns except one are held fixed, then det 𝐴 is a linear function of that one (vector) variable.
14. Suppose that the 𝑗th column of 𝐴 is allowed to vary, and write 𝐴 = [a1 · · · a𝑗−1 x a𝑗+1 · · · a𝑛]. Define a transformation 𝑇 from R𝑛 to R by 𝑇(x) = det[a1 · · · a𝑗−1 x a𝑗+1 · · · a𝑛]. Then 𝑇(𝑐x) = 𝑐𝑇(x) for all scalars 𝑐 and all x ∈ R𝑛, and 𝑇(u + v) = 𝑇(u) + 𝑇(v) for all u, v ∈ R𝑛.
15.
3.3.3 Cramer's Rule, Volume, and Linear Transformations:
1.
Cramer's Rule: For any 𝑛 × 𝑛 matrix 𝐴 and any b in R𝑛, let 𝐴𝑖(b) be the matrix obtained from 𝐴 by replacing column 𝑖 by the vector b: 𝐴𝑖(b) = [a1 · · · a𝑖−1 b a𝑖+1 · · · a𝑛].
2. Cramer's Rule: Let 𝐴 be an invertible 𝑛 × 𝑛 matrix. For any b in R𝑛, the unique solution x of 𝐴x = b has entries given by 𝑥𝑖 = det 𝐴𝑖(b) / det 𝐴, 𝑖 = 1, 2, ... , 𝑛.
3. Application to Engineering:
4. A Formula for 𝐴−1: Cramer's rule leads easily to a general formula for the inverse of an 𝑛 × 𝑛 matrix 𝐴. The 𝑗th column of 𝐴−1 is a vector x that satisfies 𝐴x = e𝑗, where e𝑗 is the 𝑗th column of the identity matrix, and the 𝑖th entry of x is the (𝑖, 𝑗)-entry of 𝐴−1. By Cramer's rule, (𝑖, 𝑗)-entry of 𝐴−1 = 𝑥𝑖 = det 𝐴𝑖(e𝑗) / det 𝐴. Recall that 𝐴𝑗𝑖 denotes the submatrix of 𝐴 formed by deleting row 𝑗 and column 𝑖. A cofactor expansion down column 𝑖 of 𝐴𝑖(e𝑗) shows that det 𝐴𝑖(e𝑗) = (−1)𝑖+𝑗 det 𝐴𝑗𝑖 = 𝐶𝑗𝑖, where 𝐶𝑗𝑖 is a cofactor of 𝐴. Thus, the (𝑖, 𝑗)-entry of 𝐴−1 is the cofactor 𝐶𝑗𝑖 divided by det 𝐴. Hence 𝐴−1 = (1/det 𝐴) [𝐶11 𝐶21 · · · 𝐶𝑛1; 𝐶12 𝐶22 · · · 𝐶𝑛2; ... ; 𝐶1𝑛 𝐶2𝑛 · · · 𝐶𝑛𝑛].
5. An Inverse Formula: Let 𝐴 be an invertible 𝑛 × 𝑛 matrix. Then 𝐴−1 = (1/det 𝐴) adj 𝐴.
6. Find the inverse of the matrix 𝐴 = 27 −2 14 4 3 −7 1 5 −7 −3.
7. Determinants as Area or Volume:
8. If 𝐴 is a 2 × 2 matrix, the area of the parallelogram determined by the columns of 𝐴 is |det 𝐴|. If 𝐴 is a 3 × 3 matrix, the volume of the parallelepiped determined by the columns of 𝐴 is |det 𝐴|.
9. It will suffice to show that any 2 × 2 matrix 𝐴 = [a1 a2] can be transformed into a diagonal matrix in a way that changes neither the area of the associated parallelogram nor |det 𝐴|. We know that the absolute value of the determinant is unchanged when two columns are interchanged or a multiple of one column is added to another. And it is easy to see that such operations suffice to transform 𝐴 into a diagonal matrix. Column interchanges do not change the parallelogram at all.
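The invariance claims above can be checked numerically: the area of the parallelogram is |det 𝐴|, and the column operations used in the argument leave that value unchanged. A small sketch with illustrative vectors:

```python
import numpy as np

# Parallelogram determined by the columns of a 2x2 matrix (illustrative vectors).
a1 = np.array([3.0, 1.0])
a2 = np.array([1.0, 2.0])
A = np.column_stack([a1, a2])
area = abs(np.linalg.det(A))
print(area)   # 5.0

# |det| (hence the area) is unchanged when a multiple of one column
# is added to another ...
B = np.column_stack([a1, a2 + 2.5 * a1])
assert np.isclose(abs(np.linalg.det(B)), area)

# ... and unchanged by a column interchange.
C = np.column_stack([a2, a1])
assert np.isclose(abs(np.linalg.det(C)), area)
```

The interchange flips the sign of the determinant (det 𝐶 = −det 𝐴 here), which is exactly why the absolute value is the geometric quantity.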
So it suffices to prove the following simple geometric observation that applies to vectors in R2 or R3: Let a1 and a2 be nonzero vectors. Then for any scalar 𝑐, the area of the parallelogram determined by a1 and a2 equals the area of the parallelogram determined by a1 and a2 + 𝑐a1. Discuss the proof.
10. Calculate the area of the parallelogram determined by the points (−2, −2), (0, 3), (4, −1), (6, 4).
11. Linear Transformations: Determinants can be used to describe an important geometric property of linear transformations in the plane and in R3. If 𝑇 is a linear transformation and 𝑆 is a set in the domain of 𝑇, let 𝑇(𝑆) denote the set of images of points in 𝑆. We are interested in how the area (or volume) of 𝑇(𝑆) compares with the area (or volume) of the original set 𝑆. For convenience, when 𝑆 is a region bounded by a parallelogram, we also refer to 𝑆 as a parallelogram.
12. Let 𝑇 : R2 → R2 be the linear transformation determined by a 2 × 2 matrix 𝐴. If 𝑆 is a parallelogram in R2, then {area of 𝑇(𝑆)} = |det 𝐴| · {area of 𝑆}. If 𝑇 is determined by a 3 × 3 matrix 𝐴, and if 𝑆 is a parallelepiped in R3, then {volume of 𝑇(𝑆)} = |det 𝐴| · {volume of 𝑆}.
13. Let 𝑎 and 𝑏 be positive numbers. Find the area of the region 𝐸 bounded by the ellipse whose equation is 𝑥1²/𝑎² + 𝑥2²/𝑏² = 1.
14.
3.4 Eigenvalues and Eigenvectors:
3.4.1 Eigenvectors and Eigenvalues:
1. Although a transformation x → 𝐴x may move vectors in a variety of directions, it often happens that there are special vectors on which the action of 𝐴 is quite simple.
2. An eigenvector of an 𝑛 × 𝑛 matrix 𝐴 is a nonzero vector x such that 𝐴x = 𝜆x for some scalar 𝜆. A scalar 𝜆 is called an eigenvalue of 𝐴 if there is a nontrivial solution x of 𝐴x = 𝜆x; such an x is called an eigenvector corresponding to 𝜆.
3. Let 𝐴 = [1 6; 5 2], u = [6; −5], v = [3; −2]. Are u and v eigenvectors of 𝐴?
4. Show that 7 is an eigenvalue of the matrix 𝐴 = [1 6; 5 2], and find the corresponding eigenvectors.
5.
𝜆 is an eigenvalue of an 𝑛 × 𝑛 matrix 𝐴 if and only if the equation (𝐴 − 𝜆𝐼)x = 0 (14) has a nontrivial solution.
6. The set of all solutions of (14) is just the null space of the matrix 𝐴 − 𝜆𝐼. So this set is a subspace of R𝑛 and is called the eigenspace of 𝐴 corresponding to 𝜆. The eigenspace consists of the zero vector and all the eigenvectors corresponding to 𝜆.
7. Let 𝐴 = [4 −1 6; 2 1 6; 2 −1 8]. An eigenvalue of 𝐴 is 2. Find a basis for the corresponding eigenspace.
8. The eigenvalues of a triangular matrix are the entries on its main diagonal.
9. What does it mean for a matrix 𝐴 to have an eigenvalue of 0? This happens if and only if the equation 𝐴x = 0x has a nontrivial solution. But 𝐴x = 0x is equivalent to 𝐴x = 0, which has a nontrivial solution if and only if 𝐴 is not invertible. Thus 0 is an eigenvalue of 𝐴 if and only if 𝐴 is not invertible.
10. If v1, ... , v𝑟 are eigenvectors that correspond to distinct eigenvalues 𝜆1, ... , 𝜆𝑟 of an 𝑛 × 𝑛 matrix 𝐴, then the set {v1, ... , v𝑟} is linearly independent.
11. Eigenvectors and Difference Equations:
12. This section concludes by showing how to construct solutions of the first-order difference equation discussed in the chapter introductory example: x𝑘+1 = 𝐴x𝑘 (𝑘 = 0, 1, 2, ...). (15) If 𝐴 is an 𝑛 × 𝑛 matrix, then (15) is a recursive description of a sequence {x𝑘} in R𝑛. A solution of (15) is an explicit description of {x𝑘} whose formula for each x𝑘 does not depend directly on 𝐴 or on the preceding terms in the sequence other than the initial term x0. The simplest way to build a solution of (15) is to take an eigenvector x0 and its corresponding eigenvalue 𝜆 and let x𝑘 = 𝜆𝑘x0 (𝑘 = 0, 1, 2, ...). (16) This sequence is a solution because 𝐴x𝑘 = 𝐴(𝜆𝑘x0) = 𝜆𝑘(𝐴x0) = 𝜆𝑘(𝜆x0) = 𝜆𝑘+1x0 = x𝑘+1. Linear combinations of solutions in the form of equation (16) are solutions, too.
13. 14.
3.4.2 The Characteristic Equation:
1.
Useful information about the eigenvalues of a square matrix 𝐴 is encoded in a special scalar equation called the characteristic equation of 𝐴.
2. Find the eigenvalues of 𝐴 = [2 3; 3 −6]. We must find all scalars 𝜆 such that the matrix equation (𝐴 − 𝜆𝐼)x = 0 has a nontrivial solution. By the Invertible Matrix Theorem, this problem is equivalent to finding all 𝜆 such that the matrix 𝐴 − 𝜆𝐼 is not invertible. Do the remaining calculations.
3. The same trick works in general.
4. Properties of Determinants: Let 𝐴 and 𝐵 be 𝑛 × 𝑛 matrices. (a) 𝐴 is invertible if and only