4CCM121A/5CCM121B Introduction to Algebra
Beth Romano
Lecture Notes 2023

Contents

1 Introduction
2 Background
  2.1 Sets
  2.2 Functions
3 Properties of the Integers
  3.1 The well-ordering principle and mathematical induction
  3.2 Divisibility and the division algorithm
  3.3 Greatest common divisors
  3.4 Relatively prime integers
  3.5 The Euclidean algorithm
  3.6 Prime factorization
4 Binary operations
  4.1 Definition and properties
  4.2 Symmetries of polygons
  4.3 Permutations
  4.4 Arithmetic modulo n
5 Groups: Examples and first properties
  5.1 Definition and examples
  5.2 The group Z_n^×
  5.3 Product groups
  5.4 First properties
  5.5 Powers of group elements
6 Subgroups and homomorphisms
  6.1 Subgroups
  6.2 Cyclic groups
  6.3 Homomorphisms
  6.4 Isomorphisms
7 Binary relations and cosets
  7.1 Binary relations
  7.2 Cosets and Lagrange’s theorem
8 Rings
  8.1 Definition and basic properties
  8.2 Subrings
  8.3 Units
  8.4 Ring homomorphisms
  8.5 Types of rings
  8.6 The Chinese remainder theorem

1 Introduction

The following lecture notes are for the Introduction to Algebra class and will be updated every week throughout the term, following what we do in lectures. They are based on the lecture notes from 2020 – 2021, written by Payman Kassaei, and many of the proofs will be the same. These previous notes are available on the KEATS page for the course. Please consult them for additional context and for more details. Note that reading these notes is not a good substitute for attending lectures! In particular, we will go through examples in lectures that may not appear in the notes.

Other references include:
Elementary Number Theory by David Burton (Chapters 1 – 4)
A Course in Group Theory by John F. Humphreys (the first 5 chapters)
A First Course in Abstract Algebra by John B. Fraleigh (Chapters 0 – 4)
Abstract Algebra by Dummit and Foote (Chapters 0 – 3 and 7)

There are also some nice notes online from similar courses at other universities. One example is Julia Goedecke’s notes from the group theory course at Cambridge: http://www.julia-goedecke.de/pdf/GroupsNotes.pdf

2 Background

2.1 Sets

A set is a collection of elements (which are also called members of the set). Here are examples of familiar sets, with notation we’ll use for them:
Z = {..., −3, −2, −1, 0, 1, 2, 3, ...} is the set of integers
N = {1, 2, 3, ...} is the set of natural numbers.
Q = { ab | a ∈ Z, b ∈ N} is the set of rational numbers R is the set of real numbers C = {a + bi | a, b ∈ R} is the set of complex numbers. Here i is a square root of −1. ∅ is the empty set, which is the set with no elements Note that some texts consider 0 to be a natural number. We will not. If a set A has finitely many elements, we say it is a finite set. If A is a set, we have the following notation: x ∈ A means x is an element of A 3 x∈ / A means x is not an element of A. In addition, if A is a finite set, we sometimes write |A| for the number of elements of A. If A and B are sets, then we say B is a subset of A (denoted B ⊂ A or B ⊆ A) if x ∈ B implies x ∈ A. This means that every element of B is also an element of A. B is a proper subset of A (denoted B ⊊ A) if B ⊂ A but B ̸= A. Using sets A and B, we can also define the following sets: The intersection of A and B, denoted A ∩ B, is the set of elements that belong to both A and B. The union of A and B, denoted A ∪ B, is the set of elements that belong to either A or B. The complement of B in A, denoted A \ B, is the set of elements in A that do not belong to B. In other words, A \ B = {x ∈ A | x ∈ / B}. We may also define the product A × B as follows: Definition 2.1. Let A and B be sets. The product A × B is defined by A × B = {(a, b) | a ∈ A, b ∈ B}. Thus any element of A × B is an ordered pair (a, b) where a is an element of A and b is an element of B. For example, if A = B = R, then A × B = R × R is the plane R2. In general, we sometimes write A × A as A2. We write An for the product A × A ×... A of n copies of A. Example 2.2. Suppose A = {−1, 1} and B = {0, 1, 2} Then A × B = {(−1, 0), (−1, 1), (−1, 2), (1, 0), (1, 1), (1, 2)}. The following lemma, which you can use without proof, will be useful later. Lemma 2.3. Let A and B be finite sets. Then |A × B| = |A||B|. You will often have to prove that two given sets A and B are equal. One standard way to do this is to prove A ⊂ B and then prove B ⊂ A. This will imply that A = B. 4 2.2 Functions Let A and B be sets. A function f from A to B is a rule that assigns to every element a ∈ A an element (denoted f (a)) in B. We write f : A → B to say that f is a function from A to B. We also say that f is a map from A to B. Given a ∈ A, the element f (a) is called the image of a in B. If A, B, C, D are sets and f : A → B, g : C → D are functions, we say that f and g are equal (written f = g) if the following conditions hold: A = C, B = D, and f (x) = g(x) for all x ∈ A. Definition 2.4. Let f : A → B be a function. The image of f , denoted f (A) or im f , is the set {f (x) | x ∈ A}. If f : A → B is a function and S ⊂ A, we also sometimes write f (S) for the set {f (x) | x ∈ S}, called the image of S in B. Definition 2.5. A function f : A → B is called injective or one-to-one if for every x, y ∈ A, f (x) = f (y) implies x = y. For example, f : Z → Z given by f (x) = 2x is injective, but f : Z → Z given by f (x) = x2 is not. To prove a function f is injective, you should normally use the definition: assume that f (x) = f (y), and then prove x = y. There is an example of this in the proof of Proposition 2.10 below. Definition 2.6. A function f : A → B is called surjective or onto if for every y ∈ B, there exists x ∈ A such that f (x) = y. Note that f is surjective if and only if f (A) = B, i.e. the image of A is all of B. To prove a function f is surjective, you should also normally use the definition: take an arbitrary element y ∈ B and find and element x ∈ A such that f (x) = y. 
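When A and B are small finite sets, both conditions can also be checked directly by exhausting all cases. The following is a minimal illustrative sketch (not part of the notes; the function names is_injective and is_surjective are our own), with a finite map stored as a Python dict:

```python
# Sketch (not from the notes): checking injectivity/surjectivity of a map
# between small finite sets, with the function stored as a dict a -> f(a).

def is_injective(f):
    # f(x) = f(y) forces x = y exactly when no value is hit twice
    values = list(f.values())
    return len(values) == len(set(values))

def is_surjective(f, B):
    # every y in B must be f(x) for some x in the domain
    return set(f.values()) == set(B)

doubling = {x: 2 * x for x in range(-3, 4)}     # x -> 2x on a finite piece of Z
print(is_injective(doubling))                   # True
print(is_surjective(doubling, range(-6, 7)))    # False: the odd numbers are missed
```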
Here is an example of this kind of argument. Example 2.7. Claim: The function f : R → R given by f (x) = 2x + 1 is surjective. Proof of claim: Given y ∈ R, set x = 21 (y − 1). Then 1 f (x) = 2( (y − 1)) + 1 = y. 2 Thus f is surjective. Definition 2.8. A function f : A → B is called bijective if it is both injective and surjective. Now we consider the composition of functions. Let A, B, and C be sets. Definition 2.9. Let f : A → B and g : B → C be functions. The composite function g ◦ f : A → C is defined by (g ◦ f )(x) = g(f (x)) for all x ∈ A. 5 The following proposition tells us that the composition of injective functions is injective, and the composition of surjective functions is surjective. Proposition 2.10. Let f : A → B and g : B → C be functions. 1. If f and g are both injective, then g ◦ f is injective. 2. If f and g are both surjective, then g ◦ f is surjective. 3. If f and g are both bijective, then g ◦ f is bijective. Proof. 1. Assume f and g are both injective. Suppose (g ◦ f )(x) = (g ◦ f )(y) for some x, y ∈ A. By the definition this means that g(f (x)) = g(f (y)). Since g is injective, this tells us that f (x) = f (y). Since f is injective, this tells us that x = y. We have proved that (g ◦ f )(x) = (g ◦ f )(y) implies x = y. Thus g ◦ f is injective. 2. Exercise (see Sheet 1). 3. Assume f and g are both bijective. By definition this tells us that f and g are both injective, so by part 1 of the proposition, g ◦ f is injective. By definition of bijective, we also know that f and g are both surjective. By part 2 of the proposition, g ◦ f is surjective. Since g ◦ f is both injective and surjective, it is bijective. 3 Properties of the Integers Recall that the operations of addition and multiplication on Z satisfy the following properties: commutativity: a + b = b + a and ab = ba for all a, b ∈ Z associativity: a + (b + c) = (a + b) + c for all a, b, c ∈ Z the distributive property: a(b + c) = ab + ac for all a, b, c ∈ Z. We will use these properties without further comment in the following section. Note that we will discuss properties like this in more detail later in the term. 3.1 The well-ordering principle and mathematical induction Recall that a subset S ⊂ Z is bounded below if there exists n ∈ Z such that n ≤ s for all s ∈ S. Such an element n is called a lower bound for S. An element s ∈ S is called a least element if it is a lower bound for S. Note the difference between a lower bound and a least element: a lower bound can be any element of Z (it doesn’t necessarily have to be in S), but a least element must be in S. 6 Example 3.1. Let S = {−1, 2, 3}. Then any negative integer n is a lower bound for S. The least element of S is −1. Let S = {2n | n ∈ Z}. Then S is not bounded below. We define the concepts of being bounded above and of a greatest element similarly. The integers have the following property: The well-ordering principle: Every non-empty subset of Z that is bounded below has a least element. Note that the well-ordering principle does not hold if we replace Z by R: for example, the set {x ∈ R | x > 0} is bounded below but does not have a least element. Exercise 3.2. Use the well-ordering principle to show that every non-empty subset of Z that is bounded above has a greatest element. (Hint: given such a set S, consider {−s | s ∈ S}.) Using the well-ordering principle, we can prove the validity of the principle of mathematical induction. Theorem 3.3 (Principle of mathematical induction). Suppose that for every n ∈ N we have an assertion P (n). Assume 1. 
P (1) is true, and 2. for every integer n > 1, we have that P (1), P (2),... , P (n − 1) together imply P (n). Then P (n) is true for all n ∈ N. Proof. Let S be the set of integers n such that P (n) is false. We want to show that S = ∅. We know S is bounded below since it is a subset of N, which is bounded below as a subset of Z. Suppose for contradiction that S is non-empty. Then by the well-ordering principle it has a least element m ∈ S. Now, by assumption we know that P (1) is true. In particular 1 ∈ / S. Since m is the least element of S, we must have 1, 2,... , m − 1 ∈ / S. This tells us that P (1),... , P (m − 1) are all true. By assumption, this implies that P (m) is true. But then m ∈ / S, a contradiction. In practice, when proving claims by induction, we often prove the stronger statement that P (n−1) implies P (n), rather than assumption 2 in the theorem. 3.2 Divisibility and the division algorithm Definition 3.4. Let n, m ∈ Z. We say n is divisible by m (written m|n) if there exists an integer k ∈ Z such that n = mk. If n is divisible by m, we also say that m divides n, m is a divisor of n, and that n is a multiple of m. Remark 3.5. Note that in the definition we do not require n or m to be positive. Thus, for example, −2 divides 6, since 6 = (−2)(−3). 7 Remark 3.6. Also note that based on the definition, every integer n is a divisor of 0: this is because 0 = n(0). If n is not divisible by m, we write m ∤ n. Proposition 3.7. Let a, b, c ∈ Z and n ∈ N. Then the following hold: 1. If a|b and b|c, then a|c. 2. If a|n, then a ≤ n. 3. If c|a and c|b, then c|(ax + by) for any x, y ∈ Z. Proof. 1. Exercise (see Sheet 2). 2. The proof is omitted. 3. Assume c|a and c|b. This means that there exists k ∈ Z such that a = ck and m ∈ Z such that b = cm. Let x, y ∈ Z. Then ax + by = (ck)x + (cm)y = c(kx) + c(my) = c(kx + my). Since kx + my ∈ Z, this tells us c|(ax + by). Theorem 3.8 (The division algorithm). Let m ∈ Z and n ∈ N. Then there exist q, r ∈ Z such that: 1. m = nq + r and 2. 0 ≤ r < n Moreover, (q, r) is the unique pair of integers such that these two properties hold. Proof/Algorithm. Fix n ∈ N. For m ∈ Z, let P (m) be the “existence” statement, so P (m) is the statement that there exists integers q, r ∈ Z such that statements 1 and 2 hold. We will prove P (m) for m ≥ 0 by induction on m. First note that P (m) holds for all 0 ≤ m < n. Indeed, in this case we can take q = 0 and r = m. Now suppose m ≥ n and that P (0),... P (m − 1) are true. Let m′ = m − n. Then m′ < m so P (m′ ) is true by assumption. Thus there exist q ′ , r′ ∈ Z such that m′ = nq ′ + r′ and 0 ≤ r′ < n. We have m = m′ + n, so m = q′ n + r′ + n = (q ′ + 1)n + r′. Letting q = q ′ + 1 and r = r′ proves P (m). By induction, P (m) holds for all m ≥ 0. Now suppose m < 0. Since −m > 0 we know P (−m) holds, i.e. there exists q ′ , r′ ∈ Z such that −m = nq ′ + r′ 8 and 0 ≤ r′ < n. If r′ = 0, then m = n(−q ′ ) and so P (m) holds. If r′ ̸= 0, then 0 < n − r′ < n. We have m = n(−q ′ ) − r′ = n(−q ′ ) − r′ + n − n = n(−q ′ − 1) + (n − r′ ). Taking q = −q ′ − 1, r = n − r′ proves P (m). Thus we have proven P (m) for all m ∈ Z. Next we must prove the uniqueness statement. Suppose (q1 , r1 ) and (q2 , r2 ) are two pairs statisfying properties 1 and 2 above. So we have m = nq1 + r1 and m = nq2 + r2 (3.1) with 0 ≤ r1 < n and 0 ≤ r2 < n. We must prove that q1 = q2 and r1 = r2. First we prove r1 = r2. Assume for contradiction that r1 ̸= r2. Without loss of generality we can assume r2 > r1 , so r2 − r1 ∈ N. 
Using (3.1) we have nq1 + r1 = nq2 + r2 n(q1 − q2 ) = r2 − r1. We see that n|(r2 − r1 ). By Proposition 3.7 part 2, we have r2 − r1 > n, a contradiction. Thus r1 = r2. Using this, (3.1) tells us nq1 = nq2 , which implies q1 = q2. 3.3 Greatest common divisors Definition 3.9. Let a, b be integers that are not both zero. An integer c ∈ Z is called a common divisor of a and b if c|a and c|b. Definition 3.10. Let a, b be integers that are not both zero. The greatest common divisor of a and b, denoted gcd(a, b) is the greatest integer g such that g|a and g|b. Equivalently, g = gcd(a, b) if the following three properties hold: 1. g|a, 2. g|b, and 3. if d ∈ Z with d|a and d|b, then d ≤ g. The gcd has the following nice properties: Theorem 3.11. Let a, b be integers that are not both zero, and let g = gcd(a, b). Then 1. g is the smallest positive integer of the form ax + by with x, y ∈ Z. 2. if d|a and d|b, then d|g. Proof. Let a, b, g ∈ Z be as in the statement of the theorem. Let T = {ax + by | x, y ∈ Z} ∩ N. 9 So T is the set of all positive integers of the form ax + by for some x, y ∈ Z. Note that T is nonempty. For example, |a| + |b| ∈ T. And T is bounded below by 0. By the well-ordering principle T contains a least element, call it m. We will show that m satisfies the three properties in the definiton of gcd, i.e. m|a, m|b, and if d|a and d|b, then d ≤ m. First we show that m|a. By the division algorithm, we know that a = mq +r for some 0 ≤ r < m. We need to show that r = 0. Since m ∈ T , there exist x, y ∈ Z such that m = ax + by. So we have r = a − mq = a − (ax + by)q = a(1 − xq) + b(−yq). If r ̸= 0, then this shows that r ∈ T. But r < m. Since m is the least element of S, this is a contradiction. Thus r = 0 and m|a. The proof that m|b is equivalent. Now suppose d ∈ Z with d|a and d|b. As above, we know m = ax + by for some x, y ∈ Z, so Proposition 3.7 part 3 tells us that d|m. Since m ∈ N, Proposition 3.7 tells us d ≤ m. Thus m = gcd(a, b), and we have proven statement 1. Statement 2 follows from Proposition 3.7 part 3. Now let’s fix a, b ∈ Z not both zero, and let’s look more closely at the set S := {ax + by | x, y ∈ Z}. An integer n ∈ S if and only if there exist x, y ∈ Z such that n = ax + by. Graphically, if we consider the line ax + by = n drawn in the plane R2 , we see that n ∈ S if and only if there exists a point (x, y) on this line with x, y ∈ Z. As an example, let a = b = 2 and let n = 1. Then the line ax + by = 1 is given by y = −x + 21 , and there are no points (x, y) ∈ Z2 on this line. This is the beginning of the study of Diophantine equations, which can become much more complicated than what we’re considering now.1 Using Theorem 3.11, we can gain a complete description of S. The first part of the theorem tells us gcd(a, b) is the least element of S. But we can say more: Theorem 3.12. Let a, b be integers that are not both zero, and let d = gcd(a, b). Let S = {ax + by | x, y ∈ Z}. Then S = {dn | n ∈ Z}. The theorem says that S is given by the set of all multiples of gcd(a, b). Proof. We will prove the two sets are equal by a double-inclusion argument. By Theorem 3.11, we know d ∈ S, so there exist x, y ∈ Z such that d = ax + by. Let n ∈ N. Then dn = (ax + by)n = a(xn) + b(yn), 1 Consider, for example, Fermat’s Last Theorem, proven by Andrew Wiles. 10 so dn ∈ S. We have shown that {dn | n ∈ Z} ⊂ S. Now suppose m ∈ S, so m = ax + by for some x, y ∈ Z. Then d|a and d|b, so by Proposition 3.7 part 3, we have that d|m. Thus S ⊂ {dn | n ∈ Z}, and the theorem is proven. Remark 3.13. 
Given d ∈ N, the set {dn | n ∈ Z} is often denoted dZ. 3.4 Relatively prime integers The following special case is particularly important. Definition 3.14. Let a, b ∈ Z not both zero. We say that a and b are relatively prime if gcd(a, b) = 1. We also sometimes say coprime instead of relatively prime. The following is a corollary of Theorem 3.11. Corollary 3.15. Let a, b ∈ Z not both zero. Then a and b are relatively prime if and only if there exist x, y ∈ Z such that ax + by = 1. Proof. Assume a and b are relatively prime. Then by definition gcd(a, b) = 1, so by Theorem 3.11 there exist x, y ∈ Z such that ax + by = 1. Now assume there exist integers x, y ∈ Z such that ax + by = 1. Then 1 is the smallest positive integer of the form ax + by (since no positive integer is smaller than 1), so by Theorem 3.11 we have 1 = gcd(a, b). Proposition 3.16. Let a, b, c ∈ Z. Assume a and b are relatively prime. 1. If a|bc, then a|c. 2. If a|c and b|c, then ab|c. Proof. 1. By Corollary 3.15, there exist x, y ∈ Z such that ax + by = 1. Multiplying both sides of this equation by c, we have acx + bcy = c. By definition a|ac. By assumption a|bc. By Proposition 3.7 part 3, we have a|c. 2. Exercise (see Sheet 2). 3.5 The Euclidean algorithm Given a, b ∈ Z (with b ̸= 0), how do we find gcd(a, b)? We can use what’s called the Euclidean algorithm, which is quicker than finding all the divisors of a and b. To use it, you repeatedly apply the division algorithm. Here’s how it works: 1. Start with a, b ∈ Z, and assume b > 0 (we can change the sign of b without changing gcd(a, b)). 11 2. Given the pair (a, b), use the division algorithm to find q1 , r1 ∈ Z such that a = q1 b + r1 and 0 ≤ r1 < b. 3. If r1 = 0, then b = gcd(a, b), and we’re done. Otherwise repeat the process with the pair (b, r1 ): use the division algorithm to find q2 , r2 ∈ Z with b = q2 r1 + r2 0 ≤ r2 < r1. 4. If r2 = 0, then r1 = gcd(a, b), and we’re done. Otherwise repeat the process with the pair (r1 , r2 ): use the division algorithm to find q3 , r3 ∈ Z with r1 = q3 r2 + r3 0 ≤ r3 < r2. 5. At every subsequent step, we use the division algorithm for the pair (rj+1 , rj ) to find qj+1 , rj+1 ∈ Z such that rj−1 = qj+1 rj + rj+1 0 ≤ rj+1 < rj. When rj+1 = 0, we have gcd(a, b) = rj. We will explain why the algorithm works soon, but first let’s do an example. Example 3.17. Let’s find gcd(108, 20) using the Euclidean algorithm. We have 108 = 5(20) + 8 20 = 2(8) + 4 8 = 2(4) + 0, so gcd(108, 20) = 4. Note that following the notation above, we have a = 108, b = 20, r1 = 8, r2 = 4. To see that the algorithm works, we need to show that gcd(a, b) = rj as claimed, and we need to show that the algorithm terminates. The algorithm terminates since b > r1 > r2 >... , so eventually there will be a pair (rj−1 , rj ) such that rj+1 = 0. To show gcd(a, b) = rj , we use the following lemma. Lemma 3.18. If a, b, q, r ∈ Z with b ̸= 0 and a = bq + r, then gcd(a, b) = gcd(b, r). Proof. Let d = gcd(a, b). To show that d = gcd(b, r), we first need to show that d|b and d|r. The fact that d|b follows from the definition of gcd(a, b). The equation a = bq + r tells us that r = a + b(−q). Since d|a and d|b, Proposition 3.7 part 3 tells us that d|r. Next suppose g|b and g|r. We must show that g ≤ d. Since a = bq + r, Proposition 3.7 part 3 tells us that g|a. Thus g is a common divisor of a and b, so by the definition of gcd(a, b), we must have g ≤ d. We have shown that d = gcd(b, r). 
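For reference, the repeated-division procedure described above translates directly into a few lines of code. This is only an illustrative sketch (not part of the notes; the function names are ours). The extended version returns, in addition, a pair x, y with ax + by = gcd(a, b), whose existence is guaranteed by Theorem 3.11.

```python
# Sketch (not from the notes): the Euclidean algorithm via repeated division.
# Assumes b > 0, as in step 1 of the algorithm above.

def gcd(a, b):
    while b != 0:
        a, b = b, a % b   # replace (a, b) by (b, r), where a = qb + r and 0 <= r < b
    return a

# One standard way to also produce x, y with a*x + b*y = gcd(a, b),
# the integers whose existence is Theorem 3.11.
def extended_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # if b*x + r*y = g with r = a - (a // b)*b, then a*y + b*(x - (a // b)*y) = g
    return g, y, x - (a // b) * y

print(gcd(108, 20))           # 4, as in Example 3.17
print(extended_gcd(108, 20))  # (4, -2, 11): one valid pair, since 108*(-2) + 20*11 = 4
```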
12 The lemma tells us that in the algorithm we have gcd(a, b) = gcd(b, r1 ) = gcd(r1 , r2 ) = · · · = gcd(rj−1 , rj ). In the last step we have rj |rj−1 , which tells us that rj = gcd(rj−1 , rj ) = gcd(a, b) as desired. Given a, b as above, we know from Theorem 3.11 that gcd(a, b) can be written in the form ax + by for some integers x, y. We can use the Euclidean algorithm backwards to find such integers x, y. Here’s how: the Euclidean algorithm gives us a list of equations a = bq1 + r1 b = r1 q2 + r2... rj−2 = rj−1 qj + rj rj−1 = rj qj+1 with rj = gcd(a, b). Using these, we have rj = rj−2 − rj−1 qj rj−1 = rj−3 − rj−2 qj−1... r2 = b − r1 q2 r1 = a − bq1. Start with the first equation, and use the second to plug in for rj−1 : gcd(a, b) = rj = rj−2 − (rj−3 − rj−2 qj−1 )qj. Simplifying, the righthand side is now of the form rj−2 x + rj−3 y for x, y ∈ Z. Then use the next equation to plug in for rj−2 , and then use the next equation to plug in for rj−3. Continuing in this way, we eventually have an equation of the form ax + by. Example 3.19. We saw in the previous example that gcd(108, 20) = 4. Let’s find integers x, y such that 108x + 20y = 4. We previous had 108 = 5(20) + 8 20 = 2(8) + 4 8 = 2(4) + 0. Using the second equation, we have 4 = 20 − 2(8), (3.2) 8 = 108 − 5(20). (3.3) and using the first equation, we have Using (3.3) to plug into (3.2), we have 4 = 20 − 2(108 − 5(20)) = −2(108) + 6(20), So taking x = −2 and y = 6, we have 4 = 108x + 20y. 13 Example 3.20. Let’s use the Euclidean algorithm to find gcd(114, 42). We have 114 = 2(42) + 30 42 = 1(30) + 12 30 = 2(12) + 6 12 = 2(6), so gcd(114, 42) = 6. Next let’s find x, y ∈ Z such that 6 = 114x + 42y. We have 6 = 30 − 2(12) 12 = 42 − 30 30 = 114 − 2(42). Plugging in, we have 6 = 30 − 2(42 − 30) = −2(42) + 3(30) = −2(42) + 3(114 − 2(42)) = 3(114) − 8(42) so we can take x = 3 and y = −8. 3.6 Prime factorization Definition 3.21. Let n ∈ Z with n > 1. We say n is prime if its only positive divisors are 1 and n. If n is not prime, it is called composite. The proof of the following lemma is an exercise (see Sheet 3). Lemma 3.22. Let a ∈ Z, and let p ∈ Z be a prime. 1. If p ∤ a, then gcd(a, p) = 1. 2. If a is prime and p|a, then p = a. Proposition 3.23. Let a, b ∈ Z, and let p ∈ Z be a prime. If p|ab, then p|a or p|b. Proof. Assume p|ab and p ∤ a. Then by part 1 of Lemma 3.22 we have gcd(a, p) = 1, so by Proposition 3.16, we have p|b. The following corollary follows from the proposition by induction on k. (The proof is an exercise, see Sheet 3.) Corollary 3.24. Let a1 , a2 ,... , ak ∈ Z, and let p ∈ Z be prime. If p|a1 a2... ak , then p|ai for some i ∈ {1,... , k}. Theorem 3.25 (The fundamental theorem of arithmetic). Let n ∈ Z with n > 1. Then there is a positive integer k and a list p1 ,... , pk of prime numbers such that n = p1... pk. This factorization is unique, up to changing the order of the prime factors p1 ,... , pk. 14 An expression for n in the form p1... pk (where p1 ,... , pk are primes) is called a prime factorization for n. Note that we can have duplicates: it’s possible for pi = pj for some i, j. The theorem is saying, first, that every integer greater than 1 has a prime factorization. The uniqueness statement is saying that if n = p1... pk and n = q1... qr are two prime factorizations for n, then we have k = r, and we can reorder the list q1 ,... qr of primes to get p1 = q1 , p2 = q2 ,... pk = qk. Proof. First note that the theorem is true if n is a prime, since we can take k = 1 and p1 = n. 
We will now use induction on n to prove that n has a prime factorization. Our base case is n = 2. Since 2 is prime, it has a prime factorization as just mentioned. Now assume n > 2 and that each integer 2, 3,... , n − 1 has a prime factorization. If n is prime, then we already saw that it has a prime factorization. Otherwise n has a divisor m with m ̸= 1, n. We have n = mℓ for some ℓ (and note that we also have ℓ ̸= 1, n since m ̸= 1, n). Since m and ℓ are integers with 1 < m < n and 1 < ℓ < n, both m and ℓ have prime factorizations by assumption. So there exist primes p1 ,... , pt and q1 ,... qr with m = p1... pt and ℓ = q1... qr. We have n = mℓ = p1... pt q1... qr , which is a prime factorization of n. This proves the existence statement in the theorem. Now we prove the uniqueness statement. Again, we first assume n is prime. Then n = p1 is one prime factorization of n. Suppose n = q1... qr is another. Then q1 |n. Since q1 is prime, we have q1 = n by Lemma 3.22. But then if r > 1, we have q2... qr = 1, contradicting the fact that all qi are greater than 1. Thus r = 1 and n = p1 = q1 is the unique prime factorization of n. We now again use induction on n. Assume the prime factorizations of 2,... , n − 1 are unique. Suppose n = p1... pk and n = q1... qr are prime factorizations of n. Since p1 |n, by Corollary 3.24, we have p1 |qi for some i. Since qi is prime, this implies p1 = qi by Lemma 3.22. Reordering q1 ,... , qr , we may assume i = 1. Thus p2... pk = q2... qr are prime factorizations of pn1. Since 1 < pn1 < n, the prime factorization of pn1 is unique by our induction hypothesis. Thus k = r, and up to reordering we have pi = qi for all i. Corollary 3.26. Let a, b ∈ Z, not both zero. Then a and b are relatively prime if and only if they have no common prime divisors. 15 Proof. Suppose a and b have no common prime divisors. Let g = gcd(a, b). If g > 1, then by the fundamental theorem of arithmetic, g has a prime divisor p. But then p|g, so p|a and p|b by Proposition 3.7. This contradicts our assumption, so we must have g = 1, i.e. m and n are relatively prime. Now suppose m and n are relatively prime, i.e. gcd(a, b) = 1. By definition of gcd, any common divisor of a and b is at most 1. Since every prime is greater than 1, a and b have no common prime divisors. 4 Binary operations 4.1 Definition and properties Definition 4.1. A binary operation ∗ on a set S is a function S × S → S. Under this function, the image of a pair (a, b) is denoted a ∗ b. Example 4.2. Here are some common binary operations on Z: Addition + is a binary operation on Z (also on N, Q, R, and C). To relate this to the definition above, the corresponding function Z × Z → Z is the function (a, b) 7→ a + b. Multiplication · is a binary operation on Z (also on N, Q, R, and C. Note that we normally omit the symbol · and just write ab for a · b. Subtraction − is a binary operation on Z (but note that it does not give a binary operation on N). Example 4.3. Here are two binary operations on the set M2 (R) of 2 × 2 matrices over R Matrix addition + is a binary operation on M2 (R). As a reminder, matrix addition is defined on 2 × 2 matrices as follows:       a+x b+y x y a b = + c+z d+w z w c d Matrix multiplication is a binary operation on M2 (R). Note that for two matrices A, B ∈ M2 (R), we write their product as AB. Matrix multiplication is defined on 2 × 2 matrices as follows:      a b x y ax + bz ay + bw = c d z w cx + dz cy + dw Example 4.4. 
Another important example is defined as follows: let A be a set, and let S be the set of all functions f : A → A. Then composition of functions ◦ is a binary operation on S: this is because given two functions f : A → A and g : A → A, the composition f ◦ g is also a function A → A. There are some variations on Example 4.4 that will come up throughout the course: one is where we take S to be the set of all permutations of {1,... , n} and one is where we take S to be all symmetries of a regular n-gon. In the rest of Section 4, we’ll look at some examples in detail (including the ones just mentioned), but for now, we’ll define some properties that a binary operation might have. 16 Definition 4.5. Let ∗ be a binary operation on a set S. 1. We say ∗ is commutative if a ∗ b = b ∗ a for all a, b ∈ S. 2. We say ∗ is associative if a ∗ (b ∗ c) = (a ∗ b) ∗ c for all a, b, c ∈ S. Example 4.6. Let’s consider the examples above. As we already mentioned, addition on Z is commutative since a+b=b+a for all a, b ∈ Z. Addition is associative since a + (b + c) = (a + b) + c for all a, b, c ∈ Z. Similarly, multiplication on Z is commutative and associative. Subtraction − on Z is not commutative: for example 0 − 1 ̸= 1 − 0. Subtraction on Z is also not associative. For example, 0 − (1 − 1) ̸= (0 − 1) − 1. Example 4.7. Matrix addition on M2 (R) is commutative:           a b x y a+x b+y x y a b + = = + c d z w c+z d+w z w c d for all a, b, c, d, x, y, z, w ∈ R. And matrix addition is also associative, which you can easily show as an exercise. Matrix multiplication on M2 (R) is not commutative. For example,      0 1 0 1 1 0 , = 0 0 0 0 0 0 but      0 1 1 0 0 0 =. 0 0 0 0 0 0 But matrix multiplication is associative: we have A(BC) = (AB)C for all A, B, C ∈ M2 (R). You can prove this by hand, though you might find the proof a bit tedious. You will probably also prove it in a linear algebra class. Example 4.8. Now let’s consider composition of functions on Z. Let S be the set of all functions f : Z → Z. As mentioned above, composition of functions ◦ gives a binary operation on S. This operation is not commutative. For example, let f : Z → Z be defined by f (n) = 1 for all n ∈ Z, and let g : Z → Z be defined by g(n) = n + 1 for all n ∈ Z. 17 Then we have (f ◦ g)(n) = f (g(n)) = f (n + 1) = 1 for all n ∈ Z, and (g ◦ f )(n) = g(f (n)) = g(1) = 2 for all n ∈ Z. Since f ◦ g ̸= g ◦ f , we see that ◦ is not commutative on S. But ◦ associative on S, which we will prove in the following lemma. Lemma 4.9. Let A be a set. Let f, g, and h be functions A → A. Then f ◦ (g ◦ h) = (f ◦ g) ◦ h. Proof. To show that f ◦ (g ◦ h) = (f ◦ g) ◦ h, we need to show that (f ◦ (g ◦ h))(a) = ((f ◦ g) ◦ h)(a) for all a ∈ A. So let a ∈ A. Then (f ◦ (g ◦ h))(a) = f ((g ◦ h)(a)) = f (g(h(a)) = (f ◦ g)(h(a)) = ((f ◦ g) ◦ h)(a), which is what we wanted to prove. 4.2 Symmetries of polygons We now define a particular example of a set with a binary operation. Let Dn be the set of symmetries of a regular n-gon. By symmetry we mean one of the following maps: a rotation by 2πk n for some k ∈ Z reflection through an axis of symmetry through the n-gon the trivial symmetry, which is the map that fixes the n-gon. Example 4.10. Let’s look at the case n = 3, We have the following two rotations (here we are labeling the vertices as A, B, and C): A C 2π 3 r1 B C A B 4π 3 C A B r2 B A 18 C Note that if we rotate anti-clockwise by anti-clockwise by 4π 3 is the same as r1. 
2π 3 , then this is the same as r2 , and similarly rotating We have the following three reflections: A A s1 B C C B A C s2 B C B A A B s3 B C A C And the trivial symmetry e fixes the triangle: A A e B C B C For a general n, Dn consists of n − 1 rotations (by 2πk n for n ∈ {1, 2,... , n − 1}), n reflections, and the trivial symmetry, for a total of 2n elements. Composition of functions ◦ defines a binary operation on Dn. Example 4.11. For example, let n = 3 as above. We can compute s1 ◦ r1 as follows: A C 2π 3 C C r1 B s1 A B 19 A B We see that s1 ◦ r1 = s2. Note that for f, g ∈ Dn , we normally write f ◦ g as f g. We write f ◦ f as f 2 , and f ◦ f ◦ · · · ◦ f (n times) as f n. Exercise 4.12. Let s ∈ D3 be a reflection, and let r ∈ D3 be a rotation. Then sr = r2 s. We finish with some observations about the elements of Dn that will become important later. Lemma 4.13. Let e ∈ Dn be the trivial symmetry. 1. If f ∈ Dn , then f e = ef = f. 2. If r ∈ Dn is a rotation, then rn = e. 3. If s ∈ Dn is a reflection, then s2 = e. The proof of the lemma follows easily from definitions. 4.3 Permutations Definition 4.14. A permutation of {1, 2,... , n} is a bijective function f : {1, 2,... , n} → {1, 2,... , n}. We write Sn for the set of all permutations of {1, 2,... , n}. We now discuss some common notation for elements of Sn. Let f ∈ Sn. Then one way of writing f is as:   1 2... n f (1) f (2)... f (n) Example 4.15. The element  f= 1 2 3 4 5 6 2 1 4 5 3 6  of S6 is given by f (1) = 2 f (2) = 1 f (3) = 4 f (4) = 5 f (5) = 3 f (6) = 6. For algebraists, a more common way of writing elements of Sn is to use what’s called cycle notation. If a1 , a2 ,... , ak are distinct elements of {1, 2,... , n}, we write (a1 a2 a3... ak ) 20 for the permutation f defined by f (a1 ) = a2 f (a2 ) = a3 f (a3 ) = a4... f (ak−1 ) = ak f (ak ) = a1 and f (j) = j for all j ∈ / {a1 , a2 ,... , ak }. We call (a1 a2... ak ) a k-cycle (or just a cycle). Example 4.16. The element f = (1 5 3) ∈ S5 is defined by f (1) = 5, f (2) = 2, f (3) = 1, f (4) = 4, f (5) = 3. Note that (a1 a2... ak ) = (a2 a3... ak a1 ) = (a3 a4... ak ak a1 a2 ) and so on. This follows directly from the definition of a k-cycle. Similar to the previous section, we write e for the element of Sn given by e(j) = j for all j ∈ {1, 2,... , n}. Note that every 1-cycle is equal to e. Note that Sn contains n! elements. This is because to define an element f ∈ Sn , we first must choose f (1). There are n possibilities for f (1). Given f (1), there are n − 1 possibilities for f (2) (since f is injective). Given f (1) and f (2), there are n − 2 possibilities for f (3), and so on, giving 2 possibilities for f (n − 1) and 1 for f (n). We thus count n(n − 1)(n − 2)... (2)(1) elements of Sn. Example 4.17. There are 6 elements of S3. They are given in cycle notation by S3 = {e, (1 2), (1 3), (2 3), (1 2 3), (1 3 2)}. By Proposition 2.10, if f, g ∈ Sn , then f ◦ g ∈ Sn. This tells us that composition of functions ◦ defines a binary operation on Sn. As in the previous section, we often write f g for f ◦ g. In particular, if we start with two cycles f = (a1... ak ) and g = (b1... bm ), then we write (a1... ak )(b1... bm ) for the function f ◦ g. Example 4.18. Let f = (1 4 2)(3 2) ∈ S4. Let’s find f (2). We start on the right. We have (3 2) (1 4 2) (3 2) (1 4 2) 2 −−−→ 3 −−−−→ 3 so f (2) = 3. Now we find f (3). We have 3 −−−→ 2 −−−−→ 1 so f (3) = 1. Similarly, you can check that f (1) = 4 and f (4) = 2. Thus we see that f is given by   1 2 3 4 f= = (1 4 2 3). 
4 3 1 2 21 As in the previous section, we write f n for f ◦ f ◦ · · · ◦ f (n times). The binary operation ◦ on Sn is associative by Lemma 4.9. Note that for n ≥ 3, the binary operation ◦ on Sn is not commutative. For example, (1 2)(1 3) ̸= (1 3)(1 2). But there are certain pairs of cycles that do commute, as we describe now. Definition 4.19. Two cycles (a1... ak ), (b1... bℓ ) ∈ Sn are called disjoint if {a1 ,... , ak } ∩ {b1 ,... bℓ } = ∅. Lemma 4.20. Disjoint cycles commute. In other words if (a1... ak ), (b1... bℓ ) ∈ Sn are disjoint, then (a1... ak )(b1... bℓ ) = (b1... bℓ )(a1... ak ). Proof. We need to show that (a1... ak )(b1... bℓ )(j) = (b1... bℓ )(a1... ak )(j) for all j ∈ {1,... , n}. So let j ∈ {1,... , n}. If j ∈ / {a1 ,... , ak , b1 ,... , bℓ }, then (a1... ak )(b1... bℓ )(j) = (b1... bℓ )(a1... ak )(j) = j. If j = bi for some i, then (a1... ak )(b1... bℓ )(j) = (a1... ak )(bi+1 ) = bi+1 and (b1... bℓ )(a1... ak )(j) = (b1... bℓ )(j) = bi+1. (Here if i = ℓ, then we are taking bi+1 = b1.) We are using the fact that {a1 ,... , ak } ∩ {b1 ,... , bℓ } = ∅, so (a1... ak ) fixes both bi and bi+1. If j = ai for some i, the proof is similar. A nice fact about Sn is that every f ∈ Sn can be written as the product of disjoint cycles. We won’t prove this in this module, but here is an example of how you’d write a function this way: Example 4.21. Let   1 2 3 4 5 6 f= ∈ S6 2 1 4 5 3 6 as in Example 4.15. To write f as a product of disjoint cycles, we can start with 1. We see that f (1) = 2, so we start writing a cycle as (1 2... Since f (2) = 1, we close the cycle, getting (1 2)... Now we’ve considered 1 and 2, so we move on to 3. We see that f (3) = 4, so we continue writing f as (1 2)(3 4... Then f (4) = 5, so we have (1 2)(3 4 5... 22 and then f (5) = 3, so we close the cycle, getting (1 2)(3 4 5). Lastly we move on to 6. Since f (6) = 6, we can omit it in cycle notation. Thus f = (1 2)(3 4 5). If we start with f ∈ Sn as a product of non-disjoint cycles, we can use a similar process to write f as a product of disjoint cycles: Example 4.22. Let f = (1 4 2 3)(1 5)(1 2) ∈ S5. Let’s write f as a product of disjoint cycles. We again start with 1, and we must start on the right. To find f (1) we have (1 2) (1 5) (1 4 2 3) 1 −−−→ 2 −−−→ 2 −−−−−→ 3 so f (1) = 3, and we start writing f as (1 3... To continue the cycle, we find f (3). We have (1 2) (1 5) (1 4 2 3) 3 −−−→ 3 −−−→ 3 −−−−−→ 1 so f (3) = 1 and we close the cycle (1 3)... Next we move on to 2. We have (1 2) (1 5) (1 4 2 3) 2 −−−→ 1 −−−→ 5 −−−−−→ 5 so f (2) = 5. We write (1 3)(2 5... To continue we must find f (5). We have (1 2) (1 5) (1 4 2 3) 5 −−−→ 5 −−−→ 1 −−−−−→ 4 so f (5) = 4, and we write (1 3)(2 5 4... You can check that f (4) = 2. We close the cycle, getting f = (1 3)(2 5 4). 4.4 Arithmetic modulo n Definition 4.23. Let a, b ∈ Z and n ∈ N. We say a is congruent to b modulo n if n|(a − b). Equivalently, if a is congruent to b modulo n, we say “a and b are congruent modulo n.” We often say “mod n” instead of “modulo n.” We write a ≡ b mod n to mean a is congruent to b modulo n. Example 4.24. If a ∈ Z is even, then a ≡ 0 mod 2. If a is odd, then a ≡ 1 mod 2. 23 Proposition 4.25. Let a, b ∈ Z and n ∈ N. The following are equivalent: 1. a ≡ b mod n. 2. a = kn + b for some k ∈ Z. 3. a and b have the same remainder when divided by n. Proof. (1 ⇒ 2) If a ≡ b mod n, by definition we have n|(a − b). By the definition of divisor, this means a − b = nk for some k ∈ Z, and so a = kn + b. (2 ⇒ 3) Assume k ∈ Z and a = kn + b. 
Using the Division Algorithm, write a = qn + r, where q, r ∈ Z and 0 ≤ r < n. Then r is the remainder when dividing a by n. We have kn + b = qn + r b = (q − k)n + r, so we see the r is also the remainder when dividing b by n. (3 ⇒ 1) Assume a and b have the same remainder when divided by n. So there exist q1 , q2 , r ∈ Z such that 0 ≤ r < n, a = q1 n + r and b = q2 n + r. We have a − b = (q1 − q2 )n, so n|(a − b) and a ≡ b mod n. Example 4.26. As an example, the integers 1, 6, 11, 16, 21 all have the same remainder when dividing by 5, so they are all congruent to each other modulo 5. We also have −4 ≡ 1 mod 5, since −4 = 5 + 1. The following useful lemma is left as an exercise (see Sheet 4). Lemma 4.27. Let a, b, c ∈ Z and n ∈ N. 1. If a ≡ b mod n, then b ≡ a mod n. 2. If a ≡ b mod n and b ≡ c mod n, then a ≡ c mod n. 3. We have a ≡ a mod n. The next proposition will allow us to define binary operations using arithmetic modulo n. Proposition 4.28. Let a, b, c, d ∈ Z and n ∈ N. Assume a ≡ b mod n and c ≡ d mod n. Then 1. a + c ≡ b + d mod n, and 2. ac ≡ bd mod n Proof. By Proposition 4.25, there exist integers k, m ∈ Z such that a = kn + b and c = mn + d. Then a + c = (k + m)n + (b + d), so using Proposition 4.25 again, we have a + c ≡ b + d mod n. 24 We also have ac = (kn + b)(mn + d) = kmn2 + (kd + bm)n + bd = (kmn + kd + bm)n + bd so using Proposition 4.25 again, we have ab ≡ bd mod n. Definition 4.29. Let a ∈ Z and n ∈ N. The set {b ∈ Z | a ≡ b mod n} is called the congruence class of a modulo n. We write [a]n for the congruence class of a modulo n. The congruence class of a modulo n is also called the residue class of a modulo n. Proposition 4.25 gives us another way of writing [a]n : Lemma 4.30. Given a ∈ Z and n ∈ N, we have [a]n = {a + kn | k ∈ Z}. The proof follows directly from Proposition 4.25. Example 4.31. Building on Example 4.26, the congruence class of 1 modulo 5 is given by 5 = {1 + 5k | k ∈ Z} = {... , −14, −9, −4, 1, 6, 11, 16,... }. Proposition 4.32. Let a, b ∈ Z and n ∈ N. Then a ≡ b mod n if and only if [a]n = [b]n. Proof. (⇒) Assume a ≡ b mod n. We need to show [a]n = [b]n. We will use double inclusion. Let c ∈ [a]n. Then by definition we have c ≡ a mod n. By Lemma 4.27, we have c ≡ b mod n. This tells us c ∈ [b]n. Similarly if c ∈ [b]n , then c ∈ [a]n. Thus [a]n = [b]n. (⇐) Assume [a]n = [b]n. We have a ≡ a mod n, so a ∈ [a]n. By assumption this tells us a ∈ [b]n. By definition, this means a ≡ b mod n. Lemma 4.33. Let a ∈ Z and n ∈ N. Then a ∈ [r]n for some r ∈ Z with 0 ≤ r < n. Proof. By the Division Algorithm, we know a = qn + r for some q, r ∈ Z with 0 ≤ r < n. By Proposition 4.25, we have a ∈ [r]n. The lemma tells us that if we fix n ∈ N, then every integer is in one of the following congruence classes: n , n ,... , [n − 1]n. Further, this congruence classes are disjoint: if k, m ∈ {0,... , n − 1}, then [k]n ∩ [m]n ̸= ∅ if and only if k = m. (This follows directly from Proposition 4.25, where we are using the fact that k and m are their own remainders after division by n.) In other words, the sets n , n ,... , [n−1]n form a partition of Z. Given n ∈ N, let Zn be the set of congruence classes modulo n. We have just shown that Zn = {n , n ,... , [n − 1]n }. Note that Zn is a finite set, but each element of Zn is an infinite set. 25 Example 4.34. We have Z3 = {3 , 3 , 3 }, where 3 = {... , −6, −3, 0, 3, 6, 9,... } 3 = {... , −5, −2, 1, 4, 7,... } 3 = {... , −4, −1, 2, 5, 8,... }. Fix n ∈ N. 
Now we come to the purpose of this section, which is to define binary operations on Zn. We would like to define addition on Zn by [a]n + [b]n = [a + b]n and multiplication by [a]n [b]n = [ab]n , but there is an issue we have to address first. Let’s take a look at addition. For each congruence class [a]n , there are infinitely many integers a′ such that [a′ ]n = [a]n. Similarly, there are infinitely many b′ ∈ Z such that [b′ ]n = [b]n. We want to make sure that our binary operation (4.4) doesn’t depend on which elements we choose in each congruence class. In other words, we need to make sure that if [a′ ]n = [a]n and [b′ ]n = [b]n , then [a + b]n = [a′ + b′ ]n. If this is the case, we say that + is well defined. We need to check the same thing for multiplication. More concretely, let’s take the example of n = 3. We have 3 = [−2]3 and 3 = 3. For (4.4) to make sense, we need to make sure, for example, that [1 + 2]3 = [(−2) + 8]3. And for multiplication to make sense, we need to make sure, for example, that [(1)(2)]3 = [(−2)(8)]3. These properties will follow from Proposition 4.28, as explained in the following proposition. Proposition 4.35. Let a, a′ , b, b′ ∈ Z and n ∈ N. Assume [a]n = [a′ ]n and [b]n = [b′ ]n. Then [a + b]n = [a′ + b′ ]n and [ab]n = [a′ b′ ]n. The proof follows directly from propositions 4.28 and 4.32. Thus the formulas (4.4) and (4.4) give two binary operations on Zn , which we refer to as addition and multiplication, respectively. Proposition 4.36. Addition and multiplication on Zn are both commutative and associative. The proof is an exercise (see Sheet 4). 26 5 Groups: Examples and first properties 5.1 Definition and examples Definition 5.1. A group is a set G with a binary operation ∗ satisfying the following properties: 1. ∗ is associative. 2. There is an element e ∈ G such that e ∗ g = g ∗ e = g for all g ∈ G. 3. Given g ∈ G, there is an element h ∈ G such that g ∗ h = h ∗ g = e. We often write (G, ∗) for the group G with binary operation ∗. Remark 5.2. The element e ∈ G as in the definition is called an identity element of G. Given g ∈ G, an element h with g ∗ h = h ∗ g = e is called an inverse of g. Given g1 , g2 ∈ G, we often write g1 g2 instead of g1 ∗ g2. Now let’s consider some examples. Example 5.3. Let G = Z and let ∗ = + be given by addition. We have already seen that ∗ is associative. We have 0+n=n+0=n for all n ∈ Z, so 0 is an identity element. Lastly, if n ∈ Z, then n + (−n) = (−n) + n = 0 so −n is the inverse of n. Thus (Z, +) is a group. Here is a non-example. Consider Z with the binary operation of multiplication. We already showed that multiplication is associative. We have n(1) = (1)n = n for all n ∈ Z, so 1 is an identity element for multiplication. But the element 2 ∈ Z does not have an inverse under multiplication. So Z does not form a group under the binary operation of multiplication. But we can adjust this example to form groups under multiplication: Example 5.4. Let k ∈ {Q, R, C}, and let k × = k \ {0}. Then multiplication gives a binary operation on k ×. Multiplication is associative, and 1 ∈ k × is an identity element. Further, given x ∈ k × , we have     1 1 x = x=1 x x so 1 x is an inverse of x. Thus k × forms a group under multiplication. Example 5.5. Now consider G = M2 (R), the set of 2 × 2 matrices with entries in R. Let ∗ = + be given by matrix addition. We already saw that + is associative. You will prove on Sheet 5 that (M2 (R), +) forms a group. 
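For a finite set with an explicitly given operation, the three conditions of Definition 5.1 can also be verified exhaustively. The following is a rough sketch (our own code, not part of the notes; the helper name is_group is ours), using addition and multiplication modulo 6 as test cases:

```python
# Sketch (not from the notes): brute-force check of the group axioms of
# Definition 5.1, with the class [a]_n represented by the remainder a % n.

def is_group(elements, op):
    # 1. associativity
    assoc = all(op(a, op(b, c)) == op(op(a, b), c)
                for a in elements for b in elements for c in elements)
    # 2. an identity element e with e*g = g*e = g for all g
    identities = [e for e in elements
                  if all(op(e, g) == g and op(g, e) == g for g in elements)]
    if not assoc or not identities:
        return False
    e = identities[0]
    # 3. every g has an inverse h with g*h = h*g = e
    return all(any(op(g, h) == e and op(h, g) == e for h in elements)
               for g in elements)

n = 6
add_mod_n = lambda a, b: (a + b) % n
mult_mod_n = lambda a, b: (a * b) % n
print(is_group(range(n), add_mod_n))   # True
print(is_group(range(n), mult_mod_n))  # False: not every class has a multiplicative
                                       # inverse (see Section 5.2)
```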
27 Here is another non-example: Consider M2 (R) with the binary operation of matrix multplication. We already saw that this operation is associative. If we let I = ( 10 01 ), then AI = IA = A for all A ∈ M2 (R), so I is an identity under multiplication. But there are elements with no inverse under multiplication. For example, B = ( 00 00 ) has no inverse under multiplication: there is no A ∈ M2 (R) with AB = BA = I. So M2 (R) does not form a group under matrix multiplication. But we can again adjust this non-example to form a group:  Example 5.6. Recall that given A = ac db ∈ M2 (R), we define the determinant of A by det(A) = ad − bc. If det(A) ̸= 0, then A has a multiplicative inverse given by  d −b  ad−bc ad−bc B=. a −c ad−bc ad−bc We have AB = BA = I. So we let GL2 (R) = {A ∈ M2 (R) | det(A) ̸= 0}. Then GL2 (R) forms a group under matrix multiplication. Example 5.7. As in Section 4.2, let Dn be the symmetries of a regular n-gon, and let ∗ be given by composition. As we already showed, ∗ is associative. Let e ∈ Dn be the trivial symmetry, which fixes the n-gon. In Lemma 4.13, we saw that if f ∈ Dn , then f e = ef = f , so e is an identity element. Further Lemma 4.13 shows that each f ∈ Dn has an inverse: The inverse of e is e. If r ∈ Dn is a rotation, then rn−1 is an inverse of r (since rn−1 r = rrn−1 = rn = e). If s ∈ Dn is a reflection, then s is an inverse of s (since ss = s2 = e). Thus Dn is a group, called a dihedral group. Example 5.8. As in Section 4.3, let Sn be the set of permutations of {1,... , n}. Recall that composition of functions ◦ gives a binary operation on Sn. We will show that Sn is a group under this operation. We already saw that ◦ is associative by Lemma 4.9. We defined a function e ∈ Sn by e(j) = j for all j ∈ {1,... , n}. If f ∈ Sn , then (f e)(j) = f (e(j)) = f (j) = e(f (j)) = ef (j) for all j ∈ {1,... , n}. Thus f e = ef = f , and e is an identity element. Now we need to show that every f ∈ Sn has an inverse. Given f ∈ Sn , define g ∈ Sn in the following way: given j ∈ {1,... , n}, by the definition of bijective, there exists a unique k ∈ {1,... , n} such that f (k) = j. We define g(j) = k. Note that (f ◦ g)(j) = f (g(j)) = f (k) = j by construction. We have that (g ◦ f )(j) = g(f (j)), which by definition is the unique element m of {1,... , n} such that f (m) = f (j). But since f is bijective, we have m = j. So (g ◦ f )(j) = j for all j ∈ {1,... , n}. We have g ◦ f = f ◦ g = e. Thus g is an inverse for f. We have shown that Sn is a group under composition. We call Sn a symmetric group. 28 Example 5.9. Now let Zn be the set of congruence classes modulo n as defined in Section 4.4. We have defined a binary operation + on Zn , and by Proposition 4.36 we know + is associative. We have [a]n + n = n + [a]n = [a]n for all [a]n ∈ Zn , so n is an identity element under +. Given [a]n ∈ Zn , we have [a]n + [−a]n = [−a]n + [a]n = n so [−a]n is an inverse for [a]n. Thus (Zn , +) forms a group. 5.2 The group Z× n Let n ∈ Z, and assume n ≥ 2. We have another binary operation on Zn , given by multiplication. By Proposition 4.36, we know this operation is associative. And we have [a]n n = n [a]n = [a]n for all [a]n ∈ Zn , so n forms an identity under multiplication. But not every element has an inverse: for example, n [a]n = n ̸= n for any [a]n ∈ Zn , so n does not have an inverse. And there could be other elements without inverses: for example, in Z6 , we have [a]6 6 = [2a]6 for any [a]6 ∈ Z6. But 2a is always even, so it is never of the form 6k + 1 for some k ∈ Z. 
By Proposition 4.25, this tells us we can never have [2a]6 = 6. Thus 6 does not have an inverse under multiplication in Z6. So which elements in Zn have multiplicative inverses? The following proposition tells us. Proposition 5.10. Let a ∈ Z and n ∈ N. Then there exists b ∈ Z such that ab ≡ 1 mod n if and only if gcd(a, n) = 1. Proof. (⇐) Assume gcd(a, n) = 1. Then by Corollary 3.15 there exist x, y ∈ Z such that ax+ny = 1. So ax = n(−y) + 1, so ax ≡ 1 mod n by Proposition 4.25. Thus we can take b = x. (⇒) Suppose b ∈ Z with ab ≡ 1 mod n. Then n|(ab − 1), so ab − 1 = ny for some y ∈ Z. Then ab + n(−y) = 1, and by Corollary 3.15, we have gcd(a, n) = 1. The goal is to use Proposition 5.10 to define a group using multiplication on Zn. The next proposition tells us that if a ∈ Z satisfies gcd(a, n) = 1, then any element b ∈ Z congruent to a modulo n also satisfied gcd(b, n) = 1. This tells us that every element in the congruence class [a]n has a multiplicative inverse modulo n. Proposition 5.11. Let a, b ∈ Z, n ∈ N, and assume a ≡ b mod n. Then gcd(a, n) = gcd(b, n). Proof. By Proposition 4.25, there exists k ∈ Z such that a = nk + b. Now use Lemma 3.18 to see gcd(a, n) = gcd(b, n). 29 We define Z× n = {[a]n | gcd(a, n) = 1}. Then Z× n is a subset of Zn. Example 5.12. Recall that Z6 = {6 , 6 , 6 , 6 , 6 , 6 }. We have Z× 6 = {6 , 6 }. If n is prime, then Z× n = {n , n ,... , [n − 1]n }. Proposition 5.13. The set Z× n forms a group under multiplication. Proof. First we need to show that multiplication gives a binary operation on Z× n , i.e. we need ×. You proved this in Sheet 2 (Exercise 5(ii)). to show that if [a]n , [b]n ∈ Z× , then [ab] ∈ Z n n n Next, we already know that multiplication on Zn is associative, so multiplication on Z× n is also associative. As we already saw, the element n ∈ Zn is an identity under multiplication. And Proposition 5.10 shows that if [a]n ∈ Z× n , then [a]n has an inverse with respect to multiplication. × Thus Zn forms a group. 5.3 Product groups Let (G1 , ∗1 ) and (G2 , ∗2 ) be groups. Consider the set G1 × G2 = {(g1 , g2 ) | g1 ∈ G1 , g2 ∈ G2 }. We can define a binary operation ⋆ on G1 × G2 by (g1 , g2 ) ⋆ (h1 , h2 ) = (g1 ∗1 h1 , g2 ∗2 h2 ). Example 5.14. Let (G1 , ∗1 ) = (Z2 , +) and (G2 , ∗2 ) = (Z3 , +). Then Z2 × Z3 = {(2 , 3 ), (2 , 3 ), (2 , 3 ), (2 , 3 ), (2 , 3 ), (2 , 3 )} and ⋆ is defined by ([a]2 , [b]3 ) ⋆ ([c]2 , [d]3 ) = ([a + c]2 , [b + d]3 ) Example 5.15. If G1 = G2 = R, and ∗1 = ∗2 = + is addition on R, then G1 × G2 = R2 , and ⋆ is just vector addition on R2. Proposition 5.16. With notation as above, (G1 × G2 , ⋆) forms a group. Proof. We first need to check that ⋆ is associative. This is an exercise (see Sheet 5). Now we need to show that there is an identity element in G1 × G2. Let e1 ∈ G1 be an identity element for ∗1 , and let e2 ∈ G2 be an identity element for ∗2. Then (e1 , e2 ) ⋆ (g1 , g2 ) = (e1 ∗1 g1 , e2 ∗2 g2 ) = (g1 , g2 ) for any (g1 , g2 ) ∈ G1 × G2. And similarly (g1 , g2 ) ⋆ (e1 , e2 ) = (g1 , g2 ) for any (g1 , g2 ) ∈ G1 × G2. Thus (e1 , e2 ) is an identity element. 30 Last, let (g1 , g2 ) ∈ G1 × G2. We need to show that (g1 , g2 ) has an inverse under ⋆. Since (G1 , ∗1 ) is a group, we know that g1 has an inverse under ∗1 , call it h1. Since (G2 , ∗2 ) is a group, we know that g2 has an inverse under ∗2 , call it h2. Then (g1 , g2 ) ⋆ (h1 , h2 ) = (g1 ∗1 h1 , g2 ∗2 h2 ) = (e1 , e2 ), and similarly (h1 , h2 ) ⋆ (g1 , g2 ) = (e1 , e2 ). Thus (h1 , h2 ) is an inverse for (g1 , g2 ). Remark 5.17. 
Note that we usually write (g1 , g2 )(h1 , h2 ) instead of (g1 , g2 ) ⋆ (h1 , h2 ). If the operations on G1 and G2 are both addition, then we write (g1 , g2 ) ⋆ (h1 , h2 ) as (g1 , g2 ) + (h1 , h2 ). 5.4 First properties Now we prove some basic properties that all groups have, and give some additional definitions. First, note that the definition of a group G doesn’t tell us immediately that the identity element is unique (in theory, there could be two elements e, e′ ∈ G such that eg = ge = g = e′ g = ge′ for all g ∈ G). But the identity element in a group is unique, as we shall soon see. And given g ∈ G, the inverse of g is also unique. We prove these properties in the following lemma. Lemma 5.18. Let G be a group. Then 1. There is a unique element e ∈ G such that eg = ge = g for all g ∈ G. 2. Given g ∈ G, there is a unique h ∈ G such that gh = hg = e. Proof. Note that for both statements, existence follows from the definition of a group. It suffices to prove uniqueness. 1. Assume e1 , e2 ∈ G satisfy e1 g = ge1 = g and e2 g = ge2 = g for all g ∈ G (so e1 and e2 are both identity elements). Then e1 e2 = e2 since e1 is an identity, and e1 e2 = e1 since e2 is an identity. Thus e1 = e2. 2. Let g ∈ G, and assume h, k ∈ G with gh = hg = e and gk = kg = e. Then gh = gk so h(gh) = h(gk). Since the binary operation on G is associative, we have (hg)h = (hg)k, which implies eh = ek by assumption. By by the definition of an identity element, we have h = k. 31 Now given a group G, it makes sense to say the identity element. And given g ∈ G, it makes sense to say the inverse of g (since both of the identity and the inverse of g are unique). Given g ∈ G, we often write g −1 for the inverse of g. If the binary operation on G is addition, we usually write −g for the inverse of g. Note that this is consistent with the inverses we found in (Z, +) (since here the inverse of n ∈ Z is indeed −n). The proof of the following proposition is very similar to the proof of statement 2 in Lemma 5.18. We leave it as an exercise (see Sheet 6). Proposition 5.19 (The cancellation law). Let G be a group and let g, h, k ∈ G. If gh = gk, then h = k. If hg = kg, then h = k. Corollary 5.20. Let G be a group, and let g, h ∈ G. Then there exists a unique x ∈ G such that gx = h and a unique y ∈ G such that yg = h. Proof. Let e be the identity element of G, and let x = g −1 h. Then gx = g(g −1 h) = (gg −1 )h = eh = h, which shows existence for the first statement. To show uniqueness, suppose gx1 = gx2 = h. Then by the cancellation law, we have x1 = x2. The proof of the existence and uniqueness of y is similar. We next consider some properties of inverses: Proposition 5.21. Let G be a group with identity element e, and let g, h ∈ G. Then 1. If gh = e, then g = h−1 and h = g −1. 2. (gh)−1 = h−1 g −1 3. (g −1 )−1 = g Proof. 1. Assume gh = e. We have gh = e g −1 (gh) = g −1 e (g −1 g)h = g −1 e, where we are using associativity. Then g −1 g = e by the definition of inverse, so eh = g −1 e. But this implies h = g −1 by the definition of the identity element. The proof that g = h−1 is similar. 2. By the first part of the proposition, we just need to show that (h−1 g −1 )(gh) = e. We have (h−1 g −1 )(gh) = h−1 (g −1 (gh)) = h−1 ((gg −1 )h) = h−1 (eh) = h−1 h = e. 32 3. By the first part of the proposition, we just need to show that g −1 g = e. But this is true by definition. We end this section with some definitions: Definition 5.22. Let (G, ∗) be a group. We say G is abelian if the binary operation ∗ on G is commutative. 
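For a finite group whose elements can be listed explicitly, Definition 5.22 can again be checked exhaustively. A minimal sketch (our own code, not part of the notes), tested on addition modulo 6 and on the permutations of a three-element set under composition:

```python
# Sketch (not from the notes): brute-force commutativity check for small groups.
from itertools import permutations

def is_abelian(elements, op):
    return all(op(a, b) == op(b, a) for a in elements for b in elements)

# Addition modulo 6 is commutative:
print(is_abelian(range(6), lambda a, b: (a + b) % 6))   # True

# The permutations of (0, 1, 2) under composition (f∘g)(x) = f(g(x)) are not:
S3 = list(permutations(range(3)))
def compose(f, g):
    return tuple(f[g[x]] for x in range(3))
print(is_abelian(S3, compose))                          # False
```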
People also sometimes call an abelian group a commutative group. If a group G is not abelian, we often say it is nonabelian. Example 5.23. The groups (Z, +), (Zn , +), and Z× n are all abelian. For n ≥ 3, the groups Dn and Sn are nonabelian. Definition 5.24. The order of a group G is the number of elements in G. A group can have finite order or infinite order. If G has finite order, we write |G| for the order of G. Note that our notation for order is consistent with our notation for the number of elements in a set. Example 5.25. As we saw earlier, we have |Dn | = 2n, |Sn | = n!, and |Zn | = n. 5.5 Powers of group elements Throughout this section, we assume (G, ∗) is a group with identity element e. Here is some useful notation: given g ∈ G, we write g 2 for gg (= g ∗ g), and given n ∈ N we write g n for gg... g (= g ∗ g ∗ · · · ∗ g) (n times). Note that this matches the notation we already defined for Dn and Sn. We also define g 0 = e. Given n ∈ N, we also define g −n = (g −1 )n = g −1 g −1... g −1 (n times). Remark 5.26. If the operation on G is addition, we write ng instead of g n (for g ∈ G, n ∈ Z). Note that this is consistent with the usual meaning in Z since for g ∈ Z, g + g + · · · + g = ng. And if g ∈ Z, then the inverse of g is −g, so (−g) + (−g) + · · · + (−g) = (−n)g. Similarly, for [a]m ∈ Zm , we have [a]m + [a]m +... [a]m = [a + a + · · · + a]m = [na]m. Lemma 5.27. Let g ∈ G, and let n ∈ N. Then (g −1 )n = (g n )−1. Proof. The proof is an exercise (see Sheet 6). You can use induction on n. 33 By the lemma we could have equivalently defined g −n as (g n )−1 for n ∈ N. Proposition 5.28 (The law of exponents). Let g ∈ G, and let m, n ∈ Z. Then 1. g m g n = g m+n 2. (g m )n = g mn Proof. 1. First assume m, n ∈ N. Then g m g n = g m+n by definition. Now assume that m, n < 0. Then g m g n = (g −1 )−m (g −1 )−n = (g −1 )−m−n = g m+n also by definition. If m = 0, then g m g n = eg n = g n = g m+n. The proof is similar if n = 0. Thus the only case to consider is when m and n have different signs. Assume m < 0 and n > 0. We break into two cases. Case 1: m + n ≥ 0. In this case −m > 0 and m + n ≥ 0, so we have g −m g m+n = g −m+(m+n) g −m g m+n = g n. Now multiply both sides of the equation by g m on the left: g m (g −m g m+n ) = g m g n (g m g −m )g m+n = g m g n eg m+n = g m g n g m+n = g m g n where we are using the fact that g −m = (g m )−1. This completes Case 1. Case 2: m + n < 0. In this case m + n < 0 and −n < 0, so g m+n g −n = g (m+n)−n = gm. Similar to Case 1, we can multiply both sides of the equation by g n on the right to get g m+n = gmgn. If we assume m > 0 and n < 0, the proof is very similar. Definition 5.29. Let g ∈ G. If there exists n ∈ N such that g n = e, then we say g has finite order. If no such n exists, we say g has infinite order. If g has finite order, we define the order of g, written ord(g) to be the minimal n ∈ N such that g n = e. Example 5.30. The order of e is 1, and e is the only element of G with order 1. 34 Example 5.31. Let g = (1 2 3) ∈ S3. Then g 1 = (1 2 3) g 2 = (1 3 2) g 3 = e. so g has order 3. Example 5.32. Let s ∈ Dn be a reflection. Then s2 = e, so s has order 2. Let r ∈ Dn be a 2πk k rotation by 2π n. Then r is rotation by n , so we see that rk ̸= e if 1 ≤ k ≤ n − 1 rn = e. Thus r has order n. Example 5.33. Let G = Z12 , and let x = 12. Then 2x = 12 3x = 12 4x = 12 so x has order 4. Now let y = 12 ∈ Z12. Then 2y = 12 = 12 3y = 12 + 12 = 12 so ord(y) = 3. Example 5.34. Now let G = Q× = Q \ {0}. 
Example 5.34. Now let G = Q× = Q \ {0}. Recall that G is a group under multiplication with identity element 1. We have (−1)2 = 1, so ord(−1) = 2. But if we choose y ∈ Q× with y ≠ ±1, then yn ≠ 1 for all n ∈ N. Thus y has infinite order.

Example 5.35. Similarly, let G = Z, which is a group under addition. If we choose x ∈ Z with x ≠ 0, then nx ≠ 0 for all n ∈ N. Thus x has infinite order.

We now generalize Example 5.31 with a result that will help us find orders in Sn.

Proposition 5.36. Let a1, a2, . . . , ak be distinct elements of {1, 2, . . . , n}. Then the k-cycle (a1 a2 . . . ak) has order k.

Proof. Let f = (a1 a2 . . . ak). First we will show that if 1 ≤ m < k, then fm ≠ e. Indeed if 1 ≤ m ≤ k − 1, we have fm(a1) = am+1 ≠ a1 = e(a1), so fm ≠ e. Now we need to show that fk = e. We have fk(a1) = a1. For 2 ≤ j ≤ k, we have fk−j+1(aj) = a1, and fk(aj) = fj−1(a1) = aj. Thus fk(aj) = aj for all j. If b ∉ {a1, . . . , ak}, then fk(b) = b also. Thus fk = e, and f has order k.

Theorem 5.37. Let g ∈ G, and let n ∈ Z.

1. If g has infinite order, then gn = e if and only if n = 0.

2. If g has finite order d, then gn = e if and only if d|n.

Proof. 1. Assume g has infinite order. If n = 0, then by definition gn = e. So assume gn = e. If n ∈ N, then this contradicts the definition of infinite order, so we must have n ≤ 0. Assume for contradiction that n < 0. Then (g−1)−n = e. Lemma 5.27 tells us that (g−n)−1 = e, i.e. the inverse of g−n is e, which means that we must have g−n = e, which again contradicts the fact that g has infinite order. Thus n = 0.

2. Assume g has finite order d. If d|n, then n = dk for some k ∈ Z. By the law of exponents, we have gn = (gd)k = ek = e. Now assume gn = e. Using the division algorithm we can write n = dq + r for some q, r ∈ Z with 0 ≤ r < d. Then

e = gdq+r
= gdqgr (by the law of exponents)
= egr (since d|dq)
= gr.

By the definition of order, since r < d, we must have r = 0. Thus d|n.

Corollary 5.38. Let g ∈ G, and let m, n ∈ Z.

1. If g has infinite order, then gm = gn if and only if m = n.

2. If g has finite order d, then gm = gn if and only if m ≡ n mod d.

Proof. 1. Assume g has infinite order. Clearly if m = n, then gm = gn. So assume gm = gn. By the law of exponents,

gm−n = gmg−n
= gng−n (by assumption)
= e (by Lemma 5.27).

By Theorem 5.37, we have m − n = 0, i.e. m = n.

2. The proof is similar and will be left as an exercise (see Sheet 7).
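Proposition 5.36 also lends itself to a quick computational check before we move on. The added sketch below (helper names are ad hoc) computes the order of a permutation of {1, . . . , n}, stored as a tuple of images, by composing it with itself until the identity appears, and confirms the prediction for a 4-cycle.

def perm_order(p):
    """Order of a permutation p of {1,...,n}, given as a tuple with p[i-1] = p(i)."""
    identity = tuple(range(1, len(p) + 1))
    power, k = p, 1
    while power != identity:
        power = tuple(p[power[i] - 1] for i in range(len(p)))  # compose p with the current power
        k += 1
    return k

# The 4-cycle (1 2 3 4) in S5, written as the images of 1, ..., 5:
print(perm_order((2, 3, 4, 1, 5)))  # 4, as Proposition 5.36 predicts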
6 Subgroups and homomorphisms

6.1 Subgroups

Throughout the section, let (G, ∗) be a group with identity element e.

Definition 6.1. A subset H ⊂ G is called a subgroup of G if H is a group under ∗.

People sometimes write H ≤ G to indicate that H is a subgroup of G. Note that if H is a subgroup of G, then the definition implies that ∗ is a binary operation on H. This means that if h1, h2 ∈ H, we must have h1 ∗ h2 ∈ H. We say that H is closed under ∗. Here are some examples:

Example 6.2. Let G = Z, and let H = 2Z, the set of even integers. Then addition is a binary operation on H (if x, y ∈ 2Z, then x + y ∈ 2Z). We already know that addition is associative. We know that 0 ∈ H is an identity element. If x ∈ H, then −x ∈ H, so every element of H has an inverse in H. Thus H is a subgroup of G.

Now consider a non-example: let G = Z and let H = N. Then addition gives a binary operation on N (if x, y ∈ N, then x + y ∈ N). But N does not contain an identity element for addition. So N is not a subgroup of Z.

To continue with another non-example, let G = Z and let H = {n ∈ Z | n ≥ 0}. Then addition is a binary operation on H, and H contains the identity element 0. But 1 ∈ H does not have an inverse in H under addition. So H is not a subgroup of Z.

Example 6.3. Let G = D3, let r ∈ D3 be a rotation, and let H = {e, r, r2}. Then the composition of two elements of H is again in H. This tells you that composition is a binary operation on H. We already know that composition is associative. The set H contains the identity element e. And every element of H has an inverse in H: the inverse of e is e, and the inverse of r is r2. Thus H is a subgroup of G.

Example 6.4. If we let H = G, then clearly H is a subgroup of G.

Proposition 6.5 (The subgroup test). A subset H ⊂ G is a subgroup of G if and only if the following three conditions are satisfied:

1. If h1, h2 ∈ H, then h1h2 ∈ H;

2. e ∈ H;

3. If h ∈ H, then h−1 ∈ H.

Proof. Assume conditions 1, 2, and 3 are satisfied. By 1, we know that ∗ is a binary operation on H. Since G is a group, we know that ∗ is associative. Since e ∈ H, we know that H contains an identity element. By 3, we know that every element of H has an inverse. So (H, ∗) forms a group, and by definition this means that H is a subgroup of G.

Assume H is a subgroup of G. Then ∗ is a binary operation on H, so 1 is satisfied. Since H is a group, we know that H contains an identity element eH for the binary operation ∗ on H. This means that h ∗ eH = eH ∗ h = h for all h ∈ H. In particular

eH ∗ eH = eH. (6.1)

We claim that eH = e (recall that e is the identity element in G). Since H ⊂ G, we have eH ∈ G. By the definition of the identity in G, we have

e ∗ eH = eH. (6.2)

Putting together (6.1) and (6.2), we have e ∗ eH = eH ∗ eH. The cancellation law then tells us that e = eH. So e ∈ H, and e is the identity element in H. Now, let h ∈ H. Since H is a group, we know that there exists k ∈ H such that hk = kh = e. By Proposition 5.21, we have k = h−1. Thus h−1 ∈ H.

Example 6.6. The set H = {e} is a subgroup of G: we know that e ∗ e = e, so condition 1 holds. We have e ∈ H so condition 2 holds. And e is its own inverse so condition 3 holds. This H is called the trivial subgroup of G.

Now we generalize Examples 6.2 and 6.3 in the following way.

Proposition 6.7. Let g ∈ G, and let H = {gn | n ∈ Z}. Then H is a subgroup of G.

Proof. We will use the subgroup test.

1. Let h1, h2 ∈ H. By the definition of H, there exist n, m ∈ Z such that h1 = gn, h2 = gm. By the law of exponents, we have h1h2 = gn+m ∈ H. Thus H is closed under ∗.

2. By definition e = g0, so e ∈ H.

3. If h ∈ H, then h = gn for some n ∈ Z. By Lemma 5.27, h−1 = g−n ∈ H.

Thus H is a subgroup of G.

Some notation: we write ⟨g⟩ for the subgroup {gn | n ∈ Z} from the proposition.

Definition 6.8. If g ∈ G, the subgroup ⟨g⟩ is called the subgroup of G generated by g. A subgroup of the form ⟨g⟩ for some g ∈ G is called a cyclic subgroup of G.

Example 6.9. The subgroup 2Z of Example 6.2 is equal to ⟨2⟩. The subgroup {e, r, r2} of Example 6.3 is equal to ⟨r⟩. The trivial subgroup {e} is equal to ⟨e⟩. These are all cyclic subgroups.

Example 6.10. We can generalize Example 6.2. Let G = Z, and let m ∈ Z. Then ⟨m⟩ = {nm | n ∈ Z} is the set of multiples of m. This is a subgroup of Z by Proposition 6.7.

Example 6.11. In S3, the subgroup generated by (1 2) is ⟨(1 2)⟩ = {e, (1 2)}. The subgroup generated by (1 2 3) is ⟨(1 2 3)⟩ = {e, (1 2 3), (1 3 2)}.

In the previous example, you can see that ⟨(1 2)⟩ has order 2, which is equal to ord((1 2)). Similarly, ⟨(1 2 3)⟩ has order 3, which is equal to ord((1 2 3)).
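This observation can also be checked by machine. The added Python sketch below (helper names are ad hoc) builds the set ⟨g⟩ for a permutation g ∈ S3 by repeatedly composing with g, and compares its size with ord(g).

def compose(p, q):
    # permutations as tuples of images: (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i] - 1] for i in range(len(q)))

def cyclic_subgroup(g, identity):
    """Elements of <g> for a permutation g of finite order."""
    elements, power = [identity], g
    while power != identity:
        elements.append(power)
        power = compose(g, power)
    return elements

e = (1, 2, 3)
print(len(cyclic_subgroup((2, 1, 3), e)))   # 2 = ord((1 2))
print(len(cyclic_subgroup((2, 3, 1), e)))   # 3 = ord((1 2 3))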
This fact generalizes:

Proposition 6.12. Let g ∈ G.

1. If g has infinite order, then the order of ⟨g⟩ is infinite.

2. If g has finite order d, then |⟨g⟩| = d.

Proof. 1. This follows directly from Corollary 5.38 part 1, since the elements gn are distinct for distinct choices of n ∈ Z.

2. By Corollary 5.38 part 2, we know that gn = gm if and only if n ≡ m mod d. So the elements of ⟨g⟩ are indexed by {0, 1, . . . , d − 1}, which are the possible remainders when dividing by d. Thus ⟨g⟩ = {e, g, g2, . . . , gd−1}.

6.2 Cyclic groups

As in the previous section, we assume throughout this section that (G, ∗) is a group with identity element e. Building on Definition 6.8, we also have the following more general definition:

Definition 6.13. If G contains an element g ∈ G such that G = ⟨g⟩, then G is called a cyclic group, and we say that G is generated by g.

Example 6.14. Let G = Z. Then G = ⟨1⟩, so Z is a cyclic group. Similarly, for n ∈ N, we have Zn = ⟨[1]n⟩, so Zn is cyclic.

Example 6.15. Let G = S3. Then G is not cyclic. Using Example 6.11, we see that if g ∈ G, then ⟨g⟩ has order 1, 2, or 3. Since |S3| = 6, we see that we cannot have S3 = ⟨g⟩ for any g. But there is an even more obvious reason why S3 is not cyclic, which is summarized in the following proposition:

Proposition 6.16. If G is cyclic, then G is abelian.

Proof. Assume G is cyclic. By definition, there exists g ∈ G such that G = ⟨g⟩. Let h, k ∈ G. We need to show that hk = kh. Since G = ⟨g⟩, there exist m, n ∈ Z such that h = gn and k = gm. Then hk = gngm = gn+m = gm+n = gmgn = kh by the law of exponents.

Proposition 6.17. Assume G has order n.

1. If g ∈ G, then ord(g) ≤ n.

2. The group G is cyclic if and only if G contains an element of order n.

3. If G is cyclic and g ∈ G, then G is generated by g if and only if ord(g) = n.

Proof. 1. This is an exercise.

2. Assume G is cyclic. Then there exists g ∈ G such that G = ⟨g⟩, so n = |G| = |⟨g⟩|, and by Proposition 6.12, |⟨g⟩| = ord(g). So ord(g) = n. Assume that G contains an element g of order n. Then ⟨g⟩ ⊂ G, so |⟨g⟩| ≤ |G| = n with equality if and only if ⟨g⟩ = G. But again we have |⟨g⟩| = ord(g), so G = ⟨g⟩, which shows that G is cyclic.

3. This again follows from the fact that |⟨g⟩| = ord(g).

Theorem 6.18. Every subgroup of a cyclic group is cyclic.

Proof. Assume G is a cyclic group and H is a subgroup of G. Choose g ∈ G such that G = ⟨g⟩. We must show that there exists h ∈ H such that H = ⟨h⟩. If H = {e}, then H = ⟨e⟩, so H is cyclic. Otherwise H contains a nontrivial element x ∈ H. Since G = ⟨g⟩, we know that x = gk for some k ∈ Z, and k ≠ 0 since x ≠ e. Let S = {m ∈ N | gm ∈ H}. We claim that S is nonempty. If k > 0, then k ∈ S. If k < 0, then since H is a subgroup we must have x−1 ∈ H. But by Lemma 5.27 we know that x−1 = g−k, and so −k ∈ S. This proves the claim.

By the well-ordering principle, we know that S has a least element, call it d. We claim that H = ⟨gd⟩. First note that ⟨gd⟩ ⊂ H. This follows from the fact that gd ∈ H by the definition of S, and so Proposition 6.7 tells us that ⟨gd⟩ is a subgroup of H. Now let h ∈ H. Since G = ⟨g⟩, we know that h = gn for some n ∈ Z. By the division algorithm we have n = qd + r with q, r ∈ Z and 0 ≤ r < d. Since gqd ∈ ⟨gd⟩, we know gqd ∈ H. Since H is a subgroup, this tells us that the inverse g−qd ∈ H. Since H is closed under multiplication, we see that h(g−qd) = gr ∈ H. Since d is minimal in S, we must have r = 0. Thus h = gqd ∈ ⟨gd⟩, and H = ⟨gd⟩.

Corollary 6.19. Every subgroup of Z is of the form mZ for some m ∈ Z.
Proof. Let H be a subgroup of Z. By Example 6.14, the group Z is cyclic, so by Theorem 6.18, H is of the form ⟨m⟩ for some m ∈ Z. But ⟨m⟩ = mZ.

6.3 Homomorphisms

In algebra, the most important functions are those that preserve some algebraic structure. For example, in linear algebra, we study linear functions on vector spaces. When we study groups, we study functions that preserve the binary operations on the groups. These are called homomorphisms. We now give a formal definition.

Definition 6.20. Let (G1, ∗1) and (G2, ∗2) be groups. A function ϕ : G1 → G2 is called a group homomorphism (or just a homomorphism) if ϕ(g ∗1 h) = ϕ(g) ∗2 ϕ(h) for all g, h ∈ G1.

Example 6.21. Let n ∈ N. Define a function ϕ : Z → Zn by ϕ(a) = [a]n for all a ∈ Z. Then ϕ is a homomorphism since ϕ(a + b) = [a + b]n = [a]n + [b]n = ϕ(a) + ϕ(b).

Example 6.22. Recall that GL2(R) is a group with binary operation given by matrix multiplication, and R× = R \ {0} is a group under multiplication. Then det : GL2(R) → R× is a group homomorphism since det(AB) = det(A) det(B) for all A, B ∈ GL2(R).

Example 6.23. Recall that Q× is a group under multiplication. Let ϕ : Z → Q× be defined by ϕ(n) = 2n for all n ∈ Z. Then ϕ(n + m) = 2n+m = 2n2m = ϕ(n)ϕ(m) for all n, m ∈ Z. Thus ϕ is a group homomorphism.

Here is a non-example: define ϕ : Z → Z by ϕ(n) = n + 1 for all n ∈ Z. Let m, n ∈ Z. Then ϕ(m + n) = m + n + 1, but ϕ(m) + ϕ(n) = (m + 1) + (n + 1) = m + n + 2. Since these are not equal, ϕ is not a homomorphism.

Example 6.24. Let G and H be any groups, and let eH ∈ H be the identity element. Define a function ϕ : G → H by ϕ(g) = eH for all g ∈ G. Then ϕ is a homomorphism, sometimes called the trivial homomorphism from G to H.

Example 6.25. Let G be any group, and let H be a subgroup of G. The function ι : H → G given by ι(h) = h for all h ∈ H is a homomorphism, and is often called an inclusion map. If H = G, then ι is often called the identity map.

Example 6.26. Let n ∈ Z with n ≥ 3. We now build a group homomorphism Dn → Sn. Recall that Dn is the group of symmetries of a regular n-gon. Let V be the set of vertices of the n-gon, and label V as 1, 2, . . . , n. If f ∈ Dn, then f restricts to a bijective map from V to V. Write f|V for the restriction of f to V. Then f|V ∈ Sn. The map Dn → Sn, f ↦ f|V, is a group homomorphism, since the binary operation on both Dn and Sn is composition of functions.

As an example of this homomorphism, let’s look at the case when n = 3. In Example 4.10, we labeled the vertices of our equilateral triangle as A, B, C, but let’s re-label them as 1, 2, 3. The rotation by 2π/3 maps to (1 2 3) ∈ S3, and the reflection fixing the vertex labelled 1 maps to (2 3) ∈ S3. (The two triangle diagrams shown in lectures are omitted here.)

Proposition 6.27. Let G be a group with identity element eG, and let H be a group with identity element eH. Assume ϕ : G → H is a group homomorphism. Then

1. ϕ(eG) = eH

2. ϕ(g−1) = ϕ(g)−1 for all g ∈ G.

Proof. 1. We have

eHϕ(eG) = ϕ(eG) (by the definition of identity)
= ϕ(eGeG) (by the definition of identity)
= ϕ(eG)ϕ(eG) (by the definition of homomorphism).

The cancellation law then tells us eH = ϕ(eG).

2. We have

ϕ(g)ϕ(g−1) = ϕ(gg−1) (since ϕ is a homomorphism)
= ϕ(eG) (by the definition of inverse)
= eH (by 1).

By Proposition 5.21, ϕ(g)−1 = ϕ(g−1).

The proposition generalizes as follows.

Proposition 6.28. Let ϕ : G → H be a group homomorphism. Then ϕ(gn) = ϕ(g)n for all n ∈ Z.

Proof. The proof is an exercise.
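The defining property in Definition 6.20 is easy to test on examples like Example 6.21. The short Python sketch below is an added illustration (the name phi and the chosen modulus are ad hoc); it checks ϕ(a + b) = ϕ(a) + ϕ(b) for the reduction map Z → Z12 on a batch of random integers, reducing the right-hand side mod 12 so that both sides name the same congruence class.

import random

n = 12
phi = lambda a: a % n   # reduction modulo n, playing the role of a ↦ [a]_n

for _ in range(1000):
    a, b = random.randint(-100, 100), random.randint(-100, 100)
    # Homomorphism property: phi(a + b) = phi(a) + phi(b) in Z_n
    assert phi(a + b) == (phi(a) + phi(b)) % n
print("phi(a + b) = phi(a) + phi(b) for all sampled a, b")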
We have the following definition related to homomorphisms.

Definition 6.29. Let ϕ : G → H be a group homomorphism, and let eH be the identity element of H. The kernel of ϕ is the following subset of G:

ker(ϕ) = {g ∈ G | ϕ(g) = eH}.

Example 6.30. Let n ∈ N, and consider the function ϕ : Z → Zn defined by ϕ(a) = [a]n for all a ∈ Z, as in Example 6.21. We have ϕ(a) = [0]n if and only if n|a. So ker(ϕ) = nZ.

We have the following useful lemma:

Lemma 6.31. Let ϕ : G → H be a group homomorphism, and let eG be the identity element of G. Then ϕ is injective if and only if ker ϕ = {eG}.

Proof. Assume ϕ is injective. By Proposition 6.27, we know that eG ∈ ker ϕ. If g ∈ ker ϕ, then ϕ(g) = ϕ(eG), which implies g = eG since ϕ is injective. Thus ker ϕ = {eG}.

Now assume ker ϕ = {eG}. Suppose g1, g2 ∈ G with ϕ(g1) = ϕ(g2). Let eH be the identity element of H. Then

eH = ϕ(g1)(ϕ(g2))−1
= ϕ(g1)ϕ(g2−1) (by Proposition 6.27)
= ϕ(g1g2−1) (since ϕ is a homomorphism).

Thus g1g2−1 ∈ ker ϕ = {eG}, so g1g2−1 = eG. By Proposition 5.21, we have g1 = (g2−1)−1 = g2.

Also recall that for any function ϕ : G → H, we defined the image of ϕ to be im ϕ = {ϕ(g) | g ∈ G} (see Definition 2.4).

Proposition 6.32. Let ϕ : G → H be a group homomorphism. Then

1. ker ϕ is a subgroup of G.

2. im ϕ is a subgroup of H.

Proof. 1. We will use the subgroup test. Let eG be the identity of G, and let eH be the identity of H. Let g1, g2 ∈ ker ϕ. Then ϕ(g1g2) = ϕ(g1)ϕ(g2) = eHeH = eH, so g1g2 ∈ ker ϕ. By Proposition 6.27, we know eG ∈ ker ϕ. Now assume g ∈ ker ϕ. Then ϕ(eG) = eH, so

ϕ(gg−1) = eH (by definition of the inverse)
ϕ(g)ϕ(g−1) = eH (since ϕ is a homomorphism)
eHϕ(g−1) = eH (since ϕ(g) = eH)
ϕ(g−1) = eH (by definition of the identity).

Thus g−1 ∈ ker ϕ. By the subgroup test, ker ϕ is a subgroup of G.

2. The proof of 2 is an exercise.

Example 6.33. Now consider the homomorphism det : GL2(R) → R× as in Example 6.22. The kernel of det is

ker(det) = {A ∈ GL2(R) | det(A) = 1}.

By Proposition 6.32, this set is a subgroup of GL2(R). It is called the special linear group and is denoted SL2(R).

Proposition 6.34. Let ϕ : G1 → G2 and ψ : G2 → G3 be group homomorphisms. Then the composition ψ ◦ ϕ : G1 → G3 is a homomorphism.

Proof. Let g, h ∈ G1. Then

(ψ ◦ ϕ)(gh) = ψ(ϕ(gh))
= ψ(ϕ(g)ϕ(h)) (since ϕ is a homomorphism)
= ψ(ϕ(g))ψ(ϕ(h)) (since ψ is a homomorphism)
= ((ψ ◦ ϕ)(g))((ψ ◦ ϕ)(h)).

Thus ψ ◦ ϕ is a homomorphism.

6.4 Isomorphisms

Definition 6.35. Let G and H be groups. An isomorphism ϕ : G → H is a bijective group homomorphism.

Example 6.36. Define ϕ : R × R → C by ϕ((a, b)) = a + bi. Then

ϕ((a, b) + (c, d)) = ϕ((a + c, b + d)) = (a + c) + (b + d)i = (a + bi) + (c + di) = ϕ((a, b)) + ϕ((c, d)),

so ϕ is a homomorphism. It’s not hard to check that ϕ is bijective. Thus ϕ is an isomorphism.

Example 6.37. Let ϕ : Z6 → Z2 × Z3 be given by ϕ([a]6) = ([a]2, [a]3). Note that ϕ is well defined, i.e. if [a]6 = [b]6, then ϕ([a]6) = ϕ([b]6). (This follows from the fact that if 6|(a − b), then 2|(a − b) and 3|(a − b).) It’s not hard to check that ϕ is a homomorphism. We claim that ϕ is an isomorphism. To prove this, we need to show that ϕ is injective and surjective.

To show ϕ is injective, we use Lemma 6.31. If ϕ([a]6) = ([0]2, [0]3), then [a]2 = [0]2, so 2|a, and [a]3 = [0]3, so 3|a. Since 2|a and 3|a, we have 6|a, so [a]6 = [0]6. Thus ker ϕ = {[0]6}, and by Lemma 6.31, ϕ is injective. Since ϕ is injective, we have |im ϕ| = |Z6| = 6. Since |Z2 × Z3| = 6, we see that ϕ is surjective. Thus ϕ is an isomorphism.
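Example 6.37 can also be confirmed by brute force. The added sketch below lists the images of the six classes in Z6 under [a]6 ↦ ([a]2, [a]3), with classes represented by remainders, and checks that the images are pairwise distinct; this gives injectivity, and surjectivity follows by counting.

images = {a: (a % 2, a % 3) for a in range(6)}   # the representative a stands for [a]_6
print(images)
# {0: (0, 0), 1: (1, 1), 2: (0, 2), 3: (1, 0), 4: (0, 1), 5: (1, 2)}
assert len(set(images.values())) == 6   # all six images distinct, so phi is a bijection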
Example 6.38. Consider the homomorphism Dn → Sn defined in Example 6.26. It’s not hard to check that this map is injective. Since |D3| = |S3|, it is an isomorphism when n = 3. If n > 3, then |Dn| = 2n < n! = |Sn|. Thus if n > 3, then the map Dn → Sn is not surjective, and is not an isomorphism.

Before stating the next lemma, we must discuss inverses of bijective functions. Let S and T be any sets, and let f : S → T be a bijective function. Using the same logic as in Example 5.8, we can form a function, which we write as f−1 : T → S, with the property that (f ◦ f−1)(t) = t for all t ∈ T and (f−1 ◦ f)(s) = s for all s ∈ S. Explicitly, given t ∈ T, there exists st ∈ S such that f(st) = t. We then define f−1(t) = st. Given an isomorphism ϕ : G → H, there exists an inverse ϕ−1 : H → G defined in this way.

Lemma 6.39. Let G and H be groups, and let ϕ : G → H be an isomorphism.

1. The inverse map ϕ−1 : H → G is an isomorphism.

2. If K is a group, and ψ : H → K is an isomorphism, then the composition ψ ◦ ϕ : G → K is an isomorphism.

Proof. 1. The inverse of a bijective function is bijective, so it suffices to show that ϕ−1 is a homomorphism. Let h1, h2 ∈ H. Then ϕ−1(h1) is the unique element g1 ∈ G such that ϕ(g1) = h1. Similarly ϕ−1(h2) is the element g2 ∈ G such that ϕ(g2) = h2. We have ϕ−1(h1)ϕ−1(h2) = g1g2. Since ϕ is a homomorphism, we have that ϕ(g1g2) = h1h2, so ϕ−1(h1h2) = g1g2. Thus ϕ−1(h1)ϕ−1(h2) = ϕ−1(h1h2), and ϕ−1 is a homomorphism.

2. By Proposition 6.34, we know that ψ ◦ ϕ is a homomorphism. By Proposition 2.10, we know that ψ ◦ ϕ is bijective. Thus ψ ◦ ϕ is an isomorphism.

Definition 6.40. Let G and H be groups. If there exists an isomorphism ϕ : G → H we say that G is isomorphic to H, or that G and H are isomorphic. We write this as G ≅ H.

Note that by Lemma 6.39, if G is isomorphic to H, then H is isomorphic to G, so calling G and H isomorphic makes sense. By the second part of the lemma, if G is isomorphic to H and H is isomorphic to K, then G is isomorphic to K.

We think of isomorphic groups as being the same as each other (the prefix “iso” means “equal”): an isomorphism ϕ : G → H gives us a way to associate to every element in G an element in H, and since ϕ is a homomorphism, we can think of the binary operation on G in the same way as the binary operation on H. In this spirit, the next results show that isomorphisms preserve the algebraic structure we’ve been discussing.

Proposition 6.41. Let G and H be groups, and assume G is isomorphic to H.

1. G is abelian if and only if H is abelian.

2. G is cyclic if and only if H is cyclic.

Proof. 1. This is left as an exercise.

2. Let ϕ : G → H be an isomorphism. Assume G is cyclic. Then there exists an element g ∈ G such that G = ⟨g⟩. We claim that H = ⟨ϕ(g)⟩. Indeed, if h ∈ H, then since ϕ is surjective there exists x ∈ G such that ϕ(x) = h. But x = gn for some n, so h = ϕ(gn) = ϕ(g)n. Thus h ∈ ⟨ϕ(g)⟩ and H ⊂ ⟨ϕ(g)⟩. Since ⟨ϕ(g)⟩ ⊂ H, we have equality, and H is cyclic. For the converse, apply the same argument to the isomorphism ϕ−1 : H → G.

Note that the proposition tells us that if G is abelian and H is not, then G and H are not isomorphic. For example, Z6 and S3 are not isomorphic (even though they have the same order).

Example 6.42. The proposition also tells us that if G is cyclic and H is not, then G and H are not isomorphic. For example, Z4 is cyclic. The group Z2 × Z2 is not cyclic, since it has no element of order 4 (the order of every non-identity element is 2). Thus Z4 is not isomorphic to Z2 × Z2.
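The claim in Example 6.42 that every non-identity element of Z2 × Z2 has order 2 can be verified directly. The following added sketch (with ad hoc helper names) computes the order of each element by repeated addition.

def order(x, add, identity):
    """Order of x in a finite group written additively."""
    k, total = 1, x
    while total != identity:
        total = add(total, x)
        k += 1
    return k

add = lambda u, v: ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)   # addition in Z_2 x Z_2
for x in [(0, 1), (1, 0), (1, 1)]:
    print(x, order(x, add, (0, 0)))   # each non-identity element has order 2, so none has order 4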
Proposition 6.43. Let G and H be groups, and assume ϕ : G → H is an isomorphism. Let g ∈ G. Then ord(g) = ord(ϕ(g)).

Proof. Let eG be the identity element of G, and let eH be the identity element of H. First assume that g has finite order d. Then gd = eG, so ϕ(gd) = eH (by Proposition 6.27). By Proposition 6.28, we have ϕ(g)d = eH. Thus ord(ϕ(g)) ≤ d. If 1 ≤ k ≤ d − 1, then gk ≠ eG. Since ϕ is injective, ϕ(gk) ≠ eH. By Proposition 6.28, ϕ(g)k ≠ eH. Thus ord(ϕ(g)) ≥ d. We see that ord(ϕ(g)) = d.

Now assume g has infinite order. Then gn ≠ eG for all n ∈ N. Since ϕ is injective, we have that ϕ(gn) ≠ eH for all n ∈ N, which implies ϕ(g)n ≠ eH for all n ∈ N. Thus ϕ(g) has infinite order.

We end this section with a proposition that tells us that, up to isomorphism, the only cyclic groups are Z and Zn (for n ∈ N).

Proposition 6.44. Let G be a cyclic group.

1. If G has infinite order, then G ≅ Z.

2. If |G| = n, then G ≅ Zn.

Proof. 1. Assume G has infinite order. Since G is cyclic, there exists g ∈ G such that G = ⟨g⟩. Define a function ϕ : Z → G by ϕ(n) = gn for all n ∈ Z. By Corollary 5.38, ϕ is injective. By the definition of ⟨g⟩, ϕ is surjective. We just need to check that ϕ is a homomorphism. We have ϕ(n + m) = gn+m = gngm = ϕ(n)ϕ(m) by the law of exponents. Thus ϕ is an isomorphism.

2. By Proposition 6.17, there exists g ∈ G of order n with G = ⟨g⟩. Define a function ϕ : Zn → G by ϕ([a]n) = ga for all [a]n ∈ Zn. Note that if [a]n = [b]n, then b = a + kn for some k ∈ Z, so gb = ga+kn = gagkn = ga, so ϕ is well defined. It’s not hard to check that ϕ is a homomorphism. Note that ϕ is injective by Corollary 5.38, and ϕ is surjective by the definition of ⟨g⟩. Thus ϕ is an isomorphism.

7 Binary relations and cosets

7.1 Binary relations

Now let S be a set. A binary relation ∼ on S is a function S × S → {True, False}. If (a, b) maps to True, we write a ∼ b. If (a, b) maps to False, we write a ≁ b.

Example 7.1. Here are some examples of binary relations:

Let S = R, and define ∼ by a ∼ b if a ≤ b.

Let S = Z, and define ∼ by a ∼ b if a|b.

Let S = Z, and fix n ∈ N. Define ∼ by a ∼ b if a ≡ b mod n.

Definition 7.2. Let ∼ be a binary relation on S. Then ∼ is called

reflexive if x ∼ x for all x ∈ S,

symmetric if x ∼ y implies y ∼ x,

transitive if x ∼ y and y ∼ z together imply x ∼ z.

Example 7.3. As above, let S = R, and define ∼ by a ∼ b if a ≤ b. Then

∼ is reflexive since a ≤ a for all a ∈ R.

∼ is not symmetric, since 0 ≤ 1 so 0 ∼ 1, but 1 ≁ 0.

∼ is transitive, since a ≤ b and b ≤ c imply a ≤ c.

Example 7.4. As above, let S = Z, and fix n ∈ N. Define ∼ by a ∼ b if a ≡ b mod n. Then

∼ is reflexive since a ≡ a mod n for all a ∈ Z.

∼ is symmetric since a ≡ b mod n implies b ≡ a mod n.

∼ is transitive since a ≡ b mod n and b ≡ c mod n imply a ≡ c mod n.

Definition 7.5. Let ∼ be a binary relation on a set S. We call ∼ an equivalence relation if ∼ is reflexive, symmetric, and transitive.

For example, the relation ≤ on R is not an equivalence relation. The relation ∼ on Z given by a ∼ b if a ≡ b mod n is an equivalence relation.

Example 7.6. Let G be a group with identity element e, and let H be a subgroup of G. Let ∼ be the binary relation on G given by g1 ∼ g2 if g2−1g1 ∈ H. Then ∼ is reflexive, since given g ∈ G, we have g−1g = e ∈ H. We claim that ∼ is symmetric. Let g1, g2 ∈ G, and assume g1 ∼ g2. This means g2−1g1 ∈ H. Since H is a subgroup, (g2−1g1)−1 ∈ H, i.e. g1−1g2 ∈ H. Thus g2 ∼ g1, and ∼ is symmetric.

Next we claim that ∼ is transitive. Assume g1, g2, g3 ∈ G with g1 ∼ g2 and g2 ∼ g3. Then g2−1g1 ∈ H and g3−1g2 ∈ H.
Since H is a subgroup, we have (g3−1g2)(g2−1g1) ∈ H, i.e. g3−1g1 ∈ H. Thus g1 ∼ g3, and ∼ is transitive. We have shown that ∼ is an equivalence relation.

Remark 7.7. Note that Example 7.6 is a generalization of Example 7.4. Indeed, let G = Z, let n ∈ N, and let H = nZ. Then the relation in Example 7.6 becomes a ∼ b if −b + a ∈ nZ, which is the same as a − b ∈ nZ, which is the same as n|(a − b), which is the same as a ≡ b mod n.

Definition 7.8. Let ∼ be an equivalence relation on a set S, and let x ∈ S. The equivalence class of x is the set

[x] := {y ∈ S | x ∼ y}.

Since ∼ is symmetric, we also have [x] = {y ∈ S | y ∼ x}.

Example 7.9. Consider the equivalence relation on Z in Example 7.4 (given by a ∼ b if a ≡ b mod n). Then the equivalence class of a ∈ Z is exactly the congruence class [a]n = {b ∈ Z | a ≡ b mod n}.

Example 7.10. Let S = R, and let ∼ be defined by x ∼ y if x = y. It’s not hard to check that ∼ is an equivalence relation. Given x ∈ R, the equivalence class of x under this relation is [x] = {x}.

Proposition 7.11. Let S be a set, and let ∼ be an equivalence relation on S. Then each element of S is in one and only one equivalence class.

Proof. Let x ∈ S. Since ∼ is reflexive, we have x ∼ x, so x ∈ [x]. This shows that every element of S is in at least one equivalence class. Now let x ∈ [y]. We must show [x] = [y]. Assume z ∈ [x]. By definition this means x ∼ z. Since x ∈ [y], we have y ∼ x. By transitivity, we have y ∼ z, so z ∈ [y]. Thus [x] ⊂ [y]. The proof that [y] ⊂ [x] is similar. So we have [x] = [y], and x lies in a unique equivalence class.

The proposition tells us that if ∼ is an equivalence relation on S, then

1. S = ⋃x∈S [x].

2. Given x, y ∈ S, either [x] = [y] or [x] ∩ [y] = ∅.

In other words, the set of equivalence classes forms a partition of S.

7.2 Cosets and Lagrange’s theorem

We now study a particular example of equivalence classes, those coming from the relation of Example 7.6. Throughout the section, let G be a group, and let H be a subgroup of G.

Definition 7.12. Let g ∈ G. The subset gH ⊂ G defined by gH = {gh | h ∈ H} is called a left coset of H in G.

We often say coset instead of left coset if the context is understood. There is also a notion of right cosets, which are defined similarly. If the binary operation on G is addition, we write g + H instead of gH.

Example 7.13. Let G = Z, n ∈ N, and H = nZ. Given a ∈ Z, the corresponding coset of H is given by a + nZ = {a + nk | k ∈ Z} = [a]n.

Example 7.14. Let G = S3 and let H = ⟨(1 2)⟩ = {e, (1 2)}. Let g = (1 3). The corresponding coset of H is given by

(1 3)H = {(1 3)e, (1 3)(1 2)} = {(1 3), (1 2 3)}.

If g = (1 2), then the corresponding coset of H is

(1 2)H = {(1 2)e, (1 2)(1 2)} = {(1 2), e} = H.

This is not a coincidence: we have gH = H if and only if g ∈ H (this will follow from the next proposition).

Proposition 7.15. Let g1, g2 ∈ G. The following are equivalent:

1. g1 ∈ g2H

2. g1H = g2H

3. g2−1g1 ∈ H

Proof. (1 ⇒ 3). Assume g1 ∈ g2H. Then g1 = g2h for some h ∈ H, so g2−1g1 = h ∈ H.

(3 ⇒ 2). Assume g2−1g1 ∈ H. Let x ∈ g1H. Then x = g1h for some h ∈ H, so x = (g2g2−1)g1h = g2(g2−1g1h) ∈ g2H. Thus g1H ⊂ g2H. Similarly, if y ∈ g2H, then y = g2k for some k ∈ H, so y = g1(g1−1g2k) ∈ g1H. Here we’re using the fact that H is a subgroup, which implies that (g2−1g1)−1 ∈ H.

(2 ⇒ 1). Assume g1H = g2H. Since e ∈ H, g1e ∈ g1H = g2H, i.e. g1 ∈ g2H.
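Example 7.14 and Proposition 7.15 can be visualised by listing all left cosets of H = ⟨(1 2)⟩ in S3. The added sketch below stores permutations as tuples of images (the helper names are ad hoc); it prints the distinct cosets and shows that they partition S3 into three sets of size two.

from itertools import permutations

def compose(p, q):
    return tuple(p[q[i] - 1] for i in range(len(q)))

S3 = list(permutations((1, 2, 3)))        # all six elements of S3
H = [(1, 2, 3), (2, 1, 3)]                # the subgroup <(1 2)> = {e, (1 2)}

cosets = {tuple(sorted(compose(g, h) for h in H)) for g in S3}
for coset in cosets:
    print(coset)
print(len(cosets), "distinct cosets, each of size", len(H))   # 3 cosets of size 2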
Now we return to the setting of Example 7.6. Recall that we defined a relation ∼ on G by g1 ∼ g2 if g2−1g1 ∈ H, and we showed that ∼ is an equivalence relation. Using the proposition we can find the equivalence classes for ∼: we have

[g] = {x ∈ G | x ∼ g} = {x ∈ G | g−1x ∈ H} = {x ∈ G | x ∈ gH} = gH.

Using Proposition 7.11, we see that

G = ⋃g∈G gH, (7.1)

and that any two cosets of H are either equal or disjoint. The following lemma tells us that all cosets have the same size.

Lemma 7.16. Assume H is finite of order d. Then every left coset of H contains d elements.

Proof. Let g ∈ G. Define a function f : H → gH by f(h) = gh for all h ∈ H. We claim that f is bijective. If gh1 = gh2 for some h1, h2 ∈ H, then by the cancellation law, we have h1 = h2. Thus f is injective. Let x ∈ gH. Then x = gh for some h ∈ H, so x = f(h). Thus f is surjective. Since f is bijective, H and gH have the same number of elements.

Theorem 7.17 (Lagrange’s Theorem). Let G be a group of finite order, and let H be a subgroup of G. Then |H| divides |G|.

Proof. Using (7.1), we can choose g1, g2, . . . , gk ∈ G such that

G = g1H ∪ g2H ∪ · · · ∪ gkH

with giH ∩ gjH = ∅ whenever i ≠ j. Then

|G| = |g1H| + |g2H| + · · · + |gkH| = k|H|

by Lemma 7.16.

Now we may reap the benefits of Lagrange’s Theorem.

Corollary 7.18. Assume G is finite, and let g ∈ G. Then ord(g) divides |G|.

Proof. Let H = ⟨g⟩. By Proposition 6.12, |H| = ord(g), and by Lagrange’s Theorem, |H| divides |G|.

Corollary 7.19. Let p ∈ N be a prime number, and let G be a group of order p. Then G is cyclic.

Proof. Since |G| > 1, G contains a non-identity element g. By Corollary 7.18, ord(g) divides p, and since ord(g) > 1 we must have ord(g) = p. By Proposition 6.17, G is cyclic.

Corollary 7.20. Let G be a group of order n with identity element e. Then gn = e for all g ∈ G.

Proof. Let g ∈ G, and let d = ord(g). By Corollary 7.18, n = dk for some k ∈ Z. By the law of exponents, gn = (gd)k = ek = e.

Corollary 7.21 (Fermat’s Little Theorem). Let p ∈ N be a prime, and let a ∈ Z. Then

1. ap ≡ a mod p

2. If p ∤ a, then ap−1 ≡ 1 mod p

Proof. We prove 2 first. If p ∤ a, then gcd(a, p) = 1, so [a]p ∈ Z×p. Since |Z×p| = p − 1, we have ([a]p)p−1 = [1]p by Corollary 7.20. This tells us that ap−1 ≡ 1 mod p. If p ∤ a, then 1 follows from 2 by multiplying both sides of this equation by a. If p|a, then ap ≡ a ≡ 0 mod p.
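Fermat’s little theorem is easy to spot-check numerically. The added sketch below verifies both statements of Corollary 7.21 for the prime 13 over a range of integers, using Python’s built-in modular exponentiation.

p = 13
for a in range(-50, 50):
    assert pow(a, p, p) == a % p            # statement 1: a^p ≡ a (mod p)
    if a % p != 0:
        assert pow(a, p - 1, p) == 1        # statement 2: a^(p-1) ≡ 1 (mod p) when p does not divide a
print("Corollary 7.21 verified for p = 13 and -50 <= a < 50")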
8 Rings

8.1 Definition and basic properties

Definition 8.1. A ring is a set R with two binary operations + and ∗ satisfying the following properties:

1. (R, +) is an abelian group;

2. ∗ is associative and has an identity element (the identity is an element r ∈ R such that r ∗ s = s ∗ r = s for all s ∈ R);

3. (x + y) ∗ z = x ∗ z + y ∗ z and z ∗ (x + y) = z ∗ x + z ∗ y for all x, y, z ∈ R (this is called the distributive property).

We sometimes write that (R, +, ∗) is a ring to indicate the two binary operations. Note the following:

The binary operation + is usually called addition.

The binary operation ∗ is usually called multiplication, and we usually write rs instead of r ∗ s.

The identity for + is denoted 0R (or 0) and is called the additive identity.

The identity for ∗ is denoted 1R (or 1) and is called the multiplicative identity.

If r ∈ R, we write −r for the inverse of r under addition.

Example 8.2. Here are some common examples of rings:

1. Z forms a ring under the usual addition and multiplication. We have 0Z = 0 and 1Z = 1.

2. R, C, and Q all form rings under the usual addition and multiplication.

3. Zn forms a ring under addition and multiplication of congruence classes.

4. M2(R) forms a ring under matrix addition and multiplication.

Example 8.3. Let R[x] be the set of polynomials with coefficients in R. In other words,

R[x] = {anxn + an−1xn−1 + · · · + a1x + a0 | n ≥ 0, ai ∈ R}.

We define addition and multiplication on R[x] as the usual addition and multiplication of polynomials. For example, if f(x) = Σi aixi and g(x) = Σi bixi, then

f(x) + g(x) = Σi (ai + bi)xi.

Note that R[x] forms a group under addition: the identity element is the zero polynomial, and the additive inverse of f(x) = Σi aixi is −f(x) = Σi (−ai)xi. Multiplication on R[x] is associative with identity equal to the constant polynomial 1, and the distributive property is not hard to check.

Remark 8.4. Note that if R is a ring, then every element r ∈ R must have an inverse under addition, but r does not need to have an inverse under multiplication.

Remark 8.5. Note that if R is a ring, then addition must be commutative, but multiplication need not be commutative. For example, multiplication in M2(R) is not commutative. If multiplication on R is commutative, then R is called a commutative ring.

Example 8.6. If (R1, +1, ∗1) and (R2, +2, ∗2) are rings, then R1 × R2 is a ring with addition defined by (r1, r2) + (s1, s2) = (r1 +1 s1, r2 +2 s2) and multiplication defined by (r1, r2) ∗ (s1, s2) = (r1 ∗1 s1, r2 ∗2 s2). The proof that R1 × R2 forms a ring is an exercise. Note that the additive identity is 0R1×R2 = (0R1, 0R2) and the multiplicative identity is 1R1×R2 = (1R1, 1R2).

Proposition 8.7. Let (R, +, ∗) be a ring.

1. 0R ∗ r = r ∗ 0R = 0R for all r ∈ R.

2. (−r) ∗ s = r ∗ (−s) = −(r ∗ s) for all r, s ∈ R.

It’s worth explaining what the second statement means: −r is the additive inverse of r, and −(r ∗ s) is the additive inverse of r ∗ s. So (−r) ∗ s = −(r ∗ s) means that when we multiply the additive inverse of r by s, the product is the additive inverse of r ∗ s.

Proof. 1. Let r ∈ R. We will show that 0R ∗ r = 0R. We have

0R ∗ r = (0R + 0R) ∗ r (since 0R is the identity under +)
0R ∗ r = 0R ∗ r + 0R ∗ r (by the distributive property)
0R ∗ r − 0R ∗ r = 0R ∗ r + (0R ∗ r − 0R ∗ r) (using the associativity of +)
0R = 0R ∗ r + 0R (since −(0R ∗ r) is the inverse of 0R ∗ r under addition)
0R = 0R ∗ r (since 0R is the identity under +).

The proof that r ∗ 0R = 0R is similar.

2. First we claim that (−r) ∗ s = −(r ∗ s), i.e. that (−r) ∗ s is the additive inverse of r ∗ s. We have

(−r) ∗ s + r ∗ s = (−r + r) ∗ s (by the distributive property)
= 0R ∗ s (since −r is the additive inverse of r)
= 0R

by the first part of the proposition. By Proposition 5.21, (−r) ∗ s = −(r ∗ s). The proof that r ∗ (−s) = −(r ∗ s) is similar.
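Before turning to subrings, here is a brute-force check (an added sketch, not part of the notes) that the distributive property and both statements of Proposition 8.7 hold in the ring Z10, with congruence classes represented by the remainders 0, 1, . . . , 9.

n = 10
add = lambda x, y: (x + y) % n
mul = lambda x, y: (x * y) % n
neg = lambda x: (-x) % n          # additive inverse in Z_n

for r in range(n):
    assert mul(0, r) == mul(r, 0) == 0              # Proposition 8.7(1)
    for s in range(n):
        assert mul(neg(r), s) == neg(mul(r, s))     # Proposition 8.7(2)
        for t in range(n):
            assert mul(add(r, s), t) == add(mul(r, t), mul(s, t))   # distributive property
print("Z_10 satisfies distributivity and Proposition 8.7")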
8.2 Subrings

Definition 8.8. Let (R, +, ∗) be a ring. A subring of R is a subset S ⊂ R such that (S, +, ∗) forms a ring and 1R = 1S.

Just like in the definition of subgroup, if S ⊂ R is a subring, then + and ∗ must be binary operations on S, i.e. S must be closed under addition and multiplication.

Example 8.9. We have containment of subrings Z ⊂ Q ⊂ R ⊂ C (Z is a subring of Q, which is a subring of R, which is a subring of C).

The following theorem lets us check if a subset of a ring forms a subring.

Theorem 8.10. Let (R, +, ∗) be a ring, and let S ⊂ R. Then S is a subring of R if and only if the following 3 properties hold:

1. If x, y ∈ S, then x + y ∈ S and x ∗ y ∈ S.

2. 0R, 1R ∈ S.

3. If x ∈ S, then −x ∈ S.

Proof. Assume S is a subring of R. Then + and ∗ are binary operations on S, so 1 holds. Since (S, +, ∗) forms a ring, we know by definition that (S, +) forms an abelian group. Thus S is a subgroup of (R, +), which implies that 0R ∈ S and that 3 holds (this follows from the subgroup test). Finally 1R ∈ S by definition of a subring. Thus 1, 2, and 3 hold.

Now assume S satisfies 1, 2, and 3. By 1, we have that + and ∗ are binary operations on S. By the subgroup test, (S, +) forms a group, and (S, +) is abelian since + is commutative on R. Since ∗ is associative on R, it is also associative on S. Since 1R ∈ S, we know that S contains an identity element for ∗. Finally, since the distributive property holds on R, it also holds on S.

Example 8.11. Let R = M2(R), and let S be the set of diagonal matrices in M2(R); writing diag(a, b) for the 2 × 2 matrix with diagonal entries a and b and zeros elsewhere, S = {diag(a, b) | a, b ∈ R}. Then

1. If A = diag(a, b) and B = diag(c, d), then A + B = diag(a + c, b + d) and AB = diag(ac, bd), so A + B, AB ∈ S.

2. 0R = diag(0, 0) ∈ S and 1R = diag(1, 1) ∈ S.

3. If A = diag(a, b), then −A = diag(−a, −b) ∈ S.

Thus S is a subring of R.

Example 8.12. Let R be a ring, and consider the ring R × R. Let S = {(r, 0) | r ∈ R}. Then S is not a subring of R × R since 1R×R = (1R, 1R) ∉ S.

8.3 Units

Definition 8.13. Let (R, +, ∗) be a ring. An element r ∈ R is called a unit in R if there exists s ∈ R such that r ∗ s = s ∗ r = 1R.

So units are the elements of R that have a multiplicative inverse in R.
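As a closing illustration (added here, with ad hoc names), the sketch below finds the units of Z12 by brute force, i.e. the classes r for which some s satisfies rs ≡ 1 mod 12.

n = 12
units = [r for r in range(n) if any(r * s % n == 1 for s in range(n))]
print(units)   # [1, 5, 7, 11] -- exactly the classes coprime to 12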
