QT203 Lecture Notes - Mathematical Preliminaries
Document Details
IISc
Baladitya Suri
Summary
These notes provide a concise overview of mathematical preliminaries, specifically focusing on number theory. They discuss various types of numbers, including natural, whole, integers, rational and irrational numbers. The notes also cover important concepts like the least integer and largest integer principles, arithmetic operations on integers, as well as the uniqueness of prime factorization.
Part I
Mathematical Preliminaries

Chapter 1
Number Theory

1.1 Types of numbers

1. Natural numbers N: Intended for counting. The numbers N = {1, 2, 3, ...} form the set of natural numbers.

2. Whole numbers W: If we include zero in the set of natural numbers, we get the set of whole numbers, i.e., W = {0, 1, 2, ...}. More modern texts refer to them as non-negative integers.

3. Integers Z: Natural numbers with positive and negative signs, together with zero. Z = {..., −2, −1, 0, 1, 2, ...}

4. Real numbers R: The set of real numbers is a continuous set spanning (−∞, ∞). Unlike the above sets, it is uncountable. The set of real numbers forms an ordered set: any two distinct real numbers can be compared, so any finite subset can be arranged in ascending or descending order.

5. Rational numbers Q: The set of all real numbers that can be expressed as a ratio of integers. A rational number can always be written either as a terminating decimal or as a repeating decimal.

6. Irrational numbers Q̄: Any real number that is not a rational number is an irrational number. An irrational number can be written neither as a terminating decimal nor as a repeating decimal.

1.1.1 The various quirks of infinities

The set of integers contains the set of whole numbers as a proper subset, which in turn contains the natural numbers as a proper subset:

N ⊂ W ⊂ Z

This is because every natural number is an integer, but not all integers are natural numbers.

QT203 @ IISc – Foundations of Quantum Technologies, Baladitya Suri

However, if we ask the question "how many integers or whole numbers or natural numbers are there?", the answer is the same – infinite. And it is the same grade or degree of infinity – called a countable or enumerable infinity. In fact, any set that can be mapped (element by element) onto the set of natural numbers is said to be of the same order of countable (enumerable) infinity.
While the enumerable infinitude of the above sets does not surprise us, it should come as a surprise that the set of rational numbers is also an enumerable infinity. It was Cantor who proved this strange result, using an ingenious diagonal enumeration of the array of fractions: the set of rational numbers maps one-to-one onto the set of integers, and therefore forms a countable infinity.

In contrast, the sets of irrational numbers, and thereby real numbers, are continuous, uncountable infinities. Unsurprisingly, the set of integers is sparse in the set of real numbers, meaning that a real number cannot in general be approximated to arbitrary precision using integers. However, quite surprisingly, the set of rational numbers is dense: any real number can be approximated to arbitrary precision by a rational number. A related result is that any finite interval of the real line, say (0, 1), contains an infinite number of rational numbers. Strangely enough, this tiny line segment has the same number of points as the entire real line.

The above account should give you enough intuition not to treat infinities lightly and casually, as they are full of puzzles. In all that follows, we will focus our attention only on integers and not worry about real numbers in general. More specifically, we will focus only on positive integers; the results we derive here are fully applicable to negative integers as well.

1.1.2 Least Integer and Largest Integer Principles

The set of real numbers, and thereby of integers, is an ordered set. The sparseness and enumerability of the integers lead to the following two important principles.

The least integer principle states that any nonempty subset of integers, bounded below by a finite bound R, where ∞ > R > −∞, contains a smallest integer.

The largest integer principle states that any nonempty subset of integers, bounded above by a finite bound R, where ∞ > R > −∞, contains a largest integer.
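For a finite sample of such a bounded set, the least integer principle is exactly what Python's min delivers. A small sketch (the values a = 23, b = 5 and the helper name are my illustrative choices; the candidate set is the construction used below to prove the division algorithm):

```python
def least_integer(subset):
    """Least integer principle: a nonempty set of integers bounded below
    contains a smallest element (demonstrated on a finite sample)."""
    assert subset, "the principle needs a nonempty set"
    return min(subset)

# Sample set: the non-negative integers among {a, a - b, a - 2b, ...},
# the construction used in the proof of the division algorithm below.
a, b = 23, 5
candidates = {a - q * b for q in range(a // b + 1)}
r = least_integer(candidates)
```

Here r comes out as 3, the remainder of 23 on division by 5, previewing the proof of Theorem 1 below.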
Note that the same principles cannot hold for rational numbers, even though they too are enumerable.

1.1.3 Arithmetic operations on integers

Two basic arithmetic operations over integers, namely addition (a + b) and multiplication (a · b, or simply ab), satisfy the following properties:

Commutativity: a + b = b + a ; a · b = b · a
Associativity: a + (b + c) = (a + b) + c ; a · (b · c) = (a · b) · c
Distributivity: a · (b + c) = a · b + a · c

Note that subtraction is addition of positive and negative integers, so the above definition of addition suffices. However, division is not the same as multiplication, and it is easy to check that it does not satisfy any of the above properties.

1.2 Division, Divisors and GCD

We say that an integer b ≠ 0 divides another integer a, denoted b | a, if and only if there exists an integer c such that bc = a.

1.2.1 Theorem 1 – Division Algorithm

For any two positive integers a and b, with b ≠ 0 and b ≤ a, there exists a unique pair of integers q and r, with 0 ≤ r ≤ b − 1, such that a = bq + r. This theorem defines the quotient and remainder, and also posits their uniqueness.

Proof: Consider the set of integers {a, a − b, a − 2b, ...}. The subset containing only the non-negative members of this set is bounded below by 0 and therefore, by the least integer principle, has a smallest element. Let that element be r. By construction, r = a − qb for some integer q.

To prove uniqueness, consider the contrary: let there be two pairs of integers (q, r) and (q1, r1) such that a = qb + r = q1 b + r1. This leads to b(q − q1) + r − r1 = 0, which means b | (r − r1). Since both r and r1 lie between 0 and b − 1, we have −b < r − r1 < b, and the only value in this interval divisible by b is 0. Therefore r = r1, and since b ≠ 0, also q = q1. This proves the uniqueness.

From this, the following results follow (for positive integers):

If a | b and b | c, then a | c.
If a | b and a | c, then a | (b + c). By extension, if a | b1, a | b2, ..., a | bn, then a | (c1 b1 + c2 b2 + · · · + cn bn) for any integers c1, c2, ..., cn.

1.2.2 GCD and Euclid's algorithm

The greatest common divisor (GCD) of two positive integers a and b is, as the name suggests, the largest number that divides both. It is denoted by (a, b). The mathematical definition is as follows: the GCD of two positive integers a and b is an integer d such that d | a and d | b, and if any integer c satisfies c | a and c | b, then d ≥ c.

Lemma 1: If a > b and a = bq + r, then (a, b) = (b, r).

The proof is quite simple. Let d = (a, b). Since d | a and d | b, there are integers m, n such that a = md and b = nd. This leads to r = a − qb = md − ndq, so clearly d | r. Now, if c | b and c | r, then c | a (since a = bq + r). But since d is the GCD of a and b, we have c ≤ d. Therefore, by the definition of the GCD, d = (b, r).

1.2.3 Euclidean algorithm for computing GCD

Given two positive integers a, b, from the division algorithm we have a = bq + r. We can then write b = q1 r + r1, where r1 < r, and continue with r = q2 r1 + r2. Proceeding thus, we progressively obtain remainders satisfying r > r1 > r2 > · · · ≥ 0. This set of remainders is bounded below by 0 and, by the least integer principle, contains a smallest value. Hence, for some finite t, rt+1 = 0 and therefore rt−1 = qt+1 rt. Now, invoking the previous lemma,

(a, b) = (b, r) = (r, r1) = · · · = (rt−1, rt).

Since rt | rt−1, we have that rt is the GCD of the numbers a, b.

Corollary 1: From the Euclidean algorithm for the GCD, it is easy to see that for any two positive integers a, b with d = (a, b), there exist two integers x, y (not necessarily positive) such that ax + by = d.

Proof: Work backwards in the Euclidean algorithm to write rt in terms of the original a, b.

Corollary 2: If d | ab and (d, a) = 1, then d | b.
Proof: Starting from ax + dy = 1 (from Corollary 1 above) and multiplying both sides by b, we get abx + dby = b. Since d | ab, there is a c ∈ Z such that dc = ab, and we can write the left side as d(cx + by) = b. Since cx + by is an integer, by definition d | b.

Corollary 3: If d = (a, b), and c | a and c | b, then c | d. That is, every common divisor of a, b is a divisor of the GCD of the two numbers.

Proof: Starting with ax + by = d and dividing both sides by c ≠ 0, we have mx + ny = d/c, where a = mc and b = nc for m, n ∈ Z. Since the LHS is an integer, so is d/c, and we have the result that c | d.

Corollary 4: If a | m and b | m and (a, b) = 1, then ab | m.

Proof: Starting from ax + by = 1 and multiplying both sides by m, we get amx + bmy = m. Writing m = pb = qa for p, q ∈ Z, we get ab(px + qy) = m, or px + qy = m/ab. Since the LHS is an integer, we get ab | m.

Corollary 5: If (a, b) = d, then (a/d, b/d) = 1.

Proof: Assume to the contrary that (a/d, b/d) = m > 1. Then md divides both a and b, and since md > d, we immediately have a contradiction with d being the GCD.

1.3 Application of GCD – linear diophantine equations

An important application of the GCD is in solving a class of integer equations called linear diophantine equations. These are equations of the form ax + by = c, where a, b, c ∈ Z and we seek integral solutions for x, y.

Lemma 1: If d = (a, b) = 1, then ax + by = c has a solution. To prove this, we use the fact that ar + bs = 1 for some r, s ∈ Z (Euclidean algorithm). Multiplying both sides by c, we get a(rc) + b(sc) = c. From this we can see that x0 = rc, y0 = sc is an integral solution.

Lemma 2: If ax + by = c has one solution (x0, y0), then it has an infinite number of solutions x0 + bt, y0 − at, for t ∈ Z. This result is trivial to prove by a simple substitution.

Theorem 2: If (a, b) ∤ c, then the equation ax + by = c has no integral solutions.
If (a, b) | c, then the equation has an infinite number of integral solutions given by

x = r + (b/(a, b)) t,  y = s − (a/(a, b)) t,

where (r, s) is any solution to the equation and t is any integer.

Proof: Let d = (a, b). We prove this theorem in stages.

Lemma 3: If d ∤ c, then ax + by = c has no solutions. We prove the contrapositive: "if the equation has a solution, then d | c". If (x0, y0) is a solution, we have ax0 + by0 = c. Since d | a and d | b, dividing the equation by d gives mx0 + ny0 = c/d, where a = md, b = nd. Since the LHS is an integer, the RHS has to be an integer, and therefore d | c. By the rules of propositional logic, the contrapositive is true if and only if the original statement is true. Therefore, indirectly, we have proven the original statement.

Lemma 4: If d | c, then ax + by = c has solutions. To prove this, we first note that c = md for some m ∈ Z. Dividing the original equation by d on both sides, we get a′x + b′y = m, where a = da′, b = db′. Now, using (a′, b′) = 1 (Corollary 5) and Lemma 1 above, we conclude that ax + by = c has solutions.

Lemma 5: If (a′, b′) = 1, all the solutions of the equation a′x + b′y = m are given by x = r + b′t, y = s − a′t, where (r, s) is any solution. Here we want to show that any solution (r′, s′) is given by r′ = r + b′t, s′ = s − a′t. From Lemmas 1 and 2, we know that there exists an integral solution (r, s) such that a′r + b′s = m. If a′r′ + b′s′ = m is another solution, then subtracting the two, we get a′(r − r′) + b′(s − s′) = 0. From this it follows that a′ | b′(s − s′), and since (a′, b′) = 1, we have a′ | (s − s′). Therefore, there exists a t such that s − s′ = a′t, or equivalently, s′ = s − a′t. Substituting this in a′r′ + b′(s − a′t) = m, and using a′r + b′s = m, we get a′(r′ − r) − a′b′t = 0, or equivalently, r′ = r + b′t.

Finally, noting that a′ = a/(a, b) and b′ = b/(a, b) proves the theorem completely.
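Corollary 1 of section 1.2.3 and Theorem 2 translate directly into a procedure. A minimal sketch in Python; the function names are mine, and the solver returns only a small sample of the infinite solution family:

```python
def extended_gcd(a, b):
    """Return (d, x, y) with a*x + b*y = d = gcd(a, b), by working the
    Euclidean algorithm backwards (Corollary 1)."""
    if b == 0:
        return a, 1, 0
    d, x, y = extended_gcd(b, a % b)
    # b*x + (a mod b)*y = d and a mod b = a - (a//b)*b,
    # so a*y + b*(x - (a//b)*y) = d
    return d, y, x - (a // b) * y

def solve_diophantine(a, b, c, t_values=range(-2, 3)):
    """Sample of integral solutions of a*x + b*y = c (Theorem 2):
    x = r + (b/d)*t, y = s - (a/d)*t. Returns [] when gcd(a, b) does
    not divide c, in which case no integral solution exists."""
    d, x0, y0 = extended_gcd(a, b)
    if c % d != 0:
        return []
    r, s = x0 * (c // d), y0 * (c // d)     # one particular solution
    return [(r + (b // d) * t, s - (a // d) * t) for t in t_values]

sols = solve_diophantine(299, 247, 13)      # exercise 6 of these notes
```

Every pair in sols satisfies 299x + 247y = 13, and extended_gcd(299, 247) returns d = 13, matching the Euclidean chain 299 = 1·247 + 52, 247 = 4·52 + 39, 52 = 1·39 + 13, 39 = 3·13.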
1.4 Factorisation of integers

If b | a, for b, a ∈ Z and b ≤ a, then we say that b is a divisor or factor of a. Conversely, we say that a is divisible by b. We say that b | a if and only if, in a = bq + r, we have r = 0.

An integer that has only 1 and itself as its factors is called a prime number.

Corollary: For any two distinct prime numbers p, q, we have (p, q) = 1. This follows from the definition.

An integer c that has an integer a, with 1 < a < c, as a factor is called a composite number.

1.5 Prime Numbers and Factorisation

Lemma 1: Every integer n > 1 is divisible by a prime.

Proof: Consider an integer n > 1, and consider the set D of all divisors c of n such that 1 < c < n. This set is either empty or non-empty. If the set is empty, then n itself is a prime and is therefore divisible by itself. If the set is non-empty, then by the least integer principle it has a smallest element p. If p were itself divisible by another integer between 1 and p, that integer would be a smaller divisor of n, and p would not be the smallest. Therefore p, by definition, is a prime number.

Corollary 1: The smallest divisor p > 1 of a number n > 1 is a prime.

Lemma 2: Every integer n > 1 can be written as a product of primes.

Proof: From Lemma 1, we have n = p1 n1, for p1 a prime. Now, if n1 > 1, then n1 is itself either a prime or divisible by a prime p2. Proceeding thus, we have n = p1 p2 ... pr nr, with n1 > n2 > ... > nr. This sequence is bounded below by 1 and will stop for some t such that nt = pt, a prime, and therefore nt+1 = 1. We then have the result that n = p1 p2 ... pt.

Theorem 3: There are an infinite number of primes.

Proof: Euclid gave a very elegant proof of this theorem. Assume the contrary: that there are only a finite number N of primes p1, p2, ..., pN. Construct the number M = p1 p2 ... pN + 1. Since M > 1, by Lemma 1 it is divisible by a prime, which by assumption must be some pi ∈ {p1, ..., pN}.
However, for every pi, we see that M = pi qi + ri with ri = 1. Since ri ≠ 0, pi ∤ M for all i. This means M cannot be written as a product of the primes p1, ..., pN, and is therefore a prime number in itself. This contradicts the assumption that there are only N primes, since we have just constructed an (N + 1)-th prime. Therefore, we conclude that there are an infinite number of primes. Euclid's proof also gives a recipe to construct an infinite sequence of primes starting from any two primes p1, p2.

Lemma 3: Every composite number n > 1 has a prime factor p such that 1 < p ≤ √n. The proof is trivial.

Lemma 4: If a prime p is such that p | ab, then either p | a or p | b.

Lemma 5: If a prime p satisfies p | q1 ... qk, where the qi are all prime, then p = qi for some 1 ≤ i ≤ k.

Theorem 4 – Unique prime factorisation: The prime factorisation of a number n = p1 p2 ... pr is unique.

Proof: Assume the contrary. Let n = p1 p2 ... pr = q1 q2 ... qs, where pi, qj are primes for all 1 ≤ i ≤ r, 1 ≤ j ≤ s. Since p1 divides the LHS, it has to divide the RHS. Since the RHS is a product of primes, we conclude (Lemma 5) that p1 must be equal to one of the qj; without loss of generality, label it q1. We can then cancel this factor and get p2 ... pr = q2 ... qs. Repeating the same logic again and again, we see that every prime pi has to be present in the product q1 ... qs. By labelling the corresponding prime on the RHS as qi, we see that there is a one-to-one correspondence between the factors, and therefore r = s. The two factorisations contain the same prime factors and are therefore not different. Hence the prime factorisation of an integer is unique.

Corollary: From the unique factorisation theorem we conclude that every integer n ≥ 1 can be written uniquely in the form n = p1^e1 p2^e2 ... pr^er, where pi ≠ pj when i ≠ j, and ei ≥ 0 for all i. This is called the prime-power decomposition of n.
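Lemma 2 is constructive: keep splitting off the smallest divisor greater than 1, which by Corollary 1 of section 1.5 is always prime. A minimal sketch (plain trial division, chosen for clarity rather than efficiency):

```python
def prime_factors(n):
    """Prime factorisation of n > 1 in non-decreasing order, by repeatedly
    dividing out the smallest divisor > 1 (always a prime, Corollary 1)."""
    factors = []
    p = 2
    while p * p <= n:
        while n % p == 0:       # split off p as many times as it divides n
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)       # remaining cofactor has no divisor <= sqrt(n), so it is prime
    return factors

factors = prime_factors(2 * 3 * 3 * 7 * 13)
```

By Theorem 4, the output [2, 3, 3, 7, 13] is the only multiset of primes with this product, however the number 1638 is presented.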
Theorem 5: Given two integers m = p1^e1 ... pk^ek and n = p1^f1 ... pk^fk, the GCD of the two numbers is given by d = p1^g1 ... pk^gk, where gi = min(ei, fi).

Proof: You can complete the proof. It is trivial.

Theorem 6: For every positive integer n ≥ 1, with prime-power factorisation n = p1^e1 ... pk^ek, where ei ≥ 0 for all i, the total number of divisors, including 1 and n itself, is

d(n) = (e1 + 1)(e2 + 1) ... (ek + 1)

Proof: Consider the set D = {d : d = p1^f1 ... pk^fk, 0 ≤ fi ≤ ei for all i}. Every element d ∈ D is a divisor of n. This can be shown trivially, and it can also be shown easily that every divisor of n is an element of D. Therefore D is the set of all divisors of n, and the number of elements in D is the desired answer – the total number of divisors of n. For each pi, 1 ≤ i ≤ k, since 0 ≤ fi ≤ ei, there are ei + 1 possible values of fi. Considering that there are k such distinct primes in the factorisation of n, we get the final answer d(n) = (e1 + 1)(e2 + 1) ... (ek + 1).

1.6 Congruences

1.6.1 Modulo arithmetic

Definition: For positive integers a, b, m, we say that a ≡ b (mod m), read as "a is congruent to/with b, modulo m", if m | (a − b).

Theorem 7: a ≡ b (mod m) if and only if there exists an integer k such that a = b + km. The proof is just the definition of the congruence.

Theorem 8: Every positive integer a is congruent with one of 0, 1, 2, ..., m − 1 modulo m.

Proof: Since a, m are positive integers, it follows that a = km + r, where 0 ≤ r ≤ m − 1, for some non-negative integer k. From Theorem 7 above, since there exists a k where a = r + km, we can state that a ≡ r (mod m). Therefore, every positive integer is congruent with one of the integers 0, 1, 2, ..., m − 1 – all the possible values of r. Such an r, where 0 ≤ r ≤ m − 1, is called the least residue. This theorem states that ALL positive integers map to one of these m least residues.
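Theorems 5 and 6 can be checked numerically from the prime-power decomposition. A sketch with illustrative helper names of my choosing:

```python
def prime_power_decomposition(n):
    """n = p1^e1 ... pk^ek as a dict {p: e} (trial division)."""
    exps = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            exps[p] = exps.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        exps[n] = exps.get(n, 0) + 1
    return exps

def num_divisors(n):
    """Theorem 6: d(n) = (e1 + 1)(e2 + 1)...(ek + 1)."""
    count = 1
    for e in prime_power_decomposition(n).values():
        count *= e + 1
    return count

def gcd_by_exponents(m, n):
    """Theorem 5: the GCD takes min(ei, fi) for each common prime."""
    fm, fn = prime_power_decomposition(m), prime_power_decomposition(n)
    d = 1
    for p in fm.keys() & fn.keys():
        d *= p ** min(fm[p], fn[p])
    return d
```

For example, 60 = 2^2 · 3 · 5 gives d(60) = 3 · 2 · 2 = 12, and the count agrees with a brute-force scan over all candidate divisors.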
In fact, in a lot of cases it is this basic feature of congruences that allows us to deduce extremely powerful results.

Very often, negative integers are also used in congruences, such as 10 ≡ −1 (mod 11), 5 ≡ −2 (mod 7), etc. This is done for convenience, to trade larger remainders for smaller negative ones. The rules of modulo arithmetic remain the same.

Theorem 9: a ≡ b (mod m) if and only if a and b leave the same remainder modulo m. The proof is trivial and you should complete it.

Corollary: If r, with 0 ≤ r ≤ m − 1, is the least residue/remainder modulo m, and a ≡ r, then by definition a + nm ≡ r (mod m) for all n ∈ Z.

1.6.2 Arithmetic with congruences

Congruences are a lot like equalities, as we can see from the following list of properties. All of these results assume (mod m) for some m ∈ Z:

1. a ≡ a
2. a ≡ b means b ≡ a
3. a ≡ b, b ≡ c means a ≡ c
4. a ≡ b, c ≡ d means a + c ≡ b + d
5. a ≡ b, c ≡ d means ac ≡ bd

One can prove all of the above from the basic definitions of congruences. However, congruences also have some fundamental differences from equalities.

Theorem 10: If ac ≡ bc (mod m) and (c, m) = 1, then a ≡ b (mod m).

Proof: From the definition of congruence, m | (ac − bc), and therefore m | c(a − b). Since it is given that (m, c) = 1, m shares no common factor with c, so m | (a − b), or equivalently, a ≡ b (mod m).

Theorem 11: If ac ≡ bc (mod m) and (c, m) = d, then a ≡ b (mod m/d).

Proof: Given (c, m) = d, we have (c/d, m/d) = 1. Since ac ≡ bc (mod m), we have ac = bc + km for some k ∈ Z. Dividing by d, we get (a − b)(c/d) = k(m/d). Since (m/d, c/d) = 1, we have m/d | (a − b), or by definition, a ≡ b (mod m/d).

1.6.3 Divisibility rules in arithmetic

In school we study the divisibility rules of certain integer divisors – notably 2, 3, 4, 5, 6, 9, 10. These rules are transparent in a couple of cases, like 2, 5 and 10, but quite strange for the others.
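Theorems 10 and 11 above are easy to test numerically. A small sketch with hand-picked illustrative numbers (my choices, not from the notes):

```python
def congruent(a, b, m):
    """a is congruent to b (mod m) iff m divides (a - b)."""
    return (a - b) % m == 0

# Theorem 10: cancel c when (c, m) = 1.
# 3*4 is congruent to 10*4 (mod 7), and gcd(4, 7) = 1, so 3 is congruent to 10 (mod 7).
t10 = congruent(3 * 4, 10 * 4, 7) and congruent(3, 10, 7)

# Theorem 11: when (c, m) = d, cancelling c shrinks the modulus to m/d.
# 2*6 is congruent to 4*6 (mod 12) with gcd(6, 12) = 6, so 2 is congruent to 4
# (mod 12/6 = 2), even though 2 and 4 are NOT congruent mod 12.
t11 = congruent(2 * 6, 4 * 6, 12) and congruent(2, 4, 2) and not congruent(2, 4, 12)
```

Both t10 and t11 come out True, and congruent(10, -1, 11) confirms the 10 ≡ −1 (mod 11) convention mentioned above.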
We assume the decimal representation of numbers in the rules listed below.

If a number has an even last digit, it is divisible by 2.
If the sum of digits of a number is divisible by 3, then the number is divisible by 3.
If the number formed by the last two digits is divisible by 4, then the number itself is divisible by 4.
If the number ends in a 5 or 0, then it is divisible by 5.
If a number is divisible by both 2 and 3, it is divisible by 6.
If the sum of digits of a number is divisible by 9, then the number itself is divisible by 9.
If the last digit is 0, then the number is divisible by 10.

If we ask the question "what is the divisibility rule for 7, or 11, or 13?", is there a general approach to figure out the answer? The general answer to the divisibility rule of ANY integer is in fact rooted in a study of congruences.

Let us take the example of divisibility by 11. Consider an n-digit number. In the decimal system, we represent it by an−1 an−2 ... a1 a0, where 0 ≤ ai ≤ 9 for all i. The number has the value an−1 10^(n−1) + an−2 10^(n−2) + ... + a1 10 + a0. Since each of the digits is a least residue modulo 11, it is congruent with itself. That leaves us to consider the congruences of powers of 10 (modulo 11): 10 ≡ −1, 10^2 ≡ 1, 10^3 ≡ −1, etc. From this, we deduce that if a0 − a1 + a2 − a3 + ... + (−1)^(n−1) an−1 is divisible by 11, then the entire number is divisible by 11. Similarly, one can work out the divisibility rules of the other integers.

1.6.4 Linear congruences and connection with linear Diophantine equations

The equation ax ≡ b (mod m) is called a linear congruence, where an integer solution for x is sought. Usually, by a solution of a congruence equation we mean a least residue. And by definition, if there is one solution, there exist an infinite number of integers that map to that least residue and are therefore also solutions of the same equation. Using the definition of congruence, m | (ax − b).
That is, there exists a k such that ax − b = km, or equivalently, we need integral solutions for x, k of the familiar linear Diophantine equation ax + mk = b. This equation has solutions only when (a, m) | b. The same result, stated in the language of congruences, translates to the lemma below.

Lemma 6: If (a, m) ∤ b, then ax ≡ b (mod m) does not have a solution.

Proof: We prove the contrapositive statement: "if ax ≡ b (mod m) has a solution, then (a, m) | b". If r is a solution of the equation, then ar ≡ b (mod m). Therefore, by definition, m | (ar − b), and there exists a k such that ar − b = km, or ar + km = b. Since (a, m) | a and (a, m) | m, the LHS is divisible by (a, m), and therefore (a, m) | b. By the rules of contraposition, if (a, m) ∤ b, then ax ≡ b (mod m) has no solutions.

Lemma 7: If (a, m) = 1, the equation ax ≡ b (mod m) has exactly one solution.

Proof: We know that there exist u, v ∈ Z such that au + mv = 1. Multiplying both sides by b, we get a(ub) + m(vb) = b. This has the form ap + mq = b, with p = ub, q = vb. Since there exists a q such that ap − b = −qm, we can say m | (ap − b), or ap ≡ b (mod m). The least residue of p modulo m is therefore one solution; call it r. If there is another least residue s which is also a solution, we have ar ≡ b and as ≡ b, and therefore ar ≡ as (mod m). Since (a, m) = 1, we can cancel the common factor (Theorem 10) and write r ≡ s (mod m). Therefore m | (r − s). Since r, s are both least residues, we have −m < r − s < m, and the only number in this range divisible by m is 0. Thereby r = s, which proves that there is exactly one solution.

Lemma 8: If (a, m) = d, and d | b, then ax ≡ b (mod m) has exactly d solutions.

Proof: Since (a, m) = d, we have (a/d, m/d) = 1. Writing a′ = a/d and m′ = m/d, this means a′x ≡ b/d (mod m′) has exactly one solution. Let that solution be r.
Therefore, we have a′r ≡ b/d (mod m′), where 0 ≤ r ≤ m′ − 1. This means there exists a k ∈ Z such that a′r = b/d + km′. If r is a least residue solution of a′x ≡ b/d (mod m′), we know that r + nm′, for n ∈ Z, are all integers that leave the same least residue (remainder) r modulo m′. If we now consider 0 ≤ n ≤ d − 1, we have the solutions r, r + m/d, r + 2m/d, ..., r + (d − 1)m/d. Since 0 ≤ r ≤ m/d − 1, we have 0 ≤ r + (d − 1)m/d ≤ m − 1. Therefore, these d values are least residues modulo m, and the equation ax ≡ b (mod m) has exactly d solutions.

Theorem 12: The linear congruence ax ≡ b (mod m) has no solutions if (a, m) ∤ b. If (a, m) | b, then there are exactly (a, m) solutions to the equation. The proof of this theorem is the combination of all the above lemmas (6–8).

Theorem 13 – Fermat's Theorem: If p is a prime and a is any integer not divisible by p, i.e., (a, p) = 1, then a^(p−1) ≡ 1 (mod p).

Lemma 9: If p is a prime and (a, p) = 1, the numbers a, 2a, 3a, ..., (p − 1)a are congruent to 1, 2, 3, ..., p − 1 in some order. Complete the proof of this!

Now, using the above lemma, we can consider the product a · 2a · 3a · ... · (p − 1)a ≡ 1 · 2 · 3 · ... · (p − 1). Since the numbers 1, ..., p − 1 are relatively prime to p, by the rules of congruences we can cancel the common factor of (p − 1)! from both sides and find that a^(p−1) ≡ 1 (mod p).

Theorem 14 – Euler's theorem: This theorem is a generalisation of Fermat's theorem above. If m > 1 and (a, m) = 1, then a^ϕ(m) ≡ 1 (mod m). Here, ϕ(m) is called the Euler totient function and is defined for all m > 1 as ϕ(m) = "the number of positive integers less than m that are relatively prime to m".

Lemma 10: If m > 1 is a positive integer, and r1, r2, ..., rϕ(m) are the ϕ(m) positive integers less than m that are relatively prime to m, then for every positive integer a that is relatively prime to m, the numbers r1 a, r2 a, ..., rϕ(m) a are congruent with r1, r2, ..., rϕ(m) in some order.
– Complete the proof of this lemma. Using the above lemma, Euler's theorem can be proved instantly.

1.7 Exercises

1. Prove all the above lemmas and theorems for yourself, to get comfortable with the logic involved.
2. Derive the divisibility rules for 7 and 13.
3. Calculate (314, 159) and (10001, 100083).
4. Prove that (n, n + 1) = 1 for all n > 0.
5. Under what conditions is (k, n + k) = 1, for n, k > 1?
6. Find x, y such that 299x + 247y = 13.
7. Solve the linear congruence equations 2x ≡ 4 (mod 7), 14x ≡ 27 (mod 31), 4x ≡ 1 (mod 15).
8. Prove that if n is composite, 2^n − 1 is composite.
9. Show that 9x + 4, for any x ∈ Z, cannot be a sum of three cubes. Hint: use congruences.
10. Show that for a prime p, the equation x^2 ≡ 1 (mod p) has precisely two solutions.

Chapter 2
Linear Algebra Review

2.1 Introduction

These notes are meant to be a quick review of certain important concepts in linear algebra, and the student is advised to take up a more formal study to learn the techniques more deeply. Given that all undergraduate curricula have a course in linear algebra, the treatment here is at best a compendium of results for ready reckoning. Moreover, the topics here are chosen based on their relevance to quantum technologies, and are not representative of a complete course on the subject. Lastly, the treatment here is not rigorous up to a mathematician's standards, and is only meant to build intuition and an understanding of the concepts sufficient to apply them.

2.2 Linear Vector Spaces

The concept of linearity is all-pervading in science, be it linear equations, linear programming, linear time-invariant systems, linear differential equations, etc. In all these cases, the definition is simple: a system is linear if it allows for the principle of superposition – a general member of the set is a linear combination of other members of the set.
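As a toy illustration of superposition (my example, not from the notes): the solutions of the homogeneous linear recurrence f(n) = f(n−1) + f(n−2) are closed under linear combination, so any superposition of two solutions is again a solution:

```python
def is_solution(seq):
    """Check that a sequence satisfies the linear recurrence
    f(n) = f(n-1) + f(n-2) at every interior point."""
    return all(seq[n] == seq[n - 1] + seq[n - 2] for n in range(2, len(seq)))

def recurrence(f0, f1, length=10):
    """Generate a solution of f(n) = f(n-1) + f(n-2) from initial data."""
    seq = [f0, f1]
    for _ in range(length - 2):
        seq.append(seq[-1] + seq[-2])
    return seq

f = recurrence(1, 0)       # one solution
g = recurrence(0, 1)       # an independent solution
combo = [3 * a - 2 * b for a, b in zip(f, g)]   # a superposition 3f - 2g
superposed = is_solution(combo)   # the superposition is again a solution
```

Because the recurrence is linear and homogeneous, superposed comes out True for any choice of coefficients, which is exactly the defining property of a linear system.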
We use the term vector as a name for any such object that satisfies the principle of superposition. Some examples are position vectors in real space, column vectors in matrix theory, solutions of a linear differential equation, etc.

A vector space V is a collection of vectors which satisfies the principle of superposition – that is, if ⃗a and ⃗b are vectors in V, then α⃗a + β⃗b also belongs to V, for any two scalars α, β. The word scalar here simply means a number; these numbers can be either real or complex. The null vector in this space is defined as the vector ⃗0 such that for any ⃗a ∈ V, ⃗a + ⃗0 = ⃗a.

If the scalars are restricted to be purely real numbers, then we have a real vector space. In general, unless otherwise specified, we deal with complex vector spaces.

2.2.1 Linear independence, Dimensionality and Bases of a vector space

We say that n vectors ⃗a1, ..., ⃗an are linearly independent if and only if c1⃗a1 + c2⃗a2 + ... + cn⃗an = ⃗0 only for c1 = c2 = · · · = cn = 0. That is, no non-trivial linear combination of linearly independent vectors results in the null vector. If a non-trivial (i.e., ci ≠ 0 for some i) linear combination of vectors adds up to the null vector, the vectors are said to be linearly dependent. In other words, one of those vectors can be written as a linear combination of the others.

The maximum number of linearly independent vectors in a space V is the dimensionality of the space V. For example, in Euclidean 3D space there can only be 3 linearly independent vectors; in a 2D plane, no more than 2 vectors can be independent. We usually deal with finite-dimensional vector spaces, but in physics we very often deal with countably infinite-dimensional vector spaces, and occasionally with uncountably infinite-dimensional ones.
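Linear independence can be tested mechanically by Gaussian elimination: n vectors are independent exactly when elimination leaves n nonzero rows. A minimal sketch with plain Python lists (no linear-algebra library assumed; the function name is mine):

```python
def rank(vectors, tol=1e-12):
    """Number of linearly independent vectors, via Gaussian elimination."""
    rows = [list(map(float, v)) for v in vectors]
    r = 0
    for col in range(len(rows[0])):
        # find a row at or below position r with a nonzero entry in this column
        pivot = next((i for i in range(r, len(rows)) if abs(rows[i][col]) > tol), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # eliminate this column from every other row
        for i in range(len(rows)):
            if i != r and abs(rows[i][col]) > tol:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

independent = rank([(1, 0, 0), (1, 1, 0), (1, 1, 1)]) == 3
dependent = rank([(1, 2, 3), (2, 4, 6)]) == 1     # second vector = 2 x first
```

The second example shows a non-trivial combination 2⃗a1 − ⃗a2 = ⃗0, so only one of the two vectors is independent, in line with the definition above.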
A set of n linearly independent vectors in an n-dimensional vector space V is sufficient to define every other vector in the space through linear combination. These n linearly independent vectors are said to span the entire space V. Any n linearly independent vectors can form a basis set for the n-dimensional vector space, and any vector in the space can then be written as a (scaled) superposition of these basis vectors. If any vector in the space can be written as a linear combination of the basis vectors, the basis is said to be complete. For a given basis, a given vector in the space can be written as a unique superposition or linear combination of the basis vectors.

In the study of linear differential equations, we sometimes encounter sets of basis functions – complex exponentials, sines, cosines, Legendre polynomials, Hermite-Gauss polynomials, etc. These basis functions usually form a countably, and sometimes uncountably, infinite-dimensional set.

2.2.2 Inner Product spaces and Hilbert Spaces

An inner product is a function that takes two vectors from the vector space and gives a scalar as the output. It is denoted in general by ⃗a · ⃗b. If an inner product is defined on a vector space, we call it an inner product space.

The inner product of a vector with itself is used to define the norm or length of the vector: ⃗a · ⃗a = |⃗a|^2. Given a vector ⃗a in an inner product space, we can define a normalised unit vector â = ⃗a/|⃗a|. The inner product ⃗a · b̂, where b̂ is a unit vector, gives the projection of ⃗a along the direction of b̂. Two vectors are said to be orthogonal to each other if their inner product is 0. Two (nonzero) orthogonal vectors are necessarily linearly independent, but not the other way around.

In an n-dimensional inner product space, we can construct n orthonormal vectors – vectors of unit norm, orthogonal pair-wise. These vectors specify n directions, orthogonal to each other.
For example, the x, y, z axes in Euclidean space. These n orthonormal vectors are usually termed basis vectors. There are usually many bases possible for a given inner product space. The inner product of any vector with these orthonormal vectors gives us the components along each of those directions.

The definition of the inner product is different for different vector spaces:

– For the real 3D Euclidean space, for two vectors ⃗a = a1 x̂ + a2 ŷ + a3 ẑ and ⃗b = b1 x̂ + b2 ŷ + b3 ẑ, we have ⃗a · ⃗b = a1 b1 + a2 b2 + a3 b3.

– For two real column vectors x = (x1, x2, ..., xn)^T and y = (y1, y2, ..., yn)^T, we have x · y = Σ_{i=1}^{n} xi yi. In terms of matrix multiplication, this translates to x^T y.

– For two complex column vectors x and y, we have x · y = Σ_{i=1}^{n} xi* yi. In terms of matrix multiplication, this translates to x† y, where † denotes the conjugate transpose.

– For two complex functions f(x), g(x) over an interval (a, b), the inner product is defined as the overlap integral ∫_a^b dx f*(x) g(x).

A Hilbert space is an object we encounter a lot in quantum mechanics. It is a complex inner product space, typically of infinite dimensions, in which the state vectors are usually normalised to unit norm. It is typically a function space. Function spaces have basis functions, and any function satisfying the same boundary conditions as the basis functions can be expressed as a linear superposition of the basis functions.

– A typical example is a Fourier series. Any periodic function x(t) with a fundamental period T can be written as x(t) = Σ_n an e^{inω0 t}, where ω0 = 2π/T. Here the functions e^{inω0 t} form an orthogonal set of basis functions (orthonormal once normalised by 1/√T).

– Any function f(x) that goes to zero at x = 0 and x = L can be written as f(x) = Σ_n an sin(nπx/L). The functions √(2/L) sin(nπx/L) again form an orthonormal basis.
– There are many such examples – Fourier and other transforms, Taylor series, expansions in Legendre polynomials, Hermite-Gauss polynomials, etc. 2.3 Matrices We only deal with square n × n matrices. We also assume that, in general, all matrices are complex. The more important types of complex matrices are Hermitian and Unitary. A Hermitian matrix is its own adjoint (conjugate-transpose). Therefore it is also called self-adjoint. A unitary matrix is one whose adjoint is its inverse. Given a matrix A, we can compute its eigenvalues and eigenvectors from the eigenvalue equation A · x = λx. An n × n matrix always has n complex eigenvalues (counted with multiplicity), given by the roots of the characteristic polynomial equation |A − λI| = 0. The eigenvalues need not all be different. The eigenvectors corresponding to two different eigenvalues of any matrix are linearly independent. The determinant of a matrix is given by the product of its eigenvalues. If the determinant is 0, that is, if one of the eigenvalues is 0, the matrix is not invertible. The inverse of a matrix exists only if its determinant is not zero. The trace of a matrix is the sum of its diagonal elements. It can also be shown that the trace of a matrix is equal to the sum of its eigenvalues. Let two vectors x, y be related via the operation y = Ax. Now, consider all the vectors x in this space being transformed according to x′ = Rx, say due to a basis change or another coordinate transformation, where R is an invertible matrix. Under this transformation, we now have the modified equation y′ = A′x′, where A′ = RAR⁻¹. The transformation RAR⁻¹, for some invertible R, is called a similarity transformation, and A, A′ are said to be similar matrices. Eigenvalues of a matrix remain invariant under a similarity transformation. Therefore, so do the determinant and the trace. 
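The invariance of eigenvalues, trace, and determinant under a similarity transformation can be verified directly. A sketch, assuming NumPy; the matrix A and the transformation R are random choices (fixed by a seed) made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = rng.standard_normal((4, 4))            # generically invertible
Ap = R @ A @ np.linalg.inv(R)              # similarity transform A' = R A R^-1

# Eigenvalues (hence trace and determinant) are invariant.
ev  = np.sort_complex(np.linalg.eigvals(A))
evp = np.sort_complex(np.linalg.eigvals(Ap))
assert np.allclose(ev, evp)
assert np.isclose(np.trace(A), np.trace(Ap))     # trace = sum of eigenvalues
assert np.isclose(np.linalg.det(A), np.linalg.det(Ap))   # det = product of eigenvalues
```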
Even though an n × n matrix always has n eigenvalues, there is no guarantee that it has n linearly independent eigenvectors, unless the eigenvalues are all non-degenerate (n different eigenvalues). (The rank of a matrix, strictly speaking, is the number of linearly independent rows or columns it has; what matters for diagonalisation is the number of linearly independent eigenvectors. A matrix with n linearly independent eigenvectors is said to have a full set of eigenvectors.) If a matrix R_d is formed with the n linearly independent eigenvectors of a matrix A as its columns, this matrix R_d diagonalises A, i.e., A_d = R_d⁻¹ A R_d is a diagonal matrix with the eigenvalues of A as its diagonal entries. A matrix A for which such an R_d exists is called diagonalisable. A matrix is diagonalisable if and only if it has a full set of n linearly independent eigenvectors. This also means that an n×n diagonalisable matrix gives a set of n linearly independent, and therefore complete, basis vectors that span the space. Therefore, diagonalisable matrices have a special place, in the sense that their eigenvectors form a complete set of basis vectors to expand any other vector in the space. The same holds for a complete set of basis functions. This immediately raises the question of which class of matrices is always diagonalisable. The Spectral Theorem of linear algebra is important here, since it identifies such a class: the Normal matrices. A normal matrix is one that commutes with its own Hermitian conjugate. Hermitian and unitary matrices are examples of normal matrices. The Spectral Theorem: Every normal matrix is diagonalisable by a unitary matrix. This theorem essentially states the following important results – 1. Every normal matrix of n × n dimensions has n linearly independent eigenvectors. 2. The eigenvectors are not only linearly independent, but can also be chosen orthonormal. When these orthonormal vectors are used as the columns of a matrix, that matrix is unitary. 3. 
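The spectral theorem can be demonstrated for a Hermitian matrix (a sketch assuming NumPy; the matrix is a random choice, symmetrised to make it Hermitian and hence normal):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = M + M.conj().T                 # Hermitian, hence normal

# H commutes with its adjoint: the definition of a normal matrix.
assert np.allclose(H @ H.conj().T, H.conj().T @ H)

w, U = np.linalg.eigh(H)           # eigenvalues w, orthonormal eigenvectors as columns of U
assert np.allclose(U.conj().T @ U, np.eye(3))   # U is unitary
Hd = U.conj().T @ H @ U                         # U^dagger H U
assert np.allclose(Hd, np.diag(w))              # diagonal, with the eigenvalues as entries
```

Here `eigh` returns the orthonormal eigenvectors the theorem guarantees; for a general normal (non-Hermitian) matrix one would use Schur decomposition instead.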
In other words, for every normal matrix N, we have a unitary matrix U such that N_d = U† N U, where N_d is a diagonal matrix with the eigenvalues of N as the entries. Chapter 3 Group Theory 3.1 Groups and Group Axioms A group G is a set of elements on which a binary operation · (called the group product, or group multiplication, or group operation) is defined, and which has the following properties. 1. Closure Property: If a, b ∈ G, then a · b ∈ G. 2. Associative Property: The group operation is associative – a · (b · c) = (a · b) · c. This means the operation a · b · c is unambiguously defined. 3. Unique Identity: There exists a unique element e ∈ G such that a · e = e · a = a, ∀a ∈ G. 4. Unique Inverse: For every a ∈ G, there exists a unique b ∈ G such that a · b = b · a = e. b is often denoted as a⁻¹. The above defining properties, or group axioms, have a few consequences: If a ∈ G, then a · a ⋯ a (m times) also belongs to G. ax = bx =⇒ a = b xa = xb =⇒ a = b (ab)⁻¹ = b⁻¹a⁻¹. These group axioms are satisfied by a vast variety of objects: The set of integers under addition The set of non-zero rational numbers under multiplication The set of all invertible matrices under matrix multiplication The number of elements in a group is called its order. If the order of a group is finite, it is called a finite group. A group can be finite, countably infinite or uncountably infinite in its order. A group is called Abelian if the group product is also commutative – ∀a, b ∈ G, ab = ba in an Abelian group. Subgroup: A subset S of G is called a subgroup if S satisfies all the group axioms under the same group product as G. Every group is its own subgroup. E = {e} is a subgroup of every group. Therefore, E and G are called trivial subgroups. Any other subgroup is called non-trivial. E is also the only subgroup of order 1. 
Generating set: Often, every element of a group can be generated as a product of finite powers of a finite number of elements of a subset S, or their inverses. The smallest such subset S is called a generating set, and the elements of S are called generators of the group G. Cyclic group: A group that has a single generator is called a cyclic group. Theorem 1: Every finite group has a cyclic subgroup. Proof: Let a ∈ G. Since every power aⁿ belongs to G and G is finite, the powers a, a², a³, ... cannot all be distinct; if aⁱ = aʲ with i < j, then aʲ⁻ⁱ = e. Let k be the smallest positive integer with aᵏ = e. Then the set of elements {e, a, a², ..., aᵏ⁻¹} forms a cyclic subgroup. Therefore, by construction, we have proved that every finite group G has a cyclic subgroup. In fact, we have proven that every element of a finite group is the generator of a cyclic subgroup of G. The smallest k such that aᵏ = e may be different for each element a ∈ G, and is termed the order of the element a. This phrase essentially means that k is the order of the cyclic group generated by a. 3.2 Cosets, Conjugate Groups, Normal subgroups If G is a group, and H is a subgroup of G, then for every a ∈ G, the set Ha = {ha | h ∈ H} is a subset of G called the right coset of H, and aH is called the left coset of H. We use the term coset to refer to the right coset Ha in what follows. From the above definition of a coset, it follows that for all elements a ∈ H, Ha = aH = H. This is a simple consequence of H being a group. Therefore, H is a coset of itself. For any a ∉ H, Ha again forms a set of elements of G. Since e ∈ H, a ∈ Ha. Therefore, every element of G is an element of some coset of H. For any a ∉ H, the coset Ha may be different from H in general. Different elements a ≠ b, both belonging to G, may still result in the same coset of H. If two distinct elements a, b belong to the same coset of H, say Hc for c ≠ a, b, then ∃ h₁, h₂ ∈ H such that h₁c = a and h₂c = b. Equivalently, ∃ h = h₂h₁⁻¹ ∈ H such that ha = b, or, equivalently, ba⁻¹ and ab⁻¹ both belong to H. 
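The proof of Theorem 1 can be made concrete for a small group (a plain-Python sketch; Z₁₂ under addition mod 12 and the element 8 are arbitrary choices): generate the cyclic subgroup of an element by repeated application until the identity recurs, and read off its order.

```python
# Group: integers {0, ..., 11} under addition mod 12; identity is 0.
def cyclic_subgroup(a, n=12):
    """Collect the 'powers' of a (repeated addition mod n) until 0 recurs."""
    elems, x = [0], a % n
    while x != 0:
        elems.append(x)
        x = (x + a) % n
    return elems

H = cyclic_subgroup(8)            # 8 -> 4 -> 0, giving {0, 8, 4}
assert sorted(H) == [0, 4, 8]
assert len(H) == 3                # the order of the element 8 in Z_12
assert 12 % len(H) == 0           # consistent with Lagrange's theorem below
```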
Conversely, if for a ≠ b we have ab⁻¹ ∈ H, then ∃ h ∈ H such that h = ab⁻¹, i.e., hb = a and h⁻¹a = b. That is, a ∈ Hb and b ∈ Ha. But we also know that a ∈ Ha and b ∈ Hb. Therefore, both a and b belong to both Ha and Hb. So there is at least one coset of H that contains both a and b. Another important result is that the order of every coset of H is the same as that of H. For any a ∈ G, Ha = {ha | h ∈ H}. The order of Ha could be less than that of H only if for some h₁ ≠ h₂ we had h₁a = h₂a. But this is impossible, since by cancellation that would mean h₁ = h₂. Therefore, the order of Ha is the same as that of H. Two different cosets of H are disjoint. That is, if Ha ≠ Hb, then Ha ∩ Hb = ∅. Let us prove this by assuming the contrary. Assume they are not disjoint. Then there exists at least one common element of the two cosets, i.e., there exist h₁, h₂ ∈ H such that h₁a = h₂b. Equivalently, ∃ h = h₂⁻¹h₁ ∈ H such that ha = b. Now, multiplying on the left by any h′ ∈ H gives h′h a = h′b, with h″ = h′h ∈ H. That is, every element h′b ∈ Hb is also of the form h″a ∈ Ha, so Hb ⊆ Ha. We can also flip the roles of a and b and show similarly that Ha ⊆ Hb. Therefore, Ha = Hb. In other words, if there is even one common element between two cosets, the two cosets are equal. These results mean that the group G can be written as a union of disjoint cosets of H, each of which has the same order as H. This leads us to the following important result. Lagrange's Theorem: The order of every subgroup of G divides the order of G. That is, if G has order n, then every subgroup H of G has an order m such that m | n. If a group G has prime order p, then every subgroup of G has order 1 or p, i.e., it is either E = {e} or G itself. Therefore, a group of prime order has no non-trivial subgroups. 
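These coset properties can be verified directly for a small example (a plain-Python sketch; S₃ and the two-element subgroup generated by a transposition are arbitrary choices):

```python
from itertools import permutations

G = list(permutations(range(3)))        # S3: permutations as tuples p, mapping i -> p[i]

def mul(p, q):
    """Product p.q, with q acting first."""
    return tuple(p[q[i]] for i in range(3))

H = {(0, 1, 2), (1, 0, 2)}              # subgroup: identity and one transposition

cosets = {frozenset(mul(h, a) for h in H) for a in G}   # the cosets Ha, a in G
assert all(len(c) == len(H) for c in cosets)            # every coset has |H| elements
assert set().union(*cosets) == set(G)                   # the cosets cover G
assert len(cosets) * len(H) == len(G)                   # disjoint cover: |H| divides |G| (Lagrange)
```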
A group that has no non-trivial normal subgroups (defined below) is called a simple group; in particular, a group whose only subgroups are the trivial ones is simple. Conjugate Groups: If H is a subgroup of G, then for any element a ∈ G, aHa⁻¹ is also a subgroup of G, called a conjugate group of H. Different elements a ∈ G may generate different conjugate subgroups of H in general. Invariant Subgroups or Normal Subgroups: However, if ∀a ∈ G, aHa⁻¹ = H, then H is called an invariant subgroup, or a normal subgroup, of G. Clearly, the trivial subgroups E and G are always normal subgroups of G. Another way to define a normal subgroup is the following: a subgroup all of whose left and right cosets are equal is a normal subgroup. A simple group does not have any non-trivial normal subgroups. Every subgroup of an Abelian group is a normal subgroup. Factor Group: If K is a normal subgroup of G, then the set of all cosets of K forms a group under the product (Ka)(Kb) = K(ab), called the factor group (or quotient group) G/K. Conjugacy Classes of group elements: For every a ∈ G, the set of all elements given by gag⁻¹ for g ∈ G is called its conjugacy class, and the elements of a conjugacy class are said to be mutually conjugate to each other. There can be several conjugacy classes in any group G. Every element a ∈ G is in its own conjugacy class. If a is conjugate to b and b to c, then a is also conjugate to c. Two different conjugacy classes are always disjoint. Proof: This follows from the definition of a conjugacy class. If two conjugacy classes have a single common element, then they both have to be the same conjugacy class! The conjugacy classes of a group therefore divide the group into disjoint subsets. Each individual element of an Abelian group is its own conjugacy class. Therefore, there are as many conjugacy classes as the order of the group. 3.3 Homomorphism and Isomorphism A homomorphism ϕ : G → H is a mapping between two groups G and H that preserves the group products. Specifically, if a, b ∈ G, with ϕ(a), ϕ(b) ∈ H, then ϕ(ab) = ϕ(a)ϕ(b). 
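The partition of a group into conjugacy classes can be computed explicitly (a plain-Python sketch; S₃ is an arbitrary example group):

```python
from itertools import permutations

G = list(permutations(range(3)))      # S3, permutations as tuples p: i -> p[i]

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0, 0, 0]
    for i, pi in enumerate(p):
        q[pi] = i                      # inverse sends p[i] back to i
    return tuple(q)

# Conjugacy class of a: all elements g a g^-1, for g in G.
classes = {frozenset(mul(mul(g, a), inv(g)) for g in G) for a in G}
assert len(classes) == 3                          # identity, transpositions, 3-cycles
assert sum(len(c) for c in classes) == len(G)     # the classes partition G
assert sorted(len(c) for c in classes) == [1, 2, 3]
```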
Note that the definition of the group product need not be the same for the two groups G and H. The product ϕ(a)ϕ(b) on the RHS of the above definition is the group product of H, while ab on the LHS is the group product of G. If ϕ₁ is a homomorphism from G₀ into G₁ and ϕ₂ from G₁ into G₂, then ψ(a) = ϕ₂(ϕ₁(a)), for a ∈ G₀, is a homomorphism from G₀ into G₂. In general, a homomorphism need not be one-one. If the homomorphism is also one-one, then it is called an isomorphism. Two groups are said to be isomorphic if there exists an isomorphism between them. Two isomorphic groups are considered essentially the same group. Symmetric group The symmetric group Sₙ is the group of all permutations of n distinct objects. Consider the set of objects labeled {1, 2, 3, ..., n}. Any permutation of these n objects is denoted by a 2 × n matrix whose first row contains the n labels in the original order, and whose second row contains the reordering:
( 1  2  ...  n  )
( a1 a2 ...  an )
where a1, a2, ..., an is a particular rearrangement of the numbers 1, 2, 3, ..., n. This matrix is interpreted as a permutation of the objects at the respective positions: the object at position 1 is replaced with the object at position a1, the one at position 2 with the one at position a2, and so on. It is clear that, with this interpretation, rearranging the columns of a given permutation matrix does not change the permutation operation. That is:
( 1  2  ...  n  )   ( 2  1  ...  n  )   ( k  2  ...  1  ...  n  )
( a1 a2 ...  an ) ≡ ( a2 a1 ...  an ) ≡ ( ak a2 ...  a1 ...  an )
This equivalence ensures that all the permutation operations are counted precisely once by considering all matrices where the first row is the original ordering and the bottom row is permuted accordingly. If we now have two such permutations, the product is denoted by
( 1  2  ...  n  )   ( 1  2  ...  n  )
( a1 a2 ...  an ) · ( b1 b2 ...  bn )
Rule for multiplying permutations The multiplication operation is a sequence of two permutations. 
Let us adopt the convention that, in a product of permutations, the right-most permutation acts first. The product is then evaluated as follows.
1. Given the product
( 1  2  ...  n  )   ( 1  2  ...  n  )
( a1 a2 ...  an ) · ( b1 b2 ...  bn )
leave the left-most matrix as it is, and rewrite the matrix to its immediate right (using the above equivalence rule) so that its top row matches the bottom row of the left matrix:
( 1  2  ...  n  )   ( a1   a2   ...  an   )
( a1 a2 ...  an ) · ( b_a1 b_a2 ...  b_an )
2. This can now be read as a chain: position k is linked to a_k by one matrix, and a_k to b_{a_k} by the other. The object that ends up at position k is therefore the one originally at position b_{a_k}.
3. Therefore, we get the resultant operation, which is again a permutation:
( 1    2    ...  n    )
( b_a1 b_a2 ...  b_an )
Example of S₃: Take the example of the symmetric group S₃, given by the six permutations
( 1 2 3 )  ( 1 2 3 )  ( 1 2 3 )  ( 1 2 3 )  ( 1 2 3 )  ( 1 2 3 )
( 1 2 3 )  ( 3 1 2 )  ( 2 3 1 )  ( 3 2 1 )  ( 2 1 3 )  ( 1 3 2 )
Now consider three objects a, b, c. If we perform the permutation
( 1 2 3 )
( 2 3 1 )
on them, we get a b c → b c a. Now
( 1 2 3 )
( 3 2 1 )
acting on b c a gives a c b. Therefore,
( 1 2 3 )   ( 1 2 3 )
( 3 2 1 ) · ( 2 3 1 )  acting on {a, b, c} gives {a, c, b}
It can easily be seen that the set of all permutations forms a group – called the symmetric group – with n! elements. A subset of permutations forming a subgroup is what we usually refer to as a permutation group. 
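The worked S₃ example can be replayed in code (a plain-Python sketch; permutations are stored as their bottom rows with 1-based entries, a representation chosen here purely for illustration):

```python
# A permutation (1 2 ... n; a1 a2 ... an), stored as the bottom row (a1, ..., an).
# Acting on a tuple t: position k receives the object from position a_k.
def act(perm, t):
    return tuple(t[a - 1] for a in perm)

def compose(p, q):
    """Product p . q with the right-most permutation q acting first.
    The resulting bottom row is b_{a_k}, with a = p and b = q."""
    return tuple(q[a - 1] for a in p)

P = (3, 2, 1)      # (1 2 3; 3 2 1)
Q = (2, 3, 1)      # (1 2 3; 2 3 1)

assert act(Q, ('a', 'b', 'c')) == ('b', 'c', 'a')   # first step of the example
assert act(P, ('b', 'c', 'a')) == ('a', 'c', 'b')   # second step

R = compose(P, Q)                                   # the product P . Q
assert R == (1, 3, 2)                               # overall permutation (1 2 3; 1 3 2)
assert act(R, ('a', 'b', 'c')) == ('a', 'c', 'b')   # matches the two-step result
```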
It is easy to see that, since G is a group with a unique identity element and a unique inverse for each g, the set {p_g, ∀g ∈ G} forms a group by definition. It is easy to show that the mapping preserves products, since p_g p_h = p_{gh}. Moreover, the mapping is one-one by construction, since p_g = p_h if and only if g = h; therefore g → p_g is an isomorphism. Therefore, any finite group is isomorphic to a permutation group of the same order. This permutation group with n elements is a subgroup of the full symmetric group Sₙ, which has order n! ≥ n. 3.4 Representations of groups Given a finite dimensional vector space V, we can often construct a homomorphism Γ between a group G and a group of invertible matrices/operators on V: for every g, Γ(g) is an invertible matrix. If the dimensionality of V is n, this is called an n−dimensional representation of G. Since Γ is a homomorphism, we have Γ(g₁ · g₂) = Γ(g₁) · Γ(g₂). If Γ(g₁) ≠ Γ(g₂) for g₁ ≠ g₂, then the representation is one-one and is called faithful. Two representations Γ and Θ of the same dimension on a vector space V are considered equivalent if they are related by a similarity transformation. That is, ∀g ∈ G, Γ(g) = RΘ(g)R⁻¹, where R is an invertible matrix on V. The dimension of V is also called the degree of the representation. There may be non-equivalent representations of the same degree; these are called distinct. Every representation of a finite group is equivalent to a unitary representation. A unitary representation of a group is made of only unitary matrices. Reducible vs Irreducible representations If a subspace V₁ of V remains closed under the action of the group G – that is, vectors within V₁ remain within V₁ under all operations of the group G – then we can construct a sub-representation of G over the closed subspace V₁. The representation of G over V then has a sub-representation over the subspace V₁ of V. 
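Cayley's construction p_g can be sketched for a small group (plain Python; Z₄ under addition mod 4 is an arbitrary choice, with permutations encoded as tuples of indices):

```python
n = 4
elems = list(range(n))                  # Z_4 = {0, 1, 2, 3} under addition mod 4

def p(g):
    """Cayley permutation p_g: the slot holding element x points to the slot
    holding x + g (mod n), mirroring the two-row construction (g_i ; g_i g)."""
    return tuple(elems.index((x + g) % n) for x in elems)

def compose(a, b):
    """Composition of index permutations, b acting first."""
    return tuple(a[b[i]] for i in range(n))

# The map g -> p_g preserves products (p_g p_h = p_{gh}) and is one-one.
assert all(compose(p(g), p(h)) == p((g + h) % n) for g in elems for h in elems)
assert len({p(g) for g in elems}) == n
```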
Therefore the representation of G over V is called reducible. In general, by choosing an appropriate basis, a reducible representation of G can be written in block upper-triangular form
( A(g)  B(g) )
(  0    D(g) )
A reducible representation of G over a vector space V is called completely reducible if we can find a set of basis vectors such that the representation is block diagonal:
( Γ₁(g)  0      ...  0     )
( 0      Γ₂(g)  ...  0     )
( ...    ...    ...  ...   )
( 0      0      ...  Γ_k(g) )
The representation can then be written as a direct sum of the sub-representations:
Γ = Γ₁ ⊕ Γ₂ ⊕ ... ⊕ Γ_k
Each Γᵢ is a sub-representation on a subspace Vᵢ of V. All reducible representations of a finite group are completely reducible. A representation that is not reducible is called an irreducible representation. The degree of an irreducible representation divides the order of a finite group G. The number of distinct irreducible representations of a finite group is finite, and is equal to the number of conjugacy classes in the group G. If there are r conjugacy classes in G, and the corresponding r irreducible representations have degrees l₁, ..., l_r, then
Σᵢ lᵢ² = |G|
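The two counting statements can be illustrated for S₃ (a plain-Python sketch; the irrep degrees 1, 1, 2 of S₃ — trivial, sign, and standard representations — are quoted as known values, not computed here):

```python
from itertools import permutations
from math import factorial

G = list(permutations(range(3)))            # S3, |G| = 3! = 6

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0, 0, 0]
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

# Number of conjugacy classes = number of distinct irreducible representations.
classes = {frozenset(mul(mul(g, a), inv(g)) for g in G) for a in G}
r = len(classes)
assert r == 3

degrees = [1, 1, 2]                         # known irrep degrees of S3
assert len(degrees) == r                    # one irrep per conjugacy class
assert all(factorial(3) % l == 0 for l in degrees)   # each degree divides |G|
assert sum(l * l for l in degrees) == factorial(3)   # sum of squares equals |G|
```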