Group Representation Theory (2014)

Summary

These are lecture notes on group representation theory, focusing on finite groups over the complex numbers. The notes cover representations as matrices and as linear maps, constructions of representations, and equivalence of representations, with worked examples and diagonalisation methods throughout.

Full Transcript


Group Representation Theory

Ed Segal, based on notes LaTeXed by Fatema Daya and Zach Smith, 2014

This course will cover the representation theory of finite groups over $\mathbb{C}$. We assume the reader knows the basic properties of groups and vector spaces.

Contents

1 Representations
 1.1 Representations as matrices
 1.2 Representations as linear maps
 1.3 Constructing representations
 1.4 G-linear maps and subrepresentations
 1.5 Maschke's theorem
 1.6 Schur's lemma and abelian groups
 1.7 Vector spaces of linear maps
 1.8 More on decomposition into irreps
 1.9 Duals and tensor products
2 Characters
 2.1 Basic properties
 2.2 Inner products of characters
 2.3 Class functions and character tables
3 Algebras and modules
 3.1 Algebras
 3.2 Modules
 3.3 Matrix algebras
 3.4 Semi-simple algebras
 3.5 Centres of algebras
A Revision on linear maps and matrices
 A.1 Vector spaces and bases
 A.2 Linear maps and matrices
 A.3 Changing basis

1 Representations

1.1 Representations as matrices

Informally, a representation of a group is a way of writing it down as a group of matrices.

Example 1.1.1. Consider $C_4$ (a.k.a. $\mathbb{Z}/4$), the cyclic group of order 4:
$$C_4 = \{e, \mu, \mu^2, \mu^3\}$$
where $\mu^4 = e$ (we'll always denote the identity element of a group by $e$). Consider the matrices
$$I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad M = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \quad M^2 = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \quad M^3 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$
Notice that $M^4 = I$. These 4 matrices form a subgroup of $\mathrm{GL}_2(\mathbb{R})$, the group of all $2 \times 2$ invertible matrices with real coefficients under matrix multiplication.
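The notes contain no code, but computations like these are easy to check by machine. Here is a minimal Python sketch (matrices as plain nested lists; the helper `matmul` is ours, not from the notes) verifying that the powers of $M$ close up into a group of order 4.

```python
# Numerical check of Example 1.1.1: the powers of M form a group
# isomorphic to C4. Matrices are plain 2x2 nested lists.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
M = [[0, -1], [1, 0]]

M2 = matmul(M, M)
M3 = matmul(M2, M)
M4 = matmul(M3, M)

assert M2 == [[-1, 0], [0, -1]]
assert M3 == [[0, 1], [-1, 0]]
assert M4 == I                                       # M has order 4, matching mu^4 = e
assert len({str(X) for X in (I, M, M2, M3)}) == 4    # four distinct elements
```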
This subgroup is isomorphic to $C_4$; the isomorphism is $\mu \mapsto M$ (so $\mu^2 \mapsto M^2$, $\mu^3 \mapsto M^3$, $e \mapsto I$).

Example 1.1.2. Consider the group $C_2 \times C_2$ (the Klein four-group) generated by $\sigma, \tau$ such that
$$\sigma^2 = \tau^2 = e \qquad \sigma\tau = \tau\sigma$$
Here's a representation of this group:
$$\sigma \mapsto S = \begin{pmatrix} 1 & -2 \\ 0 & -1 \end{pmatrix} \qquad \tau \mapsto T = \begin{pmatrix} -1 & 2 \\ 0 & 1 \end{pmatrix}$$
To check that this is a representation, we need to check the relations:
$$S^2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = T^2 \qquad ST = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} = TS$$
So $S$ and $T$ generate a subgroup of $\mathrm{GL}_2(\mathbb{R})$ which is isomorphic to $C_2 \times C_2$.

Let's try and simplify by diagonalising $S$. The eigenvalues of $S$ are $\pm 1$, and the eigenvectors are
$$\begin{pmatrix} 1 \\ 0 \end{pmatrix} \ (\lambda_1 = 1) \qquad \begin{pmatrix} 1 \\ 1 \end{pmatrix} \ (\lambda_2 = -1)$$
So if we let
$$P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad \hat{S} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
then $P^{-1}SP = \hat{S}$.

Now let's diagonalise $T$: the eigenvalues are $\pm 1$, and the eigenvectors are
$$\begin{pmatrix} 1 \\ 0 \end{pmatrix} \ (\lambda_1 = -1) \qquad \begin{pmatrix} 1 \\ 1 \end{pmatrix} \ (\lambda_2 = 1)$$
Notice $T$ and $S$ have the same eigenvectors! Coincidence? Of course not, as we'll see later. So if
$$\hat{T} = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$$
then $P^{-1}TP = \hat{T}$.

Claim. $\hat{S}$ and $\hat{T}$ form a new representation of $C_2 \times C_2$.

Proof.
$$\hat{S}^2 = P^{-1}S^2P = P^{-1}P = I \qquad \hat{T}^2 = P^{-1}T^2P = P^{-1}P = I$$
$$\hat{S}\hat{T} = P^{-1}STP = P^{-1}TSP = \hat{T}\hat{S}$$
Hence, this forms a representation. ∎

This new representation is easier to work with because all the matrices are diagonal, but it carries the same information as the one using $S$ and $T$. We say the two representations are equivalent.

Can we diagonalise the representation from Example 1.1.1? The eigenvalues of $M$ are $\pm i$, so $M$ cannot be diagonalised over $\mathbb{R}$, but it can be diagonalised over $\mathbb{C}$. So there exists $P \in \mathrm{GL}_2(\mathbb{C})$ such that
$$P^{-1}MP = \hat{M} = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}$$
and $\mu \mapsto \hat{M}$ defines a representation of $C_4$ that is equivalent to $\mu \mapsto M$. As this example shows, it's easier to work over $\mathbb{C}$.

Definition 1.1.3 (First version). Let $G$ be a group. A representation of $G$ is a homomorphism
$$\rho : G \to \mathrm{GL}_n(\mathbb{C})$$
for some number $n$. The number $n$ is called the dimension (or the degree) of the representation.

It is also possible to work over other fields ($\mathbb{R}$, $\mathbb{Q}$, $\mathbb{F}_p$, etc.) but we'll stick to $\mathbb{C}$.
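The change-of-basis computation in Example 1.1.2 can also be checked by machine; a short sketch (with $P^{-1}$ entered by hand, as in the text) showing that conjugating by the single matrix $P$ diagonalises $S$ and $T$ simultaneously:

```python
# Conjugating S and T from Example 1.1.2 by the same P diagonalises both.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[1, -2], [0, -1]]
T = [[-1, 2], [0, 1]]
P = [[1, 1], [0, 1]]
Pinv = [[1, -1], [0, 1]]            # inverse of P, computed by hand

S_hat = matmul(Pinv, matmul(S, P))
T_hat = matmul(Pinv, matmul(T, P))

assert S_hat == [[1, 0], [0, -1]]
assert T_hat == [[-1, 0], [0, 1]]
# The relations of C2 x C2 still hold after conjugation:
assert matmul(S_hat, S_hat) == [[1, 0], [0, 1]]
assert matmul(S_hat, T_hat) == matmul(T_hat, S_hat)
```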
We'll also always assume that our groups are finite.

There's an important point to notice here: by definition, a representation $\rho$ is a homomorphism, it is not just the image of that homomorphism. In particular we don't necessarily assume that $\rho$ is an injection, i.e. the image of $\rho$ doesn't have to be isomorphic to $G$. If $\rho$ is an injection, then we say that the representation is faithful. In our previous two examples, all the representations were faithful. Here's an example of a non-faithful representation:

Example 1.1.4. Let $G = C_6 = \langle \mu \mid \mu^6 = e \rangle$. Let $n = 1$. $\mathrm{GL}_1(\mathbb{C})$ is the group of non-zero complex numbers (under multiplication). Define
$$\rho : G \to \mathrm{GL}_1(\mathbb{C}) \qquad \rho : \mu \mapsto e^{\frac{2\pi i}{3}}$$
so $\rho(\mu^k) = e^{\frac{2\pi i k}{3}}$. We check $\rho(\mu)^6 = 1$, so this is a well-defined representation of $C_6$. But $\rho(\mu^3) = 1$ also, so the kernel of $\rho$ is $\{e, \mu^3\}$. The image of $\rho$ is $\left\{1, e^{\frac{2\pi i}{3}}, e^{\frac{4\pi i}{3}}\right\}$, which is isomorphic to $C_3$.

Example 1.1.5. Let $G$ be any group, and $n$ be any number. Define $\rho : G \to \mathrm{GL}_n(\mathbb{C})$ by
$$\rho : g \mapsto I_n \quad \forall g \in G$$
This is a representation, as $\rho(g)\rho(h) = I_n I_n = I_n = \rho(gh)$. This is known as the trivial representation of $G$ (of dimension $n$). The kernel is equal to $G$, and the image is the subgroup $\{I_n\} \subset \mathrm{GL}_n$, which is isomorphic to the trivial group.

Let $P$ be any invertible matrix. The map 'conjugate by $P$'
$$c_P : \mathrm{GL}_n(\mathbb{C}) \to \mathrm{GL}_n(\mathbb{C})$$
given by $c_P : M \mapsto P^{-1}MP$ is a homomorphism. So if $\rho : G \to \mathrm{GL}_n(\mathbb{C})$ is a homomorphism, then so is $c_P \circ \rho$ (because a composition of homomorphisms is a homomorphism).

Definition 1.1.6. Two representations of $G$
$$\rho_1 : G \to \mathrm{GL}_n(\mathbb{C}) \qquad \rho_2 : G \to \mathrm{GL}_n(\mathbb{C})$$
are equivalent if there exists $P \in \mathrm{GL}_n(\mathbb{C})$ such that $\rho_2 = c_P \circ \rho_1$.

Equivalent representations really are 'the same' in some sense. To understand this, we have to stop thinking about matrices, and start thinking about linear maps.

1.2 Representations as linear maps

Let $V$ be an $n$-dimensional vector space. The set of all invertible linear maps from $V$ to $V$ forms a group which we call $\mathrm{GL}(V)$.
If we pick a basis of $V$ then every linear map corresponds to a matrix (see Corollary A.3.2), so we get an isomorphism $\mathrm{GL}(V) \cong \mathrm{GL}_n(\mathbb{C})$. However, this isomorphism depends on which basis we chose, and often we don't want to choose a basis at all.

Definition 1.2.1 (Second draft of Definition 1.1.3). A representation of a group $G$ is a choice of a vector space $V$ and a homomorphism
$$\rho : G \to \mathrm{GL}(V)$$

If we pick a basis of $V$, we get a representation in the previous sense. If we need to distinguish between these two definitions, we'll call a representation in the sense of Definition 1.1.3 a matrix representation.

Notice that if we set the vector space $V$ to be $\mathbb{C}^n$ then $\mathrm{GL}(V)$ is exactly the same thing as $\mathrm{GL}_n(\mathbb{C})$. So if we have a matrix representation, then we can think of it as a representation (in our new sense) acting on the vector space $\mathbb{C}^n$.

Lemma 1.2.2. Let $\rho : G \to \mathrm{GL}(V)$ be a representation of a group $G$. Let $A = \{a_1, \ldots, a_n\}$ and $B = \{b_1, \ldots, b_n\}$ be two bases for $V$. Then the two associated matrix representations
$$\rho_A : G \to \mathrm{GL}_n(\mathbb{C}) \qquad \rho_B : G \to \mathrm{GL}_n(\mathbb{C})$$
are equivalent.

Proof. For each $g \in G$ we have a linear map $\rho(g) \in \mathrm{GL}(V)$. Writing this linear map with respect to the basis $A$ gives us the matrix $\rho_A(g)$, and writing it with respect to the basis $B$ gives us the matrix $\rho_B(g)$. Then by Corollary A.3.2 we have
$$\rho_B(g) = P^{-1}\rho_A(g)P$$
where $P$ is the change-of-basis matrix between $A$ and $B$. This is true for all $g \in G$, so $\rho_B = c_P \circ \rho_A$. ∎

Conversely, suppose $\rho_1$ and $\rho_2$ are equivalent matrix representations of $G$, and let $P$ be the matrix such that $\rho_2 = c_P \circ \rho_1$. If we set $V$ to be the vector space $\mathbb{C}^n$ then we can think of $\rho_1$ as a representation
$$\rho_1 : G \to \mathrm{GL}(V)$$
Now let $C \subset \mathbb{C}^n$ be the basis consisting of the columns of the matrix $P$, so $P$ is the change-of-basis matrix between the standard basis and $C$ (see Section A.3).
For each group element $g$, if we write down the linear map $\rho_1(g)$ using the basis $C$ then we get the matrix
$$P^{-1}\rho_1(g)P = \rho_2(g)$$
So we can view $\rho_2$ as the matrix representation that we get when we take $\rho_1$ and write it down using the basis $C$.

MORAL: Two matrix representations are equivalent if and only if they describe the same representation in different bases.

1.3 Constructing representations

Recall that the symmetric group $S_n$ is defined to be the set of all permutations of a set of $n$ symbols. Suppose we have a subgroup $G \subset S_n$. Then we can write down an $n$-dimensional representation of $G$, called the permutation representation. Here's how:

Let $V$ be an $n$-dimensional vector space with a basis $\{b_1, \ldots, b_n\}$. Every element $g \in G$ is a permutation of the set $\{1, \ldots, n\}$ (or, if you prefer, it's a permutation of the set $\{b_1, \ldots, b_n\}$). Define a linear map
$$\rho(g) : V \to V$$
by defining
$$\rho(g) : b_k \mapsto b_{g(k)}$$
and extending this to a linear map. Now $\rho(g) \circ \rho(h) : b_k \mapsto b_{gh(k)}$, so $\rho(g) \circ \rho(h) = \rho(gh)$ (since they agree on a basis). Therefore $\rho : G \to \mathrm{GL}(V)$ is a homomorphism.

Example 1.3.1. Let $G = \{(1), (123), (132)\} \subset S_3$. $G$ is a subgroup, and it's isomorphic to $C_3$. Let $V = \mathbb{C}^3$ with the standard basis. The permutation representation of $G$ (written in the standard basis) is
$$\rho((1)) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad \rho((123)) = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \quad \rho((132)) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$$

[Aside: the definition of a permutation representation works over any field.]

But remember Cayley's Theorem! Every group of size $n$ is a subgroup of the symmetric group $S_n$.

Proof. Think of the set of elements of $G$ as an abstract set of $n$ symbols. Left multiplication by $g \in G$ defines a bijection
$$L_g : G \to G \qquad L_g : h \mapsto gh$$
and a bijection from a set of size $n$ to itself is exactly a permutation. So we have a map $G \to S_n$ defined by $g \mapsto L_g$. This is in fact an injective homomorphism, so its image is a subgroup of $S_n$ which is isomorphic to $G$. ∎
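The permutation matrices of Example 1.3.1 can be generated mechanically from the rule $\rho(g) : b_k \mapsto b_{g(k)}$. A sketch (permutations stored as dicts, a representation choice of ours, not of the notes):

```python
# The permutation representation of Example 1.3.1, built directly from
# rho(g): b_k -> b_{g(k)}. Permutations are dicts on {1, 2, 3}.

def perm_matrix(g, n=3):
    """Matrix of rho(g) in the standard basis: column k is b_{g(k)}."""
    A = [[0] * n for _ in range(n)]
    for k in range(1, n + 1):
        A[g[k] - 1][k - 1] = 1
    return A

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

e    = {1: 1, 2: 2, 3: 3}
c123 = {1: 2, 2: 3, 3: 1}           # the 3-cycle (123)
c132 = {1: 3, 2: 1, 3: 2}           # the 3-cycle (132)

assert perm_matrix(c123) == [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
assert perm_matrix(c132) == [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
# rho is a homomorphism: (123)(132) = e and (123)(123) = (132)
assert matmul(perm_matrix(c123), perm_matrix(c132)) == perm_matrix(e)
assert matmul(perm_matrix(c123), perm_matrix(c123)) == perm_matrix(c132)
```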
So for any group of size $n$ we automatically get an $n$-dimensional representation of $G$. This is called the regular representation, and it's very important.

Example 1.3.2. Let $G = C_2 \times C_2 = \{e, \sigma, \tau, \sigma\tau\}$ where $\sigma^2 = \tau^2 = e$ and $\tau\sigma = \sigma\tau$. Left multiplication by $\sigma$ gives a permutation
$$L_\sigma : G \to G \qquad e \mapsto \sigma, \quad \sigma \mapsto e, \quad \tau \mapsto \sigma\tau, \quad \sigma\tau \mapsto \tau$$
Let $V$ be the vector space with basis $\{b_e, b_\sigma, b_\tau, b_{\sigma\tau}\}$. The regular representation of $G$ is a homomorphism
$$\rho_{reg} : G \to \mathrm{GL}(V)$$
With respect to the given basis of $V$, $\rho_{reg}(\sigma)$ is the matrix
$$\rho_{reg}(\sigma) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$
The other two non-identity elements go to
$$\rho_{reg}(\tau) = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} \qquad \rho_{reg}(\sigma\tau) = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}$$

The next lemma is completely trivial to prove, but worth writing down:

Lemma 1.3.3. Let $G$ be a group and let $H \subset G$ be a subgroup. Let $\rho : G \to \mathrm{GL}(V)$ be a representation of $G$. Then the restriction of $\rho$ to $H$
$$\rho|_H : H \to \mathrm{GL}(V)$$
is a representation of $H$.

Proof. Immediate. ∎

We saw an example of this earlier: for the group $S_n$ we constructed an $n$-dimensional permutation representation, then for any subgroup $H \subset S_n$ we considered the restriction of this permutation representation to $H$. Slightly more generally:

Lemma 1.3.4. Let $G$ and $H$ be two groups, let $f : H \to G$ be a homomorphism, and let $\rho : G \to \mathrm{GL}(V)$ be a representation of $G$. Then $\rho \circ f : H \to \mathrm{GL}(V)$ is a representation of $H$.

Proof. A composition of homomorphisms is a homomorphism. ∎

Lemma 1.3.3 is a special case of this, where we let $f : H \hookrightarrow G$ be the inclusion of a subgroup.

Example 1.3.5. Let $H = C_6 = \langle \mu \mid \mu^6 = e \rangle$ and let $G = C_3 = \langle \nu \mid \nu^3 = e \rangle$. Let $f : H \to G$ be the homomorphism sending $\mu$ to $\nu$. There's a faithful 1-dimensional representation of $C_3$ defined by
$$\rho : G \to \mathrm{GL}_1(\mathbb{C}) \qquad \rho : \nu \mapsto e^{\frac{2\pi i}{3}}$$
Then $\rho \circ f$ is the non-faithful representation of $C_6$ that we looked at in Example 1.1.4.

Example 1.3.6. Let $G = C_2 = \langle \mu \mid \mu^2 = e \rangle$, and let $H = S_n$ for some $n$.
Recall that there is a homomorphism
$$\mathrm{sgn} : S_n \to C_2 \qquad \mathrm{sgn}(\sigma) = \begin{cases} e & \text{if } \sigma \text{ is an even permutation} \\ \mu & \text{if } \sigma \text{ is an odd permutation} \end{cases}$$
There's a 1-dimensional representation of $C_2$ given by
$$\rho : C_2 \to \mathrm{GL}_1(\mathbb{C}) \qquad \rho : \mu \mapsto -1$$
(this is a representation, because $(-1)^2 = 1$). Composing this with $\mathrm{sgn}$, we get a 1-dimensional representation of $S_n$, which sends each even permutation to 1, and each odd permutation to $-1$. This is called the sign representation of $S_n$.

Finally, for some groups we can construct representations using geometry.

Example 1.3.7. $D_4$ is the symmetry group of a square. It has size 8, and consists of 4 reflections and 4 rotations (counting the identity as a rotation). Draw a square in the plane with vertices at $(1,1)$, $(1,-1)$, $(-1,-1)$ and $(-1,1)$. Then the elements of $D_4$ naturally become linear maps acting on a 2-dimensional vector space. Using the standard basis, we get the matrices:
$$\text{rotate by } \tfrac{\pi}{2}: \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \qquad \text{rotate by } \pi: \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \qquad \text{rotate by } \tfrac{3\pi}{2}: \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$
$$\text{reflect in the } x\text{-axis}: \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad \text{reflect in the } y\text{-axis}: \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$$
$$\text{reflect in } y = x: \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad \text{reflect in } y = -x: \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix}$$
Together with $I_2$, these matrices give a 2-dimensional representation of $D_4$.

1.4 G-linear maps and subrepresentations

You should have noticed that whenever you meet a new kind of mathematical object, soon afterwards you meet the 'important' functions between the objects. For example:

  Objects              Functions
  Groups               Homomorphisms
  Vector spaces        Linear maps
  Topological spaces   Continuous maps
  Rings                Ring homomorphisms
  ...                  ...

So we need to define the important functions between representations.

Definition 1.4.1. Let
$$\rho_1 : G \to \mathrm{GL}(V) \qquad \rho_2 : G \to \mathrm{GL}(W)$$
be two representations of $G$ on vector spaces $V$ and $W$. A G-linear map between $\rho_1$ and $\rho_2$ is a linear map $f : V \to W$ such that
$$f \circ \rho_1(g) = \rho_2(g) \circ f \quad \forall g \in G$$
i.e. both ways round the square
$$\begin{array}{ccc} V & \xrightarrow{\rho_1(g)} & V \\ {\scriptstyle f}\downarrow & & \downarrow{\scriptstyle f} \\ W & \xrightarrow{\rho_2(g)} & W \end{array}$$
give the same answer ('the square commutes'). So a G-linear map is a special kind of linear map that respects the group actions.
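Definition 1.4.1 is easy to test mechanically when $\rho_1 = \rho_2$: it suffices to check the commuting square on a generating set, since the $\rho_i$ are homomorphisms. A sketch using the $D_4$ matrices of Example 1.3.7 (the checker `is_G_linear` is our own illustration, not from the notes):

```python
# A checker for Definition 1.4.1 with rho1 = rho2 = the D4 representation
# of Example 1.3.7. The scalar map f = 2*id commutes with every group
# element, while rotation by pi/2 is linear but NOT G-linear, since it
# fails to commute with a reflection.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# generators of the D4 representation: rotation by pi/2 and a reflection
rot  = [[0, -1], [1, 0]]
refl = [[1, 0], [0, -1]]            # reflect in the x-axis

def is_G_linear(f, generators):
    """Check f . rho(g) == rho(g) . f on a generating set of G."""
    return all(matmul(f, g) == matmul(g, f) for g in generators)

double = [[2, 0], [0, 2]]
assert is_G_linear(double, [rot, refl])     # scalar maps are always G-linear
assert not is_G_linear(rot, [rot, refl])    # rot commutes with rot, but not
                                            # with the reflection
```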
For any linear map, we have $f(\lambda x) = \lambda f(x)$ for all $\lambda \in \mathbb{C}$ and $x \in V$, i.e. we can pull scalars through $f$. For G-linear maps, we also have $f(\rho_1(g)(x)) = \rho_2(g)(f(x))$ for all $g \in G$, i.e. we can also pull group elements through $f$.

Suppose $f$ is a G-linear map, and suppose as well that $f$ is an isomorphism between the vector spaces $V$ and $W$, i.e. there is an inverse linear map $f^{-1} : W \to V$ such that $f \circ f^{-1} = 1_W$ and $f^{-1} \circ f = 1_V$ (recall that $f$ has an inverse iff $f$ is a bijection).

Claim 1.4.2. $f^{-1}$ is also a G-linear map.

In this case, we say $f$ is a (G-linear) isomorphism and that the two representations $\rho_1$ and $\rho_2$ are isomorphic. Isomorphism is really the same thing as equivalence:

Proposition 1.4.3. Let $V$ and $W$ be two vector spaces, both of dimension $n$. Let
$$\rho_1 : G \to \mathrm{GL}(V) \qquad \rho_2 : G \to \mathrm{GL}(W)$$
be two representations of $G$. Let $A = \{a_1, \ldots, a_n\}$ be a basis for $V$, let $B = \{b_1, \ldots, b_n\}$ be a basis for $W$, and let
$$\rho_1^A : G \to \mathrm{GL}_n(\mathbb{C}) \qquad \rho_2^B : G \to \mathrm{GL}_n(\mathbb{C})$$
be the matrix representations obtained by writing $\rho_1$ and $\rho_2$ in these bases. Then $\rho_1$ and $\rho_2$ are isomorphic if and only if $\rho_1^A$ and $\rho_2^B$ are equivalent.

Proof. ($\Rightarrow$) Let $f : V \to W$ be a G-linear isomorphism. Then $fA = \{f(a_1), \ldots, f(a_n)\} \subset W$ is a second basis for $W$. Let $\rho_2^{fA}$ be the matrix representation obtained by writing down $\rho_2$ in this new basis. Pick $g \in G$ and let $\rho_1^A(g) = M$, i.e.
$$\rho_1(g)(a_k) = \sum_{i=1}^n M_{ik} a_i$$
(see Section A.2). Then by the G-linearity of $f$,
$$\rho_2(g)(f(a_k)) = f(\rho_1(g)(a_k)) = \sum_{i=1}^n M_{ik} f(a_i)$$
So the matrix describing $\rho_2(g)$ in the basis $fA$ is the matrix $M$, i.e. it is the same as the matrix describing $\rho_1(g)$ in the basis $A$. This is true for all $g \in G$, so the two matrix representations $\rho_1^A$ and $\rho_2^{fA}$ are identical. But by Lemma 1.2.2, $\rho_2^{fA}$ is equivalent to $\rho_2^B$.

($\Leftarrow$) Let $P$ be the matrix such that
$$\rho_2^B = c_P \circ \rho_1^A$$
Let $f : V \to W$ be the linear map represented by the matrix $P^{-1}$ with respect to the bases $A$ and $B$.
Then $f$ is an isomorphism of vector spaces, because $P^{-1}$ is an invertible matrix. We need to show that $f$ is also G-linear, i.e. that
$$f \circ \rho_1(g) = \rho_2(g) \circ f \quad \forall g \in G$$
Using our given bases we can write each of these linear maps as matrices, and this equation becomes
$$P^{-1}\rho_1^A(g) = \rho_2^B(g)P^{-1} \quad \forall g \in G$$
or equivalently
$$\rho_2^B(g) = P^{-1}\rho_1^A(g)P \quad \forall g \in G$$
and this is true by the definition of $P$. ∎

Of course, not every G-linear map is an isomorphism.

Example 1.4.4. Let $G = C_2 = \langle \tau \mid \tau^2 = e \rangle$. The regular representation of $G$, written in the natural basis, is $\rho_{reg}(e) = I_2$ and
$$\rho_{reg}(\tau) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
(since multiplication by $\tau$ transposes the two group elements). Let $\rho_1$ be the 1-dimensional representation
$$\rho_1 : C_2 \to \mathrm{GL}_1(\mathbb{C}) \qquad \tau \mapsto -1$$
from Example 1.3.6. Now let $f : \mathbb{C}^2 \to \mathbb{C}$ be the linear map represented by the matrix $(1, -1)$ with respect to the standard bases. Then for any vector $x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \in \mathbb{C}^2$, we have
$$f \circ \rho_{reg}(\tau)(x) = (1, -1)\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = -(1, -1)\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \rho_1(\tau) \circ f(x)$$
So $f$ is a G-linear map from $\rho_{reg}$ to $\rho_1$.

Example 1.4.5. Let $G$ be a subgroup of $S_n$. Let $(V, \rho_1)$ be the permutation representation, i.e. $V$ is an $n$-dimensional vector space with a basis $\{b_1, \ldots, b_n\}$, and
$$\rho_1 : G \to \mathrm{GL}(V) \qquad \rho_1(g) : b_k \mapsto b_{g(k)}$$
Let $W = \mathbb{C}$, and let $\rho_2 : G \to \mathrm{GL}(W) = \mathrm{GL}(\mathbb{C})$ be the (1-dimensional) trivial representation, i.e. $\rho_2(g) = 1$ for all $g \in G$. Let $f : V \to W$ be the linear map defined by
$$f(b_k) = 1 \quad \forall k$$
We claim that this is G-linear. We need to check that $f \circ \rho_1(g) = \rho_2(g) \circ f$ for all $g \in G$. It suffices to check this on the basis of $V$. We have:
$$f(\rho_1(g)(b_k)) = f(b_{g(k)}) = 1 \qquad \text{and} \qquad \rho_2(g)(f(b_k)) = \rho_2(g)(1) = 1$$
for all $g$ and $k$, so $f$ is indeed G-linear.

Definition 1.4.6. A subrepresentation of a representation $\rho : G \to \mathrm{GL}(V)$ is a vector subspace $W \subset V$ such that
$$\rho(g)(x) \in W \quad \forall g \in G \text{ and } x \in W$$
This means that every $\rho(g)$ defines a linear map from $W$ to $W$, i.e. we have a representation of $G$ on the subspace $W$.

Example 1.4.7.
Let $G = C_2$ and $V = \mathbb{C}^2$ with the regular representation as in Example 1.4.4. Let $W$ be the 1-dimensional subspace spanned by the vector $\begin{pmatrix} 1 \\ 1 \end{pmatrix} \in V$. Then
$$\rho_{reg}(\tau)\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
So $\rho_{reg}(\tau)\begin{pmatrix} \lambda \\ \lambda \end{pmatrix} = \begin{pmatrix} \lambda \\ \lambda \end{pmatrix}$, i.e. $\rho_{reg}(\tau)$ preserves $W$, so $W$ is a subrepresentation. It's isomorphic to the trivial (1-dimensional) representation.

Example 1.4.8. We can generalise the previous example. Suppose we have a matrix representation $\rho : G \to \mathrm{GL}_n(\mathbb{C})$. Now suppose we can find a vector $x \in \mathbb{C}^n$ which is an eigenvector for every matrix $\rho(g)$, $g \in G$, i.e.
$$\rho(g)(x) = \lambda_g x \quad \text{for some eigenvalues } \lambda_g \in \mathbb{C}^*$$
Then the span of $x$ is a 1-dimensional subspace $\langle x \rangle \subset \mathbb{C}^n$, and it's a subrepresentation. It's isomorphic to the 1-dimensional matrix representation
$$\rho : G \to \mathrm{GL}_1(\mathbb{C}) \qquad \rho : g \mapsto \lambda_g$$

Any linear map $f : V \to W$ has a kernel $\mathrm{Ker}(f) \subseteq V$ and an image $\mathrm{Im}(f) \subseteq W$, which are both vector subspaces.

Claim 1.4.9. If $f$ is a G-linear map between the two representations
$$\rho_1 : G \to \mathrm{GL}(V) \qquad \rho_2 : G \to \mathrm{GL}(W)$$
then $\mathrm{Ker}(f)$ is a subrepresentation of $V$ and $\mathrm{Im}(f)$ is a subrepresentation of $W$.

Look back at Examples 1.4.4 and 1.4.7. The kernel of the map $f$ is the subrepresentation $W$.

1.5 Maschke's theorem

Let $V$ and $W$ be two vector spaces. Recall the definition of the direct sum $V \oplus W$: it's the vector space of all pairs $(x, y)$ such that $x \in V$ and $y \in W$. Its dimension is $\dim V + \dim W$.

Suppose $G$ is a group, and we have representations
$$\rho_V : G \to \mathrm{GL}(V) \qquad \rho_W : G \to \mathrm{GL}(W)$$
Then there is a natural representation of $G$ on $V \oplus W$ given by 'direct-summing' $\rho_V$ and $\rho_W$. The definition is
$$\rho_{V \oplus W} : G \to \mathrm{GL}(V \oplus W) \qquad \rho_{V \oplus W}(g) : (x, y) \mapsto (\rho_V(g)(x), \rho_W(g)(y))$$

Claim 1.5.1. For each $g$, $\rho_{V \oplus W}(g)$ is a linear map, and $\rho_{V \oplus W}$ is indeed a homomorphism from $G$ to $\mathrm{GL}(V \oplus W)$.

Pick a basis $\{a_1, \ldots, a_n\}$ for $V$, and $\{b_1, \ldots, b_m\}$ for $W$. Suppose that in these bases, $\rho_V(g)$ is the matrix $M$ and $\rho_W(g)$ is the matrix $N$. The set
$$\{(a_1, 0), \ldots, (a_n, 0), (0, b_1), \ldots, (0, b_m)\}$$
is a basis for $V \oplus W$, and in this basis the linear map $\rho_{V \oplus W}(g)$ is given by the $(n+m) \times (n+m)$ matrix
$$\begin{pmatrix} M & 0 \\ 0 & N \end{pmatrix}$$
A matrix like this is called block-diagonal.

Consider the linear map
$$\iota_V : V \to V \oplus W \qquad \iota_V : x \mapsto (x, 0)$$
It's an injection, so it's an isomorphism between $V$ and $\mathrm{Im}(\iota_V)$. So we can think of $V$ as a subspace of $V \oplus W$. Also
$$\iota_V(\rho_V(g)(x)) = (\rho_V(g)(x), 0) = \rho_{V \oplus W}(g)(x, 0)$$
So $\iota_V$ is G-linear, and $\mathrm{Im}(\iota_V)$ is a subrepresentation which we can identify with $V$. Similarly, the subspace $\{(0, y) : y \in W\} \subset V \oplus W$ is a subrepresentation, and it's isomorphic to $W$. The intersection of these two subrepresentations is obviously $\{0\}$. Conversely:

Proposition 1.5.2. Let $\rho : G \to \mathrm{GL}(V)$ be a representation, and let $W \subset V$ and $U \subset V$ be subrepresentations such that
(i) $U \cap W = \{0\}$
(ii) $\dim U + \dim W = \dim V$
Then $V$ is isomorphic to $W \oplus U$.

Proof. You should recall that we can identify $V$ with $W \oplus U$ as vector spaces, because every vector in $V$ can be written uniquely as a sum $x + y$ with $x \in W$ and $y \in U$. In other words, the map
$$f : W \oplus U \to V \qquad f : (x, y) \mapsto x + y$$
is an isomorphism of vector spaces. We claim that $f$ is also G-linear. Let's write $\rho_W : G \to \mathrm{GL}(W)$ and $\rho_U : G \to \mathrm{GL}(U)$ for the representations of $G$ on $W$ and $U$; note that by definition we have $\rho_W(g)(x) = \rho_V(g)(x)$ for all $x \in W$, and $\rho_U(g)(y) = \rho_V(g)(y)$ for all $y \in U$. Then the following square commutes:
$$\begin{array}{ccc} (x, y) & \xmapsto{\ \rho_{W \oplus U}(g)\ } & (\rho_W(g)(x),\ \rho_U(g)(y)) \\ {\scriptstyle f}\downarrow & & \downarrow{\scriptstyle f} \\ x + y & \xmapsto{\ \rho_V(g)\ } & \rho_V(g)(x + y) = \rho_V(g)(x) + \rho_V(g)(y) \end{array}$$
So $f$ is indeed G-linear, and hence it's an isomorphism of representations. ∎

Now suppose $\rho : G \to \mathrm{GL}(V)$ is a representation, and $W \subset V$ is a subrepresentation. Given the previous proposition, it is natural to ask the following:

Question 1.5.3. Can we find another subrepresentation $U \subset V$ such that $U \cap W = \{0\}$ and $\dim V = \dim W + \dim U$?

If we can, then we can split $V$ up as a direct sum $V = W \oplus U$. Such a $U$ is called a complementary subrepresentation to $W$. It turns out the answer to this question is always yes!
This is called Maschke's Theorem. It's the most important theorem in the course, but fortunately the proof isn't too hard.

Example 1.5.4. Recall Examples 1.4.4 and 1.4.7. We set $G = C_2$, and $V$ was the regular representation. We found a (1-dimensional) subrepresentation
$$W = \left\langle \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\rangle \subset \mathbb{C}^2 = V$$
Can we find a complementary subrepresentation? Let
$$U = \left\langle \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right\rangle \subset \mathbb{C}^2 = V$$
Then
$$\rho_{reg}(\tau)\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = -\begin{pmatrix} 1 \\ -1 \end{pmatrix}$$
So $U$ is a subrepresentation, and it's isomorphic to $\rho_1$. Furthermore, $V = W \oplus U$ because $W \cap U = 0$ and $\dim U + \dim W = 2 = \dim V$.

To prove Maschke's Theorem, we need the following:

Lemma 1.5.5. Let $V$ be a vector space, and let $W \subset V$ be a subspace. Suppose we have a linear map $f : V \to W$ such that $f(x) = x$ for all $x \in W$. Then $\mathrm{Ker}(f) \subset V$ is a complementary subspace to $W$, i.e.
$$V = W \oplus \mathrm{Ker}(f)$$

Proof. If $x \in \mathrm{Ker}(f) \cap W$ then $f(x) = x = 0$, so $\mathrm{Ker}(f) \cap W = 0$. Also, $f$ is a surjection, so by the Rank-Nullity Theorem, $\dim \mathrm{Ker}(f) + \dim W = \dim V$. ∎

A linear map like this is called a projection. For example, suppose that $V = W \oplus U$, and let $\pi_W$ be the linear map
$$\pi_W : V \to W \qquad (x, y) \mapsto x$$
Then $\pi_W$ is a projection, and $\mathrm{Ker}(\pi_W) = U$. The above lemma says that every projection looks like this.

Corollary 1.5.6. Let $\rho : G \to \mathrm{GL}(V)$ be a representation, and $W \subset V$ a subrepresentation. Suppose we have a G-linear projection $f : V \to W$. Then $\mathrm{Ker}(f)$ is a complementary subrepresentation to $W$.

Proof. This is immediate from the previous lemma. ∎

Theorem 1.5.7 (Maschke's Theorem). Let $\rho : G \to \mathrm{GL}(V)$ be a representation, and let $W \subset V$ be a subrepresentation. Then there exists a complementary subrepresentation $U \subset V$ to $W$.

Proof. By Corollary 1.5.6, it's enough to find a G-linear projection from $V$ to $W$. Recall that we can always find a complementary subspace (not subrepresentation!) $\tilde{U} \subset V$ to $W$. For example, we can pick a basis $\{b_1, \ldots, b_m\}$ for $W$, then extend it to a basis $\{b_1, \ldots, b_m, b_{m+1}, \ldots, b_n\}$ for $V$ and let $\tilde{U} = \langle b_{m+1}, \ldots, b_n \rangle$.
Let $\tilde{f} : V = W \oplus \tilde{U} \to W$ be the projection with kernel $\tilde{U}$. There is no reason why $\tilde{f}$ should be G-linear. However, we can do a clever modification. Let's define $f : V \to V$ by
$$f(x) = \frac{1}{|G|} \sum_{g \in G} (\rho(g) \circ \tilde{f} \circ \rho(g^{-1}))(x)$$
Then we claim that $f$ is a G-linear projection from $V$ to $W$.

First let's check that $\mathrm{Im}(f) \subset W$. For any $x \in V$ and $g \in G$ we have $\tilde{f}(\rho(g^{-1})(x)) \in W$, and so $\rho(g)(\tilde{f}(\rho(g^{-1})(x))) \in W$ since $W$ is a subrepresentation. Therefore $f(x) \in W$ as well.

Next we check that $f$ is a projection. Let $y \in W$. Then for any $g \in G$, we know that $\rho(g^{-1})(y)$ is also in $W$, so
$$\tilde{f}(\rho(g^{-1})(y)) = \rho(g^{-1})(y)$$
Therefore
$$f(y) = \frac{1}{|G|} \sum_{g \in G} \rho(g)(\tilde{f}(\rho(g^{-1})(y))) = \frac{1}{|G|} \sum_{g \in G} \rho(g)(\rho(g^{-1})(y)) = \frac{1}{|G|} \sum_{g \in G} \rho(gg^{-1})(y) = \frac{1}{|G|} \sum_{g \in G} \rho(e)(y) = \frac{|G|\,y}{|G|} = y$$
So $f$ is indeed a projection.

Finally, we check that $f$ is G-linear. For any $x \in V$ and any $h \in G$, we have
$$f(\rho(h)(x)) = \frac{1}{|G|} \sum_{g \in G} (\rho(g) \circ \tilde{f} \circ \rho(g^{-1}) \circ \rho(h))(x) = \frac{1}{|G|} \sum_{g \in G} (\rho(g) \circ \tilde{f} \circ \rho(g^{-1}h))(x) = \frac{1}{|G|} \sum_{g \in G} (\rho(hg) \circ \tilde{f} \circ \rho(g^{-1}))(x) = (\rho(h) \circ f)(x)$$
(the second and third sums are the same, we've just relabelled/permuted the group elements appearing in the sum, sending $g \mapsto hg$). So $f$ is indeed G-linear. ∎

So if $V$ contains a subrepresentation $W$, then we can split $V$ up as a direct sum.

Definition 1.5.8. If $\rho : G \to \mathrm{GL}(V)$ is a representation with no subrepresentations (apart from the trivial subrepresentations $0 \subset V$ and $V \subseteq V$) then we call it an irreducible representation.

The real power of Maschke's Theorem is the following Corollary:

Corollary 1.5.9. Every representation can be written as a direct sum
$$U_1 \oplus U_2 \oplus \ldots \oplus U_r$$
of subrepresentations, where each $U_i$ is irreducible.

Proof. Let $V$ be a representation of $G$, of dimension $n$. If $V$ is irreducible, we're done. If not, $V$ contains a non-trivial subrepresentation $W \subset V$, and by Maschke's Theorem,
$$V = W \oplus U$$
for some other subrepresentation $U$. Both $W$ and $U$ have dimension less than $n$. If they're both irreducible, we're done.
If not, one of them contains a subrepresentation, so it splits as a direct sum of smaller subrepresentations. Since $n$ is finite, this process will terminate in a finite number of steps. ∎

So every representation is built up from irreducible representations in a straightforward way. This makes irreducible representations very important, so we abbreviate the name and call them irreps. They're like the 'prime numbers' of representation theory.

Obviously, any 1-dimensional representation is irreducible. Here is a 2-dimensional irrep:

Example 1.5.10. Let $G = S_3$; it's generated by $\sigma = (123)$ and $\tau = (12)$ with relations
$$\sigma^3 = \tau^2 = e, \qquad \tau\sigma\tau = \sigma^{-1}$$
Let
$$\rho(\sigma) = \begin{pmatrix} \omega & 0 \\ 0 & \omega^{-1} \end{pmatrix} \qquad \rho(\tau) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
where $\omega = e^{\frac{2\pi i}{3}}$. This defines a representation of $G$ (either check the relations, or do the Problem Sheets). Let's show that $\rho$ is irreducible. Suppose (for a contradiction) that $W$ is a non-trivial subrepresentation. Then $\dim W = 1$. Also, $W$ is preserved by the action of $\rho(\sigma)$ and $\rho(\tau)$, i.e. $W$ is spanned by a common eigenvector of both matrices. The eigenvectors of $\rho(\tau)$ are
$$\begin{pmatrix} 1 \\ 1 \end{pmatrix} \ (\lambda_1 = 1) \qquad \begin{pmatrix} 1 \\ -1 \end{pmatrix} \ (\lambda_2 = -1)$$
But the eigenvectors of $\rho(\sigma)$ are
$$\begin{pmatrix} 1 \\ 0 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
So there is no such $W$.

Now let's see some examples of Maschke's Theorem in action:

Example 1.5.11. The regular representation of $C_3 = \langle \mu \mid \mu^3 = e \rangle$ is
$$\rho_{reg}(\mu) = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$
(c.f. Example 1.3.1). Suppose $x \in \mathbb{C}^3$ is an eigenvector of $\rho_{reg}(\mu)$. Then it's also an eigenvector of $\rho_{reg}(\mu^2)$, so $\langle x \rangle \subset \mathbb{C}^3$ is a 1-dimensional subrepresentation. The eigenvectors of $\rho_{reg}(\mu)$ are
$$\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} (\lambda_1 = 1) \qquad \begin{pmatrix} 1 \\ \omega^{-1} \\ \omega \end{pmatrix} (\lambda_2 = \omega) \qquad \begin{pmatrix} 1 \\ \omega \\ \omega^{-1} \end{pmatrix} (\lambda_3 = \omega^{-1})$$
So $\rho_{reg}$ is the direct sum of three 1-dimensional irreps:
$$U_1 = \left\langle \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \right\rangle \qquad U_2 = \left\langle \begin{pmatrix} 1 \\ \omega^{-1} \\ \omega \end{pmatrix} \right\rangle \qquad U_3 = \left\langle \begin{pmatrix} 1 \\ \omega \\ \omega^{-1} \end{pmatrix} \right\rangle$$
In the eigenvector basis,
$$\rho_{reg}(\mu) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \omega & 0 \\ 0 & 0 & \omega^{-1} \end{pmatrix}$$

Look back at Examples 1.1.1, 1.1.2 and 1.5.4. In each one we took a matrix representation and found a basis in which every matrix became diagonal, i.e. we split each representation as a direct sum of 1-dimensional irreps.
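The eigenvector computation in Example 1.5.11 can be verified directly, using `cmath` for $\omega = e^{2\pi i/3}$ (the helpers here are ours):

```python
# Example 1.5.11 in code: the eigenvectors (1,1,1), (1, w^-1, w) and
# (1, w, w^-1) of rho_reg(mu) for C3, with w = e^{2*pi*i/3}.
import cmath

w = cmath.exp(2j * cmath.pi / 3)

M = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]          # rho_reg(mu)

def apply(A, x):
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

def close(u, v):
    return all(abs(a - b) < 1e-12 for a, b in zip(u, v))

u1 = [1, 1, 1]
u2 = [1, w ** -1, w]
u3 = [1, w, w ** -1]

assert close(apply(M, u1), u1)                          # eigenvalue 1
assert close(apply(M, u2), [w * z for z in u2])         # eigenvalue w
assert close(apply(M, u3), [w ** -1 * z for z in u3])   # eigenvalue w^-1
```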
Proposition 1.5.12. Let $\rho : G \to \mathrm{GL}_n(\mathbb{C})$ be a matrix representation. Then there exists a basis of $\mathbb{C}^n$ in which every matrix $\rho(g)$ is diagonal iff $\rho$ is a direct sum of 1-dimensional irreps.

Proof. ($\Rightarrow$) Let $\{x_1, \ldots, x_n\}$ be such a basis. Then $x_i$ is an eigenvector for every $\rho(g)$, so $\langle x_i \rangle$ is a 1-dimensional subrepresentation, and
$$\mathbb{C}^n = \langle x_1 \rangle \oplus \langle x_2 \rangle \oplus \ldots \oplus \langle x_n \rangle$$
($\Leftarrow$) Suppose $\mathbb{C}^n = U_1 \oplus \ldots \oplus U_n$ with each $U_i$ a 1-dimensional subrepresentation. Pick a (non-zero) vector $x_i$ from each $U_i$. Then $\{x_1, \ldots, x_n\}$ is a basis for $\mathbb{C}^n$. For any $g \in G$, the matrix $\rho(g)$ preserves $\langle x_i \rangle = U_i$ for all $i$, so $\rho(g)$ is a diagonal matrix with respect to this basis. ∎

We will see soon that if $G$ is abelian, every representation of $G$ splits as a direct sum of 1-dimensional irreps. When $G$ is not abelian, this is not true.

Example 1.5.13. Let $\rho : S_3 \to \mathrm{GL}_3(\mathbb{C})$ be the permutation representation (in the natural basis). Recall $S_3$ is generated by $\sigma = (123)$ and $\tau = (12)$. We have
$$\rho(\sigma) = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \qquad \rho(\tau) = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
Notice that
$$x_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$
is an eigenvector for both $\rho(\sigma)$ and $\rho(\tau)$. Therefore, it's an eigenvector for $\rho(\sigma^2)$, $\rho(\sigma\tau)$ and $\rho(\sigma^2\tau)$ as well, so $U_1 = \langle x_1 \rangle$ is a 1-dimensional subrepresentation. It's isomorphic to the 1-dimensional trivial representation. Let
$$U_2 = \left\langle x_2 = \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \ x_3 = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} \right\rangle$$
Clearly, $\mathbb{C}^3 = U_1 \oplus U_2$ as a vector space. We claim $U_2$ is a subrepresentation. We check:
$$\rho(\sigma) : x_2 \mapsto x_3 \in U_2, \quad x_3 \mapsto -x_2 - x_3 \in U_2$$
$$\rho(\tau) : x_2 \mapsto -x_2 \in U_2, \quad x_3 \mapsto x_2 + x_3 \in U_2$$
In the basis $\{x_2, x_3\}$, $U_2$ is the matrix representation
$$\rho_2(\sigma) = \begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}, \qquad \rho_2(\tau) = \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}$$
So $\rho$ is the direct sum of two subrepresentations $U_1 \oplus U_2$. In the basis $\{x_1, x_2, x_3\}$ for $\mathbb{C}^3$, $\rho$ becomes the (block-diagonal) matrix representation
$$\rho(\sigma) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & -1 \end{pmatrix} \qquad \rho(\tau) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$$
The representation $U_2$ is irreducible.
Either (i) check that $\rho_2(\sigma)$ and $\rho_2(\tau)$ have no common eigenvector, or (ii) change basis to
$$\begin{pmatrix} 1 \\ -\omega \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} \omega^{-1} \\ -\omega \end{pmatrix}$$
in which case
$$\rho_2(\sigma) = \begin{pmatrix} \omega & 0 \\ 0 & \omega^{-1} \end{pmatrix}, \qquad \rho_2(\tau) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
(remember that $1 + \omega + \omega^{-1} = 0$), and we proved that this was irreducible in Example 1.5.10.

1.6 Schur's lemma and abelian groups

Theorem 1.6.1 (Schur's Lemma). Let $\rho_V : G \to \mathrm{GL}(V)$ and $\rho_W : G \to \mathrm{GL}(W)$ be irreps of $G$.
(i) Let $f : V \to W$ be a G-linear map. Then either $f$ is an isomorphism, or $f$ is the zero map.
(ii) Let $f : V \to V$ be a G-linear map. Then $f = \lambda 1_V$ for some $\lambda \in \mathbb{C}$.

Proof. (i) Suppose $f$ is not the zero map. $\mathrm{Ker}(f) \subset V$ is a subrepresentation of $V$, but $V$ is an irrep, so either $\mathrm{Ker}(f) = 0$ or $V$. Since $f \neq 0$, $\mathrm{Ker}(f) = 0$, i.e. $f$ is an injection. Also, $\mathrm{Im}(f) \subset W$ is a subrepresentation, and $W$ is irreducible, so $\mathrm{Im}(f) = 0$ or $W$. Since $f \neq 0$, $\mathrm{Im}(f) = W$, i.e. $f$ is a surjection. So $f$ is an isomorphism.

(ii) Every linear map from $V$ to $V$ has at least one eigenvalue. Let $\lambda$ be an eigenvalue of $f$ and consider
$$\hat{f} = (f - \lambda 1_V) : V \to V$$
Then $\hat{f}$ is G-linear, because
$$\hat{f}(\rho_V(g)(x)) = f(\rho_V(g)(x)) - \lambda\rho_V(g)(x) = \rho_V(g)(f(x)) - \rho_V(g)(\lambda x) = \rho_V(g)(\hat{f}(x))$$
for all $g \in G$ and $x \in V$. Since $\lambda$ is an eigenvalue, $\mathrm{Ker}(\hat{f})$ is at least 1-dimensional. So by part (i), $\hat{f}$ is the zero map, i.e. $f = \lambda 1_V$. ∎

[Aside: (i) works over any field, whereas (ii) is special to $\mathbb{C}$.]

Schur's Lemma lets us understand the representation theory of abelian groups completely.

Proposition 1.6.2. Suppose $G$ is abelian. Then every irrep of $G$ is 1-dimensional.

Proof. Let $\rho : G \to \mathrm{GL}(V)$ be an irrep of $G$. Pick any $h \in G$ and consider the linear map $\rho(h) : V \to V$.
V is 1-dimensional. Corollary 1.6.3. Let ρ : G → GL(V ) be a representation of an abelian group. Then there exists a basis of V such that every g ∈ G is represented by a diagonal matrix ρ(g). Proof. By Maschke’s Theorem, we can split ρ as a direct sum V = U1 ⊕ U2 ⊕... ⊕ Un of irreps. By Proposition 1.6.2, each Ui is 1-dimensional. Now apply Propo- sition 1.5.12. As remarked before, this is not true for non-abelian groups. However, there is a weaker statement that we can prove for any group: Corollary 1.6.4. Let ρ : G → GL(V ) of any group G, and let g ∈ G. Then there exists a basis of V such that ρ(g) is diagonal. Notice the difference with the previous statement: with abelian groups, ρ(g) becomes diagonal for every g ∈ G, here we are diagonalizing just one ρ(g). This is not very impressive, because ‘almost all’ matrices are diagonalizable! 30 Proof. Consider the subgroup hgi ⊂ G. It’s isomorphic to the cyclic group of order k, where k is the order of g. In particular, it is abelian. Restricting ρ to this subgroup gives a representation ρ : hgi → GL(V ) Then Corollary 1.6.3 tells us we can find a basis of V such that ρ(g) is diagonal. Let’s describe all the irreps of cyclic groups (the simplest abelian groups). Let G = Ck = hµ | µk = ei. We’ve just proved that all irreps of G are 1-dimensional. A 1-dimensional representation of G is a homomorphism ρ : G → GL1 (C) This is determined by a single number ρ(µ) ∈ C 2πi q such that ρ(µ)k = 1. So ρ(µ) = e k for some q = [0,... , k − 1]. This gives us k irreps ρ0 , ρ1 ,..., ρk−1 where 2πi q ρq : µ 7→ e k Claim 1.6.5. These k irreps are all distinct, i.e. ρi and ρj are not isomor- phic if i 6= j. Example 1.6.6. Let G = C4 = hµ | µ4 = ei. There are 4 distinct (1- dimensional) irreps of G. They are ρ0 : µ 7→ 1 (the trivial representation) 2πi ρ1 : µ 7→ e 4 =i 2πi ×2 ρ2 : µ 7→ e 4 = −1 2πi ×3 ρ3 : µ 7→ e 4 = −i Look back at Example 1.1.1. 
We wrote down a representation   0 −1 ρ : µ 7→ 1 0 31 After diagonalising, this became the equivalent representation   i 0 ρ : µ 7→ 0 −i So ρ is the direct sum of ρ1 and ρ3. More generally, let G be a direct product of cyclic groups G = Ck1 × Ck2 ×... × Ckr G is generated by elements µ1 ,... , µr such that µkt t = e and every pair µs , µt commutes. An irrep of G must be a homomorphism ρ : G → GL1 (C) and this is determined by r numbers ρ(µ1 ),... , ρ(µr ) 2πi q such that ρ(µt )kt = 1 for all t, i.e. ρ(µt ) = e kt t for some qt ∈ [0,... , kt − 1]. This gives k1 ×... × kr 1-dimensional irreps. We label them ρq1 ,...,qr where 2πi qt ρq1 ,...,qr : µt 7→ e kt Claim 1.6.7. All these irreps are distinct. Notice that the number of irreps is equal to the size of G! We’ll return to this fact later. Example 1.6.8. Let G = C2 × C2 = hσ, τ | σ 2 = τ 2 = e, στ = τ σi. There are 4 (1-dimensional) irreps of G. They are: ρ0,0 :σ 7→ 1, τ 7→ 1 (the trivial representation) ρ0,1 :σ 7→ 1, τ 7→ −1 ρ1,0 :σ 7→ −1, τ 7→ 1 ρ1,1 :σ 7→ −1, τ 7→ −1 32 Look back at Example 1.1.2. We found a representation of C2 × C2   1 0 ρ(σ) = Ŝ = 0 −1   −1 0 ρ(τ ) = T̂ = 0 1 So ρ is the direct sum of ρ0,1 and ρ1,0. You may have heard of the fundamental result: Theorem (Structure theorem for finite abelian groups). Every finite abelian group is a direct product of cyclic groups. So now we know everything (almost!) about representations of finite abelian groups. Non-abelian groups are harder... 1.7 Vector spaces of linear maps Let V and W be vector spaces. You should recall that the set Hom(V, W ) of all linear maps from V to W is itself a vector space. If f1 , f2 are two linear maps V → W then their sum is defined by (f1 + f2 ) : V → W x 7→ f1 (x) + f2 (x) and for a scalar λ ∈ C, we define (λf1 ) : V → W x 7→ λf1 (x) If {a1 ,... , an } is a basis for V , and {b1 ,... , bm } is a basis for W , then we can define fji : V → W  bj if k = i ak 7→ 0 if k 6= i 33 i.e. ai 7→ bj and all other basis vectors go to zero. 
The set {fji | 1 ≤ i ≤ n, 1 ≤ j ≤ m} is a basis for Hom(V, W ). In particular, dim Hom(V, W ) = (dim V )(dim W ) Once we’ve chosen these bases we can identify Hom(V, W ) with the set Matn×m (C) of n×m matrices, and Matn×m (C) is obviously an (nm)-dimensional vector space. The maps fji correspond to the matrices which have one of their entries equal to 1 and all other entries equal to zero. Example 1.7.1. Let V = W = C2 , equipped with the standard basis. Then Hom(V, W ) = Mat2×2 (C) This is a 4-dimensional vector space. The obvious basis is     1 0 0 1 f11 = f12 = 0 0  0 0 0 0 0 0 f21 = f22 = 1 0 0 1 Now suppose that we have representations ρV : G → GL(V ) ρW : G → GL(W ) There is a natural representation of G on the vector space Hom(V, W ). For g ∈ G, we define ρHom(V,W ) (g) : Hom(V, W ) → Hom(V, W ) f 7→ ρW (g) ◦ f ◦ ρV (g −1 ) Clearly, ρHom(V,W ) (g)(f ) is a linear map V → W. Claim 1.7.2. ρHom(V,W ) (g) is a linear map from Hom(V, W ) to Hom(V, W ). We need to check that (i) For all g, ρHom(V,W ) (g) is invertible. 34 (ii) The map g 7→ ρHom(V,W ) (g) is a homomorphism. Observe that ρHom(V,W ) (h) ◦ ρHom(V,W ) (g) : f 7→ ρW (h) ◦ (ρW (g) ◦ f ◦ ρV (g −1 )) ◦ ρV (h−1 ) = ρW (hg) ◦ f ◦ ρV (g −1 h−1 ) = ρHom(V,W ) (hg)(f ) In particular, ρHom(V,W ) (g) ◦ ρHom(V,W ) (g −1 ) = ρHom(V,W ) (e) = 1Hom(V,W ) = ρHom(V,W ) (g −1 ) ◦ ρHom(V,W ) (g) So ρHom(V,W ) (g −1 ) is inverse to ρHom(V,W ) (g). So we have a function ρHom(V,W ) : G → GL(Hom(V, W )) and it’s a homomorphism, so we indeed have a representation. Suppose we pick bases for V and W , so ρV and ρW become matrix represen- tations ρV : G → GLn (C) ρW : G → GLm (C) Then Hom(V, W ) = Matn×m (C) and ρHom(V,W ) (g) : Matn×m (C) → Matn×m (C) is the linear map M 7→ ρW (g)M (ρV (g))−1 Example 1.7.3. Let G = C2 , and let V = C2 be the regular representation, and W be the 2-dimensional trivial representation. 
So     0 1 1 0 ρV (τ ) = and ρW (τ ) = 1 0 0 1 35 Then Hom(V, W ) = Mat2×2 (C), and ρHom(V,W ) (τ ) is the linear map ρHom(V,W ) (τ ) : Mat2×2 (C) → Mat2×2 (C)   −1 0 1 M 7→ ρW (τ )M ρV (τ ) =M 1 0 ρHom(V,W ) is a 4-dimensional representation of C2. If we choose a basis for Hom(V, W ), we get a 4-dimensional matrix representation ρHom(V,W ) : C2 → GL4 (C) Let’s use our standard basis for Hom(V, W ). We have:     1 0 0 1 ρHom(V,W ) (τ ) : 7→ 0 0 0 0     0 0 0 0 7→ 1 0 0 1     0 1 1 0 7→ 0 0 0 0     0 0 0 0 7→ 0 1 1 0 So in this basis, ρHom(V,W ) (τ ) is given by the matrix   0 0 1 0 0 0 0 1   1 0 0 0 0 1 0 0 When V and W have representations of G, we are particularly interested in the G-linear maps from V to W. They form a subset of Hom(V, W ). Claim 1.7.4. The set of G-linear maps from V to W is a subspace of Hom(V, W ). In particular, the set of G-linear maps from V to W is a vector space. We call it HomG (V, W ) In fact, HomG (V, W ) is a subrepresentation of Hom(V, W ). 36 Definition 1.7.5. Let ρ : G → GL(V ) be any representation. We define the invariant subrepresentation VG ⊂V to be the set {x ∈ V | ρ(g)(x) = x, ∀g ∈ G} It’s easy to check that V G is actually a subspace of V , and it’s obvious that it’s also a subrepresentation (this justifies the name). It’s isomorphic to a trivial representation. Proposition 1.7.6. Let ρV : G → GL(V ) and ρW : G → GL(W ) be repre- sentations. Then HomG (V, W ) ⊂ Hom(V, W ) is exactly the invariant subrepresentation Hom(V, W )G of Hom(V, W ) Proof. Let f ∈ Hom(V, W ). Then f is in the invariant subrepresentation Hom(V, W )G iff we have f = ρHom(V,W ) (g)(f ) = ρW (g) ◦ f ◦ ρV (g −1 ) ∀g ∈ G ⇐⇒ f ◦ ρV (g) = ρW (g) ◦ f ∀g ∈ G which is exactly the condition that f is G-linear. Example 1.7.7. As in Example 1.7.3, let G = C2 , V = C2 be the regular representation and W = C2 be the 2-dimensional trivial representation. 
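[Aside: the 4 × 4 matrix above can be generated rather than computed by hand. The sketch below (Python; matmul, rho_hom_tau and flatten are our helper names) applies M 7→ ρW (τ )M ρV (τ )−1 to the standard basis of Mat2×2 (C) and assembles the images into columns. With the basis ordered (f11 , f12 , f21 , f22 ) the result is the permutation matrix swapping f11 ↔ f12 and f21 ↔ f22 ; the matrix displayed above appears to correspond to listing the same basis in the order (f11 , f21 , f12 , f22 ).]

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Example 1.7.3: G = C2 acting on Hom(V, W), with rho_V(tau) the swap of
# basis vectors and rho_W(tau) the identity.
S = [[0, 1], [1, 0]]      # rho_V(tau); note S is its own inverse
I2 = [[1, 0], [0, 1]]     # rho_W(tau)

def rho_hom_tau(M):
    # f  ->  rho_W(tau) o f o rho_V(tau^{-1}), i.e. M -> I2 * M * S here.
    return matmul(I2, matmul(M, S))

# Standard basis of Hom(V, W) = Mat_2x2(C), ordered (f11, f12, f21, f22).
basis = [[[1, 0], [0, 0]], [[0, 1], [0, 0]],
         [[0, 0], [1, 0]], [[0, 0], [0, 1]]]

def flatten(M):
    return [M[0][0], M[0][1], M[1][0], M[1][1]]

# Column j of the 4x4 matrix of rho_Hom(tau) is the image of basis[j].
cols = [flatten(rho_hom_tau(b)) for b in basis]
big = [[cols[j][i] for j in range(4)] for i in range(4)]
```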
Then M ∈ Hom(V, W ) = Mat2×2 (C) is in the invariant subrepresentation if and only if ρHom(V,W ) (τ )(M ) = M In the standard basis ρHom(V,W ) is a 4 × 4-matrix and the invariant subrepre- sentation is the eigenspace of this matrix with eigenvalue 1. This is spanned by     1 0 0 1   1 &   0 0 1 37 So HomG (V, W ) = (Hom(V, W ))G is 2-dimensional. It’s spanned by     1 1 0 0 and ∈ Mat2×2 (C) 0 0 1 1 Now we can (partially) explain the clever formula in Maschke’s Theorem, when we cooked up a G-linear projection f out of a linear projection f˜. Proposition 1.7.8. Let ρ : G → GL(V ) be any representation. Consider the linear map Ψ :V → V 1 X x 7→ ρ(g)(x) |G| g∈G Then Ψ is a G-linear projection from V onto V G. Proof. First we need to check that Ψ(x) ∈ V G for all x. For any h ∈ G, 1 X ρ(h)(Ψ(x)) = ρ(h)ρ(g)(x) |G| g∈G 1 X = ρ(hg)(x) |G| g 1 X = ρ(g)(x) (relabelling g 7→ h−1 g) |G| g = Ψ(x) So Ψ is a linear map V → V G. Next, we check it’s a projection. Let x ∈ V G. Then 1 X Ψ(x) = ρ(g)(x) |G| g 1 X = x=x |G| g 38 Finally, we check that Ψ is G-linear. For h ∈ G, 1 X Ψ(ρ(h)(x)) = ρ(g)(h)(x) |G| g∈G 1 X = ρ(gh)(x) |G| g∈G 1 X = ρ(hg)(x) (relabelling g 7→ hgh−1 ) |G| g∈G = ρ(h)Ψ(x) As a special case, let V and W be representations of G, and consider the rep- resentation Hom(V, W ). The above proposition gives us a G-linear projection from Ψ : Hom(V, W ) → HomG (V, W ) In the proof of Maschke’s Theorem, we applied Ψ to f˜ to get f. This explains why f is G-linear, but we’d still have to check that f is a projection. 1.8 More on decomposition into irreps In Section 1.5 we proved the basic result (Corollary 1.5.9) that every repre- sentation can be decomposed into irreps. In this section, we’re going to prove that this decomposition is unique. Then we’re going to look at the decom- position of the regular representation, which turns out to be very powerful. Before we can start, we need some technical lemmas. Lemma 1.8.1. Let U, V, W be three vector spaces. 
Then we have natural isomorphisms (i) Hom(V, U ⊕ W ) = Hom(V, U ) ⊕ Hom(V, W ) (ii) Hom(U ⊕ W, V ) = Hom(U, V ) ⊕ Hom(W, V ) 39 Furthermore, if U, V, W carry representations of G, then (i) and (ii) are isomorphisms of representations. Before we start the proof, notice that all four spaces have the same dimension, namely (dim V )(dim W + dim U ) so the statement is at least plausible! Proof. Recall that we have inclusion and projection maps ιU πW / / Uo U ⊕W o W πU ιW where ιU : x 7→ (x, 0) πU : (x, y) 7→ x and similarly for ιW and πW. From their definition, it follows immediately that ιU ◦ πU + ιW ◦ πW = 1U ⊕W (i) Define P : Hom(V, U ⊕ W ) → Hom(V, U ) ⊕ Hom(V, W ) by P : f 7→ (πU ◦ f, πW ◦ f ) In the other direction, define P −1 : Hom(V, U ) ⊕ Hom(V, W ) → Hom(V, U ⊕ W ) by P −1 : (fU , fW ) 7→ ιU ◦ fU + ιW ◦ fW Claim 1.8.2. P and P −1 are linear maps. 40 Also, P and P −1 are inverse to each other (as our notation suggests!). We check that P −1 ◦ P : f 7→ιU ◦ πU ◦ f + ιW ◦ πW ◦ f = (ιU ◦ πU + ιW ◦ πW ) ◦ f =f but both vector spaces have the same dimension, so P ◦ P −1 must also be the identity map (or you can check this directly). So P is an isomorphism of vector spaces. Now assume we have representations ρV , ρW , ρU of G on V , W and U. We claim P is G-linear. Recall that ρHom(V,U ⊕W ) (g)(f ) = ρV ⊕W (g) ◦ f ◦ ρV (g −1 ) We have πU ◦ (ρHom(V,U ⊕W ) (g)(f )) = πU ◦ ρU ⊕W (g) ◦ f ◦ ρV (g −1 ) = ρU (g) ◦ πU ◦ f ◦ ρV (g −1 ) (since πU is G-linear) = ρHom(U,V ) (g)(f ) and similarly for W , so P (ρHom(V,U ⊕W ) (g)(f )) = (πU ◦ ρHom(V,U ⊕W ) (g)(f ), πW ◦ ρHom(V,U ⊕W ) (g)(f )) = (ρHom(V,U ) (g)(πU ◦ f ), ρHom(V,W ) (g)(πW ◦ f )) = ρHom(V,U )⊕Hom(V,W ) (g)(πU ◦ f, πW ◦ f ) So P is G-linear, and we’ve proved (i). (ii) Define I : Hom(U ⊕ W, V ) → Hom(U, V ) ⊕ Hom(W, V ) by I : f 7→ f ◦ ιU , f ◦ ιW ) and I −1 : Hom(U, V ) ⊕ Hom(W, V ) → Hom(U ⊕ W, V ) by I −1 : (fU , fV ) 7→ fU ◦ πU + fW ◦ πW Then use very similar arguments to those in (i). 41 Corollary 1.8.3. 
If U, V, W are representations of G, then we have natural isomorphisms (i) HomG (V, U ⊕ W ) = HomG (V, U ) ⊕ HomG (V, W ) (ii) HomG (U ⊕ W, V ) = HomG (U, V ) ⊕ HomG (W, V ) There are two ways to prove this corollary. We’ll just give the proofs for (i), the proofs for (ii) are identical. 1st proof. By Lemma 1.8.1, we have a isomorphism of representations P : Hom(V, U ⊕ W ) → Hom(V, U ) ⊕ Hom(V, W ) Suppose f ∈ Hom(V, U ⊕ W ) is actually G-linear. Then since πU and πW are G-linear, we have that P (f ) ∈ HomG (V, U ) ⊕ HomG (V, W ) Now suppose that fU ∈ Hom(V, U ) and fW ∈ Hom(V, W ) are both G-linear. Then P −1 (fU , fW ) ∈ HomG (V, U ⊕ W ) because ιU and ιW are G-linear and the sum of two G-linear maps is G-linear. Hence P and P −1 define inverse linear maps between the two sides of (i). 2nd proof. We have a G-linear isomorphism P : Hom(V, U ⊕ W ) → Hom(V, U ) ⊕ Hom(V, W ) Thus P must induce an isomorphism between the invariant subrepresenta- tions of each side. From Proposition 1.7.6, the invariant subrepresentation on the left-hand-side is Hom(V, U ⊕ W )G = HomG (V, U ⊕ W ) For the right-hand-side, we have (Hom(V, U ) ⊕ Hom(V, W ))G = Hom(V, U )G ⊕ Hom(V, W )G (this is true for any direct sum of representations) which is the same as HomG (V, U ) ⊕ HomG (V, W ) 42 Now that we’ve dealt with these technicalities, we can get back to learning more about the decompostion of representations into irreps. Let V and W be irreps of G. Recall Schur’s Lemma (Theorem 1.6.1), which tells us a lot about the G-linear maps between V and W and between V and V. Here’s another way to say it: Proposition 1.8.4. Let V and W be irreps of G. Then  0 if V and W aren’t isomorphic dim HomG (V, W ) = 1 if V and W are isomorphic Proof. Suppose V and W aren’t isomorphic. Then by Schur’s Lemma, the only G-linear map from V to W is the zero map, so HomG (V, W ) = {0} Alternatively, suppose that f0 : V → W is an isomorphism. 
Then for any f ∈ HomG (V, W ): f0−1 ◦ f ∈ HomG (V, V ) So by Schur’s Lemma, f0−1 ◦f = λ1V , i.e. f = λf0. So f0 spans HomG (V, W ). Proposition 1.8.5. Let ρ : G → GL(V ) be a representation, and let V = U1 ⊕... ⊕ Us be a decomposition of V into irreps. Let W be any irrep of G. Then the num- ber of irreps in the set {U1 ,... , Us } which are isomorphic to W is equal to the dimension of HomG (W, V ). It’s also equal to the dimension of HomG (V, W ). Proof. By Corollary 1.8.3, s M HomG (W, V ) = HomG (W, Ui ) i=1 so s X dim HomG (W, V ) = dim HomG (W, Ui ) i=1 By Proposition 1.8.4, this equals the number of irreps in {U1 ,... , Us } that are isomorphic to W. An identical argument works if we consider HomG (V, W ) instead. 43 Now we can prove uniqueness of irrep decomposition. Theorem 1.8.6. Let ρ : G → GL(V ) be a representation, and let V = U1 ⊕... ⊕ Us V = Û1 ⊕... ⊕ Ûr be two decompositions of V into irreducible subrepresentations. Then the two sets of irreps {U1 ,... , Us } and {Û1 ,... , Ûr } are the same, i.e. s = r and (possibly after reordering) Ui and Ûi are isomorphic for all i. Proof. Let W be any irrep of G. By Proposition 1.8.5, the number of irreps in the first decomposition that are isomorphic to W is equal to dim HomG (W, V ). But the number of irreps in the second decomposition that are isomorphic to W is also equal to dim HomG (W, V ). So for any irrep W , the two decom- positions contain the same number of factors isomorphic to W. Example 1.8.7. Let G = S3. So far, we’ve met three irreps of this group. Let ρ1 : S3 → GL(U1 ) the 1-dimensional trivial representation, let ρ2 : S3 → GL(U2 ) be the sign representation (see Example 1.3.6), which is also 1-dimensional, and let ρ3 : S3 → GL(U3 ) be the 2-dimensional irrep from Example 1.5.10. For any non-negative inte- gers a, b, c we can form the representation U1⊕a ⊕ U2⊕b ⊕ U3⊕c By the above theorem, all of these representations are distinct. 
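[Aside: it is a useful machine check that the matrices we have written down for these three irreps really satisfy the defining relations σ 3 = τ 2 = e and τ στ = σ −1 of S3. A sketch (Python; the helper names are ours, not from the notes):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)   # omega, a primitive cube root of unity

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(len(A)) for j in range(len(A[0])))

# The three irreps of S3 met so far, given by their values on sigma, tau.
irreps = {
    "trivial": ([[1]], [[1]]),
    "sign":    ([[1]], [[-1]]),
    "2-dim":   ([[w, 0], [0, 1 / w]], [[0, 1], [1, 0]]),
}

for name, (Sg, Tg) in irreps.items():
    n = len(Sg)
    Id = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    assert close(matmul(Sg, matmul(Sg, Sg)), Id)           # sigma^3 = e
    assert close(matmul(Tg, Tg), Id)                       # tau^2 = e
    # tau sigma tau = sigma^{-1} (= sigma^2, since sigma^3 = e)
    assert close(matmul(Tg, matmul(Sg, Tg)), matmul(Sg, Sg))

# Their dimensions satisfy 1^2 + 1^2 + 2^2 = 6 = |S3|; this is no
# accident, as we will see shortly.
dims_check = 1 ** 2 + 1 ** 2 + 2 ** 2
```

The loop treats the 1-dimensional irreps as 1 × 1 matrices, so all three cases go through the same relation checks.]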
So if we know all the irreps of a group G (up to isomorphism), then we know all the representations of G: each representation can be described, uniquely, as a direct sum of some number of copies of each irrep. This is similar to the relationship between integers and prime numbers: each integer can be written uniquely as a product of prime numbers, with each prime occurring with some multiplicity. However, there are infinitely many prime numbers! As we shall see shortly, the situation for representations of G is much simpler. In Section 1.3 we constructed the regular representation of any group G. We take a vector space Vreg which has a basis {bg | g ∈ G} (so dim Vreg = |G|), and define ρreg : G → GL(Vreg ) by ρreg (h) : bg 7→ bhg (and extending linearly). We claimed that this representation was very important. Here's why: Theorem 1.8.8. Let Vreg = U1 ⊕... ⊕ Us be the decomposition of Vreg as a direct sum of irreps. Then for any irrep W of G, the number of factors in the decomposition that are isomorphic to W is equal to dim W. Before we look at the proof, let's note the most important corollary of this result. Corollary 1.8.9. Any group G has only finitely many irreducible representations (up to isomorphism). Proof. Every irrep occurs in the decomposition of Vreg at least once, and dim Vreg is finite. So for any group G there is a finite list U1 ,..., Ur of irreps of G (up to isomorphism), and every representation of G can be written uniquely as a direct sum U1⊕a1 ⊕... ⊕ Ur⊕ar for some non-negative integers a1 ,..., ar. In particular, Theorem 1.8.8 says that Vreg decomposes as Vreg = U1⊕d1 ⊕... ⊕ Ur⊕dr where di = dim Ui 
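[Aside: part of this picture is easy to see directly: each ρreg (g) is a permutation matrix, and its trace counts the elements h ∈ G with gh = h, which is |G| for g = e and 0 for every other g. The sketch below (Python; helper names are ours) builds the regular representation of S3 this way.]

```python
from itertools import permutations

# G = S3, realised as permutations of {0, 1, 2}; composition is the group law.
G = sorted(permutations(range(3)))
e = (0, 1, 2)

def compose(g, h):
    # (g h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

# rho_reg(g) sends the basis vector b_h to b_{gh}; in the basis {b_h} it is
# a permutation matrix, and its trace counts the h with gh = h.
def trace_reg(g):
    return sum(1 for h in G if compose(g, h) == h)

assert trace_reg(e) == 6                        # = |G| = dim V_reg
assert all(trace_reg(g) == 0 for g in G if g != e)

# rho_reg is a homomorphism because composition is associative:
# b_{(g1 g2) h} = b_{g1 (g2 h)}.
assert all(compose(compose(g1, g2), h) == compose(g1, compose(g2, h))
           for g1 in G for g2 in G for h in G)
```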
The proof of Theorem 1.8.8 follows easily from the following: Lemma 1.8.11. For any representation W of G, we have a natural isomor- phism of vector spaces HomG (Vreg , W ) = W Proof. Recall that we have a basis vector be ∈ Vreg corresponding to the identity element in G. Define a function T : HomG (Vreg , W ) → W by ‘evaluation at be ’, i.e. T : f → f (be ) Let’s check that T is linear. We have T (f1 + f2 ) = (f1 + f2 )(be ) = f1 (be ) + f2 (be ) = T (f1 ) + T (f2 ) and T (λf ) = (λf )(be ) = λf (be ) = λT (f ) so it is indeed linear. Now let’s check that T is an injection. Suppose that f ∈ HomG (Vreg , W ), and that T (f ) = f (be ) = 0. Then for any basis vector bg ∈ Vreg , we have f (bg ) = f (ρreg (g)(be )) = ρW (g)(f (be )) = 0 So f sends every basis vector to zero, so it must be the zero map. Hence T is indeed an injection. Finally, we need to check that T is a surjection, so we 46 need to show that for any x ∈ W there is a G-linear map f from Vreg to W such that f (be ) = x. Fix an x ∈ W , and define a linear map f : Vreg → W by f : bg 7→ ρW (g)(x) Then in particular f (be ) = x, so we just need to check that f is G-linear. But for any h ∈ G, we have f ◦ ρreg (h) : bg 7→ ρW (hg)(x) ρW (h) ◦ f : bg → 7 ρW (h)(ρW (g)(x)) = ρW (hg)(x) So f ◦ ρreg (h) = ρW (h) ◦ f , since both maps are linear and they agree on the basis. Thus f is indeed G-linear, and we have proved that T is a surjection. Proof of Theorem 1.8.8. Let Vreg = U1 ⊕...⊕Us be the decomposition of the regular representation into irreps. Let W be any irrep of G. By Proposition 1.8.5, we have that dim HomG (Vreg , W ) equals the number of Ui that are isomorphic to W. But by Lemma 1.8.11, dim HomG (Vreg , W ) = dim W Corollary 1.8.12. Let U1 ,... , Ur be all the irreps of G, and let dim Ui = di. Then r X d2i = |G| i=1 Proof. By Theorem 1.8.8, Vreg = U1⊕d1 ⊕... ⊕ Ur⊕dr Now take dimensions of each side. 47 Notice this is consistent with out results on abelian groups. 
If G is abelian, di = 1 for all i, so this formula says that r X r= d2i = |G| i=1 i.e. the number of irreps of G is the size of G. This is what we found. Example 1.8.13. Let G = S4. Let U1 ,... , Ur be all the irreps of G, with dimensions d1 ,... , dr. Let U1 be the 1-dimensional trivial representation and U2 be the sign representation, so d1 = d2 = 1. For any symmetric group Sn these are the only possible 1-dimensional representations (see Problem Sheets), so we must have di > 1 for i ≥ 3. We have: d21 +... + d2r = |G| = 24 ⇒ d23 +... + d2r = 22 This has only 1 solution. Obviously dk ≤ 4 for all k, as 52 = 25. Suppose that dr = 4, then we would have d23 +... + d2r−1 = 22 − 16 = 6 This is impossible, so actually dk ∈ [2, 3] for all k. The number of k such that dk = 3 must be even because 22 is even, and we can’t have dk = 2 for all k since 4 - 22. Therefore, the only possibility is that d3 = 2, d4 = 3 and d5 = 3. So G has 5 irreps with these dimensions. Example 1.8.14. Let G = D4. Let the irreducible representations be U1 ,... , Ur with dimensions d1 ,... , dr. As usual, let U1 be the 1-dimensional trivial representation. So d22 +... + d2r = |G| − 1 = 7 So either (i) r = 8, and di = 1 ∀i (ii) r = 5, and d2 = d3 = d4 = 1, d5 = 2 48 In the Problem Sheets we show that D4 has a 2-dimensional irrep, so in fact (ii) is true. The 2-dimensional irrep U5 is the representation we constructed in Example 1.3.7 by thinking about the action of D4 on a square. If we present D4 as hσ, τ | σ 4 = τ 2 = e, τ στ = σ −1 i then the 4 1-dimensional irreps are given by ρij :σ 7→ (−1)i τ 7→ (−1)j for i, j ∈ {0, 1}. 1.9 Duals and tensor products Let V be a vector space. Recall the definition of the dual vector space: V ∗ = Hom(V, C) This is a special case of Hom(V, W ) where W = C. So dim V ∗ = dim V , and if {b1 ,... , bn } is a basis for V , then there is a dual basis {f1 ,... 
, fn } for V defined by  1 if i = j fi (bj ) = 0 if i 6= j Now let ρV : G → GL(V ) be a representation, and let C carry the (1- dimensional) trivial representation of G. Then we know that V ∗ carries a representation of G, defined by ρHom(V,C) (g) : f 7→ f ◦ ρV (g −1 ) We’ll denote this representation by (ρV )∗ , we call it the dual representation to ρV. Another way to say it is that we define (ρV )∗ (g) : V ∗ → V ∗ to be the dual map to ρV (g −1 ) : V → V If we have a basis for V , so ρV (g) is a matrix, then ρ∗V (g) is described in the dual basis by the matrix ρV (g)−T 49 Example 1.9.1. Let G = S3 = hσ, τ | σ 3 = τ 2 = e, τ στ = σ −1 i and let ρ be the 2-dimensional irrep of G. In the appropriate basis (see Problem Sheets)   ω 0 2πi ρ(σ) = −1 (where ω = e 3 ) 0 ω   0 1 ρ(τ ) = 1 0 The dual representation (in the dual basis) is  −1  ∗ ω 0 ρ (σ) = 0 ω   0 1 ρ(τ ) = 1 0 This is equivalent to ρ under the change of basis   0 1 P = 1 0 So in this case, ρ∗ and ρ are isomorphic. Example 1.9.2. Let G = C3 = hµ | µ3 = ei and consider the 1-dimensional representation 2πi ρ1 : µ 7→ ω = e 3 The dual representation is 4πi ρ∗1 : µ 7→ ω −1 = e 3 So in this case, ρ∗1 = ρ2 In particular, ρ1 and ρ∗1 are not isomorphic. You should recall that (V ∗ )∗ is naturally isomorphic to V as a vector space. The isomorphism is given by Φ :V → (V ∗ )∗ x 7→ Φx 50 where Φx :V ∗ → C f 7→ f (x) We claim Φ is G-linear. Pick x ∈ V , and consider Φ(ρV (g)(x)). This is the map ΦρV (g)(x) :V ∗ → C f 7→ f (ρV (g)(x)) Now consider (ρV ∗ )∗ (g)(Φ(x)). By definition, this is the map Φx ◦ ρV ∗ (g −1 ) :V ∗ → C f 7→ Φx ρV ∗ (g −1 )(f )  = Φx (f ◦ ρV (g)) = (f ◦ ρV (g)) (x) So Φ (ρV (g)(x)) and (ρV ∗ )∗ (g) (Φ(x)) are the same element of (V ∗ )∗ , so Φ is indeed G-linear. Therefore, (V ∗ )∗ and V are naturally isomorphic as repre- sentations. Proposition 1.9.3. Let V carry a representation of G. Then V is irreducible if and only if V ∗ is irreducible. Proof. Suppose V is not irreducible, i.e. 
it contains a non-trivial subrepre- sentation U ⊂ V. By Maschke’s Theorem, there exists another subrepre- sentation W ⊂ V such that V = U ⊕ W. By Corollary 1.8.3, this implies V ∗ = U ∗ ⊕ W ∗ , so V ∗ is not irreducible. By the same argument, if V ∗ is not irreducible then neither is (V ∗ )∗ = V. So ‘taking duals’ gives an order-2 permutation of the set of irreps of G. Next we’re going to define tensor products. There are several ways to define these, of varying degrees of sophistication. We’ll start with a very concrete definition. Let V and W be two vector spaces and assume we have bases {a1 ,... , an } for V and {b1 ,... , bm } for W. 51 Definition 1.9.4. The tensor product of V and W is the vector space which has a basis given by the set of symbols {ai ⊗ bt | 1 ≤ i ≤ n, 1 ≤ t ≤ m} We write the tensor product of V and W as V ⊗W By definition, dim(V ⊗ W ) = (dim V )(dim W ). If we have vectors x ∈ V and y ∈ W , we can define a vector x⊗y ∈V ⊗W as follows. Write x and y in the given bases, so x = λ1 a1 +... + λn an y = µ 1 b1 +... + µ m bm for some coefficients λi , µt ∈ C. Then we define X x⊗y = λi µt ai ⊗ bt i∈[1,n] t∈[1,m] (think of expanding out the brackets). Now let V and W carry representa- tions of G. We can define a representation of G on V ⊗ W , called the tensor product representation. We let ρV ⊗W (g) : V ⊗ W → V ⊗ W be the linear map defined by ρV ⊗W (g) : ai ⊗ bt 7→ ρV (g)(ai ) ⊗ ρW (g)(bt ) Suppose ρV (g) is described by the matrix M (in this given basis), and ρW (g) is described by the matrix N. Then n ! m ! X X ρV ⊗W (g) : ai ⊗ bt 7→ Mji aj ⊗ Nst bs j=1 s=1 X = Mji Nst aj ⊗ bs j∈[1,n] s∈[1,m] 52 So ρV ⊗W (g) is described by the nm × nm matrix M ⊗ N , whose entries are [M ⊗ N ]js,it = Mji Nst This notation can be quite confusing! This matrix has n × m rows, and to specify a row we have to give a pair of numbers (j, s), where 1 ≤ j ≤ n and 1 ≤ s ≤ m. When we write js above, we mean this pair of numbers, we don’t mean their product. 
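[Aside: the double-index bookkeeping is easier to digest in code. The sketch below (Python; kron is our helper name) builds M ⊗ N with rows indexed by pairs (j, s) and columns by pairs (i, t), exactly as in the formula above.]

```python
def kron(M, N):
    # Row index is the pair (j, s), column index the pair (i, t):
    # [M (x) N][(j, s), (i, t)] = M[j][i] * N[s][t].
    n, m = len(M), len(N)
    return [[M[j][i] * N[s][t] for i in range(n) for t in range(m)]
            for j in range(n) for s in range(m)]

M = [[1, 2], [3, 4]]
N = [[0, 5], [6, 7]]
K = kron(M, N)

assert len(K) == len(K[0]) == 4                 # an nm x nm matrix

# With n = m = 2 the pair (j, s) is packed as row number 2j + s, so for
# example the entry at row (1, 0), column (0, 1) is M[1][0] * N[0][1]:
assert K[2 * 1 + 0][2 * 0 + 1] == M[1][0] * N[0][1]
```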
Similiarly to specify a column we have to give another pair of numbers (i, t). Fortunately we won’t have to use this notation much. We haven’t checked that ρV ⊗W is a homomorphism. However, there is a more fundamental question: how do we know that this construction is independent of our choice of bases? Both questions are answered by the following: Proposition 1.9.5. V ⊗ W is isomorphic to Hom(V ∗ , W ). We can view this proposition as an alternative definition for V ⊗ W. It’s better because it doesn’t require us to choose bases for our vector spaces, but it’s less explicit. [Aside: this definition only works for finite-dimensional vector spaces. There are other basis-independent definitions that work in general, but they’re even more abstract.] Proof. Let {α1 ,... , αn } be the basis for V ∗ dual to {a1 ,... , an }. Then Hom(V ∗ , W ) has a basis {fti | 1 ≤ i ≤ n, 1 ≤ t ≤ m} where fti :αi 7→ bt α6=i 7→ 0 Define an isomorphism of vector spaces between Hom(V ∗ , W ) and V ⊗ W by mapping fti 7→ ai ⊗ bt To prove the proposition it’s sufficient to check that the representation ρHom(V ∗ ,W ) agrees with the definition of ρV ⊗W when we write it in the basis {fti }. Pick g ∈ G and let ρV (g) and ρW (g) be described by matrices M and N in the given bases. By definition, ρHom(V ∗ ,W ) (g) : fti 7→ ρW (g) ◦ fti ◦ ρV ∗ (g −1 ) 53 Now n X −1 ρV ∗ (g ) : αk 7→ Mkj αj j=1 −1 because ρV ∗ (g ) is given by the matrix M T in the dual basis. So fti ◦ ρV ∗ (g −1 ) : αk 7→ Mki bt and ! m X ρW (g) ◦ fti ◦ ρV ∗ (g −1 ) : αk 7→ Mki Nst bs j=1 −1 Therefore, if we write ρW (g) ◦ fti ◦ ρV ∗ (g ) in terms of the basis {fsj }, we have X ρW (g) ◦ fti ◦ ρV ∗ (g −1 ) = Mji Nst fsj j∈[1,n] s∈[1,m] (since both sides agree on each basis vector αk ) and this is exactly the formula for the tensor product representation ρV ⊗W. Corollary 1.9.6. Hom(V, W ) is isomorphic to V ∗ ⊗ W. Proof. 
V ∗ ⊗ W = Hom((V ∗ )∗ , W ) = Hom(V, W ) In general, tensor products are hard to calculate, but there is an easy special case, namely when the vector space V is 1-dimensional. Then for any g ∈ G, ρV (g) is just a scalar, so if ρW (g) is described by a matrix N (in some basis), then ρV ⊗W is described by the matrix ρV (g)N. Example 1.9.7. Let G = S3 , and W be the 2-dimensional irrep, so     ω 0 0 1 ρW (σ) = , ρW (τ ) = 0 ω −1 1 0 Let V be the 1-dimensional sign representation, so ρV (σ) = 1, ρV (τ ) = −1 54 Then V ⊗ W is given by     ω 0 0 −1 ρV ⊗W (σ) = , ρV ⊗W (τ ) = 0 ω −1 −1 0 In general, the tensor product of two irreducible representations will not be irreducible. For example, if W is the 2-dimensional irrep of S3 as above, then W ⊗ W is 4-dimensional and so cannot possibly be an irrep. However, Claim 1.9.8. If V is 1-dimensional, then V ⊗ W is irreducible iff W is irreducible. Therefore in the above example the 2-dimensional representation V ⊗ W is irreducible. We know that there’s only one 2-dimensional irrep of S3 , so V ⊗ W must be isomorphic to W. Find the change-of-basis matrix! 2 Characters 2.1 Basic properties Let M be an n × n matrix. Recall that the trace of M is n X Tr(M ) = Mii i=1 If N is another n × n matrix, then n X Tr(N M ) = Nij Mji = Tr(M N ) i,j=1 which implies that Tr(P −1 M P ) = Tr(P P −1 M ) = Tr(M ) 55 Definition 2.1.1. Let V be a vector space, and f :V →V a linear map. Pick a basis for V and let M be the matrix describing f in this basis. We define Tr(f ) = Tr(M ) This definition does not depend on the choice of basis, because choosing a different basis will produce a matrix which is conjugate to M , and hence has the same trace. Now let G be a group, and let ρ be a representation ρ : V → GL(V ) on a vector space V. Definition 2.1.2. The character of the representation ρ is the function χρ :G → C g 7→ Tr (ρ(g)) Notice that χρ is not a homomorphism in general, since generally Tr(M N ) 6= Tr(M ) Tr(N ) Example 2.1.3. 
Let G = C2 × C2 = hσ, τ | σ 2 = τ 2 = e, στ = τ σi. Let ρ be the direct sum of ρ1,0 and ρ1,1 , so     1 0 −1 0 ρ(e) = , ρ(σ) = 0 1  0 −1 1 0 −1 0 ρ(τ ) = , ρ(στ ) = 0 −1 0 1 Then χρ :ρ 7→ 2 σ 7→ −2 τ 7→ 0 στ 7→ 0 56 Proposition 2.1.4. Isomorphic representations have the same character. Proof. In Proposition 1.4.3 we showed that if two representations are isomor- phic, then there exist bases in which they are described by the same matrix representation. Later on we’ll prove the converse to this statement, that if two representations have the same character, then they’re isomorphic! Proposition 2.1.5. Let ρ : G → GL(V ) be a representation of dimension d, and let χρ be its character. Then (i) If g and h are conjugate in G then χρ (g) = χρ (h) (ii) For any g ∈ G χρ g −1 = χρ (g)  (iii) χρ (e) = d (iv) For all g ∈ G, |χρ (g)| ≤ d and |χρ (g)| = d if and only if ρ(g) = λ1V for some λ ∈ C Proof. (i) Suppose g = µ−1 hµ for some µ ∈ G. Then ρ(g) = ρ(µ−1 )ρ(h)ρ(µ) So in any basis, the matrices for ρ(g) and ρ(h) are conjugate, so Tr (ρ(g)) = Tr (ρ(h)) This says that χρ is a class function, more on these later. (ii) Let g ∈ G and let the order of g be k. By Corollary 1.6.4, there exists a basis of V such that ρ(g) becomes a diagonal matrix. Let λ1 ,... , λd be the 57 diagonal entries (i.e. the eigenvalues of ρ(g)). Then each λi is a kth root of unity, so |λi | = 1, so λ−1 i = λi. Then d X d X −1 −1 λ−1  χρ (g ) = Tr ρ(g ) = i = λi = χρ (g) i=1 i=1 (iii) In every basis, ρ(e) is the d × d identity matrix. (iv) Using the same notation as in (ii), we have d X d X |χρ (g)| = λi ≤ |λi | = d i=1 i=1 by the triangle inequality. Furthermore, equality holds iff arg(λi ) = arg(λj ) for all i, j ⇐⇒ λi = λj for all i, j (since |λi | = |λj | = 1) ⇐⇒ ρ(g) = λ1V for some λ ∈ C Property (iv) is enough to show: Corollary 2.1.6. Let ρ be a representation of G (of dimension d), and let χρ be its character. Then for any g ∈ G ρ(g) = 1 ⇐⇒ χρ (g) = d Proof. (⇒) is obvious. (⇐) Assume χρ (g) = d. 
Then |χρ (g)| = d, so by Proposition 2.1.5(iv) ρ(g) = λ1 for some λ ∈ C. But then χρ (g) = λd, so λ = 1. So if you know χρ , then you know the kernel of ρ. In particular you know whether or not ρ is faithful. Let ξ, ζ be any two functions from G to C. Then we define their sum and product in the obvious ‘point-wise’ way, i.e. we define (ξ + ζ)(g) = ξ(g) + ζ(g) (ξζ)(g) = ξ(g)ζ(g) 58 Proposition 2.1.7. Let ρV : G → GL(V ) and ρW : G → GL(W ) be repre- sentations, and let χV and χW be their characters. (i) χV ⊕W = χV + χW (ii) χV ⊗W = χV χW (iii) χV ∗ = χV (iv) χHom(V,W ) = χV χW Proof. (i) Pick bases for V and W , and pick g ∈ G. Suppose that ρV (g) and ρW (g) are described by matrices M and N in these bases. Then ρV ⊕W (g) is described by the block-diagonal matrix   M 0 0 N So Tr (ρV ⊕W (g)) = Tr(M ) + Tr(N ) = Tr (ρV (g)) + T r (ρW (g)) (ii)ρV ⊗W (g) is given by the matrix [M ⊗ N ]js,it = Mji Nst The trace of this matrix is X X [M ⊗ N ]it,it = Mii Ntt i,t i,t = Tr(M ) Tr(N ) i.e. χV ⊗W (g) = χV (g)χW (g). This formula is very useful, it means we can now forget the definition of the tensor product for most purposes! (iii) ρV ∗ (g) is described by the matrix M −T , so Tr (ρV ∗ (g)) = Tr(M −T ) = Tr(M −1 ) = χV (g −1 ) = χV (g) (by Proposition 2.1.5(ii)) 59 i.e. χV ∗ (g) = χV (g). (iv) By Corollary 1.9.6, the representation Hom(V, W ) is isomorphic to the representation V ∗ ⊗ W , so the statement follows by parts (ii) and (iii). If ρ is an irreducible representation, we say that χρ is an irreducible char- acter. We know that any group G has a finite list of irreps U1 ,... , Ur so there is a corresponding list of irreducible characters χ1 ,... , χ r We also know that any representation is a direct sum of copies of these irreps, i.e. if ρ : G → GL(V ) is a representation then there exist numbers m1 ,... , mr such that V = U1⊕m1 ⊕... ⊕ Ur⊕mr Then by Proposition 2.1.7(i) we have χρ = m1 χ1 +... + mr χr So every character is a linear combination of t
he irreducible characters, with non-negative integer coefficients.
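The character identities of Proposition 2.1.7 can be spot-checked on the C2 × C2 example from Example 2.1.3. A sketch (Python; the dictionary encoding of the four group elements is our own choice, not from the notes):

```python
# The 1-dimensional irreps of C2 x C2 (Example 1.6.8), encoded as the values
# of rho_{i,j} on the four group elements e, sigma, tau, sigma*tau.
def rho(i, j):
    return {"e": 1, "s": (-1) ** i, "t": (-1) ** j, "st": (-1) ** (i + j)}

r10, r11 = rho(1, 0), rho(1, 1)

# Example 2.1.3 took the direct sum of rho_{1,0} and rho_{1,1}; by
# Proposition 2.1.7(i) its character is the sum of the two characters.
chi_sum = {g: r10[g] + r11[g] for g in r10}
assert chi_sum == {"e": 2, "s": -2, "t": 0, "st": 0}

# By Proposition 2.1.7(ii) characters multiply under tensor product; for
# 1-dimensional irreps the tensor product is again 1-dimensional, and here
# rho_{1,0} (x) rho_{1,1} works out to be rho_{0,1}.
chi_prod = {g: r10[g] * r11[g] for g in r10}
assert chi_prod == rho(0, 1)
```

For 1-dimensional representations the character determines the representation outright, which is why the tensor product can be identified by its character alone here.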