Unit 5 Algebra Notes PDF

Summary

These notes provide a detailed explanation of concepts within algebra, such as the derivative of polynomials, properties of field extensions, irreducible polynomials, inner product spaces, and the beginnings of Galois theory. The material is suitable for an undergraduate course in abstract algebra.

Full Transcript

# More about Roots

## Unit - V

### Definition

If $f(x) = a_0x^n + a_1x^{n-1} + \dots + a_{n-1}x + a_n$ is in $F[x]$, then the derivative of $f(x)$, written $f'(x)$, is the polynomial
$f'(x) = na_0x^{n-1} + (n-1)a_1x^{n-2} + \dots + a_{n-1}$, in $F[x]$.

### Lemma

For any $f(x), g(x) \in F[x]$ and any $a \in F$:
1. $(f(x) + g(x))' = f'(x) + g'(x)$
2. $(af(x))' = af'(x)$
3. $(f(x)g(x))' = f'(x)g(x) + f(x)g'(x)$

### Proof

Let $f(x) = a_0 + a_1x + \dots + a_mx^m$ and $g(x) = b_0 + b_1x + \dots + b_nx^n$ be any two polynomials in $F[x]$, and assume $m \ge n$.

To prove (1):
$f(x) + g(x) = (a_0 + b_0) + (a_1 + b_1)x + \dots + (a_n + b_n)x^n + a_{n+1}x^{n+1} + \dots + a_mx^m$, so
$(f(x) + g(x))' = (a_1 + b_1) + 2(a_2 + b_2)x + \dots + n(a_n + b_n)x^{n-1} + \dots + ma_mx^{m-1}$
$= (a_1 + 2a_2x + \dots + ma_mx^{m-1}) + (b_1 + 2b_2x + \dots + nb_nx^{n-1})$
$= f'(x) + g'(x)$.
That is, $(f(x) + g(x))' = f'(x) + g'(x)$.

To prove (2): Let $a \in F$. Then $af(x) = aa_0 + aa_1x + aa_2x^2 + \dots + aa_mx^m$, so
$(af(x))' = aa_1 + 2aa_2x + \dots + maa_mx^{m-1} = a(a_1 + 2a_2x + \dots + ma_mx^{m-1}) = af'(x)$.
That is, $(af(x))' = af'(x)$ for all $f(x) \in F[x]$ and all $a \in F$.

To prove (3): By (1) and (2), it is enough to prove the rule in the highly special case $f(x) = x^i$ and $g(x) = x^j$, since a general product expands into a sum of scalar multiples of such monomial products. Now $f(x)g(x) = x^{i+j}$, so
$(f(x)g(x))' = (i+j)x^{i+j-1} = ix^{i+j-1} + jx^{i+j-1} = ix^{i-1}\,x^j + x^i\,(jx^{j-1})$.
Since $f'(x)g(x) = (ix^{i-1})x^j$ and $f(x)g'(x) = x^i(jx^{j-1})$, we conclude
$(f(x)g(x))' = f'(x)g(x) + f(x)g'(x)$.

## Corollary

If $f(x) \in F[x]$ is irreducible and the characteristic of $F$ is 0, then $f(x)$ has no multiple roots.

### Proof

Let $f(x) \in F[x]$ be irreducible, with $F$ a field of characteristic 0. We must show $f(x)$ has no multiple roots. Suppose, to the contrary, that $f(x)$ has a multiple root.
By the Lemma below, $f(x)$ and $f'(x)$ then have a non-trivial common factor, say $p(x)$. But $f(x)$ is irreducible, so its only factors (up to units) are $1$ and $f(x)$; since $p(x)$ is non-trivial, $p(x) = f(x)$. Also, since $p(x)$ is a factor of $f'(x)$, we can write $f'(x) = p(x)q(x) = f(x)q(x)$ for some $q(x) \in F[x]$; that is, $f(x) \mid f'(x)$. But $\deg f'(x) < \deg f(x)$, so $f(x) \mid f'(x)$ is possible only when $f'(x) = 0$. Since $F$ has characteristic zero, $f'(x) = 0$ forces $f(x)$ to be a constant, and a constant has no roots at all, contradicting the assumption that $f(x)$ has a multiple root. Therefore $f(x)$ has no multiple roots.

## Lemma

The polynomial $f(x) \in F[x]$ has a multiple root if and only if $f(x)$ and $f'(x)$ have a non-trivial (that is, of positive degree) common factor.

### Proof

Suppose $f(x)$ has a multiple root $a$. Then $f(x) = (x-a)^m g(x)$ where $m > 1$. Differentiating,
$f'(x) = m(x-a)^{m-1}g(x) + (x-a)^m g'(x) = (x-a)^{m-1}\bigl(mg(x) + (x-a)g'(x)\bigr)$.
Since $m - 1 \ge 1$, $(x-a)$ divides $f'(x)$, so $f(x)$ and $f'(x)$ have the common factor $(x-a)$.

Conversely, suppose $f(x)$ and $f'(x)$ have a non-trivial common factor; we must show $f(x)$ has a multiple root. Suppose not. Then, over a suitable extension, $f(x) = (x-d_1)(x-d_2)\cdots(x-d_n)$ where the $d_i$ are all distinct. Then
$f'(x) = \sum_{i=1}^n \prod_{j\neq i}(x - d_j)$,
and $f'(d_i) = \prod_{j\neq i}(d_i - d_j) \neq 0$, since the roots are all distinct. So $f(x)$ and $f'(x)$ have no common root. But if $f(x)$ and $f'(x)$ had a non-trivial common factor, they would have a common root, namely any root of that common factor. This is a contradiction. Therefore $f(x)$ has a multiple root.

## Corollary

If $F$ is a field of characteristic $p \neq 0$, then the polynomial $x^{p^n} - x \in F[x]$, for any $n$, has distinct roots.
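The differentiation rules and the multiple-root criterion above can be spot-checked on concrete polynomials. The sketch below is purely illustrative (the coefficient-list representation and helper names are my own convention, not from the notes): it verifies $(fg)' = f'g + fg'$ on sample polynomials, and shows that $h(x) = (x-1)^2(x-2)$ shares its double root $x = 1$ with $h'(x)$, while the simple root $x = 2$ is not a root of $h'(x)$.

```python
# Polynomials over Q are coefficient lists [a0, a1, ..., an],
# meaning a0 + a1*x + ... + an*x^n.

def deriv(p):
    """Formal derivative: drop a0 and multiply each a_i by i."""
    return [i * p[i] for i in range(1, len(p))]

def polymul(p, q):
    """Product of two coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def polyadd(p, q):
    """Sum of two coefficient lists, padding the shorter one."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def evalpoly(p, x):
    """Horner evaluation of p at x."""
    acc = 0
    for c in reversed(p):
        acc = acc * x + c
    return acc

# Product rule: (fg)' == f'g + fg'
f = [1, 2, 3]        # 1 + 2x + 3x^2
g = [4, 0, 5]        # 4 + 5x^2
lhs = deriv(polymul(f, g))
rhs = polyadd(polymul(deriv(f), g), polymul(f, deriv(g)))
print(lhs == rhs)    # True

# Multiple-root criterion: h(x) = (x-1)^2 (x-2) = -2 + 5x - 4x^2 + x^3.
h = [-2, 5, -4, 1]
print(evalpoly(h, 1), evalpoly(deriv(h), 1))   # 0 0  (x=1 is a common root)
print(evalpoly(deriv(h), 2))                   # nonzero: 2 is a simple root
```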
### Proof

Let $F$ be a field of characteristic $p \neq 0$ and let $f(x) = x^{p^n} - x$. We must show that $f(x)$ has distinct roots. Differentiating,
$f'(x) = p^n x^{p^n - 1} - 1 = 0 - 1 = -1$,
since $F$ has characteristic $p$. Thus $f(x) = x^{p^n} - x$ and $f'(x) = -1$ have no non-trivial common factor, so by the above Lemma $f(x)$ has no multiple roots. All the roots of $f(x)$ are therefore distinct.

## Definition

The extension $K$ of $F$ is a simple extension of $F$ if $K = F(a)$ for some $a$ in $K$.

## Theorem

If $F$ is of characteristic 0 and $a, b$ are algebraic over $F$, then there exists an element $c \in F(a,b)$ such that $F(a,b) = F(c)$.

### Proof

Let $F$ be a field of characteristic 0 and let $a, b$ be algebraic over $F$. We must produce an element $c \in F(a,b)$ such that $F(a,b) = F(c)$. Let $f(x)$ and $g(x)$, of degrees $m$ and $n$, be the irreducible polynomials over $F$ satisfied by $a$ and $b$ respectively; i.e., $f(a) = 0$ and $g(b) = 0$. Let $K$ be an extension of $F$ containing all the roots of $f(x)$ and $g(x)$. Since $f(x) \in F[x]$ is irreducible and $F$ has characteristic zero, the Corollary above shows that $f(x)$ has no multiple roots, so all the roots of $f(x)$ are distinct; similarly all the roots of $g(x)$ are distinct. Let the roots of $f(x)$ be $a = a_1, a_2, \dots, a_m$ and those of $g(x)$ be $b = b_1, b_2, \dots, b_n$.

If $j \neq 1$, then $b_j \neq b_1 = b$, so the equation
$a_i + \lambda b_j = a_1 + \lambda b_1 = a + \lambda b$
has at most one solution in $K$, namely $\lambda = \dfrac{a_i - a}{b - b_j}$, for each pair $i$ and $j \neq 1$. Since $F$ is of characteristic 0, it has infinitely many elements, so it is always possible to choose $\lambda \in F$ avoiding these finitely many values; then $a_i + \lambda b_j \neq a + \lambda b$ for all $i$ and all $j \neq 1$. Let $c = a + \lambda b$. We have to prove that $c \in F(a,b)$ and $F(c) = F(a,b)$.
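Before completing the argument, the construction can be previewed on a concrete case of my own choosing (not from the notes): take $F = \mathbb{Q}$, $a = \sqrt{2}$, $b = \sqrt{3}$, and $\lambda = 1$, so $c = \sqrt{2} + \sqrt{3}$. Expanding $(\sqrt{2}+\sqrt{3})^3 = 11\sqrt{2} + 9\sqrt{3}$ gives $\sqrt{2} = (c^3 - 9c)/2$, so $\sqrt{2} \in \mathbb{Q}(c)$ and $\sqrt{3} = c - \sqrt{2} \in \mathbb{Q}(c)$, exhibiting $\mathbb{Q}(\sqrt{2}, \sqrt{3}) = \mathbb{Q}(c)$. A quick floating-point sanity check:

```python
import math

# c = sqrt(2) + sqrt(3) is a primitive element for Q(sqrt(2), sqrt(3)).
a = math.sqrt(2)
b = math.sqrt(3)
c = a + b

# From (a+b)^3 = 11*sqrt(2) + 9*sqrt(3): c^3 - 9c = 2*sqrt(2),
# so sqrt(2) = (c^3 - 9c)/2 is a polynomial in c, and sqrt(3) = c - sqrt(2).
sqrt2_from_c = (c**3 - 9*c) / 2
sqrt3_from_c = c - sqrt2_from_c

print(abs(sqrt2_from_c - a) < 1e-9)   # True
print(abs(sqrt3_from_c - b) < 1e-9)   # True
```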
Since $F(a,b)$ is a field containing both $a$ and $b$, and $\lambda \in F \subseteq F(a,b)$, we have $a + \lambda b \in F(a,b)$; i.e., $c \in F(a,b)$. Hence $F(c) \subseteq F(a,b)$.

It remains to prove $F(a,b) \subseteq F(c)$; for this it is enough to show that both $a$ and $b$ lie in $F(c)$. Since $g(b) = 0$ and $g(x) \in F[x] \subseteq F(c)[x]$, $b$ satisfies the polynomial $g(x)$ over $F(c)$; thus $(x-b)$ is a factor of $g(x)$ over a suitable extension of $F(c)$. Moreover, if $h(x) = f(c - \lambda x)$, then $h(x) \in F(c)[x]$, and
$h(b) = f(c - \lambda b) = f(a) = 0$,
since $a$ is a root of $f(x)$; so $(x-b)$ is also a factor of $h(x)$ over that extension. Thus $g(x)$ and $h(x)$ have $(x-b)$ as a common factor.

We claim that $(x-b)$ is in fact the greatest common divisor of $g(x)$ and $h(x)$. For if $b_j \neq b$ is any other root of $g(x)$, then
$h(b_j) = f(c - \lambda b_j) \neq 0$,
because, by the choice of $\lambda$, $c - \lambda b_j$ avoids every root $a_i$ of $f(x)$. So no root of $g(x)$ other than $b$ is a root of $h(x)$; and since $g(x)$ has no multiple roots, $(x-b)^2$ does not divide $g(x)$. Hence $(x-b)$ is exactly the greatest common divisor of $g(x)$ and $h(x)$ over a suitable extension of $F(c)$.

But the greatest common divisor of two polynomials in $F(c)[x]$ can be computed in $F(c)[x]$ itself (by the Euclidean algorithm) and does not change on passing to an extension. Being a non-trivial divisor of $x - b$, which has degree 1, this gcd must be exactly $x - b$. Thus $x - b \in F(c)[x]$, which forces $b \in F(c)$. Then from $c = a + \lambda b$ we get $a = c - \lambda b$, and since $b, c \in F(c)$ and $\lambda \in F$, also $a = c - \lambda b \in F(c)$. So both $a, b \in F(c)$, and therefore
$F(a,b) \subseteq F(c)$. Combining the two inclusions, $F(a,b) = F(c)$.

## Unit IV (Examples): Inner Product Spaces

2) In $F^2$, define for $u = (a_1, a_2)$ and $v = (b_1, b_2)$:
$(u, v) = a_1\overline{b_1} + a_2\overline{b_2}$.
We verify that this defines an inner product on $F^2$.

### Proof

Let $u, v, w \in F^2$ and $\alpha, \beta \in F$, say $u = (a_1, a_2)$, $v = (b_1, b_2)$ and $w = (c_1, c_2)$ where $a_1, a_2, b_1, b_2, c_1, c_2 \in F$.

(i) $(v, u) = b_1\overline{a_1} + b_2\overline{a_2} = \overline{a_1\overline{b_1} + a_2\overline{b_2}} = \overline{(u, v)}$.

(ii) $(\alpha u, v) = \alpha a_1\overline{b_1} + \alpha a_2\overline{b_2} = \alpha(a_1\overline{b_1} + a_2\overline{b_2}) = \alpha(u, v)$.

(iii) $(u, \alpha v) = a_1\overline{\alpha b_1} + a_2\overline{\alpha b_2} = \overline{\alpha}(a_1\overline{b_1} + a_2\overline{b_2}) = \overline{\alpha}(u, v)$.

(iv) $(u, u) = a_1\overline{a_1} + a_2\overline{a_2} = |a_1|^2 + |a_2|^2$, so $(u, u) \ge 0$.
Also, $(u, u) = 0 \Rightarrow |a_1|^2 + |a_2|^2 = 0 \Rightarrow a_1 = 0$ and $a_2 = 0 \Rightarrow u = (0, 0) = 0$.

(v) $(\alpha u + \beta v, w) = ((\alpha a_1 + \beta b_1, \alpha a_2 + \beta b_2), (c_1, c_2))$
$= (\alpha a_1 + \beta b_1)\overline{c_1} + (\alpha a_2 + \beta b_2)\overline{c_2}$
$= \alpha(a_1\overline{c_1} + a_2\overline{c_2}) + \beta(b_1\overline{c_1} + b_2\overline{c_2})$
$= \alpha(u, w) + \beta(v, w)$,
for all $u, v, w \in F^2$ and all $\alpha, \beta \in F$.

Therefore $F^2$ is an inner product space.

3) Let $V$ be the set of all continuous complex-valued functions on the closed unit interval $[0,1]$. For $f(t), g(t) \in V$, define
$(f(t), g(t)) = \int_0^1 f(t)\overline{g(t)}\,dt$.
We verify that this defines an inner product on $V$.

### Proof

Let $f(t), g(t), h(t) \in V$ and $\alpha, \beta \in F$.

(i) $(g(t), f(t)) = \int_0^1 g(t)\overline{f(t)}\,dt = \int_0^1 \overline{f(t)\overline{g(t)}}\,dt = \overline{\int_0^1 f(t)\overline{g(t)}\,dt} = \overline{(f(t), g(t))}$.

(ii) $(\alpha f(t), g(t)) = \int_0^1 \alpha f(t)\overline{g(t)}\,dt = \alpha\int_0^1 f(t)\overline{g(t)}\,dt = \alpha(f(t), g(t))$.

(iii) $(f(t), \alpha g(t)) = \int_0^1 f(t)\overline{\alpha}\,\overline{g(t)}\,dt = \overline{\alpha}\int_0^1 f(t)\overline{g(t)}\,dt = \overline{\alpha}(f(t), g(t))$.

(iv) $(f(t), f(t)) = \int_0^1 f(t)\overline{f(t)}\,dt = \int_0^1 |f(t)|^2\,dt$, so $(f(t), f(t)) \ge 0$.
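As a computational aside, the $F^2$ inner product of Example 2, with $F = \mathbb{C}$, can be checked mechanically using Python's built-in complex type (a sketch; the helper name `ip` and the sample vectors are my own, not from the notes):

```python
def ip(u, v):
    """Inner product on C^2: (u, v) = a1*conj(b1) + a2*conj(b2)."""
    return u[0] * v[0].conjugate() + u[1] * v[1].conjugate()

u = (1 + 2j, 3 - 1j)
v = (2 - 1j, 0 + 4j)
w = (-1 + 1j, 2 + 2j)
alpha, beta = 2 - 3j, 1 + 1j

# (v, u) is the conjugate of (u, v)
print(ip(v, u) == ip(u, v).conjugate())                       # True
# (u, alpha*v) = conj(alpha) * (u, v)
av = (alpha * v[0], alpha * v[1])
print(abs(ip(u, av) - alpha.conjugate() * ip(u, v)) < 1e-12)  # True
# Linearity in the first slot
lin = (alpha * u[0] + beta * v[0], alpha * u[1] + beta * v[1])
print(abs(ip(lin, w) - (alpha * ip(u, w) + beta * ip(v, w))) < 1e-12)  # True
# (u, u) is real and non-negative
print(ip(u, u).imag == 0.0 and ip(u, u).real >= 0)            # True
```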
Moreover, if $(f(t), f(t)) = 0$, then $\int_0^1 |f(t)|^2\,dt = 0$; since $|f(t)|^2$ is continuous and non-negative on $[0,1]$, this forces $|f(t)|^2 = 0$ for all $t$, i.e., $f(t) = 0$. Thus $(f(t), f(t)) = 0 \iff f(t) = 0$.

(v) $(\alpha f(t) + \beta g(t), h(t)) = \int_0^1 (\alpha f(t) + \beta g(t))\overline{h(t)}\,dt = \alpha\int_0^1 f(t)\overline{h(t)}\,dt + \beta\int_0^1 g(t)\overline{h(t)}\,dt = \alpha(f(t), h(t)) + \beta(g(t), h(t))$.

Therefore $V$ is an inner product space.

## Corollary

Any finite extension of a field of characteristic 0 is a simple extension.

### Proof

Apply the above theorem repeatedly (induction): if $a_1, a_2, \dots, a_n$ are algebraic over $F$, then there is an element $c \in F(a_1, a_2, \dots, a_n)$ such that $F(c) = F(a_1, a_2, \dots, a_n)$.

## The Elements of Galois Theory

## Theorem

If $K$ is a field and $\sigma_1, \sigma_2, \dots, \sigma_n$ are distinct automorphisms of $K$, then it is impossible to find elements $a_1, a_2, \dots, a_n$, not all 0, in $K$ such that $a_1\sigma_1(u) + a_2\sigma_2(u) + \dots + a_n\sigma_n(u) = 0$ for all $u \in K$.

### Proof

Assume we could find $a_1, a_2, \dots, a_n$, not all zero, in $K$ such that $a_1\sigma_1(u) + a_2\sigma_2(u) + \dots + a_n\sigma_n(u) = 0$ for all $u \in K$. Then we could find such a relation having as few nonzero terms as possible; on renumbering, we may assume this minimal relation is
$a_1\sigma_1(u) + a_2\sigma_2(u) + \dots + a_m\sigma_m(u) = 0$ for all $u \in K$, (1)
where $a_1, a_2, \dots, a_m$ are all different from 0.

Case $m = 1$: the relation becomes $a_1\sigma_1(u) = 0$ for all $u \in K$; taking $u = 1$ gives $a_1 = 0$, a contradiction. Thus we may assume that $m > 1$.
Since the automorphisms are distinct, there is an element $c \in K$ such that $\sigma_1(c) \neq \sigma_m(c)$. Since relation (1) holds for all $u \in K$, it holds in particular for $cu$:
$a_1\sigma_1(cu) + a_2\sigma_2(cu) + \dots + a_m\sigma_m(cu) = 0$ for all $u \in K$.
Using that the $\sigma_i$ are automorphisms of $K$, so that $\sigma_i(cu) = \sigma_i(c)\sigma_i(u)$, this becomes
$a_1\sigma_1(c)\sigma_1(u) + a_2\sigma_2(c)\sigma_2(u) + \dots + a_m\sigma_m(c)\sigma_m(u) = 0$. (2)
Multiplying relation (1) by $\sigma_1(c)$ and subtracting it from (2), the first terms cancel:
$a_2(\sigma_2(c) - \sigma_1(c))\sigma_2(u) + \dots + a_m(\sigma_m(c) - \sigma_1(c))\sigma_m(u) = 0$.
If we put $b_i = a_i(\sigma_i(c) - \sigma_1(c))$ for $i = 2, 3, \dots, m$, then the $b_i$ are in $K$ and $b_m = a_m(\sigma_m(c) - \sigma_1(c)) \neq 0$, since $a_m \neq 0$ and $\sigma_m(c) - \sigma_1(c) \neq 0$; yet
$b_2\sigma_2(u) + \dots + b_m\sigma_m(u) = 0$ for all $u \in K$.
This produces a shorter relation of the forbidden kind, contradicting the minimality of $m$. So our assumption is wrong: it is impossible to find elements $a_1, a_2, \dots, a_n$, not all 0, in $K$ such that $a_1\sigma_1(u) + a_2\sigma_2(u) + \dots + a_n\sigma_n(u) = 0$ for all $u \in K$.

## Definition (Fixed field)

If $G$ is a group of automorphisms of $K$, then the fixed field of $G$ is the set of all elements $a \in K$ such that $\sigma(a) = a$ for all $\sigma \in G$; i.e., the fixed field of $G$ is $\{a \in K \mid \sigma(a) = a,\ \forall \sigma \in G\}$.

## Lemma

The fixed field of $G$ is a subfield of $K$.

### Proof

Let $a, b$ be any two elements of the fixed field of $G$, so that $\sigma(a) = a$ and $\sigma(b) = b$ for all $\sigma \in G$. Since each $\sigma$ is an automorphism,
$\sigma(a + b) = \sigma(a) + \sigma(b) = a + b$,
so $a + b$ belongs to the fixed field of $G$. Similarly, $\sigma(ab) = \sigma(a)\sigma(b) = ab$, so $ab$ belongs to the fixed field of $G$. Next, suppose $b \neq 0$ and let $b' = b^{-1}$. Then $\sigma(b') = \sigma(b^{-1}) = \sigma(b)^{-1} = b^{-1}$, i.e.,
$\sigma(b') = b'$, so $b'$ belongs to the fixed field of $G$. Therefore the fixed field of $G$ is a subfield of $K$.

## Definition

Let $K$ be a field and let $F$ be a subfield of $K$. The group of automorphisms of $K$ relative to $F$, written $G(K,F)$, is the set of all automorphisms of $K$ leaving every element of $F$ fixed; that is, the automorphism $\sigma$ of $K$ is in $G(K,F)$ if and only if $\sigma(a) = a$ for every $a \in F$.

## Examples

1) Let $K$ be the field of complex numbers and let $F$ be the field of real numbers; compute $G(K,F)$. If $\sigma$ is any automorphism of $K$, then since $i^2 = -1$,
$(\sigma(i))^2 = \sigma(i^2) = \sigma(-1) = -1$,
so $\sigma(i) = \pm i$. If, in addition, $\sigma$ leaves every real number fixed, then for any $a + bi$ with $a, b$ real,
$\sigma(a + bi) = \sigma(a) + \sigma(b)\sigma(i) = a \pm bi$.
Each of these possibilities, namely the mappings $\sigma(a + bi) = a + bi$ and $\sigma(a + bi) = a - bi$, defines an automorphism of $K$: the former is the identity automorphism and the latter is complex conjugation. Thus $G(K, F)$ is a group of order 2.

2) Let $F_0$ be the field of rational numbers and let $K = F_0(\sqrt[3]{2})$, where $\sqrt[3]{2}$ is the real cube root of 2. Every element of $K$ is of the form $a_0 + a_1\sqrt[3]{2} + a_2(\sqrt[3]{2})^2$, where $a_0, a_1, a_2$ are rational numbers. If $\sigma$ is an automorphism of $K$, then
$(\sigma(\sqrt[3]{2}))^3 = \sigma((\sqrt[3]{2})^3) = \sigma(2) = 2$,
so $\sigma(\sqrt[3]{2})$ is a cube root of 2 lying in $K$. But $K$ is a subfield of the real field, and there is only one real cube root of 2, so we must have $\sigma(\sqrt[3]{2}) = \sqrt[3]{2}$. But then
$\sigma(a_0 + a_1\sqrt[3]{2} + a_2(\sqrt[3]{2})^2) = a_0 + a_1\sqrt[3]{2} + a_2(\sqrt[3]{2})^2$;
i.e., $\sigma$ is the identity automorphism of $K$.
We thus see that $G(K, F_0)$ consists only of the identity map, and in this case the fixed field of $G(K, F_0)$ is not $F_0$ but is, in fact, larger, being all of $K$.

## Lemma

$G(K,F)$ is a subgroup of the group of all automorphisms of $K$.

### Proof

We note that $G(K, F)$ is non-empty, because at least the identity automorphism $1$ of $K$ is in $G(K,F)$: $1(a) = a$ for all $a \in F$. Now let $f, g \in G(K, F)$, so that $f(a) = a$ and $g(a) = a$ for all $a \in F$. Then $(fg)(a) = f(g(a)) = f(a) = a$ for all $a \in F$, so $fg \in G(K,F)$; and applying $f^{-1}$ to $f(a) = a$ gives $f^{-1}(a) = a$ for all $a \in F$, so $f^{-1} \in G(K,F)$. Hence $G(K,F)$ is a subgroup of the group of all automorphisms of $K$.
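The theorem on the independence of automorphisms, and Example 1's computation of $G(K,F)$ for $K = \mathbb{C}$, $F = \mathbb{R}$, can both be spot-checked with Python's complex type (purely an illustration; the variable names are my own):

```python
# The two automorphisms of C over R from Example 1:
sigma1 = lambda z: z               # identity
sigma2 = lambda z: z.conjugate()   # complex conjugation

# Both fix every real number, so both lie in G(C, R).
print(all(s(x) == x for s in (sigma1, sigma2) for x in (0.0, 1.0, -3.5)))  # True

# Independence (theorem above): if a1*sigma1(u) + a2*sigma2(u) = 0 for all u,
# then u = 1 gives a1 + a2 = 0 and u = i gives a1 - a2 = 0 (as sigma2(i) = -i),
# forcing a1 = a2 = 0.  Check: no nonzero (a1, a2) from a small grid kills
# both test points u = 1 and u = i simultaneously.
grid = [complex(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)]
bad = [(a1, a2) for a1 in grid for a2 in grid
       if (a1, a2) != (0, 0)
       and a1 * sigma1(1) + a2 * sigma2(1) == 0
       and a1 * sigma1(1j) + a2 * sigma2(1j) == 0]
print(bad)   # []
```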
