
MA3A6 ALGEBRAIC NUMBER THEORY

SAMIR SIKSEK

Abstract. This is an incomplete set of lecture notes for Algebraic Number Theory. I do not know yet if it will be completed, so you are advised to continue taking notes in the lectures. Please send comments, misprints and corrections to [email protected]

Contents

1. Orientation
   1.1. Local Arguments
   1.2. Infinite Descent
   1.3. Descent
2. Beginnings of Algebraic Number Theory
3. Preliminaries on Rings and Ideals
   3.1. Rings
   3.2. Ideals
   3.3. Quotient Rings
   3.4. Units
   3.5. Fields
   3.6. Fields of Fractions
   3.7. Prime and Maximal Ideals
   3.8. Unique Factorization Domains
4. Algebraic Numbers and Algebraic Integers
5. Minimal Polynomials
6. Conjugates
7. Factorization of Polynomials
8. Eisenstein's Criterion For Irreducibility
9. Symmetric Polynomials
10. Algebraic Integers form a Ring
11. Algebraic Numbers form a Field
12. Number Fields
13. Fields Generated by Conjugate Elements
14. Embeddings
15. Field Polynomial
16. Ring of Integers
17. Determinants and Discriminants
18. Ideals
    18.1. Quotient Rings
19. Prime and Maximal Ideals
20. Towards Unique Factorization for Ideals I
21. Towards Unique Factorization for Ideals II
22. Unique Factorization Proof—A Summary so Far
23. A Special Case of the Cancellation Lemma
24. Ideal Classes
25. Unique Factorization Proof—Summary So Far (again)
26. What are the prime ideals of O_K?

Date: September 26, 2006.

1. Orientation

Algebraic number theory arose out of the study of Diophantine equations. A Diophantine equation is a polynomial equation in several variables with integer coefficients, and one desires the solutions in integers. The study of Diophantine equations seems as old as human civilization itself; they are however named after Diophantus of Alexandria, who lived in the 3rd century AD. Soon we will give our motivating example for algebraic number theory. Before that we look at three methods for attacking Diophantine equations which should be close to the heart of every student of number theory: local methods, infinite descent, and descent.

1.1. Local Arguments. This means that we look at the equation modulo some positive integer m and try to get information about integral solutions.

Example 1.1. Show that the equation x^2 + 1 = 4y^2 does not have solutions in integers.

Answer: Suppose x, y is an integer solution. Reducing the equation modulo 4 we get x^2 + 1 ≡ 0 (mod 4). However, the squares modulo 4 are x^2 ≡ 0 or 1 (mod 4). So x^2 + 1 ≡ 1 or 2 (mod 4), giving a contradiction.

Included in 'local arguments' is the idea that if an equation does not have real solutions then it does not have integral solutions. For example, the equation x^2 + 5 = −2z^2 does not have integral solutions because it does not have real solutions.

1.2. Infinite Descent. 'Infinite descent' is used to show that equations do not have solutions, or that they do not have non-trivial solutions. The idea is to suppose that a Diophantine equation has solutions, take the smallest one and show that there must be a smaller one, giving a contradiction.

Example 1.2. Show that the equation X^2 = 2Y^2 does not have non-trivial solutions in integers.

You will recognize this as essentially the proof that √2 is irrational. Suppose that we have non-trivial solutions. Let (x, y) be a non-trivial solution with the value of |x| minimal. Since x^2 = 2y^2 we get that x is even. So x = 2x1 for some x1 ∈ Z. So 2x1^2 = y^2 and in the same way we see that y = 2y1. Now (x1, y1) is a non-trivial solution to X^2 = 2Y^2 and |x1| < |x|, giving a contradiction.

The name 'infinite descent' comes from the original way in which the method is applied. We start with a non-trivial solution (x, y) and construct another one (x1, y1), and then from (x1, y1) we get another one (x2, y2) and so on. We get an infinite list of non-trivial solutions satisfying

|x| > |x1| > |x2| > ··· > 0;

but we cannot squeeze infinitely many integers between |x| and 0, and we have a contradiction. Here is another example due to Euler.

Example 1.3. Show that the equation

(1)   X^3 + 2Y^3 + 4Z^3 = 0

has no non-trivial solutions.

Answer: Suppose it has non-trivial solutions. Let (x, y, z) be a non-trivial solution with the value of |x| minimal. We see that x^3 is even and so x is even. Write x = 2x1. Then

4x1^3 + y^3 + 2z^3 = 0.

Thus y = 2y1 and, applying the same trick again, z = 2z1. We see that (x1, y1, z1) is a non-trivial solution to equation (1) with |x1| < |x|, giving a contradiction.

In both the above examples, we could have obtained a contradiction by explaining that we may assume that x, y are coprime. Then the argument shows that x, y are not coprime. This is not always the case with infinite descent examples.

1.3. Descent. Descent is not the same as infinite descent. It is a technique invented by Fermat. It uses variants of the following very simple Lemma:

Lemma 1.1. Suppose that U, V, W are non-zero integers, with U, V coprime. Suppose also that UV = W^n where n is a positive integer. Then U = ±W1^n and V = ±W2^n for some integers W1, W2. Moreover, if n is odd, we can take U = W1^n and V = W2^n.

Proof. Let p1, . . . , pr be the distinct prime divisors of U and q1, . . . , qs the distinct prime divisors of V. Since U, V are coprime, these lists are disjoint (this is a very important point). Notice that

p1, . . . , pr, q1, . . . , qs

are the distinct prime divisors of W. By the Fundamental Theorem of Arithmetic, we can write

U = ±p1^{a1} ··· pr^{ar},   V = ±q1^{b1} ··· qs^{bs},   W = ±p1^{c1} ··· pr^{cr} q1^{d1} ··· qs^{ds}.

Substituting in UV = W^n we get

±p1^{a1} ··· pr^{ar} q1^{b1} ··· qs^{bs} = ±p1^{nc1} ··· pr^{ncr} q1^{nd1} ··· qs^{nds}.

From the uniqueness of factorization we deduce that

ai = nci,   bj = ndj;

it is here that we need the fact that the ps and qs are distinct. Hence

U = ±(p1^{c1} ··· pr^{cr})^n,   V = ±(q1^{d1} ··· qs^{ds})^n.

This completes the proof for n even. For n odd, simply absorb the ± inside the n-th power. □

We are ready to give a first example of descent.

Example 1.4. Show that the equation 4X^2 − 1 = Y^3 has no solutions in integers apart from (X, Y) = (0, −1).

Answer: First notice that if (X, Y) is a solution then so is (−X, Y). So we may suppose that we have a solution with X ≥ 0. Write

(2X + 1)(2X − 1) = Y^3.

Let d be the gcd of 2X + 1 and 2X − 1. Since d divides them both, it must divide their difference; thus d | 2 and so d = 1 or d = 2. However, d divides 2X + 1 which is odd, so d is odd and so d = 1. In other words, 2X + 1 and 2X − 1 are coprime. Applying Lemma 1.1 we see that

(2)   2X + 1 = Y1^3,   2X − 1 = Y2^3,

where Y = Y1Y2 (this step is called 'descent'). Subtracting we get

2 = Y1^3 − Y2^3 = (Y1 − Y2)(Y1^2 + Y1Y2 + Y2^2).

Recall our assumption that X ≥ 0. Hence 2X + 1 > 2X − 1 which gives Y1 > Y2; i.e. Y1 − Y2 > 0. We deduce that either

Y1 − Y2 = 1,   Y1^2 + Y1Y2 + Y2^2 = 2,

or

Y1 − Y2 = 2,   Y1^2 + Y1Y2 + Y2^2 = 1.

But, from (2) we know that Y1, Y2 are both odd, and so Y1 − Y2 is even. Hence we have only to deal with the last case: Y1 − Y2 = 2. Write Y1 = Y2 + 2 and substitute into Y1^2 + Y1Y2 + Y2^2 = 1. We get (after dividing through by 3) Y2^2 + 2Y2 + 1 = 0. Thus Y2 = −1 and Y1 = Y2 + 2 = 1. Hence Y = Y1Y2 = −1 and we see that X = 0 as required.
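For readers who want to experiment, the conclusion of Example 1.4 can be sanity-checked by a brute-force search. The short Python sketch below is ours (not part of the notes); the search bound 1000 is an arbitrary choice.

    # Search for integer solutions of 4*X^2 - 1 = Y^3 with |X| <= 1000.
    # Example 1.4 predicts that (0, -1) is the only solution.
    solutions = []
    for X in range(-1000, 1001):
        target = 4 * X * X - 1
        Y = round(abs(target) ** (1 / 3))          # candidate cube root
        for y in (Y - 1, Y, Y + 1, -Y - 1, -Y, -Y + 1):
            if y ** 3 == target:
                solutions.append((X, y))
    print(solutions)   # expected output: [(0, -1)]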

2. Beginnings of Algebraic Number Theory

The following example is the beginning of algebraic number theory. Fermat claimed that he had shown that the only solutions to the equation X^2 + 2 = Y^3 are (X, Y) = (±5, 3). Euler 'proved' Fermat's assertion in 1770. The 'proof' mimics the standard factorization or descent argument used in the above example. We say 'proof' in quotes because what Euler wrote down was not rigorous, but can be made rigorous. Let us see Euler's argument: simply factor the left-hand side to get

(X + √−2)(X − √−2) = Y^3.

We leave the usual integers Z and work with Z[√−2]. Now the 'integers' X + √−2, X − √−2 are 'coprime' and so each must be a cube. So

X + √−2 = (a + b√−2)^3,

where a, b are in Z. Once we get past the dodgy step above, the rest of the argument is respectable. Expand the brackets to get

X + √−2 = (a^3 − 6ab^2) + (3a^2 b − 2b^3)√−2.

Comparing the coefficients of √−2 we get

X = a^3 − 6ab^2,   1 = 3a^2 b − 2b^3.

Hence b | 1 and so b = ±1 which gives 3a^2 − 2 = ±1. Therefore a = ±1 and we get X = a^3 − 6ab^2 = ±5. Hence (X, Y) = (±5, 3) as required.

Is this argument respectable? It turns out that it is, because Z[√−2] is a unique factorization domain, and so we have an analogue of the Fundamental Theorem of Arithmetic and can prove the needed analogue of Lemma 1.1.

This and other Diophantine equations have led us to reconsider what integers are. In Q we have the rational (or usual) integers Z. But in other fields such as Q(√d) we have an extension of the concept of integer. Unfortunately unique factorisation does not always hold. For example, in Q(√−5), the 'integers' form the ring Z[√−5]. Here we do not have unique factorization (i.e. there is no analogue of the Fundamental Theorem of Arithmetic); for example, 6 can be factorized as a product of irreducibles in two different ways,

6 = 2 × 3 = (1 + √−5)(1 − √−5).

Thus the analogue of our Lemma 1.1 does not hold for this ring. In this course we cover the following ideas:
(1) The correct generalization of the concept of integer.
(2) Whilst uniqueness of factorization fails for elements (as above), it holds for ideals; every ideal can be expressed as a product of powers of distinct prime ideals in a unique way.
(3) Minkowski's Theorem. Essentially this tells us that whilst unique factorization fails, it is not by too much.
(4) Dirichlet's Units Theorem.

3. Preliminaries on Rings and Ideals We begin by revising some ideas that you have met in previous algebra courses.

3.1. Rings. By a ring we shall always mean a commutative ring with a unit element 1. Examples of rings that you are familiar with are Q, Z, Z[i] (the Gaussian integers), Q[x], Z[x]. Another important ring is Z/nZ (the integers modulo n). You probably called this ring Zn in your earlier courses, but we will stick with the notation Z/nZ.

Definition. Let R be a ring. A non-zero element x ∈ R is called a zero-divisor if there is some other non-zero element y such that xy = 0. A ring that does not have zero-divisors is called an integral domain.

Example 3.1. Z, Q[x] do not have zero-divisors and are therefore integral domains.

Example 3.2. In the ring Z/6Z (integers modulo 6) the elements 2 and 3 are zero-divisors, because 2 × 3 ≡ 0 (mod 6) but 2 ≢ 0 (mod 6) and 3 ≢ 0 (mod 6). Thus Z/6Z is not an integral domain.

Exercise 3.3. Suppose n > 1 is an integer. Show that Z/nZ is an integral domain if and only if n is prime.

3.2. Ideals.

Definition. Let R be a ring. A non-empty subset I is an ideal if
• a − b ∈ I for every a, b ∈ I (this really says that I is an additive subgroup of R);
• if a ∈ I and r ∈ R then ra ∈ I.

Example 3.4. R is an ideal of R. Every other ideal of R is called a proper ideal.

Example 3.5. Suppose R is a ring and a ∈ R. We define

aR = {ar : r ∈ R}.

It is easy to show that aR is an ideal of R (just check the definition). We call aR the principal ideal generated by a. Another common notation for aR is (a). The ideal (0) = {0} is called the zero ideal.

3.3. Quotient Rings. Let I be an ideal of the ring R. A coset of I is of the form

x + I = {x + a : a ∈ I}.

Recall that two cosets are equal, x + I = y + I, if and only if x − y ∈ I. We define the quotient

R/I = {x + I : x ∈ R}.

A priori R/I is just the set of cosets of I in R, but we can make it into a ring by defining addition and multiplication as follows:

(x + I) + (y + I) = (x + y) + I,   (x + I)(y + I) = xy + I.

Exercise 3.6. Prove that these operations are well-defined and that they do give us a ring structure on R/I.

3.4. Units.

Definition. An element u of a ring R is called a unit (or an invertible element) if there is some other element v ∈ R such that uv = 1. If R is an integral domain, then v is unique and we call it the inverse of u and write v = u^{−1}. The set of units in R is denoted by R^* or U(R).

Example 3.7. The units of Z are ±1. Thus Z^* = {1, −1}.

Example 3.8. The units of Z[i] = {a + bi : a, b ∈ Z} are ±1, ±i. It is clear that these are units. Let us prove that they are the only ones. Suppose u ∈ Z[i] is a unit and let v = u^{−1}. Then u = a + bi and v = c + di for some integers a, . . . , d such that (a + bi)(c + di) = 1. Conjugating we get (a − bi)(c − di) = 1. Multiplying the last two equalities,

(a^2 + b^2)(c^2 + d^2) = 1.

Now noting that a^2 + b^2, c^2 + d^2 are in Z and non-negative we deduce a^2 + b^2 = c^2 + d^2 = 1. Hence (a, b) = (±1, 0) or (0, ±1), giving u = ±1 or ±i.

Exercise 3.9. Determine the units of Z[√−2] = {a + b√−2 : a, b ∈ Z} and of

Z[(1 + √−3)/2] = { a + b(1 + √−3)/2 : a, b ∈ Z }.

Exercise 3.10. If R is a ring then R^* is a group under multiplication (it is called the unit group of R).

Example 3.11. Note that (√2 + 1)(√2 − 1) = 1. Thus √2 + 1 is a unit in the ring Z[√2]. Since the units form a group under multiplication, we see immediately that (√2 + 1)^n is a unit for all integers n. In fact, it can be shown that

Z[√2]^* = {±(√2 + 1)^n : n ∈ Z}.

Dirichlet's Units Theorem, which we will hopefully meet at the end of this course, describes unit groups R^* for certain rings called rings of integers of number fields.

Exercise 3.12. If R is an integral domain then R[x]^* = R^*.

3.5. Fields.

Definition. A field is a non-zero ring where every non-zero element is a unit.

Example 3.13. Q, R and C are fields.

Theorem 1. Every finite integral domain is a field.

Proof. Suppose R is a finite integral domain. We want to show that every non-zero element is invertible. Suppose x is a non-zero element, and consider the map

φ_x : R → R,   φ_x(a) = xa.

We want to show first that φ_x is one-to-one. So suppose that a, b ∈ R and φ_x(a) = φ_x(b). Thus xa = xb, or in other words x(a − b) = 0. But R is an integral domain, and so x is not a zero-divisor. Hence a − b = 0, which gives a = b, showing indeed that φ_x is one-to-one. Here is where we use the fact that R is finite: recall that a one-to-one map from a finite set to itself must be onto. Hence φ_x is onto. In particular, there is some element y ∈ R such that φ_x(y) = 1. We can re-write this as xy = 1, clearly showing that x is invertible. This shows that R is a field. □

Example 3.14. Suppose p is a prime. We denote Z/pZ (the integers modulo p) by Fp. We know from Exercise 3.3 that Fp is an integral domain. Since Fp is finite, Theorem 1 tells us that it is a field.

Note that the proof of Theorem 1 is non-constructive. It shows that if x is a non-zero element then it has an inverse, but it does not tell us how to find it. For Fp we can actually give a constructive proof. Suppose x is an integer satisfying x ≢ 0 (mod p). This means that p ∤ x and (as p is prime) the integers x and p are coprime. By Euclid's algorithm we know that there are integers y, z such that

yx + zp = 1.

Reducing modulo p we get

yx ≡ 1 (mod p),

showing that x is invertible. Euclid's algorithm is actually a recipe that will write down for us y (and z). Thus we can calculate the inverse of any non-zero element.
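The constructive recipe just described is easy to carry out by machine. The following Python sketch is ours (not from the notes); extended_gcd and inverse_mod are our own names for the extended Euclidean algorithm and the resulting modular inverse.

    def extended_gcd(a, b):
        """Return (g, y, z) with g = gcd(a, b) and y*a + z*b = g."""
        if b == 0:
            return a, 1, 0
        g, y, z = extended_gcd(b, a % b)
        return g, z, y - (a // b) * z

    def inverse_mod(x, p):
        """Inverse of x modulo a prime p, following the argument above."""
        g, y, _ = extended_gcd(x % p, p)
        assert g == 1          # holds whenever p is prime and p does not divide x
        return y % p

    print(inverse_mod(3, 7))   # 5, since 3*5 = 15 is congruent to 1 mod 7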

Exercise 3.15. Use Euclid's algorithm to find the inverse of 14 in F101.

3.6. Fields of Fractions.

Definition. Let R be an integral domain. A field K is said to be the field of fractions of R if
• K contains R as a subring;
• every element α ∈ K is expressible as a/b for some a, b ∈ R and b ≠ 0.

Example 3.16. Q is the field of fractions of Z. Q(x) is the field of fractions of Q[x] and of Z[x].

Theorem 2. Every integral domain has a field of fractions (that is unique up to isomorphism).

3.7. Prime and Maximal Ideals.

Definition. Let R be a ring. An ideal ℘ of R is said to be a prime ideal if ab ∈ ℘ implies a ∈ ℘ or b ∈ ℘, for all a, b ∈ R. An ideal m of R is said to be maximal if it is a proper ideal and is not contained in any other proper ideal.

Exercise 3.17. Show that the zero ideal (0) is prime if and only if R is an integral domain.

Exercise 3.18. Suppose p, q are distinct prime numbers. Let R = Z/pqZ (the integers modulo pq). Show that pR and qR are prime ideals.

The proof of the following theorem is an easy exercise.

Theorem 3. Let R be a ring. An ideal ℘ is a prime ideal if and only if R/℘ is an integral domain. An ideal m is maximal if and only if R/m is a field.

3.8. Unique Factorization Domains.

Definition. Let R be a ring. A non-zero element x is called irreducible if x is not a unit and whenever x = ab with a, b ∈ R, one of a, b must be a unit.

Example 3.19. In Z the irreducible elements are of the form ±p where p is prime. The composite elements are of the form ±n where n is composite.

Example 3.20. 2 is irreducible in Z but not in Z[i], since 2 = (1 + i)(1 − i) and neither (1 + i) nor (1 − i) is a unit.

Definition. Two elements x, y ∈ R are called associates (written x ∼ y) if there is a unit u ∈ R such that x = uy.

Definition. A ring R is a unique factorization domain if it satisfies the following three conditions:
• R is an integral domain;
• Every non-zero, non-unit x ∈ R can be written as a product x = q1 ··· qr of irreducible elements;
• The decomposition of x into irreducibles is unique up to units and permutation of factors. This means that if x = q′1 ··· q′s is another factorization into irreducibles then r = s and, after possibly relabeling, we have qi ∼ q′i for all i = 1, . . . , r.

4. Algebraic Numbers and Algebraic Integers

We now introduce the main objects of study of algebraic number theory.

Definition. Let α ∈ C. We say that α is an algebraic number if there is some non-zero polynomial f(x) ∈ Q[x] (i.e. it has rational coefficients) such that f(α) = 0. We say that α ∈ C is an algebraic integer if there is some monic polynomial f(x) ∈ Z[x] (i.e. with integral coefficients) such that f(α) = 0.

Clearly every algebraic integer is an algebraic number. We will see that the converse need not hold. We denote the set of algebraic numbers by Q̄ and the set of algebraic integers by O. Thus

O ⊂ Q̄ ⊂ C.

Lemma 4.1. Z ⊂ O and Q ⊂ Q̄.

Proof. If α ∈ Z then α is the root of X − α which is a monic polynomial in Z[X], and so α ∈ O. Thus Z ⊂ O, and similarly Q ⊂ Q̄. □

Example 4.1. √−2 is an algebraic integer because it is a root of the monic polynomial with integral coefficients X^2 + 2. Let α = √2 + √3. We will show that α is also an algebraic integer. Note

α^2 = 5 + 2√6.

Hence (α^2 − 5)^2 = 24. In other words, α = √2 + √3 is a root of

f(X) = (X^2 − 5)^2 − 24 = X^4 − 10X^2 + 1.

Since f(X) is monic with integral coefficients, it follows that α is an algebraic integer.

Example 4.2. Not every complex number is algebraic. Complex numbers that are not algebraic numbers are called transcendental. Examples of transcendental numbers are e and π. We shall not prove this as we do not need it. For proofs, see Stewart's Galois Theory.
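A quick numerical sanity check of Example 4.1 (ours, using only the Python standard library; the tolerance 1e-9 is an arbitrary choice):

    from math import sqrt, isclose

    alpha = sqrt(2) + sqrt(3)
    value = alpha ** 4 - 10 * alpha ** 2 + 1     # f(alpha) for f(X) = X^4 - 10X^2 + 1
    print(value)                                  # ~0 up to floating point error
    print(isclose(value, 0.0, abs_tol=1e-9))      # True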

The elements of Z are called the rational integers. The reason is found in the following theorem, which says that any algebraic integer which is also a rational number must belong to Z.

Theorem 4. (Gauss) O ∩ Q = Z.

Proof. We know that Z ⊂ Q and Z ⊂ O so Z ⊂ O ∩ Q. It is enough to prove that O ∩ Q ⊂ Z. Suppose α ∈ O ∩ Q; we want to show that α ∈ Z. Thus there is some polynomial f(X), monic with coefficients in Z, such that f(α) = 0. Write

f(X) = X^n + c_{n−1}X^{n−1} + ··· + c0,   ci ∈ Z.

Since α is rational, we may write α = a/b where a, b are integers, b > 0 and gcd(a, b) = 1. Substituting we get

(a/b)^n + c_{n−1}(a/b)^{n−1} + ··· + c0 = f(α) = 0.

Multiplying by b^n we obtain

a^n + c_{n−1}a^{n−1}b + ··· + c0 b^n = 0.

Rearranging we get

b(−c_{n−1}a^{n−1} − c_{n−2}a^{n−2}b − ··· − c0 b^{n−1}) = a^n,

where the bracketed factor is in Z. We deduce that b | a^n. Recall b > 0. We want to show that α = a/b is in Z and for this it is enough to show that b = 1. Suppose that b ≠ 1 and we will derive a contradiction. Then some prime p must divide b. So p | a^n which implies p | a. This contradicts the assumption that gcd(a, b) = 1. Hence b = 1 and α ∈ Z as required. □

Example 4.3. All the examples of algebraic numbers that we have seen so far have been algebraic integers. Thanks to the above theorem, we can now give examples of algebraic numbers that are not algebraic integers: 1/2, −3/4, etc. Any rational number that is not an integer is an algebraic number that is not an algebraic integer.

Exercise 4.4. Show that the following are algebraic numbers: √3, √3 + √−3, e^{2πi/7}, cos(2πi/3).

Eventually we will show that Q̄ is a subfield of C and that O is a subring of C. For this we need symmetric polynomials, which we will cover soon. In the meantime we content ourselves with proving the following Lemma.

Lemma 4.2. Suppose α ∈ Q̄ and α ≠ 0. Then α^{−1} ∈ Q̄.

Proof. Suppose α ∈ Q̄ and α ≠ 0. Then α is a root of some non-zero polynomial f(X) with rational coefficients. Write

f(X) = cn X^n + c_{n−1}X^{n−1} + ··· + c0.

Then

cn α^n + c_{n−1}α^{n−1} + ··· + c0 = 0.

Dividing by α^n we get

cn + c_{n−1}α^{−1} + ··· + c0 α^{−n} = 0.

Hence α^{−1} is a root of the non-zero polynomial

g(X) = c0 X^n + ··· + c_{n−1}X + cn,

which has rational coefficients; implying α^{−1} ∈ Q̄. □

5. Minimal Polynomials

We recall the definition of an algebraic number: α ∈ C is said to be algebraic if there is some non-zero polynomial f ∈ Q[x] such that f(α) = 0. For any algebraic α there are of course infinitely many such f. For example, if α = i, we may take f to be any of

x^2 + 1,   (x^2 + 1)(x − 7),   (x^2 + 1)^3, . . . .

It turns out that the 'best' choice for f is the one with least degree. By 'best' here we mean the choice which most closely reflects the properties of α. To such an f we give a special name:

Definition. Suppose α ∈ C is an algebraic number. We define the minimal polynomial of α, which we denote by fα, to be the monic polynomial with rational coefficients and least possible degree satisfying fα(α) = 0.

As with any other definition in Mathematics, we must be concerned with existence and uniqueness. If α is an algebraic number, there is by definition some non-zero polynomial f with rational coefficients satisfying f(α) = 0. We can make f monic simply by dividing f by the leading coefficient. Out of all such polynomials we take fα to be the one of least possible degree, which will be the minimal polynomial. Of course here we run into the problem of uniqueness, for there may be two or more such polynomials of minimal degree. We prove below that minimal polynomials are unique, so until then read 'let fα be a minimal polynomial for α' when you see the phrase 'let fα be the minimal polynomial for α'.

Lemma 5.1. Let α be an algebraic number and fα ∈ Q[x] be its minimal polynomial. Then
(i) fα is irreducible¹;
(ii) if g ∈ Q[x] satisfies g(α) = 0 then fα | g.

Proof. For (i) suppose otherwise. Then fα(x) = g(x)h(x) where g, h are polynomials with rational coefficients and deg(g), deg(h) < deg(fα); as fα is monic we can suppose that g and h are monic (check this). Since fα(α) = 0 we see that either g(α) = 0 or h(α) = 0. This contradicts the fact that fα is the monic polynomial of least degree satisfying fα(α) = 0. This proves (i).

We now turn to (ii). Suppose that g ∈ Q[x] satisfies g(α) = 0. By Euclid's algorithm we know that

g(x) = q(x)fα(x) + r(x),   q(x), r(x) ∈ Q[x],

where either r = 0, or otherwise deg(r) < deg(fα). We want to show that r = 0; in this case g(x) = q(x)fα(x), showing that fα | g, and this is what we desire to prove. So let us assume that r ≠ 0, and hence r is a non-zero polynomial with rational coefficients of degree strictly less than that of fα. Substituting α for x in the above and recalling that g(α) = fα(α) = 0 leads us at once to conclude that r(α) = 0. We now divide r by its leading coefficient to obtain a monic polynomial s ∈ Q[x] with degree strictly less than that of fα and satisfying s(α) = 0. This is a contradiction and proves (ii).

There is a slightly different way of proving (ii). We know from (i) that fα is irreducible. Suppose fα ∤ g. Then the polynomials fα and g are coprime. By Euclid there are polynomials u, v ∈ Q[x] such that

u(x)fα(x) + v(x)g(x) = 1.

Substituting α for x we obtain 0 = 1, which is a contradiction. □

We now prove the uniqueness of the minimal polynomial.

Corollary 5.2. Suppose α is an algebraic number. The minimal polynomial fα is unique.

¹ Of course we mean here that fα is irreducible over Q. Over C we know by the Fundamental Theorem of Algebra that we may factorize it as a product of linear factors.

Proof. Suppose there are two minimal polynomials f, g for α. By part (ii) of the above Lemma we know that f | g and g | f. Hence f = λg for some non-zero rational number λ. But f and g are both monic, and so λ = 1. □

Theorem 5. Suppose α is an algebraic number. A polynomial f ∈ Q[x] is the minimal polynomial of α if and only if f is monic and irreducible and satisfies f(α) = 0.

Proof. If f is the minimal polynomial of α then f is monic and satisfies f(α) = 0 by definition, and f is irreducible by part (i) of Lemma 5.1. Conversely suppose that f is monic and irreducible and satisfies f(α) = 0. By part (ii) of Lemma 5.1 we see that fα | f. As f is irreducible, fα is a constant or fα = λf for some non-zero rational λ. Since fα is monic and fα(α) = 0 we see that fα is non-constant. Hence fα = λf for some non-zero rational λ. But fα and f are both monic, so λ = 1, proving that f = fα as desired. □

Example 5.1. Let f(X) = X^2 − 2; it is monic, has rational coefficients, is irreducible and satisfies f(√2) = 0. By the above theorem, f is the minimal polynomial of √2. Notice that it would not have been straightforward to deduce this fact from the definition of minimal polynomial.
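Minimal polynomials of explicit algebraic numbers can also be computed mechanically. The sketch below is ours and assumes the SymPy library; it recovers the minimal polynomials from Example 5.1 and Example 4.1.

    from sympy import sqrt, minimal_polynomial
    from sympy.abc import x

    print(minimal_polynomial(sqrt(2), x))            # x**2 - 2, as in Example 5.1
    print(minimal_polynomial(sqrt(2) + sqrt(3), x))  # x**4 - 10*x**2 + 1, as in Example 4.1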

6. Conjugates

Definition. Suppose that α is an algebraic number and let fα be its minimal polynomial. We define the degree of α to be the degree of fα. We define the conjugates of α to be the roots of fα.

By the Fundamental Theorem of Algebra we may factorize fα over C into a product of linear factors

fα(X) = (X − α1)(X − α2) ··· (X − αn).

Then α1, . . . , αn are the conjugates of α. Notice that α is one of them. Here n is the number of conjugates of α, and at the same time the degree of fα (which by definition is the degree of α). Thus an algebraic number with degree n has n conjugates. The reader probably expects us to say that an algebraic number with degree n has n conjugates up to repetition, since polynomials can have repeated roots. However, there is no repetition involved here since minimal polynomials (being irreducible) do not have repeated roots, thanks to the following theorem.

Theorem 6. Suppose f ∈ Q[x] is an irreducible polynomial. Then f does not have repeated roots in C. In particular, this applies to minimal polynomials of algebraic numbers. Thus an algebraic number has distinct conjugates.

Proof. Suppose f ∈ Q[x] is irreducible, but has a repeated root β ∈ C. Consider the derivative f′ of f. Clearly f′ ∈ Q[x]. Since f is irreducible and f′ has degree less than f, we know that f and f′ are coprime. By Euclid there are some polynomials u, v ∈ Q[x] such that

(3)   u(x)f(x) + v(x)f′(x) = 1.

However, β ∈ C is a repeated root of f. Thus

f(x) = (x − β)^2 g(x),

where g(x) is a polynomial with coefficients in C. Differentiating we see that

f′(x) = (x − β)^2 g′(x) + 2(x − β)g(x).

It is clear that f(β) = f′(β) = 0. Substituting β for x in (3) gives 0 = 1, which is a contradiction. □

Example 6.1. By Theorem 5, f = X^2 − 3 is the minimal polynomial of √3. Thus the conjugates of √3 are √3 and −√3.

7. Factorization of Polynomials

So far we have concerned ourselves with minimal polynomials of algebraic numbers and have neglected algebraic integers. To recall, we say α ∈ C is an algebraic integer if there exists some monic polynomial f with coefficients in Z such that f(α) = 0. Of course, algebraic integers are algebraic numbers and so anything that applies to algebraic numbers must apply to algebraic integers. It is however natural to ask whether the minimal polynomial of an algebraic integer must have coefficients in Z. Notice that there is no reason a priori (on the basis of the above definition) to expect this. We do not know that the polynomial f is minimal. We know that α has some minimal polynomial fα which is monic, has rational coefficients, is irreducible and satisfies fα(α) = 0. As to the relationship between fα and the polynomial f in the above definition, all we can conclude is that fα | f. Since f has integral coefficients, should we expect that fα has integral coefficients? It turns out that the answer is yes. For this we need the following theorem of Gauss.

Theorem 7. (Gauss) Suppose f is a polynomial with coefficients in Z. Suppose that f(x) = g(x)h(x) where g, h are polynomials with coefficients in Q. Then there is some non-zero rational number λ such that g* = λg and h* = λ^{−1}h both have coefficients in Z, and therefore we may factorize f over Z as f = g*h*.

Before proving Gauss' Theorem we need the following Lemma.

Lemma 7.1. Suppose R is an integral domain. Then R[x] is also an integral domain.

Proof. Suppose R is an integral domain. Suppose f, g are non-zero elements of R[x]. We would like to show that fg is also non-zero. For this write

f(x) = am x^m + a_{m−1}x^{m−1} + ··· + a0,   a0, . . . , am ∈ R,

and

g(x) = bn x^n + b_{n−1}x^{n−1} + ··· + b0,   b0, . . . , bn ∈ R,

with am ≠ 0, bn ≠ 0. Thus

f(x)g(x) = am bn x^{m+n} + lower terms.

As R is an integral domain and am, bn are non-zero we see that am bn ≠ 0. Thus fg is non-zero as required. □

We may now return to prove Gauss' Theorem.

Proof of Theorem 7. By clearing denominators we may write

(4)   n f(x) = g1(x)h1(x),

where n is a positive integer and g1, h1 are polynomials with coefficients in Z which are multiples of g, h. Write

g1(x) = am x^m + a_{m−1}x^{m−1} + ··· + a0,   h1(x) = bn x^n + b_{n−1}x^{n−1} + ··· + b0,

where the coefficients are in Z. We will eliminate the prime factors of n, one at a time, until we reach the desired factorization f = g*h* where g* and h* have coefficients in Z. If n = 1 then of course there is nothing to prove. Suppose n > 1 and let p be any prime factor of n. Reducing (4) modulo p we obtain

(5)   0 = ḡ1(x)h̄1(x),

where ḡ1, h̄1 denote the reductions of g1, h1 modulo p:

ḡ1(x) = ām x^m + ā_{m−1}x^{m−1} + ··· + ā0,   h̄1(x) = b̄n x^n + b̄_{n−1}x^{n−1} + ··· + b̄0.

The equality (5) takes place in Fp[x]. But Fp is an integral domain (indeed it is a field), and so by the above Lemma Fp[x] is an integral domain. Hence either ḡ1 = 0 or h̄1 = 0. Without loss of generality, let us say that ḡ1 = 0. This means that all the coefficients of g1 are divisible by p. Let g2 = g1/p; this has integral coefficients. Let h2 = h1. Thus

n′ f = g2 h2,

where n′ = n/p and g2, h2 have coefficients in Z. We repeat this until we have eliminated all prime factors of n. □

We now deduce our desired result.

Theorem 8. Suppose α is an algebraic number. Then α is an algebraic integer if and only if its minimal polynomial fα has coefficients in Z.

Proof. If fα has integral coefficients, then α is an algebraic integer by the very definition of algebraic integers. Conversely suppose that α is an algebraic integer. By definition, h(α) = 0 for some monic polynomial h with coefficients in Z. By Lemma 5.1 we know that fα | h. Thus we may write

h(x) = fα(x)g(x),

where g(x) ∈ Q[x]. By Gauss' Theorem above, there is some non-zero rational λ such that λfα and λ^{−1}g(x) both have integral coefficients. But both h and fα are monic, and therefore g is monic. Examining the leading coefficients of λfα and λ^{−1}g(x) we see that both λ and λ^{−1} are integers. Thus λ = ±1. Hence ±fα(x) has integral coefficients which implies that fα(x) has integral coefficients, as desired. □

8. Eisenstein’s Criterion For Irreducibility Theorem 9. Let n n−1 f(x) = anx + an−1x + ··· + a0, be a polynomials with coefficients in Z, satisfying the following three conditions: (i) p - an; (ii) p | ai for 1 ≤ i ≤ n − 1; 2 (iii) p - a0. Then f is irreducible over Q. ALGEBRAIC NUMBER THEORY 15

Proof. We prove this by contradiction. Suppose that f is not irreducible over Q. By Gauss' Theorem we know that f(x) = g(x)h(x) where g, h are polynomials with coefficients in Z and degrees strictly less than that of f. Write

g(x) = br x^r + ··· + b0,   h(x) = cs x^s + ··· + c0.

Note that n = r + s and an = br cs. By assumption (i), an is not divisible by p and so neither br nor cs are. Reducing the identity f(x) = g(x)h(x) modulo p we obtain (using (ii))

ḡ(x)h̄(x) = ān x^n;

here the equality takes place in Fp[x]. The only possible way to factorize x^n as a product of two factors is x^u x^v for some positive u, v satisfying u + v = n. However, ḡ(x), h̄(x) are polynomials of degrees r, s with leading coefficients b̄r, c̄s respectively. We deduce that

ḡ(x) = b̄r x^r,   h̄(x) = c̄s x^s.

Comparing the coefficients of g, h with those of ḡ, h̄ we see that p | b0 and p | c0. However, a0 = b0 c0. Thus p^2 | a0, contradicting (iii). □

Example 8.1. Let f(x) = x^7 − 9x + 3. Letting p = 3 in Eisenstein's criterion, we immediately see that f is irreducible.

Many polynomials do not immediately satisfy the conditions of Eisenstein's criterion, but do satisfy them after making an appropriate substitution. For example, take g(x) = 8x^3 − 6x + 1. Eisenstein's criterion does not apply to g regardless of the prime p chosen. Now let

h(x) = g(x + 1) = 8(x + 1)^3 − 6(x + 1) + 1 = 8x^3 + 24x^2 + 18x + 3.

We see that Eisenstein's criterion applies to h with p = 3. Thus h is irreducible. But if g was reducible then h would also be reducible because of the relation h(x) = g(x + 1). Hence g is irreducible.
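Eisenstein's criterion is mechanical enough to check by computer. The following Python sketch is ours (the function name eisenstein and the coefficient convention, constant term first, are our own choices):

    def eisenstein(coeffs, p):
        """Check Eisenstein's criterion at the prime p.

        coeffs = [a0, a1, ..., an] are integer coefficients, constant term first.
        Returns True if (i) p does not divide an, (ii) p divides a0, ..., a_{n-1},
        and (iii) p^2 does not divide a0; in that case f is irreducible over Q.
        """
        a0, an = coeffs[0], coeffs[-1]
        return (an % p != 0
                and all(a % p == 0 for a in coeffs[:-1])
                and a0 % (p * p) != 0)

    print(eisenstein([3, -9, 0, 0, 0, 0, 0, 1], 3))   # x^7 - 9x + 3 (Example 8.1): True
    print(eisenstein([3, 18, 24, 8], 3))              # 8x^3 + 24x^2 + 18x + 3: True
    print(eisenstein([1, -6, 0, 8], 3))               # 8x^3 - 6x + 1: False (criterion silent)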

9. Symmetric Polynomials

This section is based on the corresponding section in Stewart and Tall. Let Z[t1, . . . , tn] be the ring of polynomials in indeterminates t1, . . . , tn with coefficients in Z. Let Sn be the symmetric group of permutations on the set {1, 2, . . . , n}. For any permutation π ∈ Sn and any polynomial f ∈ Z[t1, . . . , tn] we define the polynomial f^π by

f^π(t1, . . . , tn) = f(t_{π(1)}, . . . , t_{π(n)}).

For example, if n = 5, f = t1^2 + t2t3 + t4 − t5 and π = (132)(45) then f^π = t3^2 + t1t2 + t5 − t4.

Definition. We call a polynomial f ∈ Z[t1, . . . , tn] symmetric if f^π = f for all π ∈ Sn.

For example, if n = 3, then t1 + t2 + t3 and t1t2t3 are symmetric. Perhaps less obvious is the fact that t1t2 + t2t3 + t3t1 is also symmetric. For general n, we define

s1 = Σ_{i=1}^{n} ti,   s2 = Σ_{1≤i<j≤n} ti tj,   s3 = Σ_{1≤i<j<k≤n} ti tj tk,   . . . ,   sn = t1 t2 ··· tn.

The polynomials s1, . . . , sn are called the elementary symmetric polynomials in t1, . . . , tn. Note the identity

(X − t1)(X − t2) ··· (X − tn) = X^n − s1 X^{n−1} + s2 X^{n−2} − ··· + (−1)^n sn.

If we act on t1, . . . , tn by any permutation π ∈ Sn then we will leave the left-hand side of this identity unchanged, and so the coefficients on the right-hand side must be unchanged. This shows that the si are indeed symmetric functions. It is easy to see that any polynomial in s1, . . . , sn with coefficients in Z is a symmetric function of t1, . . . , tn. We would like to prove the converse.

Theorem 10. (Newton) Every symmetric polynomial in Z[t1, . . . , tn] is expressible as a polynomial in s1, . . . , sn with coefficients in Z.

Before proving the theorem, we will give an example to illustrate what we mean.

Example 9.1. Let n = 2. The polynomial

f = t1^2 + t2^2

is obviously symmetric. We can rewrite it as

f = t1^2 + t2^2 = (t1 + t2)^2 − 2t1t2 = s1^2 − 2s2.

Hence f can be written as a polynomial in s1, s2 with coefficients in Z.

Proof of Theorem 10. We give a constructive argument for writing a symmetric polynomial in terms of s1, . . . , sn. First we define an ordering on the monomials t1^{a1} ··· tn^{an}. We order the monomials 'lexicographically':

t1^{a1} ··· tn^{an} > t1^{b1} ··· tn^{bn}   iff   a1 > b1, or (a1 = b1 and a2 > b2), or (a1 = b1 and a2 = b2 and a3 > b3), etc.

Suppose that f is a symmetric polynomial and let c t1^{a1} ··· tn^{an} be the monomial appearing in f which is biggest according to the lexicographic ordering. We claim a1 ≥ a2; suppose otherwise that a1 < a2. Since f is symmetric, f also has the monomial c t1^{a2} t2^{a1} t3^{a3} ··· tn^{an} and this is bigger than c t1^{a1} ··· tn^{an}, giving a contradiction. Therefore a1 ≥ a2 and similarly a2 ≥ a3 and so on. In other words

a1 ≥ a2 ≥ a3 ≥ · · · ≥ an. Now write

k1 = a1 − a2, k2 = a2 − a3, . . . , kn−1 = an−1 − an, kn = an.

These are all non-negative since a1 ≥ a2 ≥ a3 ≥ · · · an. Consider

s1^{k1} s2^{k2} ··· sn^{kn} = (t1 + ··· + tn)^{k1} (t1t2 + ··· )^{k2} ··· (t1 ··· tn)^{kn}.

The biggest monomial in this expression is

t1^{k1} (t1t2)^{k2} (t1t2t3)^{k3} ··· = t1^{k1+k2+···+kn} t2^{k2+···+kn} ··· tn^{kn} = t1^{a1} t2^{a2} ··· tn^{an}.

Therefore, the biggest monomials in f and in c s1^{k1} s2^{k2} ··· sn^{kn} are the same, and the polynomial

f2 = f − c s1^{k1} s2^{k2} ··· sn^{kn}

will contain only smaller monomials. Our objective was to show that f can be written as a polynomial in s1, . . . , sn with coefficients in Z. It is sufficient to do that

t1^{u1} t2^{u2} ··· tn^{un},   u1 ≥ u2 ≥ ··· ≥ un ≥ 0.

Clearly this process will stop eventually, leaving us with a polynomial in s1, . . . , sn with integer coefficients. □

Example 9.2. Express f = t1^2 t2^2 + t2^2 t3^2 + t3^2 t1^2 in terms of elementary symmetric functions.

Answer: We follow the recipe in the above proof. The biggest monomial of f is t1^2 t2^2. Hence a1 = 2, a2 = 2, a3 = 0. Thus k1 = 2 − 2 = 0, k2 = 2 − 0 = 2, k3 = 0. The above recipe suggests that we subtract s1^0 s2^2 s3^0 = s2^2:

f − s2^2 = t1^2 t2^2 + t2^2 t3^2 + t3^2 t1^2 − (t1t2 + t2t3 + t3t1)^2 = −2t1^2 t2 t3 − 2t2^2 t1 t3 − 2t3^2 t1 t2.

The biggest monomial now is −2t1^2 t2 t3. Here a1 = 2, a2 = 1, a3 = 1. Thus k1 = 2 − 1 = 1, k2 = 1 − 1 = 0 and k3 = 1. We subtract −2s1^1 s2^0 s3^1. In other words, we add 2s1s3:

f − s2^2 + 2s1s3 = 0.

It follows that f = s2^2 − 2s1s3.

10. Algebraic Integers form a Ring

We defined O to be the set of algebraic integers. In this section we show that O is a subring of C. We recall first the elementary symmetric polynomials in indeterminates t1, . . . , tn:

s1 = Σ_{i=1}^{n} ti,   s2 = Σ_{1≤i<j≤n} ti tj,   s3 = Σ_{1≤i<j<k≤n} ti tj tk,   . . . ,   sn = t1 t2 ··· tn.
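Before the next theorem, a numerical illustration (ours, assuming the NumPy library): take α = √2 + √3 from Example 4.1, whose conjugates are the four roots of x^4 − 10x^2 + 1, and evaluate the elementary symmetric polynomials at those conjugates. Up to rounding error the values are 0, −10, 0, 1, which are (up to sign) the coefficients of the minimal polynomial, as the lemma below predicts.

    import numpy as np
    from itertools import combinations

    conjugates = np.roots([1, 0, -10, 0, 1])   # roots of x^4 - 10x^2 + 1

    def elementary_symmetric(values, k):
        """s_k evaluated at the given numbers: sum of all products of k of them."""
        return sum(np.prod(c) for c in combinations(values, k))

    for k in range(1, 5):
        print(k, elementary_symmetric(conjugates, k))   # approximately 0, -10, 0, 1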

Theorem 11. Suppose α is an algebraic integer of degree n and let α1, . . . , αn be its conjugates. Let h(t1, . . . , tn) ∈ Z[t1, . . . , tn] be a symmetric polynomial. Then h(α1, . . . , αn) ∈ Z. For the proof of the theorem we need the following lemma, which in fact is a special case of the theorem.

Lemma 10.1. Suppose α is an algebraic integer of degree n and let α1, . . . , αn be its conjugates. Let s1, . . . , sn be the elementary symmetric polynomials in t1, . . . , tn as above. Then

s1(α1, . . . , αn),   s2(α1, . . . , αn),   . . . ,   sn(α1, . . . , αn) ∈ Z.

Proof. Recall the identity

(X − t1)(X − t2) ··· (X − tn) = X^n − s1 X^{n−1} + s2 X^{n−2} − ··· + (−1)^n sn.

Now substitute α1, . . . , αn for t1, . . . , tn. We obtain

(X − α1)(X − α2) ··· (X − αn) = X^n − s1(α1, . . . , αn) X^{n−1} + s2(α1, . . . , αn) X^{n−2} − ··· + (−1)^n sn(α1, . . . , αn).

Since α1, . . . , αn are the conjugates of α, we recognize the polynomial on the left as the minimal polynomial of α. As α is an algebraic integer, its minimal polynomial has integral coefficients, and the lemma follows. □

Proof of Theorem 11. Suppose h(t1, . . . , tn) ∈ Z[t1, . . . , tn] is symmetric. By New- ton’s Theorem (Theorem 10), h can be expressed as a polynomial in s1, . . . , sn with coefficients in Z. Lemma 10.1 tells us that

s1(α1, . . . , αn), s2(α1, . . . , αn), . . . , sn(α1, . . . , αn) ∈ Z.

Hence h(α1, . . . , αn) ∈ Z. 

We are now ready to prove our goal.

Theorem 12. The set of algebraic integers O is a subring of C.

Proof. We know that Z ⊂ O, therefore O ≠ ∅. To show that O is a subring, it is enough to show that if α, β ∈ O then α + β, −α, αβ ∈ O. Suppose α, β ∈ O and let us prove that α + β ∈ O. To do this we construct a monic polynomial h with coefficients in Z such that h(α + β) = 0. First let

fα(x) = x^m + a_{m−1}x^{m−1} + ··· + a0 ∈ Z[x]

be the minimal polynomial of α. Suppose β has degree n and let β1, . . . , βn be its conjugates, where we take β1 = β. Consider the product

(6)   fα(x − t1) fα(x − t2) ··· fα(x − tn) = x^{mn} + u_{mn−1}(t1, . . . , tn) x^{mn−1} + ··· + u0(t1, . . . , tn).

As fα has coefficients in Z, it is clear that ui(t1, . . . , tn) ∈ Z[t1, . . . , tn]. Note that if we permute the tj then we permute the factors of the left-hand side of (6), and so the product is unchanged. Hence the coefficients ui(t1, . . . , tn) are unchanged on permuting the tj; in other words they are symmetric polynomials. By Theorem 11 we deduce that

(7) ui(β1, . . . , βn) ∈ Z.

Now substitute β1, . . . , βn for t1, . . . , tn in (6), and let h(x) be the resulting poly- nomial. We obtain

h(x) = fα(x − β1) fα(x − β2) ··· fα(x − βn)
     = x^{mn} + u_{mn−1}(β1, . . . , βn) x^{mn−1} + ··· + u0(β1, . . . , βn).

Now h(x) is monic. Moreover, h(x) ∈ Z[x] by (7). Finally,

h(α + β) = fα(α + β − β1)fα(α + β − β2) . . . fα(α + β − βn)

= fα(α)fα(α + β − β2) . . . fα(α + β − βn) since β1 = β

= 0 · fα(α + β − β2) . . . fα(α + β − βn) = 0. This shows indeed that α + β ∈ O. The proofs that −α and αβ ∈ O are left as exercises. 

Example 10.1. Let α be a root of the polynomial x^3 + 4x + 2. What is the minimal polynomial of α + √2?

Answer: Note that the polynomial f(x) = x^3 + 4x + 2 is irreducible by Eisenstein's criterion (take p = 2). Hence it is the minimal polynomial of α. We follow the steps in the above proof taking β = √2. The conjugates of β are β1 = √2 and β2 = −√2.

Then α + β is a root of

h(x) = f(x − β1) f(x − β2)
     = f(x − √2) f(x + √2)
     = ((x^3 + 10x + 2) − (3x^2 + 6)√2) ((x^3 + 10x + 2) + (3x^2 + 6)√2)
     = (x^3 + 10x + 2)^2 − 2(3x^2 + 6)^2
     = x^6 + 2x^4 + 4x^3 + 28x^2 + 40x − 68.

We haven't proved that h is the minimal polynomial for α + √2. To do this we must show that h is irreducible, which we leave as an exercise for the reader.
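The product h(x) = f(x − β1)f(x − β2) can also be computed as a resultant, eliminating a variable t that runs over the conjugates of √2. The SymPy sketch below is ours and is only a check of the computation above (a resultant is determined up to sign; here the sign works out to +):

    from sympy import resultant, expand, symbols

    x, t = symbols('x t')
    f = x**3 + 4*x + 2                      # minimal polynomial of alpha
    g = t**2 - 2                            # minimal polynomial of beta = sqrt(2)

    # Res_t( f(x - t), g(t) ) = f(x - sqrt(2)) * f(x + sqrt(2))
    h = expand(resultant(f.subs(x, x - t), g, t))
    print(h)    # expected: x**6 + 2*x**4 + 4*x**3 + 28*x**2 + 40*x - 68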

11. Algebraic Numbers form a Field

Theorem 13. The set of algebraic numbers Q̄ is in fact a subfield of C.

Proof. We know that Q ⊂ Q̄. Hence Q̄ is non-empty. We need to show that if α, β are in Q̄, then so are α + β, −α, αβ and α^{−1} (for non-zero α). The last we showed in Lemma 4.2. The first three are similar to the corresponding proofs for algebraic integers and so we leave them to the reader. □

12. Number Fields Suppose L is a field and K a subfield. It is easy to see that L is a vector space over K (you can add and subtract elements of L, and you can multiply them by elements of K—check that the vector space axioms follow from the field axioms). We denote the dimension of L regarded as a vector space over K by [L : K], and we say that L is a field extension of K of degree [L : K]. If the degree [L : K] is finite, we say that L is a finite extension of K. Notice that this phrase does not mean that the field L is finite (like Fp) but that L, as a vector space over K, has finite dimension. We finally come to the main object of study of algebraic number theory.

Definition. A number field K is a subfield of C that is a finite extension of Q. The degree of K is [K : Q]. A quadratic number field is a number field of degree 2, a cubic number field is a number field of degree 3 and so on.

Notice that any subfield K of C will contain the rationals. To see this note first that 1 ∈ K because K is a subfield. Now repeated addition of 1 will show that the natural numbers are in K, and 'minusing' that the integers are in K. Taking ratios shows that the rationals are contained in K. However we do not call K a number field unless it is a finite extension of Q.

Example 12.1. In previous courses you probably defined

Q(√2) = {a + b√2 : a, b ∈ Q},

and proved that this is a field. We see that 1, √2 is a basis for Q(√2) as a Q-vector space. Hence Q(√2) is a number field of degree 2; in other words it is a quadratic number field.

Definition. Suppose α is an algebraic number. Define

Q(α) = { g(α)/h(α) : g, h ∈ Q[x], h(α) ≠ 0 }.

We call Q(α) the field generated by α.

Exercise 12.2. Check that Q(α) is indeed a field.

Theorem 14. Suppose α is an algebraic number of degree n. Then Q(α) is a number field of degree n. The set 1, α, α^2, . . . , α^{n−1} is a Q-basis for Q(α).

In other words, every element of Q(α) can be expressed uniquely as a linear combination of 1, α, α^2, . . . , α^{n−1}. This shows that our old definition of Q(√2) is consistent with the new one.

Proof of Theorem 14. It is sufficient to show that the set 1, α, α^2, . . . , α^{n−1} is a Q-basis for Q(α). Suppose β ∈ Q(α); we would like to show that β can be written as a Q-linear combination of 1, α, α^2, . . . , α^{n−1}. By definition of Q(α), we know that β = g(α)/h(α) where g, h ∈ Q[x] and h(α) ≠ 0. Our first step is to invert h(α). Let fα be the minimal polynomial of α. Then h(α) ≠ 0 implies that fα ∤ h; since fα is irreducible we deduce that fα and h are coprime. By Euclid there exist polynomials u, v ∈ Q[x] such that

u(x)h(x) + v(x)fα(x) = 1.

Substituting α for x we obtain u(α) = 1/h(α). Hence β = g(α)/h(α) = g(α)u(α). Let w(x) = g(x)u(x) ∈ Q[x]. Thus β = w(α). Thus we have written β as a polynomial in α. In other words, we have written β as a linear combination of 1, α, α^2, . . . , α^m where m is the degree of w. Our problem is that m might be greater than n − 1. Recall that n is the degree of α. We know that this is also the degree of fα. By Euclid again we know that there are two polynomials q, r ∈ Q[x] such that

w(x) = q(x)fα(x) + r(x),   deg(r) < deg(fα).

Since deg(r) < deg(fα) = n we can write

r(x) = a0 + a1 x + ··· + a_{n−1}x^{n−1},   ai ∈ Q.

Substitute α for x to obtain

β = w(α) = r(α) = a0 + a1 α + ··· + a_{n−1}α^{n−1}.

Thus every element of Q(α) can be written as a Q-linear combination of 1, α, . . . , α^{n−1}. To complete the proof that 1, α, . . . , α^{n−1} is a basis, we must of course show that it is linearly independent. We leave this to the reader. □

To make sure you understood the proof of the above theorem, do the following exercise.

Exercise 12.3. Let f(X) = X^3 + X^2 + 1.
(i) Show that f is irreducible.
(ii) Let θ be a root of f and K = Q(θ). Write the following elements as Q-linear combinations of 1, θ, θ^2:

(θ + 1)^4,   1/(θ^2 − 1),   (θ + 1)/(θ^2 + 1).
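The two applications of Euclid's algorithm in the proof of Theorem 14 are easy to carry out with a computer algebra system. The sketch below is ours and assumes SymPy; so as not to spoil Exercise 12.3 it works in a different field, writing 1/(θ + 1) in terms of 1, θ, θ^2 for θ a root of x^3 − 2.

    from sympy import gcdex, rem, symbols

    x = symbols('x')
    f = x**3 - 2      # minimal polynomial of theta (irreducible by Eisenstein, p = 2)
    h = x + 1         # we want 1/(theta + 1) as a Q-linear combination of 1, theta, theta^2

    u, v, g = gcdex(h, f, x)       # u*h + v*f = g; here g = 1 since f is irreducible, f does not divide h
    beta_inverse = rem(u, f, x)    # second Euclid step: reduce to degree < 3
    print(g)                        # 1
    print(beta_inverse)             # x**2/3 - x/3 + 1/3, i.e. 1/(theta+1) = (theta**2 - theta + 1)/3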

Theorem 15. (Primitive Element Theorem) Suppose K is a number field of degree n. Then K = Q(α) for some algebraic number α of degree n.

Proof. This will be proved in the Galois Theory course. □

13. Fields Generated by Conjugate Elements

Theorem 16. Suppose that α is an algebraic number and fα(x) is its minimal polynomial. Then we have an isomorphism of fields

Q[x]/(fα(x)) ≅ Q(α),

explicitly given by

φ : Q[x]/(fα(x)) → Q(α),   φ(x + (fα(x))) = α.

Proof. Define the map

ψ : Q[x] → Q(α),   ψ(g(x)) = g(α).

The map ψ is a homomorphism of rings (check). By the First Isomorphism Theorem we have that Q[x]/Ker(ψ) ≅ Im(ψ). We need to calculate the kernel and the image. Now g(x) ∈ Ker(ψ) if and only if g(α) = 0. This is equivalent to fα | g, which in turn is equivalent to g ∈ (fα). Hence

Ker(ψ) = (fα(x)).

We claim that Im(ψ) = Q(α) (in other words the map ψ is surjective). To see this, suppose that β ∈ Q(α). By Theorem 14 we can write

β = a0 + a1 α + ··· + a_{n−1}α^{n−1},   ai ∈ Q.

Let g(x) = a0 + a1 x + ··· + a_{n−1}x^{n−1}. Then ψ(g(x)) = g(α) = β. In other words β ∈ Im(ψ), showing that ψ is surjective. We now deduce that

Q[x]/(fα(x)) ≅ Q(α).

To complete the proof, we must determine the isomorphism explicitly. The explicit isomorphism we gave in the statement of the theorem comes from the proof of the First Isomorphism Theorem. □

Corollary 13.1. Suppose that α, α′ are conjugates (i.e. algebraic numbers having the same minimal polynomial). Then the fields Q(α) and Q(α′) are isomorphic; indeed there is a unique isomorphism

Q(α) → Q(α′)

fixing Q and satisfying α ↦ α′.

Proof. From Theorem 16 we have isomorphisms

φ : Q[x]/(fα(x)) → Q(α),   φ(x + (fα(x))) = α,

and

φ′ : Q[x]/(fα(x)) → Q(α′),   φ′(x + (fα(x))) = α′.

Then φ′ ◦ φ^{−1} is the required isomorphism. □

Example 13.1. Let K = Q(√2). Then √2 has minimal polynomial x^2 − 2. Hence the conjugates of √2 are √2 and −√2. Now Q(√2) = Q(−√2). We know from the above that the map

σ : K → K,   a + b√2 ↦ a − b√2   for all a, b ∈ Q

is an isomorphism of fields. The reader might want to check this directly. Notice also that σ(a) = a for all a ∈ Q. Hence σ fixes the rationals.

14. Embeddings

Lemma 14.1. Let f ∈ R[x] and let γ ∈ C be a root of f. Then γ̄ (the complex conjugate of γ) is also a root of f.

Proof. Write

f(x) = an x^n + a_{n−1}x^{n−1} + ··· + a0,   ai ∈ R.

But f(γ) = 0. Taking complex conjugates of f(γ) = an γ^n + ··· + a0, and using the fact that āi = ai since ai ∈ R, we get

0 = an γ̄^n + ··· + a0 = f(γ̄).

Hence γ̄ is a root of f. □

Suppose α is an algebraic number and let α1, . . . , αn be its conjugates. By definition, the αi are the roots of fα(x) ∈ Q[x]. By the above Lemma, ᾱi is again one of α1, . . . , αn. This means that either αi is real, or if it is not real then its complex conjugate must be included in the list α1, . . . , αn. It is traditional to reorder α1, . . . , αn as follows

α1, . . . , αr, αr+1, . . . , αr+s, αr+s+1, . . . , αr+2s,

where α1, . . . , αr are real, the others are non-real complex numbers, and

ᾱ_{r+1} = α_{r+s+1},   ᾱ_{r+2} = α_{r+s+2},   . . . ,   ᾱ_{r+s} = α_{r+2s}.

Note that n = r + 2s, where r is the number of real conjugates of α and s is the number of pairs of complex conjugates of α. Now Corollary 13.1 shows that Q(α) ≅ Q(αi) for i = 1, . . . , n. However, for i = 1, . . . , r we have Q(αi) ⊂ R. Thus composing the isomorphism Q(α) ≅ Q(αi) with the inclusion Q(αi) ⊂ R we obtain an embedding

σi : Q(α) ↪ R,   σi(α) = αi.

An embedding (denoted by the hook arrow ↪) is an injective homomorphism. We call σ1, . . . , σr the real embeddings of K = Q(α). For i = r + 1, . . . , r + 2s we define the complex embeddings

σi : Q(α) ↪ C,   σi(α) = αi.

Note that the complex conjugate of σi(β) is σ_{i+s}(β), for all i = r + 1, . . . , r + s and all β ∈ K.

Example 14.1. Let K = Q(√2). Since √2 has exactly two real conjugates, √2 and −√2, we have r = 2, s = 0. The real embeddings are

σ1 : K ↪ R,   σ2 : K ↪ R,

where

σ1(a + b√2) = a + b√2,   σ2(a + b√2) = a − b√2,   for all a, b ∈ Q.

Example 14.2. Let K = Q(√−3). Since √−3 has exactly two complex, non-real, conjugates, √−3 and −√−3, we have r = 0, s = 1. The complex embeddings are

σ1 : K ↪ C,   σ2 : K ↪ C,

where

σ1(a + b√−3) = a + b√−3,   σ2(a + b√−3) = a − b√−3,   for all a, b ∈ Q.

Example 14.3. Let K = Q(∛2). The minimal polynomial of ∛2 is f(x) = x^3 − 2. Write ω = exp(2πi/3). Then f has three roots

α1 = ∛2,   α2 = ω∛2,   α3 = ω^2 ∛2,

and so these are the conjugates of ∛2. The first is real and the other two are complex conjugates. Hence r = 1, s = 1. We have embeddings

σ1 : K ↪ R,   σ2 : K ↪ C,   σ3 : K ↪ C,

where

σ1(a + b∛2 + c(∛2)^2) = a + b∛2 + c(∛2)^2,
σ2(a + b∛2 + c(∛2)^2) = a + bω∛2 + cω^2 (∛2)^2,
σ3(a + b∛2 + c(∛2)^2) = a + bω^2 ∛2 + cω(∛2)^2,

for all a, b, c ∈ Q.
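Numerically, the three embeddings of Example 14.3 simply substitute the three complex cube roots of 2 for ∛2. The NumPy sketch below is ours (the helper name embed is our own):

    import numpy as np

    roots = np.roots([1, 0, 0, -2])                    # the three complex roots of x^3 - 2
    roots = sorted(roots, key=lambda z: abs(z.imag))   # put the real root first, as sigma_1

    def embed(a, b, c, alpha):
        """Image of a + b*cbrt(2) + c*cbrt(2)^2 under the embedding sending cbrt(2) to alpha."""
        return a + b * alpha + c * alpha**2

    for i, alpha in enumerate(roots, start=1):
        print(f"sigma_{i}(1 + cbrt(2)) =", embed(1, 1, 0, alpha))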

15. Field Polynomial We continue with the notation of the previous section: K = Q(α) is a number field of degree n and σ1, . . . , σn are the embeddings of K.

Definition. Let β ∈ K. We call σ1(β), . . . , σn(β) the K-conjugates of β. We define the field polynomial Fβ(x) by

Fβ(x) = ∏_{i=1}^{n} (x − σi(β)).

We define the K-norm of β by

N_K(β) = ∏_{i=1}^{n} σi(β)

and the K-trace of β by

Tr_K(β) = Σ_{i=1}^{n} σi(β).

Example 15.1. Suppose d is a square-free integer, d ≠ 0, 1. Let K = Q(√d). By Theorem 14 any element β ∈ K can be expressed uniquely in the form β = a + b√d for some a, b ∈ Q. The embeddings σ1, σ2 satisfy

σ1(a + b√d) = a + b√d,   σ2(a + b√d) = a − b√d.

Note that σ1, σ2 are both real if d > 0 and both complex if d < 0. Thus the K-conjugates of β = a + b√d are a + b√d and a − b√d. The field polynomial of β = a + b√d is

Fβ(x) = (x − σ1(β))(x − σ2(β)) = x^2 − 2a x + (a^2 − d b^2).

Moreover

Tr(β) = 2a,   N_K(β) = a^2 − d b^2.

We note in the above example that, for elements of quadratic fields, the K-norms and K-traces are rational, and that the field polynomials have rational coefficients. This isn't a coincidence.
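The computation in Example 15.1 is a one-line symbolic check (ours, assuming SymPy):

    from sympy import symbols, sqrt, expand

    a, b, d, x = symbols('a b d x')
    beta_1 = a + b*sqrt(d)          # sigma_1(beta)
    beta_2 = a - b*sqrt(d)          # sigma_2(beta)

    F = expand((x - beta_1) * (x - beta_2))
    print(F)                         # x^2 - 2a x + (a^2 - d b^2), as above
    print(expand(beta_1 + beta_2))   # trace: 2*a
    print(expand(beta_1 * beta_2))   # norm:  a**2 - b**2*d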

Lemma 15.1. Suppose K = Q(α) is a number field and β ∈ K. Then (i) Fβ(x) ∈ Q[x]; (ii) Tr(β) ∈ Q and NK (β) ∈ Q.

Proof. Suppose K = Q(α) is a number field of degree n with embeddings σ1, . . . , σn corresponding to the conjugates α1, . . . , αn of α. It is clear from the definitions that

Fβ(x) = x^n − Tr(β) x^{n−1} + ··· + (−1)^n N_K(β).

Hence (ii) follows immediately from (i).

Let us prove (i). By Theorem 14 we can write β = a0 + a1 α + ··· + a_{n−1}α^{n−1} with ai ∈ Q. Let h(x) = a0 + a1 x + ··· + a_{n−1}x^{n−1} ∈ Q[x]. Then β = h(α) and

σi(β) = h(σi(α)) = h(αi).

By definition

Fβ(x) = ∏_{i=1}^{n} (x − σi(β)),

and hence

Fβ(x) = ∏_{i=1}^{n} (x − h(αi)).

The polynomial on the right-hand side is unchanged by permutations of α1, . . . , αn. Thus the coefficients of Fβ(x) are symmetric polynomials in α1, . . . , αn with coefficients in Q. This shows indeed that the coefficients of Fβ(x) are in Q. □

Theorem 17. Suppose K is a number field of degree n. Suppose β ∈ K has degree m. Then
(i) m | n;
(ii) Writing l = n/m, we have Fβ(x) = fβ(x)^l;
(iii) The K-conjugates of β are the conjugates of β, each repeated l times.

Here as usual, Fβ is the field polynomial of β and fβ is the minimal polynomial of β. Before proving the theorem we make an important remark. When we defined the field polynomial Fβ(x), the definition depended not only on β and K but on a choice of a generator α for K. The above theorem shows that Fβ(x) depends only on β and the field K.

Proof of Theorem 17. As fβ(β) = 0 we see that

fβ(σi(β)) = σi(fβ(β)) = 0.

Thus σi(β) is a root of fβ and so by definition a conjugate of β. Our first observation is: every K-conjugate of β (equivalently every root of Fβ) is a conjugate of β.

Recall that the minimal polynomial fβ is irreducible. Thus we can write

(8)   Fβ(x) = fβ(x)^l g(x)

for some non-negative integer l and some g ∈ Q[x] such that fβ ∤ g. We note that Fβ and fβ are both monic, hence g is monic. We would like to show that g = 1. To do this it is enough to show that g has no roots. We argue by contradiction. Suppose γ is a root of g. From (8) we see that γ is a root of Fβ. By our observation above, γ is a root of fβ. Thus γ is a common root of fβ and g. As fβ is irreducible and fβ ∤ g we see that fβ and g are coprime. Hence there are polynomials u(x), v(x) ∈ Q[x] such that

u(x)fβ(x) + v(x)g(x) = 1.

Substituting γ for x we get 0 = 1 which is a contradiction. Hence g = 1 and this shows that Fβ(x) = fβ(x)^l.

Now Fβ has degree n (the same degree as K). However fβ has degree m (the same degree as β). Comparing degrees on both sides of Fβ(x) = fβ(x)^l we obtain n = lm. Thus m | n. This proves (i) and (ii) simultaneously.

(iii) follows immediately from the equality Fβ(x) = fβ(x)^l. □

16. Ring of Integers We recall that an algebraic integer β is a complex number satisfying f(β) = 0 for some monic polynomial with coefficients in Z. We showed that an algebraic number β is an algebraic integer if and only if its minimal polynomial fβ ∈ Z[x]. We denoted the set of algebraic integers by O and showed that it is a subring of C.

Definition. Let K be a number field. We define the ring of integers OK of K to be the set

OK = K ∩ O.

We note that OK is the intersection of two subrings of C and hence must be a subring. Note also that Z ⊆ OK.

Definition. Let K be a number field of degree n. An integral basis for OK is a set ω1, . . . , ωn of elements in OK such that every β ∈ OK can be written uniquely as

β = a1ω1 + a2ω2 + ··· + anωn with a1, . . . , an ∈ Z.

Theorem 18. Suppose K is a number field of degree n. Then OK has an integral basis ω1, . . . , ωn. 26 SAMIR SIKSEK

We omit the proof of this theorem. This is essentially a theorem about torsion-free abelian groups.

Example 16.1. In Theorem 4 we showed that O∩Q = Z. Thus the ring of integers of the number field Q is OQ = Z. An integral basis for this is {1}. The following lemma is helpful in deciding if an algebraic number is an algebraic integer.

Lemma 16.1. Suppose K is a number field and β ∈ K. Then β ∈ OK if and only if Fβ ∈ Z[x].

Proof. We know from Theorem 17 that

Fβ(x) = fβ(x)^l

for some positive integer l. If β ∈ OK, that is β is an algebraic integer, then fβ ∈ Z[x] and hence Fβ ∈ Z[x]. Conversely, suppose that Fβ ∈ Z[x]. It follows from Theorem 7 that fβ ∈ Z[x]. Thus β is an algebraic integer; i.e. β ∈ OK. □

Example 16.2. Let K = Q(i). We would like to compute OK and an integral basis for it. We know that Z ⊆ OK. Moreover i is a root of the monic polynomial x^2 + 1 ∈ Z[x] and so i ∈ OK. Thus we see that Z[i] ⊆ OK. Here

Z[i] = {a + bi : a, b ∈ Z}.

We would like to determine whether or not OK = Z[i]. Suppose β ∈ K. Since 1, i is a Q-basis for K we may write β = u + vi for some u, v ∈ Q. Write

u = u′ + u″,   v = v′ + v″,

where u′, v′ ∈ Z and 0 ≤ u″ < 1, 0 ≤ v″ < 1. Let

β′ = u′ + v′i,   β″ = u″ + v″i.

Now β′ ∈ Z[i] ⊆ OK. Hence

β ∈ OK ⇐⇒ β − β′ ∈ OK
        ⇐⇒ β″ ∈ OK

⇐⇒ Fβ00 ∈ Z[x] 2 00 002 002 ⇐⇒ x − 2u x + u + v ∈ Z[x] 00 002 002 ⇐⇒ 2u ∈ Z and u + v ∈ Z. However 0 ≤ u00 < 1, and so 2u00 ∈ Z gives us two possibilities u00 = 0 or 1/2. If 2 u00 = 0 then v00 ∈ Z and 0 ≤ v00 < 1 so v00 = 0. Thus we see that β = β0 ∈ Z[i]. 2 If however u00 = 1/2 then 1/4 + v00 ∈ Z. But 0 ≤ v00 < 1 so 1 1 5 ≤ + v002 < . 4 4 4 Hence 1/4 + v002 = 1 and so v002 = 3/4 giving us a contradiction. Thus OK = Z[i]. The set 1, i is an integral basis for OK . ALGEBRAIC NUMBER THEORY 27 √ Example 16.3. Let K = Q( 5).√ We follow the same steps as in the previous example to determine OK . Again Z[ 5] ⊆ OK , where √ n √ o Z[ 5] = a + b 5 : a, b ∈ Z . √ √ Suppose β ∈ K. Since 1, 5 is a Q-basis for K we may write β = u + v 5 for some u, v ∈ Q. Write u = u0 + u00, v = v0 + v00 where u0, v0 ∈ Z and 0 ≤ u00 < 1, 0 ≤ v00 < 1. Let √ √ β0 = u0 + v0 5, β00 = u00 + v00 5. √ 0 Now β ∈ Z[ 5] ⊆ OK . Hence 0 β ∈ OK ⇐⇒ β − β ∈ OK 00 ⇐⇒ β ∈ OK

       ⇐⇒ Fβ'' ∈ Z[x]
       ⇐⇒ x^2 − 2u''x + u''^2 − 5v''^2 ∈ Z[x]
       ⇐⇒ 2u'' ∈ Z and u''^2 − 5v''^2 ∈ Z.
However 0 ≤ u'' < 1, and so 2u'' ∈ Z gives us two possibilities: u'' = 0 or u'' = 1/2. If u'' = 0 then 5v''^2 ∈ Z; as v'' is rational with 0 ≤ v'' < 1 this forces v'' = 0. Thus we see that β = β' ∈ Z[√5]. If however u'' = 1/2 then 1/4 − 5v''^2 ∈ Z. But 0 ≤ v'' < 1, so
−19/4 < 1/4 − 5v''^2 ≤ 1/4.
Hence 1/4 − 5v''^2 = −4, −3, −2, −1 or 0. Examining all the possibilities we find that v'' = 1/2. Thus
β'' = 0,  or  β'' = (1 + √5)/2,
and both of these are in OK. It is now easy to see that
OK = Z[(1 + √5)/2] = { a + b(1 + √5)/2 : a, b ∈ Z }.
It follows that the set 1, (1 + √5)/2 is an integral basis for OK.

We can generalize the above two examples to quadratic number fields.

Theorem 19. Let K = Q(√d) where d is a square-free integer, and d ≠ 0, 1.
(1) If d ≢ 1 (mod 4) then
OK = Z[√d] = { a + b√d : a, b ∈ Z }.
In particular, 1, √d is an integral basis for OK.
(2) If d ≡ 1 (mod 4) then
OK = Z[(1 + √d)/2] = { a + b(1 + √d)/2 : a, b ∈ Z }.
In particular 1, (1 + √d)/2 is an integral basis for OK.

We leave the proof of this theorem as an exercise.
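As a quick numerical confirmation of the dichotomy in Theorem 19 (an illustrative sketch only, not a proof), one can run the integrality test from the previous sketch on (1 + √d)/2 for a few square-free d: its field polynomial is x^2 − x + (1 − d)/4, which has integer coefficients precisely when d ≡ 1 (mod 4).

    from fractions import Fraction

    def half_one_plus_sqrt_d_is_integral(d):
        # field polynomial of (1 + sqrt(d))/2 is x^2 - x + (1 - d)/4; the linear
        # coefficient -1 is always integral, so only the constant term matters
        return Fraction(1 - d, 4).denominator == 1

    for d in [-5, -3, -1, 2, 3, 5, 6, 13]:
        print(d, d % 4, half_one_plus_sqrt_d_is_integral(d))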

17. Determinants and Discriminants

Let K be a number field of degree n with embeddings σ1, . . . , σn. Let ω1, . . . , ωn be a basis for K over Q. Consider the matrix

    ( σ1(ω1)   σ1(ω2)   . . .   σ1(ωn) )
    ( σ2(ω1)   σ2(ω2)   . . .   σ2(ωn) )
    (   .        .                .    )
    ( σn(ω1)   σn(ω2)   . . .   σn(ωn) )

which we denote by (σi(ωj)) for short. We define the determinant of the basis ω1, . . . , ωn to be the determinant of this matrix, and denote it by D(ω1, . . . , ωn). Thus

D(ω1, . . . , ωn) = det(σi(ωj)).

We define the discriminant of the basis ω1, . . . , ωn to be the square of the determinant, and denote it by ∆(ω1, . . . , ωn). Hence
∆(ω1, . . . , ωn) = D(ω1, . . . , ωn)^2 = det(σi(ωj))^2.

Suppose now that β1, . . . , βn is another basis for K over Q. Then
βi = ci1ω1 + ci2ω2 + ··· + cinωn
for some cij ∈ Q satisfying det(cij) ≠ 0. It is an easy linear algebra exercise to show that

D(β1, . . . , βn) = det(cij) D(ω1, . . . , ωn),
and hence
∆(β1, . . . , βn) = det(cij)^2 ∆(ω1, . . . , ωn).

Theorem 20. Suppose K = Q(α) is a number field of degree n. Then
(i) The discriminant of the basis 1, α, . . . , α^{n−1} is given by
∆(1, α, . . . , α^{n−1}) = ∏_{1 ≤ i < j ≤ n} (αi − αj)^2,

where α1, . . . , αn are the conjugates of α.
(ii) If β1, . . . , βn is any basis for K over Q then ∆(β1, . . . , βn) ≠ 0 and ∆(β1, . . . , βn) ∈ Q.
(iii) If β1, . . . , βn is an integral basis then ∆(β1, . . . , βn) ≠ 0 and ∆(β1, . . . , βn) ∈ Z.
(iv) If β1, . . . , βn and γ1, . . . , γn are both integral bases for OK then

∆(β1, . . . , βn) = ∆(γ1, . . . , γn).

Proof. We know that σi(α) = αi. Hence the determinant of the basis 1, α, . . . , α^{n−1} is

(9)   D(1, α, . . . , α^{n−1}) =
    | 1   α1   α1^2   . . .   α1^{n−1} |
    | 1   α2   α2^2   . . .   α2^{n−1} |
    | .   .     .              .       |
    | 1   αn   αn^2   . . .   αn^{n−1} |

You will instantaneously recognize this as a Vandermonde determinant, and recall that
D(1, α, . . . , α^{n−1}) = ∏_{1 ≤ i < j ≤ n} (αi − αj).
Squaring gives (i). For (ii), write β1, . . . , βn in terms of the basis 1, α, . . . , α^{n−1}; then ∆(β1, . . . , βn) = det(cij)^2 ∆(1, α, . . . , α^{n−1}) with det(cij) a non-zero rational. The conjugates α1, . . . , αn are distinct, so by (i) the discriminant ∆(1, α, . . . , α^{n−1}) is non-zero; moreover it is a symmetric polynomial expression in the conjugates of α and so, by the results of Section 9, a rational number. Hence ∆(β1, . . . , βn) is a non-zero rational number.

Finally we turn to (iii). Suppose β1, . . . , βn is an integral basis. From (ii) the discriminant of this basis is non-zero. We know that σi(βj) is a conjugate of βj. As βj ∈ O, so is σi(βj). Hence from the definition of ∆ and the fact that O is a ring we see that ∆(β1, . . . , βn) ∈ O. But (ii) tells us that ∆(β1, . . . , βn) ∈ Q. From Theorem 4 we know that O ∩ Q = Z. Thus ∆(β1, . . . , βn) ∈ Z as required.
We sketch a proof of (iv). Suppose that β1, . . . , βn and γ1, . . . , γn are both integral bases for OK. It follows from the theory of abelian groups that βi = ci1γ1 + ··· + cinγn where cij ∈ Z and det(cij) = ±1. So
∆(β1, . . . , βn) = det(cij)^2 ∆(γ1, . . . , γn) = ∆(γ1, . . . , γn). □

Definition. Let K be a number field. We define the discriminant of K, denoted by ∆K, to be the discriminant of any integral basis β1, . . . , βn for OK.

We note, by Theorem 20, that ∆K is independent of the choice of integral basis, and moreover it is a non-zero rational integer.

Corollary 17.1. Let K = Q(√d) where d is a square-free integer and d ≠ 0, 1. Then
∆K = 4d  if d ≢ 1 (mod 4),
∆K = d   if d ≡ 1 (mod 4).

Proof. In Theorem 19 we wrote down integral bases for OK. We merely have to write down their discriminants. We will do the case d ≡ 1 (mod 4) and leave the other case as an exercise for the reader. Here we have 1, (1 + √d)/2 as an integral basis for OK. Now

    D(1, (1 + √d)/2) = | 1   (1 + √d)/2 |
                       | 1   (1 − √d)/2 |  = −√d.

Hence
∆K = ∆(1, (1 + √d)/2) = D(1, (1 + √d)/2)^2 = d,
as required. □

Exercise 17.1. Let K = Q(∛2). Compute ∆K given that 1, ∛2, (∛2)^2 is an integral basis.
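For Exercise 17.1 one can at least check the answer numerically using the product formula of Theorem 20(i): the conjugates of α = ∛2 are the three complex cube roots of 2, and ∆(1, α, α^2) = ∏_{i<j} (αi − αj)^2. The Python sketch below (an illustration only; the algebraic computation is still the exercise) evaluates this product in floating point.

    import cmath

    alpha = 2 ** (1.0 / 3.0)                      # the real cube root of 2
    omega = cmath.exp(2j * cmath.pi / 3)          # a primitive cube root of unity
    conjugates = [alpha, alpha * omega, alpha * omega ** 2]

    disc = 1
    for i in range(3):
        for j in range(i + 1, 3):
            disc *= (conjugates[i] - conjugates[j]) ** 2

    print(disc)   # roughly -108 + 0j

Since 1, ∛2, (∛2)^2 is given as an integral basis, this suggests ∆K = −108, which is indeed the discriminant of x^3 − 2.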

18. Ideals

We know that unique factorization fails for rings of integers of number fields. For example, in Z[√−5] we can factorize 6 as a product of irreducibles in two different ways:
6 = 2 · 3 = (1 + √−5)(1 − √−5).
It turns out that we can recover unique factorization for rings of integers of number fields if we look at ideals instead of elements. What this means is that we will show that every ideal can be written as a product of powers of prime ideals in a unique way. We mostly use calligraphic letters for ideals of number fields: A, B, C, P, Q. Many books use gothic letters a, b, etc.
Let K be a number field and OK be its ring of integers. A non-empty subset A of OK is an ideal if it satisfies the following conditions:
• A ≠ ∅;
• α − β ∈ A for every α, β ∈ A (this really says that A is an additive subgroup of OK);
• if α ∈ A and β ∈ OK then βα ∈ A.

The zero ideal is just {0}. Of course OK is an ideal of OK; ideals that are properly contained in OK are called proper ideals. If 0 ≠ α ∈ OK we define the principal ideal generated by α to be

αOK = {αr : r ∈ OK } .

Another common notation for αOK is (α). Of course (1) = OK. When we think of OK as an ideal it is usual to write it as (1). More generally, if α1, α2, . . . , αn are non-zero elements of OK we define the ideal generated by α1, . . . , αn to be
(α1, . . . , αn) = { β1α1 + ··· + βnαn : β1, . . . , βn ∈ OK }.
If A, B are ideals then so is the set
(A, B) = { α + β : α ∈ A, β ∈ B }.
We sometimes write A + B for (A, B). We say that A, B are coprime if A + B = (1). We define the ideal product
AB = { α1β1 + ··· + αrβr : r ≥ 1, αi ∈ A, βi ∈ B }.
It is an easy exercise to show that AB is again an ideal.
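To see the ideal product in action, here is a small computation (my own example, not taken from the notes) in the ring Z[√−5] mentioned at the start of this section, with elements represented as pairs (a, b) standing for a + b√−5. It checks that for P = (2, 1 + √−5) we have P^2 = (2): every product of two generators of P lies in (2), and conversely 2 is a Z-combination of such products.

    def mul(x, y):
        """Multiply (a + b*sqrt(-5)) by (c + d*sqrt(-5))."""
        a, b = x
        c, d = y
        return (a * c - 5 * b * d, a * d + b * c)

    gens = [(2, 0), (1, 1)]                       # the generators 2 and 1 + sqrt(-5) of P
    products = [mul(g, h) for g in gens for h in gens]
    print(products)                               # [(4, 0), (2, 2), (2, 2), (-4, 2)]

    # every generator of P^2 is twice an element of Z[sqrt(-5)], so P^2 is contained in (2) ...
    assert all(a % 2 == 0 and b % 2 == 0 for a, b in products)

    # ... and 2 = 2*(1 + sqrt(-5)) - (1 + sqrt(-5))^2 - 2*2, so (2) is contained in P^2
    two = (products[1][0] - products[3][0] - products[0][0],
           products[1][1] - products[3][1] - products[0][1])
    print(two)                                    # (2, 0), i.e. the element 2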

18.1. Quotient Rings. Let A be an ideal of the ring OK. A coset of A is a set of the form x + A = {x + α : α ∈ A}. Recall that two cosets are equal, x + A = y + A, if and only if x − y ∈ A. We define the quotient
OK/A = {x + A : x ∈ OK}.

A priori OK /A is just the set of cosets of A, but we can make it into a ring by defining addition and multiplication as follows: (x + A) + (y + A) = (x + y) + A, (x + A)(y + A) = xy + A. It is an easy exercise to show that these operations are well-defined and that they do give a ring structure on OK /A. We would like to prove that if A is a non-zero ideal then OK /A is finite. Before we can do this we need the following lemma.

Lemma 18.1. Suppose that A is a non-zero ideal of OK. Then A ∩ Z ≠ {0}.
Proof. We want to show that A contains some non-zero element of Z. As A is a non-zero ideal, we can choose some α ∈ A ⊆ OK such that α ≠ 0. Since α is an algebraic integer, its minimal polynomial fα(x) has integer coefficients; say
fα(x) = x^n + an−1x^{n−1} + ··· + a1x + a0,  with ai ∈ Z.

Recall that fα is irreducible. If a0 = 0 then x | fα(x), and since fα(x) is irreducible we see that fα(x) = x. But this implies that α = 0, contradicting our choice of α. Hence a0 ≠ 0. Now fα(α) = 0, so
a0 = −α^n − an−1α^{n−1} − ··· − a1α ∈ A,
showing that a0 ∈ A ∩ Z. □
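Here is a toy check of the lemma (my own example) in Z[i]: take A = (α) with α = 1 + 2i. The minimal polynomial of α is x^2 − 2x + 5, so the proof above predicts that the constant term 5 lies in A ∩ Z; indeed 5 = α(1 − 2i), which we can verify with exact integer arithmetic.

    def mul_gauss(x, y):
        """Multiply the Gaussian integers x = (a, b) ~ a + bi and y = (c, d) ~ c + di."""
        a, b = x
        c, d = y
        return (a * c - b * d, a * d + b * c)

    print(mul_gauss((1, 2), (1, -2)))   # (5, 0): the rational integer 5 lies in the ideal (1 + 2i)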

Theorem 21. Let A be a non-zero ideal of OK . Then the quotient ring OK /A is finite.

Proof. We know that OK has some Z-basis ω1, . . . , ωn. From Lemma 18.1 there is some non-zero m ∈ A ∩ Z. We would like to show that the quotient ring OK/A is finite. We do this by constructing a map
φ : (Z/mZ)^n −→ OK/A.
Here Z/mZ is the ring of integers modulo m (which you denoted by Zm in some other courses). We also show that our map φ is surjective. This will be enough to complete the proof: since the domain is finite, the co-domain of this surjective map must be finite. Define
φ(a1 mod m, . . . , an mod m) = (a1ω1 + ··· + anωn) + A.
There is an issue here, which is to show that the map is well-defined. This means that if a1, . . . , an and b1, . . . , bn are integers satisfying ai ≡ bi (mod m) for i = 1, . . . , n, then
(a1ω1 + ··· + anωn) + A = (b1ω1 + ··· + bnωn) + A.
Let's do that. The condition ai ≡ bi (mod m) means that

a1 − b1 = mc1, . . . , an − bn = mcn, for some c1, . . . , cn ∈ Z.

But m ∈ A. Hence ai − bi ∈ A for i = 1, . . . , n and so is (ai − bi)ωi. Thus

(a1ω1 + ··· + anωn) − (b1ω1 + ··· + bnωn) = (a1 − b1)ω1 + ··· + (an − bn)ωn ∈ A,
proving that
(a1ω1 + ··· + anωn) + A = (b1ω1 + ··· + bnωn) + A.
This shows that φ is well-defined. Since any x ∈ OK can be written as x = a1ω1 + ··· + anωn for some ai ∈ Z, we see that x + A = φ(a1 mod m, . . . , an mod m). Hence φ is surjective. □

Definition. Suppose that A is a non-zero ideal of OK. We define the norm of the ideal A, denoted by N(A), to be the number of elements of the quotient ring OK/A.

Theorem 22. Let K be a number field of degree n and A a non-zero ideal of OK . Then A has a Z-basis consisting of n elements. Moreover, if δ1, . . . , δn is a Z-basis for A and ω1, . . . , ωn is an integral basis for OK then

N(A) = | D(δ1, . . . , δn) / D(ω1, . . . , ωn) |.

By a Z-basis consisting of n elements we mean some δ1, . . . , δn ∈ A such that any element α ∈ A can be written uniquely as a linear combination

α = a1δ1 + a2δ2 + ··· + anδn  with a1, . . . , an ∈ Z.
The proof of the theorem is, like that of Theorem 18, essentially a fact about abelian groups, and we omit it.
It is natural to ask what the relationship is between the norms of ideals and the norms of elements of the field. Recall that if β ∈ K we defined the K-norm of β by
NK(β) = σ1(β)σ2(β) ··· σn(β),
where σ1, . . . , σn are the embeddings of K.
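For a quadratic field K = Q(√d) the two embeddings send √d to ±√d, so the K-norm has the familiar closed form NK(u + v√d) = (u + v√d)(u − v√d) = u^2 − dv^2. A two-line sketch (illustration only, with values we will meet again below):

    def norm_quadratic(u, v, d):
        """N_K(u + v*sqrt(d)) for K = Q(sqrt(d)): the product of the two embeddings."""
        return u * u - d * v * v

    print(norm_quadratic(2, 1, -1))    # N(2 + i) = 5
    print(norm_quadratic(1, 1, -5))    # N(1 + sqrt(-5)) = 6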

Proposition 18.2. If β ∈ OK is non-zero and B = (β) is the principal ideal generated by β then N(B) = |NK (β)|.

Proof. Suppose that ω1, . . . , ωn is an integral basis for OK. It is clear that βω1, . . . , βωn is a Z-basis for B = (β) = βOK. Hence by Theorem 22 we have
N(B) = | D(βω1, . . . , βωn) / D(ω1, . . . , ωn) |.
But
D(βω1, . . . , βωn) = det(σi(βωj)) = det(σi(β)σi(ωj)).
Each entry of the i-th row of the matrix (σi(β)σi(ωj)) has the common factor σi(β), so taking these factors out of the determinant row by row gives
D(βω1, . . . , βωn) = σ1(β) ··· σn(β) · det(σi(ωj)) = NK(β) D(ω1, . . . , ωn),
proving that N(B) = |NK(β)|. □
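Proposition 18.2 can be checked directly in Z[i] by counting cosets: the number of cosets of the principal ideal (a + bi) should equal a^2 + b^2. The Python sketch below (my own illustration, not from the notes) uses the fact that x + yi lies in (a + bi) exactly when (x + yi)(a − bi) has both coordinates divisible by n = a^2 + b^2, and that n itself lies in the ideal, so coset representatives may be taken with 0 ≤ x, y < n.

    def coset_key(x, y, a, b):
        """An invariant that determines the coset of x + yi modulo (a + bi)."""
        n = a * a + b * b
        u = x * a + y * b          # real part of (x + yi)(a - bi)
        v = y * a - x * b          # imaginary part of (x + yi)(a - bi)
        return (u % n, v % n)

    def ideal_norm(a, b):
        """Count the cosets of the principal ideal (a + bi) in Z[i]."""
        n = a * a + b * b
        return len({coset_key(x, y, a, b) for x in range(n) for y in range(n)})

    for a, b in [(1, 1), (1, 2), (3, 0), (2, 3)]:
        print((a, b), ideal_norm(a, b), a * a + b * b)   # the two counts agree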

Eventually we will show that norms are multiplicative, meaning that N(AB) = N(A)N(B) for all non-zero ideals A, B. For now we have to be content with a special case.

Theorem 23. Suppose that A, B are coprime non-zero ideals. Then
(i) A ∩ B = AB;
(ii) (Chinese Remainder Theorem) we have an isomorphism
OK/AB ≅ OK/A × OK/B;

(iii) N(AB) = N(A)N(B).

Proof. We are supposing that A, B are coprime. This means that A + B = (1) or equivalently a + b = 1 for some a ∈ A and b ∈ B. We know that AB ⊆ A ∩ B. To prove (i) we must show that A ∩ B ⊆ AB. Equivalently, we must show that any c ∈ A ∩ B satisfies c ∈ AB. Thus suppose that c ∈ A ∩ B. Remember that there are elements a ∈ A and b ∈ B such that a + b = 1. Thus c = ac + bc. Now a ∈ A and c ∈ A ∩ B ⊆ B, so ac ∈ AB. Similarly bc ∈ AB. Hence c = ac + bc ∈ AB. This proves (i). To prove (ii) consider the map

φ : OK −→ OK /A × OK /B, φ(c) = (c + A, c + B).

It is easy to show that φ is a homomorphism of rings. Thus by the First Isomorphism Theorem

OK/Ker(φ) ≅ Im(φ). Let us calculate the kernel and image. Now

c ∈ Ker(φ) ⇐⇒ φ(c) = (0 + A, 0 + B) ⇐⇒ (c + A, c + B) = (0 + A, 0 + B) ⇐⇒ c ∈ A and c ∈ B ⇐⇒ c ∈ A ∩ B.

Thus Ker(φ) = A ∩ B. By (i) A ∩ B = AB so we have Ker(φ) = AB. We would like to show that φ is surjective. Suppose that (c1 + A, c2 + B) ∈ OK /A × OK /B. Let c = c1b + c2a where as above a + b = 1 and a ∈ A, b ∈ B. We claim that φ(c) = (c1 + A, c2 + B). Note that

c − c1 = c1(b − 1) + c2a = c1(−a) + c2a = a(c2 − c1) ∈ A, and similarly c − c2 ∈ B. Hence

φ(c) = (c + A, c + B) = (c1 + A, c2 + B).

This shows that φ is surjective and so Im(φ) = OK /A × OK /B. Putting all this together proves (ii). Now (iii) follows immediately from (ii) and the definition of norms of ideals. 
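The surjectivity argument in the proof of (ii) is entirely constructive. Here is the same recipe carried out in the familiar case OK = Z with A = (m) and B = (n) coprime (a toy illustration, not part of the notes): write 1 = a + b with a ∈ (m), b ∈ (n) using the extended Euclidean algorithm, and set c = c1·b + c2·a.

    def ext_gcd(m, n):
        """Return (g, s, t) with g = gcd(m, n) and g = s*m + t*n."""
        if n == 0:
            return m, 1, 0
        g, s, t = ext_gcd(n, m % n)
        return g, t, s - (m // n) * t

    def crt_lift(c1, c2, m, n):
        g, s, t = ext_gcd(m, n)
        assert g == 1, "(m) and (n) must be coprime"
        a, b = s * m, t * n        # a + b = 1 with a in (m), b in (n)
        return c1 * b + c2 * a     # congruent to c1 mod m and to c2 mod n

    c = crt_lift(2, 3, 5, 7)
    print(c, c % 5, c % 7)         # 17 2 3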

19. Prime and Maximal Ideals

Definition. We call a proper ideal P prime if for all α, β ∈ OK we have αβ ∈ P =⇒ α ∈ P or β ∈ P. We call a proper ideal M maximal if there isn't any ideal A satisfying

M ⊊ A ⊊ OK.
In words, a proper ideal is maximal if and only if it is not properly contained in any other proper ideal.

Theorem 24. An ideal P is prime if and only if OK/P is an integral domain. An ideal M is maximal if and only if OK/M is a field. Every maximal ideal is prime.

Proof. The statements about OK /P are reformulations of the definitions of prime and maximal ideals. For the last statement, suppose that M is maximal. Then OK /M is a field. But fields are integral domains, so M is prime.  The above theorem tells us that maximal ideals are prime. The converse is not true in general, but it is true for non-zero prime ideals of the ring of integers of a number field.

Theorem 25. Let K be a number field and OK be its ring of integers. A non-zero ideal of OK is prime if and only if it is maximal.

Proof. By Theorem 24 we know that every maximal ideal is prime. Let us prove the converse. Suppose that P is a non-zero prime ideal of OK. Theorem 24 tells us that OK/P is an integral domain. However, we know by Theorem 21 that OK/P is finite. In other words OK/P is a finite integral domain. But we recall that every finite integral domain is a field (if you've forgotten how this is proved, see Part I of these notes, or do the exercise below). Hence OK/P is a field. We apply Theorem 24 again to deduce that P is maximal. □

Exercise 19.1. Let R be a finite integral domain. Prove that R is a field as follows: let x ∈ R \ {0}. Consider the list x, x^2, x^3, . . . . Why must there be repetition in this list? Use this to show that x must have an inverse. [This proof is in the style of elementary number theory. The other proof you saw before is in the style of combinatorics.]

Exercise 19.2. Here is an alternative way of showing that maximal ideals are prime. Suppose that M is maximal and suppose that αβ ∈ M but α ∉ M. Let M' = (α) + M. Show that M' = (1). Deduce that β ∈ M. Hence M is prime.

Before proceeding we need one more property of prime ideals.

Lemma 19.1. Suppose that A, B and P are ideals such that P is prime and P ⊇ AB. Then either P ⊇ A or P ⊇ B.

Perhaps you would like to prove this for yourself before looking at the proof.

Proof of Lemma 19.1. Suppose that AB ⊆ P but A ⊈ P. Then there is some α ∈ A such that α ∉ P. We want to show that B ⊆ P. Consider β ∈ B. Then αβ ∈ AB ⊆ P. By the definition of a prime ideal, α ∈ P or β ∈ P. But we know already that α ∉ P and so β ∈ P. Since this is true for all β ∈ B we deduce that B ⊆ P as required. □

20. Towards Unique Factorization for Ideals I
Our objective is to prove the following theorem.

Theorem 26. (Unique Factorization Theorem for Ideals) Let K be a number field and OK be its ring of integers. Then every non-zero ideal A can be written as a product of finitely many prime ideals
A = P1P2 ··· Pn.
Moreover this factorization is unique up to re-ordering.

We note an important convention, which is that the ideal (1) is regarded as the product of zero many prime ideals (the empty product).
The proof of the Unique Factorization Theorem for Ideals will require many steps. One of these is the Cancellation Lemma which we prove later.

Lemma 20.1. (Cancellation Lemma) Let K be a number field and OK be its ring of integers. Suppose that BA = CA for some non-zero ideals A, B and C. Then B = C.

The proof of the Cancellation Lemma is highly non-trivial. You can't just say "divide", because we are talking about ideals (which are sets) and we haven't defined what division of ideals means. However the Cancellation Lemma is enough to imply the uniqueness part of the Unique Factorization Theorem.

Lemma 20.2. (Uniqueness Part of the Unique Factorization Theorem) Let K be a number field and suppose the Cancellation Lemma holds for OK. Suppose that P1, . . . , Pm and Q1, . . . , Qn are prime ideals of OK. If
P1P2 ··· Pm = Q1Q2 ··· Qn
then m = n and the P1, . . . , Pm and Q1, . . . , Qn are the same up to re-ordering.

Proof. We prove the lemma by induction on min(m, n). Suppose first that min(m, n) = 0. Without loss of generality suppose that m = 0. If n = 0 then there is nothing to prove. So suppose that n > 0. Hence we have

(1) = Q1Q2 ... Qn.

But Q1Q2 ··· Qn ⊆ Qi for i = 1, . . . , n, so Qi = (1). As prime ideals are proper by definition, we have a contradiction. Hence if min(m, n) = 0 then m = n = 0 and the lemma is true.
We now come to the inductive step. Suppose min(m, n) ≥ 1. Note that
Pm ⊇ P1P2 ··· Pm = Q1Q2 ··· Qn.

By Lemma 19.1 we see that Pm ⊇ Qj for some j. After re-labeling we can suppose that Pm ⊇ Qn. Now we recall that prime ideals are maximal (in our present context, not in general). Hence Pm = Qn. Now we apply the Cancellation Lemma to cancel Pm = Qn from

P1P2 ... Pm = Q1Q2 ... Qn to obtain

P1P2 ... Pm−1 = Q1Q2 ... Qn−1. Now we can apply the inductive hypothesis to complete the proof. 

21. Towards Unique Factorization for Ideals II
In the previous section we showed that if we can factorize an ideal as a product of primes then such a factorization is unique up to re-ordering (of course we assumed the Cancellation Lemma). To prove the Unique Factorization Theorem we must also prove existence: that any non-zero ideal can be written as a product of prime ideals. This is again a long-term goal. For now we introduce the notion of an irreducible ideal and prove the existence of prime factorization under the assumption that irreducible ideals are prime.

Definition. We say a proper ideal is irreducible if it is not the product of two strictly larger ideals.

Lemma 21.1. (Irreducibles are Primes) A non-zero ideal is prime if and only if it is irreducible.

Showing that a prime ideal is irreducible is easy. Suppose P is prime and suppose P = AB. By Lemma 19.1 we see that P ⊇ A or P ⊇ B. Let's say P ⊇ A. Then P ⊇ A ⊇ AB = P. Hence P = A. Thus we see that A is not strictly bigger than P. So we have shown that every prime ideal is irreducible. Showing the converse will require much effort.

Lemma 21.2. (Existence of Factorization into Prime Ideals) Let K be a number field and OK be its ring of integers. Assume that irreducible ideals of OK are prime. Then every non-zero ideal can be written as a product of prime ideals.

Proof. Since we are assuming that irreducible ideals are prime, we need only show that every non-zero ideal A can be written as a product of irreducible ideals. We do this by induction on the norm N(A). Note first that N(A) = 1 if and only if OK/A = 0, which is equivalent to A = OK. But OK = (1) is the product of zero many irreducible ideals.
Now suppose that N(A) > 1 and we want to show that A can be written as a product of irreducible ideals. If A is irreducible then there is nothing to prove. Thus we may suppose that A is reducible, which means that A = BC where B and C are strictly larger ideals (strictly larger means A ⊊ B and A ⊊ C). We will show that N(B) < N(A) and N(C) < N(A). After this we simply apply the inductive hypothesis to deduce that B and C can be written as products of irreducibles and hence so can A = BC.
It remains to show that N(B) < N(A) and N(C) < N(A). To do this define the map

φ : OK/A → OK/B,  α + A ↦ α + B,
and show that this is well-defined and surjective using B ⊃ A (exercise). Moreover show that the map is not injective using B ⊋ A (exercise). Hence the cardinality

(i.e. number of elements) of OK /A is strictly larger than that of OK /B. Recalling the definition of norm we see that N(A) > N(B) and similarly N(A) > N(C). 

22. Unique Factorization Proof—A Summary so Far
Our objective was (and is) to prove the Unique Factorization Theorem for Ideals (Theorem 26). We made two assumptions:
• The Cancellation Lemma;
• Irreducible ideals are prime.
From these two assumptions we deduced the existence (Lemma 21.2) and uniqueness (Lemma 20.2) parts of the Unique Factorization Theorem. In other words, to complete the proof of the Unique Factorization Theorem 'all' we have to do is to prove our two assumptions. To do this we need to introduce ideal classes and show that there are finitely many ideal classes.

23. A Special Case of the Cancellation Lemma
The Cancellation Lemma 20.1 is one of our two ingredients needed to prove the Unique Factorization Theorem for ideals. In this section we content ourselves with proving a special case of the Cancellation Lemma. It turns out that this special case is needed in the proof of the full Cancellation Lemma.
Later on we prove the following statement for ideals of OK: to contain is to divide. What this means is the following: if A, B are non-zero ideals and B ⊇ A (i.e. B contains A) then BD = A for some non-zero ideal D (i.e. B divides A). We cannot prove this yet, but we can prove a useful special case of this statement.

Lemma 23.1. Let C be a non-zero ideal and β be a non-zero element of OK . If C ⊆ (β) then there is a non-zero ideal D such that C = (β)D. Proof. Suppose C ⊆ (β). Define

D = {δ ∈ OK : δβ ∈ C}. We want to show that D is an ideal that does the required job, namely C = (β)D. First we show that D is an ideal. Note that 0β ∈ C and so 0 ∈ D. Suppose δ1, δ2 ∈ D. Then δ1β, δ2β ∈ C. Hence

(δ1 + δ2)β ∈ C, showing that δ1 + δ2 ∈ D. Moreover, if δ ∈ D and α ∈ OK then δβ ∈ C and hence (αδ)β = α(δβ) ∈ C. Thus αδ ∈ D. This shows that D is an ideal.
From the definition of D it is clear that (β)D ⊆ C. Suppose γ ∈ C. Since C ⊆ (β) we can write γ = βδ for some δ ∈ OK. Again, from the definition of D we see that δ ∈ D. Hence γ ∈ (β)D. This shows that C = (β)D. □

Exercise 23.1. This is an exercise on the theme "to contain is to divide". Recall that any ideal of Z is principal. Suppose that m, n ∈ Z \ {0}. Show that (m) ⊇ (n) if and only if m | n.

Now for our special case of the Cancellation Lemma.

Lemma 23.2. Suppose that β ∈ OK is non-zero and A, C are non-zero ideals of OK . If (β)A = CA then (β) = C.

Proof. Let ω1, . . . , ωn be a Z-basis for the ideal A. Then βω1, . . . , βωn is a Z-basis for βA = (β)A. Suppose that γ ∈ C. Then γωi ∈ CA = (β)A. Hence there are aij ∈ Z such that
γωi = ai1βω1 + ai2βω2 + ··· + ainβωn.
This system of equalities for i = 1, . . . , n can be rewritten in matrix notation as
β^{-1}γ w = A w,

where A = (aij) and w is the column vector with entries ωi. Hence β^{-1}γ is an eigenvalue of the matrix A. In other words, β^{-1}γ is a root of χA(x) = det(xIn − A). But A has entries in Z and so χA is monic with coefficients in Z. Thus β^{-1}γ is an algebraic integer, and so β^{-1}γ ∈ OK. We deduce that γ ∈ βOK = (β). This is true for all γ ∈ C. Thus C ⊆ (β).
Lemma 23.1 tells us that C = (β)D for some ideal D. We return to our original equality (β)A = CA and substitute C = (β)D to obtain (β)A = (β)DA, or equivalently βA = βDA. Dividing by β we obtain A = DA.
Recall that ωi was a Z-basis for A. It is easy to see that (Exercise)
ωi = δi1ω1 + δi2ω2 + ··· + δinωn,  where δij ∈ D.
Thus 1 is an eigenvalue of the matrix (δij) and so is a root of the polynomial
det(xIn − (δij)) = x^n + dn−1x^{n−1} + ··· + d0,
where di ∈ D. From this we obtain

1 = −dn−1 − dn−2 − · · · − d0 ∈ D, so D = (1). Substituting D = (1) in C = (β)D we obtain C = (β) as desired. 

24. Ideal Classes

Definition. Let K be a number field and OK be its ring of integers. We define the following relation on the non-zero ideals of OK : A ∼ B if and only if (β)A = (α)B for some non-zero elements α, β ∈ OK . It is easy to show that this is an equivalence relation and so when A ∼ B we say that A and B are in the same ideal class. We write [A] for the class represented by A. The principal class is the class of (1). Exercise 24.1. Verify that ∼ really gives an equivalence relation on the non-zero ideals. Show that the principal class consists precisely of principal ideals. Define multiplication on the set of ideal classes by [A][B] = [AB]. Show that this operation is well-defined. Eventually we will show that this operation turns the ideal classes into an abelian group. For now, which of the group axioms can you prove straightforwardly from the definitions? What is the identity element?

Theorem 27. (Finiteness of Ideal Classes) Let K be a number field and OK be its ring of integers. There are only finitely many classes of ideals of OK . ALGEBRAIC NUMBER THEORY 39

We delay proving this theorem till later. For now we show that it implies the Cancellation Lemma and that irreducible ideals are prime. In other words, we show that this theorem implies the Unique Factorization Theorem for Ideals.

Lemma 24.1. Assume that OK has only finitely many ideal classes. Given any non-zero ideal A, there is some exponent h > 0 such that A^h is principal.
Proof. Consider the sequence of ideals A, A^2, A^3, . . . . Since there are only finitely many ideal classes, there must be some exponents r > s > 0 such that A^r ∼ A^s. Hence there are non-zero α, β ∈ OK such that (α)A^r = (β)A^s. Write C for the ideal (α)A^{r−s}. Then CA^s = (β)A^s. By our special case of the Cancellation Lemma 23.2 we have that C = (β). Hence (α)A^{r−s} = (β). Since A^{r−s} ⊆ OK this gives (β) ⊆ (α), so γ = β/α ∈ OK and (β) = (α)(γ); dividing (α)A^{r−s} = (α)(γ) by α we obtain A^{r−s} = (γ). Hence A^{r−s} is principal, and we may take h = r − s. □

Lemma 24.2. (Cancellation Lemma) Let K be a number field and OK be its ring of integers. Assume that OK has only finitely many ideal classes. Suppose that BA = CA for some non-zero ideals A, B and C. Then B = C.
Proof. As we are assuming that there are only finitely many ideal classes, we know from Lemma 24.1 that A^h = (α) for some non-zero α ∈ OK and exponent h > 0. Multiplying both sides of BA = CA by A^{h−1} we obtain B(α) = C(α), which we can rewrite as αB = αC. Dividing by α we obtain B = C. □

Lemma 24.3. (To Contain is to Divide) Let K be a number field and OK be its ring of integers. Assume that OK has only finitely many ideal classes. If A ⊆ B are non-zero ideals, then there is a non-zero ideal D such that A = BD.
Proof. As we are assuming that there are only finitely many ideal classes, we know from Lemma 24.1 that B^h = (β) for some non-zero β ∈ OK and exponent h > 0. From A ⊆ B we obtain AB^{h−1} ⊆ B^h and so AB^{h−1} ⊆ (β). By Lemma 23.1 there is some non-zero ideal D such that AB^{h−1} = (β)D. Substituting back (β) = B^h we get AB^{h−1} = DB^h. By the Cancellation Lemma 24.2 we can cancel B^{h−1} to obtain A = DB as required. □

Lemma 24.4. (Irreducibles are Primes) Let K be a number field and OK be its ring of integers. Assume that OK has only finitely many ideal classes. A non-zero ideal is prime if and only if it is irreducible.
Proof. We proved before that prime ideals are irreducible. Suppose that P is a non-zero irreducible ideal; we would like to show that it is prime. It is enough to show that P is maximal (since maximal ideals are prime). Suppose P is not maximal. Then there is some ideal A such that

P ⊊ A ⊊ OK.
By Lemma 24.3 we see that P = AD for some non-zero ideal D. We know that P ⊊ A and also P = AD ⊆ D. Now P = AD is irreducible. By the definition of irreducible, P is not properly contained in A or P is not properly contained in D. However P ⊊ A. Hence P is not properly contained in D. Thus P = D. Cancelling from P = AD we get A = (1) = OK, contradicting A ⊊ OK. □

25. Unique Factorization Proof—Summary So Far (again)
Recall the conclusion of our previous summary in Section 22: to complete the proof of the Unique Factorization Theorem for Ideals (Theorem 26) we needed to prove two results:
• The Cancellation Lemma;
• Irreducible ideals are prime.
In the previous section we proved both of these results under the assumption that the ring of integers of a number field has only finitely many ideal classes. Thus to complete the proof of the Unique Factorization Theorem for Ideals all we have to do is to prove the finiteness of ideal classes.

26. What are the prime ideals of OK?
In this section we take a well-earned rest from the proof of the Unique Factorization Theorem to ask ourselves, "What do the prime ideals of rings of integers of number fields look like?". We assume the Unique Factorization Theorem, the Cancellation Lemma, To Contain is to Divide, and Irreducible Ideals are Prime throughout this section. Later on we return to complete the proof of all of these once we have had our rest.

Lemma 26.1. Suppose P is a non-zero prime ideal of OK . Then P ∩ Z = pZ for some prime number p. Moreover, P divides pOK (the principal ideal of OK generated by p).

Proof. It is easy to see that P ∩ Z is an ideal of Z. Moreover, we proved previously that for any non-zero ideal A we have A ∩ Z ≠ 0. Hence P ∩ Z is a non-zero ideal of Z. We recall that Z is a principal ideal domain. So P ∩ Z = pZ for some non-zero integer p, which we may take to be positive. We want to show that p is prime. Suppose p | ab where a, b are positive integers. Then ab ∈ pZ ⊆ P ∩ Z ⊆ P. As P is prime, we see that a ∈ P or b ∈ P. But a, b ∈ Z. Hence a ∈ P ∩ Z = pZ or b ∈ P ∩ Z = pZ. In other words p | a or p | b. This shows that p is prime.
Thus we have shown P ∩ Z = pZ for some prime p. Now p ∈ P. Hence pOK ⊆ P. By "to contain is to divide" we see that P | pOK. □

We see from the above lemma that any non-zero prime ideal of OK divides pOK for some prime number p. To get the prime ideals of OK all we have to do is factorize pOK as a product of prime ideals. The following theorem tells us how, provided OK = Z[α] for some α ∈ OK .

Theorem 28. Suppose that OK = Z[α] where α has minimal polynomial fα(X) ∈ Z[X] of degree n. Suppose that

(10)   fα(X) ≡ ∏_i gi(X)^{ei} (mod p),
where each gi is a monic polynomial in Z[X] which is irreducible modulo p. Write Pi = (p, gi(α)). Then each ideal Pi is prime and

pOK = ∏_i Pi^{ei}.

Example 26.1. Let us take K = Q(i) and see the factorization of some small primes in OK (the Gaussian integers). We know OK = Z[i], and i has minimal polynomial f(X) = X^2 + 1. Let us factorize the ideals 2OK, 3OK, 5OK in OK. We note
X^2 + 1 ≡ (X + 1)^2 (mod 2).
Hence 2OK = P^2 where P = (2, 1 + i) is prime. But we remember that Z[i] is a principal ideal domain. So we should be able to write P as a principal ideal. Now notice that 2 = (1 + i)(1 − i). Hence 2 ∈ (1 + i)OK. Thus

(1 + i)OK ⊆ (2, 1 + i) ⊆ (1 + i)OK.
Hence P = (1 + i)OK and 2OK = P^2.
To factorize 3, we note that X^2 + 1 is irreducible modulo 3. Hence 3OK = (3, i^2 + 1) = (3) is a prime ideal.
To factorize 5, note that
X^2 + 1 ≡ X^2 − 4 ≡ (X + 2)(X − 2) (mod 5).

Hence 5OK = PQ where P = (5, 2+i) and Q = (5, 2−i). However 5 = (2+i)(2−i) so P = (2 + i)OK and Q = (2 − i)OK .
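Theorem 28 makes examples like the one above completely mechanical. The Python sketch below (my own illustration for K = Q(i); the wording of the output is mine) factors X^2 + 1 modulo a few primes p and reports the resulting shape of pOK, reproducing the computations for 2, 3 and 5.

    def factor_p_in_gaussian_integers(p):
        """Factor p*O_K in Z[i] by factoring X^2 + 1 modulo p, as in Theorem 28."""
        roots = [r for r in range(p) if (r * r + 1) % p == 0]
        if p == 2:
            return "2*O_K = (2, 1 + i)^2, since X^2 + 1 = (X + 1)^2 mod 2"
        if roots:
            r = roots[0]
            return f"{p}*O_K = ({p}, i - {r})({p}, i + {r}), since X^2 + 1 = (X - {r})(X + {r}) mod {p}"
        return f"{p}*O_K = ({p}) is prime, since X^2 + 1 is irreducible mod {p}"

    for p in [2, 3, 5, 7, 13]:
        print(factor_p_in_gaussian_integers(p))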

Samir Siksek, Department of Mathematics, University of Warwick, Coventry, CV4 7AL, United Kingdom E-mail address: [email protected]