Affine Varieties

By

Asash Yohannes

Advisor: Dr. Dawit Cherinet

Department of Mathematics
Arbaminch University
OCT. 2018

A THESIS ON AFFINE VARIETIES

By

ASASH YOHANNES

A THESIS SUBMITTED TO THE DEPARTMENT OF MATHEMATICS, COLLEGE OF NATURAL SCIENCE, SCHOOL OF GRADUATE STUDIES, ARBAMINCH UNIVERSITY

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN MATHEMATICS WITH SPECIALIZATION IN ALGEBRA.

OCT. 2018
ARBAMINCH, ETHIOPIA

Acknowledgment

Next to GOD, I would like to thank my advisor Dr. Dawit Cherinet for all his help and support during the work of my thesis. I would also like to thank everyone who read this thesis and gave constructive and helpful comments.

Declaration

I, Asash Yohannes, with student number SMSc/118/07, hereby declare that this thesis has not previously been submitted for assessment or completion of any postgraduate qualification to another university or for another qualification.

Date Asash Yohannes

Certificate

I hereby certify that I have read this thesis prepared by Asash Yohannes under my supervision and recommend that it be accepted as fulfilling the thesis requirement.

Date Dr. Dawit Cherinet

Examiners' Thesis Approval Sheet

As members of the Board of Examiners of the final MSc. Graduate Thesis open defense examination, we certify that we have read and evaluated this Graduate Thesis prepared by ASASH YOHANNES entitled "Affine Varieties" and recommend that it be accepted as fulfilling the thesis requirement for the Degree of Master of Science in Mathematics (Algebra).

Name of Chairperson Signature Date

Name of Principal Advisor Signature Date

Name of External Examiner Signature Date

Name of Internal Examiner Signature Date

SGS Approval Signature Date

Name Of The Department Head Signature Date

Final approval and acceptance of the thesis is contingent upon submission of the final copy of the Graduate Thesis to the Council of Graduate Studies (CGS) through the Department Graduate Committee (DGC) of the candidate's department.

Contents

1 Introduction
  1.1 Statement of the problem
  1.2 Objectives of the study
    1.2.1 General objectives of the study
    1.2.2 Specific objectives of the study

2 Gröbner Bases
  2.1 Polynomials in One Variable
  2.2 Polynomials in Several Variables Over a Field
    2.2.1 Ideals in K[x_1, . . . , x_n]
    2.2.2 Monomial Ordering in K[x_1, . . . , x_n]
    2.2.3 Division Algorithm in K[x_1, . . . , x_n]
  2.3 Monomial Ideals and Dickson's Lemma
  2.4 Hilbert Basis Theorem and Gröbner Bases

3 Primary Decomposition of Ideals in Noetherian Rings
  3.1 Prime, Maximal, Radical & Primary Ideals
  3.2 Primary Decomposition

4 Affine Varieties
  4.1 Affine Spaces
  4.2 Affine Varieties
    4.2.1 Properties of Affine Varieties
  4.3 Parametrization of Affine Varieties
  4.4 Ideals and Affine Varieties
    4.4.1 Hilbert's Nullstellensatz
    4.4.2 Radical Ideals and the Ideal-Variety Correspondence
    4.4.3 Sums, Products and Intersections of Ideals
    4.4.4 Zariski Closure and Quotient Ideals
  4.5 Irreducible Varieties
  4.6 Decomposition of Affine Varieties into Irreducibles

Chapter 1

Introduction

Algebraic geometry is a combination of linear algebra and abstract algebra. Linear algebra studies the solutions of systems of linear equations, whereas abstract algebra deals with polynomial equations in one variable. Algebraic geometry combines these two fields of mathematics by studying the solutions of systems of polynomial equations in several variables, f_i(x_1, . . . , x_n) = 0, i = 1, . . . , n, where f_i ∈ K[x_1, . . . , x_n] and K is a field.

One difference between linear equations and polynomial equations is that theorems about linear equations do not depend on the field K, whereas theorems about polynomial equations depend on whether K is algebraically closed and, to a lesser extent, on whether K has characteristic zero. The solution set of a system of polynomial equations, the basic object that algebraic geometry deals with, is called an algebraic variety (or affine variety).

1.1 Statement of the problem

In this thesis we establish the correspondence between affine varieties in K^n and ideals in K[x_1, . . . , x_n], where K is a field.


1.2 Objectives of the study

1.2.1 General objectives of the study

The general objectives of the study are:

• To determine the properties of affine varieties.

• To characterize affine varieties.

• To decompose affine varieties into irreducible varieties.

1.2.2 Specific objectives of the study

The specific objectives of the study are:

• To study the relationship between ideals and affine varieties.

• To see the correspondence between prime ideals and irreducible varieties.

Chapter 2

Gröbner Bases

2.1 Polynomials in One Variable

Consider the polynomial ring K[x] in x over a field K, which is the set of expressions of the form

K[x] = {f(x) = a_0 + a_1x + ··· + a_mx^m},

where a_0, . . . , a_m ∈ K are called the coefficients of x^i, i = 0, . . . , m. When a_m ≠ 0, a_m is called the leading coefficient of f, a_mx^m is called the leading term of f, denoted LT(f), and m is called the degree of f, denoted deg(f).

We say a polynomial f ∈ K[x] is the zero polynomial if a_0 = a_1 = ··· = a_m = 0.

♣. We also introduce the following notation for polynomials in K[x]. The word polynomial refers to an expression of the form

a_0 + a_1x + a_2x^2 + ··· + a_nx^n.

This expression corresponds to the sequence (a_0, a_1, . . . , a_n, 0, 0, . . .) of its coefficients. As with any function, a sequence a is determined by its values; for each i ∈ N, we write a(i) = a_i ∈ R, so that

a = (a_0, a_1, a_2, . . . , a_n, 0, 0, . . .).

The entries a_i ∈ R are called the coefficients of the sequence. The term coefficient means "acting together to some single end". Here, coefficients combine with powers of x to give the terms of the sequence. The sequence (0, 0, . . . , 0, 0, . . .) is the polynomial called the zero polynomial. The element x ∈ K[x] is given by

x = (0, 1, 0,...).

Thus, we have

x^2 = x · x = (0, 1, 0, . . .)(0, 1, 0, . . .) = (0, 0, 1, 0, . . .).

Similarly,

x^3 = (0, 0, 0, 1, 0, . . .), x^4 = (0, 0, 0, 0, 1, 0, . . .), and so on.

Thus, we have

(a_0, a_1, a_2, . . . , a_n, 0, 0, . . .) = (a_0, 0, . . .) + (0, a_1, 0, . . .) + ··· + (0, 0, . . . , a_n, 0, . . .)
= a_0(1, 0, . . .) + a_1(0, 1, 0, . . .) + ··· + a_n(0, 0, . . . , 1, 0, . . .)
= a_0 + a_1x + a_2x^2 + ··· + a_nx^n.
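The identification of a polynomial with its coefficient sequence also makes the ring multiplication concrete: the k-th coefficient of a product is the convolution sum over r + s = k. The following minimal Python sketch (the function name is mine, purely illustrative) reproduces the computation x · x = (0, 0, 1, 0, . . .).

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists [a0, a1, ...]."""
    result = [0] * (len(a) + len(b) - 1)
    for r, ar in enumerate(a):
        for s, bs in enumerate(b):
            result[r + s] += ar * bs   # convolution: sum over r + s = k
    return result

x = [0, 1]                  # the sequence (0, 1, 0, ...)
print(poly_mul(x, x))       # [0, 0, 1], i.e. x^2 = (0, 0, 1, 0, ...)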

Definition 2.1.1. Let X be a non-empty subset of a ring R, and let {A_i : i ∈ I} be the family of all ideals in R containing X. Then

⋂_{i∈I} A_i

is called the ideal generated by X, denoted ⟨X⟩.

If X = {x_1, . . . , x_n}, then ⟨X⟩ = ⟨x_1, . . . , x_n⟩; in this case we say the ideal is "finitely generated". If X is a singleton X = {x}, then the ideal is generated by a single element; such an ideal is called a "principal ideal".

Theorem 2.1.2. Let R be a ring. Then R is an integral domain if and only if R[x] is an integral domain.


Proof. (⇒) Let R be an integral domain; then R is a non-trivial ring with unity.

Let a = (a_0, a_1, . . .), b = (b_0, b_1, . . .) ∈ R[x] be non-zero.

Since a, b ≠ 0, there exist n, m ∈ Z_{≥0} such that a_n ≠ 0, b_m ≠ 0, a_i = 0 for i > n, and b_j = 0 for j > m. Now consider the (m + n)th coefficient of the product a·b. It is given by

∑_{r+s=m+n} a_r b_s = a_n b_m ≠ 0,

since for r + s = m + n with (r, s) ≠ (n, m) we have either r > n (so a_r = 0) or s > m (so b_s = 0), and a_n b_m ≠ 0 because R is an integral domain. Therefore the (m + n)th coefficient of a·b is non-zero, and hence a·b ≠ 0. Thus R[x] contains no zero divisors, and R[x] is an integral domain.

(⇐) Conversely, let R[x] be an integral domain. Since R is isomorphic to a subring of R[x], it follows that R is an integral domain.

Theorem 2.1.3. Let K be a field, then K[x] is an integral domain in which every ideal is principal ideal.

Proof. By the above theorem K[x] is an integral domain, since K is. Let I ⊆ K[x] be an ideal. If I = {0}, then there is nothing to prove, since I = ⟨0⟩ = {0}. Suppose I ≠ {0}; then I contains at least one non-zero polynomial. Consider the set S = {deg(f(x)) : 0 ≠ f(x) ∈ I}.

Since S is a non-empty set of non-negative integers, S has a least element (by the well-ordering axiom). Let n ∈ Z_{≥0} be the least element of S; then n = deg(f) for some f(x) ∈ I. Since f(x) ∈ I, we have ⟨f(x)⟩ ⊆ I and n ≤ deg(g(x)) for all 0 ≠ g(x) ∈ I. We shall prove that I is generated by f(x).


Now suppose g(x) ∈ I. By the division algorithm there exist q(x), r(x) ∈ K[x] such that

g(x) = q(x)·f(x) + r(x), where r(x) = 0 or deg(r(x)) < deg(f(x)) = n.

Now r(x) = g(x) − q(x)·f(x) ∈ I (since g(x), f(x) ∈ I and I is an ideal). By the minimality of n, it follows that r(x) = 0, and hence g(x) = q(x)·f(x) ∈ ⟨f(x)⟩. Thus I ⊆ ⟨f(x)⟩. Therefore I = ⟨f(x)⟩.

♣. Note that: an integral domain D in which every ideal I is a principal ideal is called a "principal ideal domain". From the above theorem it follows that K[x] is a principal ideal domain, and the generator of an ideal I in K[x] is a non-zero polynomial of smallest degree contained in I. If I = ⟨f⟩ is an ideal in K[x], then a polynomial g lies in I exactly when the remainder is zero in the division of g by f.

Definition 2.1.4. A greatest common divisor of polynomials f, g ∈ K[x] is a polynomial h ∈ K[x] such that

1. h divides f and g.

2. If p ∈ K[x] divides f and g, then p divides h.

When h has this property, we write GCD(f, g) = h.

Example 2.1.5. GCD(x^6 − 1, x^4 − 1) = x^2 − 1.

Proposition 2.1.6. Let f, g ∈ K[x]. Then

1. GCD(f, g) exists and is unique up to multiplication by a non-zero constant k.

2. GCD(f, g) is the generator of the ideal hf, gi.

Proof. Consider the ideal ⟨f, g⟩. Since every ideal of K[x] is principal, there exists h ∈ K[x] such that ⟨f, g⟩ = ⟨h⟩.
Claim: h = GCD(f, g). To see this, first note that h divides f and g, since f, g ∈ ⟨h⟩.


Next, suppose that p ∈ K[x] divides f and g; then f = Cp and g = Dp for some C, D ∈ K[x]. Since h ∈ ⟨f, g⟩, there exist A, B ∈ K[x] such that Af + Bg = h, so

h = Af + Bg = ACp + BDp = p(AC + BD).

This shows that p divides h. Thus h = GCD(f, g), which proves the existence of GCD(f, g). To prove uniqueness, suppose that h′ is another GCD(f, g). Then h and h′ divide each other, so h is a non-zero constant multiple of h′. Thus part (1) of the proposition is proved. Part (2) follows from the way we found h in the above paragraph.

♣. Note that: we use the Euclidean division algorithm to find the GCD of polynomials in K[x].

Example 2.1.7. Since GCD(x^6 − 1, x^4 − 1) = x^2 − 1, we have

⟨x^6 − 1, x^4 − 1⟩ = ⟨x^2 − 1⟩.
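The Euclidean algorithm mentioned in the note repeatedly replaces the pair (f, g) by (g, remainder of f by g) until the remainder vanishes. A minimal sketch using SymPy's rem and LC (my illustration, not part of the thesis) reproduces Example 2.1.7:

from sympy import symbols, rem, LC, expand

x = symbols('x')

def poly_gcd(f, g):
    """Euclidean algorithm in K[x]: the GCD is the last non-zero remainder."""
    while g != 0:
        f, g = g, rem(f, g, x)    # division algorithm: f = q*g + r
    return expand(f / LC(f, x))   # normalize to a monic representative

print(poly_gcd(x**6 - 1, x**4 - 1))   # x**2 - 1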

Proposition 2.1.8. Let f1, . . . , fs ∈ K[x] for s ≥ 2, then

1. GCD(f1, . . . , fs) exists and is unique up to multiplication by a non-zero constant k.

2. GCD(f1, . . . , fs) is the generator of the ideal hf1, . . . , fsi.

3. GCD(f1, . . . , fs) = GCD(f1, GCD(f2, . . . , fs)).

Proof. Consider the ideal ⟨f_1, . . . , f_s⟩. Since every ideal of K[x] is principal, there exists h ∈ K[x] such that ⟨h⟩ = ⟨f_1, . . . , f_s⟩.

Claim: h = GCD(f_1, . . . , f_s).

To see this, first note that h divides f_1, . . . , f_s, since f_1, . . . , f_s ∈ ⟨h⟩.

Next, suppose p ∈ K[x] divides f_1, . . . , f_s, so that f_1 = C_1p, f_2 = C_2p, . . . , f_s = C_sp for some C_1, . . . , C_s ∈ K[x].


Since h ∈ ⟨f_1, . . . , f_s⟩, there are A_1, A_2, . . . , A_s ∈ K[x] such that

h = A_1f_1 + A_2f_2 + ··· + A_sf_s = A_1C_1p + A_2C_2p + ··· + A_sC_sp = p(A_1C_1 + A_2C_2 + ··· + A_sC_s),

so p divides h. Thus GCD(f_1, . . . , f_s) = h, which proves the existence of the GCD.
To prove uniqueness, suppose that h′ is another GCD of f_1, . . . , f_s. Then h and h′ divide each other, so h is a non-zero constant multiple of h′. Thus part (1) of the proposition is proved, and part (2) follows from the way we found h in the above paragraph.

To prove part (3), let h = GCD(f_2, . . . , f_s). By part (2) of this proposition,

⟨f_1, h⟩ = ⟨f_1, . . . , f_s⟩,

and hence ⟨GCD(f_1, h)⟩ = ⟨GCD(f_1, . . . , f_s)⟩.

Then GCD(f_1, h) = GCD(f_1, . . . , f_s) follows from the uniqueness of the GCD.

2.2 Polynomials in Several Variables Over a Field

In this section we study polynomials and their ideals in several variables.

Note that: if K is a field, we denote by K[x_1, . . . , x_n] the ring of polynomials in the n variables x_1, . . . , x_n with coefficients in K.

Definition 2.2.1. A monomial in x_1, . . . , x_n is a product of the form

x_1^{α_1} · x_2^{α_2} ··· x_n^{α_n},

where α_1, . . . , α_n are non-negative integers. The total degree of the monomial is the sum α_1 + ··· + α_n. To simplify this notation, let α = (α_1, . . . , α_n) ∈ Z^n_{≥0}; then

x^α = x_1^{α_1} ··· x_n^{α_n},

and |α| = α_1 + ··· + α_n is the total degree of the monomial.

Example 2.2.2. Let f = 2x^2y^3z ∈ K[x, y, z]; then we have

α = (2, 3, 1) and |α| = 6.


Definition 2.2.3. A polynomial f in x_1, . . . , x_n with coefficients in a field K is a finite linear combination of monomials with coefficients in K, of the form

f = ∑_α a_α x^α,  a_α ∈ K.

We have the following notation:
i. a_α is the coefficient of the monomial x^α.
ii. If a_α ≠ 0, then a_α x^α is a term of f.
iii. The total degree of f is the maximum |α| such that a_α ≠ 0, denoted deg(f).

2.2.1 Ideals in K[x1, . . . , xn]

Definition 2.2.4. Let f_1, . . . , f_s ∈ K[x_1, . . . , x_n]. Then we set

⟨f_1, . . . , f_s⟩ = { ∑_{i=1}^{s} h_if_i : h_1, . . . , h_s ∈ K[x_1, . . . , x_n] }.

♣. The crucial fact is that ⟨f_1, . . . , f_s⟩ is an ideal of K[x_1, . . . , x_n].

Note that:
i. We call ⟨f_1, . . . , f_s⟩ the ideal generated by f_1, . . . , f_s.
ii. The polynomials f_1, . . . , f_s are called "the generators" or "a basis" of the ideal I = ⟨f_1, . . . , f_s⟩.

2.2.2 Monomial Ordering in K[x1, ··· , xn]

Definition 2.2.5. Let (X, ≤) be a partially ordered set (poset). We say X is a "totally (or linearly) ordered set" if for all a, b ∈ X, either a ≤ b or b ≤ a. A totally ordered set is also called a "chain". [3]

Definition 2.2.6. A monomial ordering on K[x_1, . . . , x_n] is any relation ">" on Z^n_{≥0} satisfying:

1. ">" is a total (or linear) ordering on Z^n_{≥0}.

2. If α > β and γ ∈ Z^n_{≥0}, then α + γ > β + γ.

3. ">" is a well-ordering on Z^n_{≥0} (i.e. every non-empty subset of Z^n_{≥0} has a smallest element under ">"). [1]


Example 2.2.7. Lexicographic ordering, graded lexicographic ordering and graded reverse lexicographic ordering are examples of monomial orderings.

Definition 2.2.8. [Lexicographic order]
Let α = (α_1, . . . , α_n) and β = (β_1, . . . , β_n) ∈ Z^n_{≥0}. We say α >_lex β if, in the vector difference α − β, the left-most non-zero entry is positive. We write x^α >_lex x^β if α >_lex β. [1]

Example 2.2.9.
1. xy^2 >_lex y^3z^4, since α = (1, 2, 0), β = (0, 3, 4) and α − β = (1, −1, −4).
2. x^3y^2z^4 >_lex x^3y^2z, since α − β = (0, 0, 3).

Definition 2.2.10. [Graded lexicographic order]
Let α, β ∈ Z^n_{≥0}. We say α >_grlex β if

|α| = ∑_{i=1}^{n} α_i > |β| = ∑_{i=1}^{n} β_i,

or |α| = |β| and α >_lex β. We write x^α >_grlex x^β if α >_grlex β. [1]

Example 2.2.11.
1. xy^2z^3 >_grlex x^3y^2, since |(1, 2, 3)| = 6 > |(3, 2, 0)| = 5.
2. xy^2z^4 >_grlex xyz^5, since |α| = |β| = 7 and α >_lex β.

Definition 2.2.12. [Graded reverse lexicographic order]
Let α, β ∈ Z^n_{≥0}. We say α >_grevlex β if |α| > |β|, or |α| = |β| and the right-most non-zero entry of α − β is negative. We write x^α >_grevlex x^β if α >_grevlex β. [1]

Example 2.2.13.
1. x^4y^4z^7 >_grevlex x^5y^5z^4 (total degree greater).
2. xy^5z^2 >_grevlex x^4yz^3, since |α| = 8 = |β| and α − β = (−3, 4, −1).

Definition 2.2.14. Let

f = ∑_α a_α x^α

be a non-zero polynomial in K[x_1, . . . , x_n], and let > be a monomial ordering. Then

1. The multidegree of f is multideg(f) = max{α ∈ Z^n_{≥0} : a_α ≠ 0}, the maximum being taken with respect to >.

2. The leading coefficient of f is LC(f) = a_{multideg(f)} ∈ K.

3. The leading monomial of f is LM(f) = x^{multideg(f)} (with coefficient 1).

4. The leading term of f is LT(f) = LC(f)·LM(f). [1]

Example 2.2.15. Let f = 4xy^2 + 4z^2 − 5x^3 + 7x^2z^2 with the lex order. Then we have

multideg(f) = (3, 0, 0), LC(f) = −5, LM(f) = x^3, LT(f) = −5x^3.
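The three orderings above, together with the data multideg, LC, LM and LT of Definition 2.2.14, can be made computational. Below is a small pure-Python sketch (the dictionary representation and the names are my own choices, not from the thesis) that encodes each ordering as a sort key on exponent vectors and reproduces Example 2.2.15.

def lex_key(a):
    return a                                   # compare exponent vectors left to right

def grlex_key(a):
    return (sum(a), a)                         # total degree first, then lex

def grevlex_key(a):
    # total degree first; ties broken by the right-most non-zero entry of the
    # difference being negative, encoded by negating the reversed exponent vector
    return (sum(a), tuple(-e for e in reversed(a)))

def multideg(f, key):
    return max(f, key=key)                     # f: dict {exponent tuple: coefficient}

def leading_term(f, key):
    a = multideg(f, key)
    return a, f[a]                             # (exponent vector, coefficient)

# Example 2.2.15: f = 4xy^2 + 4z^2 - 5x^3 + 7x^2z^2 with variables (x, y, z)
f = {(1, 2, 0): 4, (0, 0, 2): 4, (3, 0, 0): -5, (2, 0, 2): 7}
print(multideg(f, lex_key))          # (3, 0, 0)
print(leading_term(f, lex_key))      # ((3, 0, 0), -5), i.e. LT(f) = -5x^3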

2.2.3 Division Algorithm in K[x1, . . . , xn]

Fix a monomial ordering on K[x1, . . . , xn], and suppose g1, . . . , gm is a set of nonzero polynomials in K[x1, . . . , xn]. If f is any polynomial in K[x1, . . . , xn], start with a set of quotients q1, . . . , qm and a remainder r initially all equal to 0 and successively test whether the leading term of the dividend f is divisible by the leading terms of the divisors g1, . . . , gm, in that order. Then i. If LT (f) is divisible by LT (gi), say, LT (f) = aiLT (gi), add ai to the quotient qi, replace f by the dividend f − aigi (a polynomial with lower order leading term), and reiterate the entire process. ii. If the leading term of the dividend f is not divisible by any of the leading terms LT (g1), . . . , LT (gm), add the leading term of f to the remainder r, replace f by the dividend f − LT (f) (i.e., remove the leading term of f), and reiterate the entire process. The process terminates when the dividend is 0 and results in a set of quotients q1, . . . , qm and a remainder r with

f = q1g1 + ... + qmgm + r.

Each qigi has multidegree less than or equal to the multidegree of f and the remainder r has the property that no nonzero term in r is divisible by any of the leading terms LT (g1), . . . , LT (gm) (since only terms with this property are added to r in (ii)).[2]

Theorem 2.2.16. [Division algorithm in K[x_1, . . . , x_n]]
Fix a monomial ordering > on Z^n_{≥0}, and let F = (g_1, . . . , g_t) be an ordered t-tuple of polynomials in K[x_1, . . . , x_n].

Then every f ∈ K[x1, . . . , xn] can be written in the form

f = a1g1 + ... + atgt + r


where a_i, r ∈ K[x_1, . . . , x_n], and either r = 0 or r is a linear combination, with coefficients in K, of monomials none of which is divisible by any of LT(g_1), . . . , LT(g_t).

We call r the remainder on the division of f by F. Furthermore, if a_ig_i ≠ 0, then multideg(f) ≥ multideg(a_ig_i). [1]

Example 2.2.17.
1. Let f = x^2y + xy^2 + y^2, g_1 = xy − 1, g_2 = y^2 − 1 with the lex order. Dividing f by F = (g_1, g_2), we get

x^2y + xy^2 + y^2 = (x + y)(xy − 1) + 1·(y^2 − 1) + (x + y + 1).

2. If F = (g_2, g_1), then we have

x^2y + xy^2 + y^2 = (x + 1)(y^2 − 1) + x(xy − 1) + (2x + 1),

showing that the remainder is not uniquely determined.

Note that: if r = 0, then f ∈ ⟨g_1, . . . , g_t⟩.

Example 2.2.18.
a. Let f_1 = xy + 1, f_2 = y^2 − 1 ∈ K[x, y] with the lex order. Dividing f = xy^2 − x by F = (f_1, f_2), we have

xy^2 − x = y(xy + 1) + 0·(y^2 − 1) + (−x − y).   (1)

b. Dividing the same polynomial by F = (f_2, f_1), we get

xy^2 − x = x(y^2 − 1) + 0·(xy + 1) + 0.   (2)

Equation (2) shows that f ∈ ⟨f_1, f_2⟩, even though the division in (1) produced a non-zero remainder. Thus we must conclude that the division algorithm given in Theorem 2.2.16 is an imperfect generalization of its one-variable counterpart. Our aim is therefore to find "a good generating set for an ideal in K[x_1, . . . , x_n]".
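A minimal pure-Python sketch of the division algorithm described in Section 2.2.3 follows (the representation and names are mine; over Q one would use exact arithmetic such as fractions.Fraction instead of floats). It is checked against Example 2.2.18(a), using the lex order with x > y.

def lex_key(a):
    return a

def divides(a, b):
    """Does x^a divide x^b?"""
    return all(ai <= bi for ai, bi in zip(a, b))

def sub_term(f, coeff, mono, g):
    """Return f - coeff * x^mono * g, dropping zero coefficients."""
    out = dict(f)
    for beta, c in g.items():
        gamma = tuple(m + b for m, b in zip(mono, beta))
        out[gamma] = out.get(gamma, 0) - coeff * c
        if out[gamma] == 0:
            del out[gamma]
    return out

def divide(f, divisors, key=lex_key):
    """Divide f by an ordered list of divisors; return (quotients, remainder)."""
    quotients = [dict() for _ in divisors]
    remainder = dict()
    p = dict(f)
    while p:
        alpha = max(p, key=key)              # leading monomial of the dividend
        c = p[alpha]
        for i, g in enumerate(divisors):
            beta = max(g, key=key)
            if divides(beta, alpha):         # LT(g_i) divides LT(p)
                mono = tuple(a - b for a, b in zip(alpha, beta))
                coeff = c / g[beta]
                quotients[i][mono] = quotients[i].get(mono, 0) + coeff
                p = sub_term(p, coeff, mono, g)
                break
        else:                                # no leading term divides: move LT(p) to r
            remainder[alpha] = remainder.get(alpha, 0) + c
            del p[alpha]
    return quotients, remainder

# Example 2.2.18(a): divide f = xy^2 - x by (xy + 1, y^2 - 1) in K[x, y], lex x > y
f  = {(1, 2): 1, (1, 0): -1}
f1 = {(1, 1): 1, (0, 0): 1}
f2 = {(0, 2): 1, (0, 0): -1}
q, r = divide(f, [f1, f2])
print(q)   # [{(0, 1): 1.0}, {}]            i.e. quotients y and 0
print(r)   # {(1, 0): -1, (0, 1): -1.0}     i.e. remainder -x - y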

2.3 Monomial Ideals and Dickson’s Lemma

Definition 2.3.1. An ideal I ⊆ K[x_1, . . . , x_n] is a monomial ideal if there exists a subset A ⊆ Z^n_{≥0} (possibly infinite) such that I consists of all polynomials of the form

∑_{α∈A} h_α x^α,  h_α ∈ K[x_1, . . . , x_n].

In this case we write I = ⟨x^α | α ∈ A⟩ [1]; i.e. I is an ideal generated by monomials. [4]

Example 2.3.2. I = ⟨x^4y^2, x^3y^4, x^2y^5⟩ is a monomial ideal.

Lemma 2.3.3. Let I = ⟨x^α : α ∈ A⟩ be a monomial ideal. Then a monomial x^β lies in I if and only if x^β is divisible by x^α for some α ∈ A.

Proof. If x^β is a multiple of x^α for some α ∈ A, then x^β ∈ I by the definition of an ideal. Conversely, if x^β ∈ I, then

x^β = ∑_{i=1}^{s} h_i x^{α(i)},

where h_i ∈ K[x_1, . . . , x_n] and α(i) ∈ A.

If we expand each h_i as a linear combination of monomials, we see that every term on the right-hand side of the equation is divisible by some x^{α(i)}. Hence the left-hand side x^β must have the same property.
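Lemma 2.3.3 reduces membership of a monomial in a monomial ideal to a componentwise comparison of exponent vectors; a two-line check suffices (a tiny illustrative Python sketch, tested on Example 2.3.2):

def divides(alpha, beta):
    """x^alpha divides x^beta iff alpha_i <= beta_i for every i."""
    return all(a <= b for a, b in zip(alpha, beta))

# Example 2.3.2: I = <x^4 y^2, x^3 y^4, x^2 y^5>
generators = [(4, 2), (3, 4), (2, 5)]
print(any(divides(g, (5, 3)) for g in generators))   # True:  x^5 y^3 = x*y * x^4 y^2 is in I
print(any(divides(g, (3, 3)) for g in generators))   # False: x^3 y^3 is not in I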

Lemma 2.3.4. Let I be a monomial ideal and f ∈ K[x1, . . . , xn], then the following statements are equivalent

1. f ∈ I.

2. Every term of f lies in I.

3. f is a K-linear combination of monomials in I.

Theorem 2.3.5. [Dickson's Lemma] Let I = ⟨x^α : α ∈ A⟩ be a monomial ideal. Then I can be written in the form

I = ⟨x^{α(1)}, . . . , x^{α(s)}⟩,

where α(1), . . . , α(s) ∈ A. In particular, I has a finite generating set.

Proof. (By induction on n, the number of variables.)
If n = 1, then I is generated by the monomials x_1^α, where α ∈ A ⊆ Z_{≥0}. Let β be the smallest element of A ⊆ Z_{≥0}. Then β ≤ α for all α ∈ A, so that x_1^β divides all the other generators x_1^α. From here, I = ⟨x_1^β⟩ follows easily.


Now assume n > 1 and that the theorem is true for n − 1. We write the variables as x_1, . . . , x_{n−1}, y, so that monomials in K[x_1, . . . , x_{n−1}, y] can be written as x^α y^m, where α = (α_1, . . . , α_{n−1}) ∈ Z^{n−1}_{≥0} and m ∈ Z_{≥0}.

Suppose I ⊆ K[x_1, . . . , x_{n−1}, y] is a monomial ideal. To find generators for I, let J be the ideal in K[x_1, . . . , x_{n−1}] generated by the monomials x^α for which x^α y^m ∈ I for some m ≥ 0. Since J is a monomial ideal in K[x_1, . . . , x_{n−1}], our inductive hypothesis implies that finitely many of the x^α's generate J, say J = ⟨x^{α(1)}, . . . , x^{α(s)}⟩. The ideal J can be understood as the "projection" of I into K[x_1, . . . , x_{n−1}].

For each i = 1, . . . , s, the definition of J tells us that x^{α(i)} y^{m_i} ∈ I for some m_i ≥ 0. Let m be the largest of the m_i. Then, for each k between 0 and m − 1, consider the ideal J_k ⊆ K[x_1, . . . , x_{n−1}] generated by the monomials x^β such that x^β y^k ∈ I. One can think of J_k as the "slice" of I generated by monomials containing y exactly to the k-th power. Using our inductive hypothesis again, J_k has a finite generating set of monomials, say J_k = ⟨x^{α_k(1)}, . . . , x^{α_k(s_k)}⟩. We claim that I is generated by the monomials in the following list:

from J : x^{α(1)}y^m, . . . , x^{α(s)}y^m,
from J_0 : x^{α_0(1)}, . . . , x^{α_0(s_0)},
from J_1 : x^{α_1(1)}y, . . . , x^{α_1(s_1)}y,
. . .
from J_{m−1} : x^{α_{m−1}(1)}y^{m−1}, . . . , x^{α_{m−1}(s_{m−1})}y^{m−1}.

First note that every monomial in I is divisible by one on the list. To see why, let x^α y^p ∈ I. If p ≥ m, then x^α y^p is divisible by some x^{α(i)}y^m by the construction of J. On the other hand, if p ≤ m − 1, then x^α y^p is divisible by some x^{α_p(j)}y^p by the construction of J_p. It follows from Lemma 2.3.3 that the above monomials generate an ideal having the same monomials as I.

To complete the proof of the theorem, we need to show that the finite set of generators can be chosen from a given set of generators for the ideal. If we switch back to writing the variables as x_1, . . . , x_n, then our monomial ideal is I = ⟨x^α : α ∈ A⟩ ⊆ K[x_1, . . . , x_n]. We need to show that I is generated by finitely many of the x^α's, where α ∈ A. By the previous paragraph, we know that I = ⟨x^{β(1)}, . . . , x^{β(s)}⟩ for some monomials x^{β(i)} in I. Since x^{β(i)} ∈ I = ⟨x^α : α ∈ A⟩, Lemma 2.3.3 tells us that each x^{β(i)} is divisible by x^{α(i)} for some α(i) ∈ A. From here it is easy to show that I = ⟨x^{α(1)}, . . . , x^{α(s)}⟩. This completes the proof.

2.4 Hilbert Basis Theorem and Gröbner Bases

♣. Gröbner bases have good properties relative to the division algorithm for the ideal description problem. [1]

Definition 2.4.1. Let I ⊆ K[x_1, . . . , x_n] be a non-zero ideal.

1. We denote by LT(I) the set of leading terms of the elements of I. Thus, LT(I) = {cx^α : there exists f ∈ I with LT(f) = cx^α}.

2. We denote by ⟨LT(I)⟩ the ideal generated by LT(I).

Proposition 2.4.2. Let I ⊆ K[x_1, . . . , x_n] be a non-zero ideal. Then
i. ⟨LT(I)⟩ is a monomial ideal.
ii. There are g_1, . . . , g_t ∈ I such that

⟨LT(I)⟩ = ⟨LT(g_1), . . . , LT(g_t)⟩.

Proof. (i) The leading monomials LM(g) of elements g ∈ I − {0} generate the monomial ideal ⟨LM(g) : g ∈ I − {0}⟩. Since LM(g) and LT(g) differ by a non-zero constant, this ideal equals ⟨LT(g) : g ∈ I − {0}⟩ = ⟨LT(I)⟩. Thus ⟨LT(I)⟩ is a monomial ideal.
(ii) Since ⟨LT(I)⟩ is generated by the monomials LM(g) for g ∈ I − {0}, Dickson's Lemma gives

⟨LT(I)⟩ = ⟨LM(g_1), . . . , LM(g_t)⟩ for finitely many g_1, . . . , g_t ∈ I.

Since each LM(g_i) and LT(g_i) differ by a non-zero constant, it follows that

⟨LT(I)⟩ = ⟨LT(g_1), . . . , LT(g_t)⟩.

This completes the proof.

Theorem 2.4.3. [Hilbert Basis Theorem]

Every ideal in K[x_1, . . . , x_n] is finitely generated; i.e. if I ⊆ K[x_1, . . . , x_n] is an ideal, then

I = ⟨g_1, . . . , g_t⟩ for some g_1, . . . , g_t ∈ I.

Proof. If I = {0}, we take our generating set to be {0}.

If I contains some non-zero polynomials, then the generating set g1, . . . , gt for I

can be constructed as follows:

By Proposition 2.4.2, there are g_1, . . . , g_t ∈ I such that ⟨LT(I)⟩ = ⟨LT(g_1), . . . , LT(g_t)⟩.

Claim: I = ⟨g_1, . . . , g_t⟩.

Since each g_i ∈ I, we have ⟨g_1, . . . , g_t⟩ ⊆ I.

Now let f ∈ I be arbitrary. Then, dividing f by g_1, . . . , g_t with the division algorithm, we have

f = a_1g_1 + ··· + a_tg_t + r, where no term of r is divisible by any of LT(g_1), . . . , LT(g_t).

Claim: r = 0.

Indeed, r = f − a_1g_1 − a_2g_2 − ··· − a_tg_t ∈ I.

If r ≠ 0, then LT(r) ∈ ⟨LT(I)⟩ = ⟨LT(g_1), . . . , LT(g_t)⟩, and by Lemma 2.3.3, LT(r) must be divisible by some LT(g_i), which contradicts the definition of the remainder.

Consequently, r = 0. Thus f = a_1g_1 + ··· + a_tg_t + 0 ∈ ⟨g_1, . . . , g_t⟩, so I ⊆ ⟨g_1, . . . , g_t⟩.

Hence, I = ⟨g_1, . . . , g_t⟩. This completes the proof.

Definition 2.4.4. Fix a monomial order. A finite subset G = {g_1, . . . , g_t} of an ideal I is said to be a Gröbner basis (or standard basis) if

⟨LT(I)⟩ = ⟨LT(g_1), . . . , LT(g_t)⟩.

Equivalently, {g_1, . . . , g_t} ⊆ I is a Gröbner basis of I if the leading term of any element of I is divisible by one of the LT(g_i).

Corollary 2.4.5. Fix a monomial order. Then every non-zero ideal I ⊆ K[x_1, . . . , x_n] has a Gröbner basis. Furthermore, any Gröbner basis for an ideal I is a basis of I.

Proof. Given a non-zero ideal I, the set G = {g_1, . . . , g_t} constructed in the proof of Theorem 2.4.3 is a Gröbner basis by definition.

For the second claim, note that if ⟨LT(I)⟩ = ⟨LT(g_1), . . . , LT(g_t)⟩, then the argument given in Theorem 2.4.3 shows that I = ⟨g_1, . . . , g_t⟩, so that G is a basis for I.


Proposition 2.4.6. Let G = {g_1, . . . , g_t} be a Gröbner basis of an ideal I ⊆ K[x_1, . . . , x_n], and let f ∈ K[x_1, . . . , x_n]. Then there exists a unique r ∈ K[x_1, . . . , x_n] with the following two properties:

1. No term of r is divisible by any of LT(g_1), . . . , LT(g_t).
2. There is g ∈ I such that f = g + r.

In particular, r is the remainder on the division of f by G, no matter how the elements of G are listed when using the division algorithm.

Proof. The division algorithm gives f = a1g1 + ··· + atgt + r, where r satisfies (1).

We can also satisfy (2) by setting g = a_1g_1 + ··· + a_tg_t, so that f = g + r.
To show uniqueness, suppose f = g + r = g′ + r′ both satisfy (1) and (2). Then r − r′ = g′ − g ∈ I.
If r ≠ r′, then LT(r − r′) ∈ ⟨LT(I)⟩ = ⟨LT(g_1), . . . , LT(g_t)⟩, so LT(r − r′) is divisible by some LT(g_i). This is impossible, since no term of r or r′ is divisible by any of LT(g_1), . . . , LT(g_t). Hence r − r′ = 0, i.e. r = r′, and then g − g′ = r′ − r = 0, so g = g′. (Uniqueness is proved.)
The final part of the proposition follows from the uniqueness of r.

Corollary 2.4.7. Let G = {g_1, . . . , g_t} be a Gröbner basis of an ideal I ⊆ K[x_1, . . . , x_n], and let f ∈ K[x_1, . . . , x_n]. Then f ∈ I if and only if the remainder on division of f by G is zero.

Proof. (⇐) If the remainder is r = 0, then f = a_1g_1 + ··· + a_tg_t ∈ I.
(⇒) Conversely, let f ∈ I. Then f = f + 0 satisfies the two conditions of Proposition 2.4.6, and it follows that 0 is the remainder on the division of f by G.
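Corollary 2.4.7 turns ideal membership into a computation: divide f by a Gröbner basis and check whether the remainder vanishes. A hedged SymPy sketch (the functions groebner and reduced are from SymPy's polynomial module as I recall them; this illustration is not part of the thesis), using the polynomials of Example 2.2.18:

from sympy import symbols, groebner, reduced

x, y = symbols('x y')

f1, f2 = x*y + 1, y**2 - 1
G = groebner([f1, f2], x, y, order='lex')      # a (reduced) lex Groebner basis of <f1, f2>

f = x*y**2 - x                                 # the polynomial of Example 2.2.18
_, r = reduced(f, G.exprs, x, y, order='lex')  # remainder on division by G
print(r == 0)                                  # True: f lies in <f1, f2>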


♣. Note that: Corollary 2.4.7 is equivalent to the definition of a Gröbner basis.

Definition 2.4.8. Let f, g ∈ K[x_1, . . . , x_n] be non-zero polynomials. Then:

1. If multideg(f) = α and multideg(g) = β, let γ = (γ_1, . . . , γ_n), where γ_i = max(α_i, β_i) for each i. We call x^γ the least common multiple of LM(f) and LM(g), written x^γ = LCM(LM(f), LM(g)).

2. The S-polynomial of f and g, denoted S(f, g), is the combination

S(f, g) = (x^γ / LT(f))·f − (x^γ / LT(g))·g.

Example 2.4.9. Let f = x^3y^2 − x^2y^3 + x and g = 3x^4y + y^2 in K[x, y], with the grlex order. Then LT(f) = x^3y^2, LT(g) = 3x^4y, x^γ = LCM(LM(f), LM(g)) = x^4y^2, and

S(f, g) = (x^4y^2 / x^3y^2)(x^3y^2 − x^2y^3 + x) − (x^4y^2 / 3x^4y)(3x^4y + y^2)
        = −x^3y^3 + x^2 − (1/3)y^3.

Lemma 2.4.10. Suppose we have a sum ∑_{i=1}^{s} c_if_i, where c_i ∈ K and multideg(f_i) = δ ∈ Z^n_{≥0} for all i. If multideg(∑_{i=1}^{s} c_if_i) < δ, then ∑_{i=1}^{s} c_if_i is a linear combination, with coefficients in K, of the S-polynomials S(f_j, f_k) for 1 ≤ j, k ≤ s. Furthermore, each S(f_j, f_k) has multidegree < δ.

Proof. Let d_i = LC(f_i), so that c_id_i is the leading coefficient of c_if_i. Since the c_if_i all have multidegree δ and their sum has strictly smaller multidegree, it follows easily that ∑_{i=1}^{s} c_id_i = 0.

Define p_i = f_i/d_i, and note that p_i has leading coefficient 1. Consider the telescoping sum

∑_{i=1}^{s} c_if_i = ∑_{i=1}^{s} c_id_ip_i = c_1d_1(p_1 − p_2) + (c_1d_1 + c_2d_2)(p_2 − p_3) + ···
+ (c_1d_1 + ··· + c_{s−1}d_{s−1})(p_{s−1} − p_s) + (c_1d_1 + ··· + c_sd_s)p_s.

By assumption, LT(f_i) = d_ix^δ, which implies that the least common multiple of LM(f_j) and LM(f_k) is x^δ. Thus

S(f_j, f_k) = (x^δ/LT(f_j))f_j − (x^δ/LT(f_k))f_k = (x^δ/(d_jx^δ))f_j − (x^δ/(d_kx^δ))f_k = p_j − p_k.   (1)

Using this equation and ∑_{i=1}^{s} c_id_i = 0, the telescoping sum becomes

∑_{i=1}^{s} c_if_i = c_1d_1S(f_1, f_2) + (c_1d_1 + c_2d_2)S(f_2, f_3) + ··· + (c_1d_1 + ··· + c_{s−1}d_{s−1})S(f_{s−1}, f_s),

which is a sum of the desired form. Since p_j and p_k have multidegree δ and leading coefficient 1, the difference p_j − p_k has multidegree < δ. By equation (1), the same is true of S(f_j, f_k), and the lemma is proved.

Theorem 2.4.11. [Buchberger's Criterion]

Let I be a polynomial ideal. Then a basis G = {g_1, . . . , g_t} of I is a Gröbner basis if and only if for all i ≠ j, the remainder on the division of S(g_i, g_j) by G (listed in some order) is zero.

Proof. (⇒) If G is a Gröbner basis, then since S(g_i, g_j) ∈ I, the remainder on division by G is zero by Corollary 2.4.7.

(⇐) Conversely, let f ∈ I be a non-zero polynomial. We must show that if the S-polynomials all have zero remainders on division by G, then LT(f) ∈ ⟨LT(g_1), . . . , LT(g_t)⟩. Before giving the details, let us outline the strategy of the proof.

Given f ∈ I = ⟨g_1, . . . , g_t⟩, there are polynomials h_i ∈ K[x_1, . . . , x_n] such that

f = ∑_{i=1}^{t} h_ig_i.   (2)

It follows that

multideg(f) ≤ max_i(multideg(h_ig_i)).   (3)

If equality does not occur, then some cancellation must occur among the leading terms of (2). Lemma 2.4.10 enables us to rewrite this in terms of S-polynomials. Then our assumption that the S-polynomials have zero remainders will allow us to replace the S-polynomials by expressions that involve less cancellation. Thus we will get an expression for f that has less cancellation of leading terms. Continuing in this way, we will eventually find an expression (2) for f for which equality occurs in (3). Then multideg(f) = multideg(h_ig_i) for some i, and it will follow that LT(f) is divisible by LT(g_i). This will show that LT(f) ∈ ⟨LT(g_1), . . . , LT(g_t)⟩, which is what we want to prove.

We now give the details of the proof. Given an expression of the form (2), let δ = max_i(multideg(h_ig_i)).


For each such expression we may get a different δ. Since a monomial ordering is a well-ordering, we can select an expression (2) for f such that δ is minimal. We will show that once this minimal δ is chosen, we have multideg(f) = δ. Then equality occurs in (3) and, as we observed, it follows that LT(f) ∈ ⟨LT(g_1), . . . , LT(g_t)⟩. This will prove the theorem.

It remains to show multideg(f) = δ. We prove this by contradiction. Equality can fail only when multideg(f) < δ. Writing m(i) = multideg(h_ig_i), let us isolate the terms of multidegree δ and write f in the following form:

f = ∑_{m(i)=δ} h_ig_i + ∑_{m(i)<δ} h_ig_i
  = ∑_{m(i)=δ} LT(h_i)g_i + ∑_{m(i)=δ} (h_i − LT(h_i))g_i + ∑_{m(i)<δ} h_ig_i.   (4)

The monomials appearing in the second and third sums on the second line all have multidegree < δ.

Let LT(h_i) = c_ix^{α(i)}. Then the first sum ∑_{m(i)=δ} LT(h_i)g_i = ∑_{m(i)=δ} c_ix^{α(i)}g_i has exactly the form described in Lemma 2.4.10, with f_i = x^{α(i)}g_i. Thus Lemma 2.4.10 implies that this sum is a linear combination of the S-polynomials S(x^{α(j)}g_j, x^{α(k)}g_k). However,

S(x^{α(j)}g_j, x^{α(k)}g_k) = (x^δ/(x^{α(j)}LT(g_j)))·x^{α(j)}g_j − (x^δ/(x^{α(k)}LT(g_k)))·x^{α(k)}g_k = x^{δ−γ_{jk}}S(g_j, g_k),

where x^{γ_{jk}} = LCM(LM(g_j), LM(g_k)). Thus, there are constants c_{jk} ∈ K such that

∑_{m(i)=δ} LT(h_i)g_i = ∑_{j,k} c_{jk}x^{δ−γ_{jk}}S(g_j, g_k).   (5)

The next step is to use our hypothesis that the remainder of S(g_j, g_k) on division by g_1, . . . , g_t is zero. Using the division algorithm, this means that each S-polynomial can be written in the form

S(g_j, g_k) = ∑_{i=1}^{t} a_{ijk}g_i,   (6)


where a_{ijk} ∈ K[x_1, . . . , x_n]. The division algorithm also tells us that

multideg(a_{ijk}g_i) ≤ multideg(S(g_j, g_k))   (7)

for all i, j, k. Intuitively, this says that when the remainder is zero, we can find an expression for S(g_j, g_k) in terms of G in which the leading terms do not all cancel.

To exploit this, multiply the expression for S(g_j, g_k) by x^{δ−γ_{jk}} to obtain

x^{δ−γ_{jk}}S(g_j, g_k) = ∑_{i=1}^{t} b_{ijk}g_i,

where b_{ijk} = x^{δ−γ_{jk}}a_{ijk}. Then (7) and Lemma 2.4.10 imply that

multideg(b_{ijk}g_i) ≤ multideg(x^{δ−γ_{jk}}S(g_j, g_k)) < δ.   (8)

If we substitute the above expression for x^{δ−γ_{jk}}S(g_j, g_k) into (5), we get an equation

∑_{m(i)=δ} LT(h_i)g_i = ∑_{j,k} c_{jk}x^{δ−γ_{jk}}S(g_j, g_k) = ∑_{j,k} c_{jk}(∑_i b_{ijk}g_i) = ∑_i h̃_ig_i,

which by (8) has the property that, for all i,

multideg(h̃_ig_i) < δ.

For the final step in the proof, substitute ∑_{m(i)=δ} LT(h_i)g_i = ∑_i h̃_ig_i into equation (4) to obtain an expression for f as a polynomial combination of the g_i's in which all the terms have multidegree < δ. This contradicts the minimality of δ and completes the proof of the theorem.

Example 2.4.12. Consider I = ⟨y − x^2, z − x^3⟩. The set G = {y − x^2, z − x^3} is a Gröbner basis with respect to the lex order y > x > z. Indeed, writing f = y − x^2 and g = z − x^3,

S(f, g) = (x^γ/LT(f))·f − (x^γ/LT(g))·g = (yz/y)(y − x^2) − (yz/z)(z − x^3) = −zx^2 + yx^3.

Now, dividing S(f, g) by f = y − x^2, g = z − x^3 we have

−zx^2 + yx^3 = (x^3)(y − x^2) + (−x^2)(z − x^3) + 0,

so the remainder is zero and Buchberger's criterion applies.
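The S-polynomial computation and remainder test of Example 2.4.12 can be repeated mechanically. Below is a hedged SymPy sketch (LT, LM, lcm and reduced are used as I recall SymPy's API; this is my illustration). The generators are listed as (y, x, z) so that the lex order is y > x > z.

from sympy import symbols, LT, LM, lcm, expand, reduced

y, x, z = symbols('y x z')
gens = (y, x, z)                    # lex order y > x > z

def s_poly(f, g):
    """S(f, g) = (x^gamma/LT(f))*f - (x^gamma/LT(g))*g, x^gamma = LCM(LM(f), LM(g))."""
    m = lcm(LM(f, *gens, order='lex'), LM(g, *gens, order='lex'))
    return expand(m / LT(f, *gens, order='lex') * f - m / LT(g, *gens, order='lex') * g)

f, g = y - x**2, z - x**3
s = s_poly(f, g)
print(s)                                         # x**3*y - x**2*z, as in Example 2.4.12
_, r = reduced(s, [f, g], *gens, order='lex')
print(r)                                         # 0, so G = {f, g} satisfies Buchberger's criterion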


Definition 2.4.13. We write S(f_1, f_2)^F for the remainder on the division of the S-polynomial S(f_1, f_2) by the ordered set F = (f_1, f_2).

Next we are going to construct a Gröbner basis of a given ideal from a given generating set, by means of an algorithm called Buchberger's algorithm. Before stating the theorem we need to define the ACC (ascending chain condition) for ideals.

Definition 2.4.14. A ring R satisfies the ACC (Ascending Chain Condition) if every ascending chain of ideals

I_1 ⊆ I_2 ⊆ ··· ⊆ I_n ⊆ ···

stops; that is, the sequence is constant from some point on: there exists N ∈ Z with

I_N = I_{N+1} = I_{N+2} = ···

Theorem 2.4.15. [Ascending Chain Condition (ACC)] Let

I1 ⊆ I2 ⊆ · · · ⊆ In ⊆ · · · be an ascending chain of ideals in K[x1, . . . , xn]. Then there exists an N ≥ 1 such that

IN = IN+1 = IN+2 = ···

Proof. Let I_1 ⊆ I_2 ⊆ I_3 ⊆ ··· be an ascending chain of ideals. Set

I = ⋃_{i=1}^{∞} I_i.

i. Claim: I is an ideal in K[x_1, . . . , x_n]. To this end,

(a). 0 ∈ I, since 0 ∈ Ii ∀i. (b).If f, g ∈ I, then by definition

⇒ f ∈ Ii and g ∈ Ij for some i, j (possibly different).

Since Ii forms an ascending chain, if we relabel so that i ≤ j, then f and g are in Ij.

Since Ij is an ideal, ⇒ f + g ∈ Ij. Hence, f + g ∈ I.

(c). If f ∈ I and r ∈ K[x1, . . . , xn], then f ∈ Ii for some i and rf ∈ Ii ⊆ I.


Hence I is an ideal.
ii. By the Hilbert Basis Theorem, I has a finite generating set. Set

I = ⟨f_1, . . . , f_s⟩.

But each of the generators is contained in some one of the I_j, say f_i ∈ I_{j_i} for some j_i, i = 1, . . . , s.

Take N = max{j_i}.

Then, by the definition of an ascending chain, f_i ∈ I_N for all i. Hence

I = ⟨f_1, . . . , f_s⟩ ⊆ I_N ⊆ I_{N+1} ⊆ ··· ⊆ I.

As a result the ascending chain stabilizes with IN . All the subsequent ideals in the chain are equal.

Theorem 2.4.16. [Buchberger’s Algorithm]

Every ideal I = ⟨f_1, . . . , f_s⟩ in K[x_1, . . . , x_n] has a Gröbner basis, which can be computed by an algorithm. [3]

Proof. If G = {g_1, . . . , g_t}, then ⟨G⟩ and ⟨LT(G)⟩ will denote the following ideals:

⟨G⟩ = ⟨g_1, . . . , g_t⟩,

⟨LT(G)⟩ = ⟨LT(g_1), . . . , LT(g_t)⟩.

Turning to the proof of the theorem, we first show that G ⊆ I holds at every stage of the algorithm. This is true initially, and whenever we enlarge G, we do so by adding the remainder S = S(p, q)^{G′} for p, q ∈ G. Thus, if G ⊆ I, then p, q and hence S(p, q) are in I, and since we are dividing by G′ ⊆ I, we get G ∪ {S} ⊆ I. We also note that G contains the given basis F of I, so that G is actually a basis of I.

The algorithm terminates when G = G′, which means that S = S(p, q)^{G′} = 0 for all p, q ∈ G. Hence G is a Gröbner basis of ⟨G⟩ = I.

It remains to prove that the algorithm terminates. We need to consider what happens after each pass through the main loop. The new set G consists of G′ (the old G) together with the non-zero remainders of S-polynomials of elements of G′. Then

⟨LT(G′)⟩ ⊆ ⟨LT(G)⟩,   (1)


since G′ ⊆ G. Furthermore, if G′ ≠ G, we claim that ⟨LT(G′)⟩ is strictly smaller than ⟨LT(G)⟩. To see this, suppose that a non-zero remainder r of an S-polynomial has been adjoined to G. Since r is a remainder on division by G′, LT(r) is not divisible by the leading terms of elements of G′, and thus LT(r) ∉ ⟨LT(G′)⟩. Yet LT(r) ∈ ⟨LT(G)⟩, which proves our claim.
By (1), the ideals ⟨LT(G′)⟩ from successive iterations of the loop form an ascending chain of ideals in K[x_1, . . . , x_n]. Thus the ACC implies that after a finite number of iterations the chain will stabilize, so that ⟨LT(G′)⟩ = ⟨LT(G)⟩ must happen eventually. By the previous paragraph, this implies that G = G′, so that the algorithm terminates after a finite number of steps.

♣. Algorithm: (Buchberger’s Algorithm)

Buchberger's criterion can be used to provide an algorithm to find a Gröbner basis for an ideal I as follows:
If I = ⟨g_1, . . . , g_m⟩, G = {g_1, . . . , g_m}, and S(g_i, g_j)^G = 0 for all i, j, then G is a Gröbner basis.
Otherwise some S(g_i, g_j)^G has a non-zero remainder r. Enlarge G by appending the polynomial g_{m+1} = r and set G′ = {g_1, . . . , g_m, g_{m+1}}. This procedure terminates after a finite number of steps in a generating set G that satisfies Buchberger's criterion, and hence is a Gröbner basis for I.
Note that: once S(g_i, g_j)^G = 0, it still yields remainder zero when additional polynomials are appended to G. [2]

Example 2.4.17. Consider the ring K[x, y] with the grlex order, and let

I = ⟨f_1, f_2⟩ = ⟨x^3 − 2xy, x^2y − 2y^2 + x⟩.

Then LT(f_1) = x^3, LT(f_2) = x^2y and x^γ = x^3y, so

S(f_1, f_2) = (x^3y/x^3)(x^3 − 2xy) − (x^3y/x^2y)(x^2y − 2y^2 + x) = −x^2.

Since S(f_1, f_2)^F = −x^2 ≠ 0, we include it in our generating set as a new generator f_3 = −x^2.


We now have F = (f_1, f_2, f_3). Next,

S(f_1, f_3) = (x^3/x^3)(x^3 − 2xy) − (x^3/(−x^2))(−x^2) = −2xy.

Since S(f_1, f_3)^F = −2xy ≠ 0, we must add f_4 = −2xy to our generating set as a new generator.

We now have F = (f_1, f_2, f_3, f_4), and also S(f_1, f_2)^F = S(f_1, f_3)^F = 0. Since S(f_1, f_4) = −2xy^2 = y·f_4, we have S(f_1, f_4)^F = 0, so Buchberger's algorithm does not add S(f_1, f_4) to our generating set.

Next, S(f_2, f_3) = −2y^2 + x. But S(f_2, f_3)^F = −2y^2 + x ≠ 0, so we must add f_5 = −2y^2 + x to our generating set as a new generator.

Setting F = (f_1, f_2, f_3, f_4, f_5), we finally find that

S(f_i, f_j)^F = 0 for all 1 ≤ i < j ≤ 5.

It follows that a grlex Gröbner basis for the ideal I is given by

{f_1, f_2, f_3, f_4, f_5} = {x^3 − 2xy, x^2y − 2y^2 + x, −x^2, −2xy, −2y^2 + x}.

Definition 2.4.18. A minimal Gröbner basis for an ideal I is a Gröbner basis G = {g_1, . . . , g_t} such that LC(g_i) = 1 for all g_i ∈ G and LT(g_j) is not divisible by LT(g_i) for all i ≠ j. [2]

Example 2.4.19. In the above example,

LT(f_1) = −x·LT(f_3),

so we can dispense with f_1 in forming a minimal Gröbner basis. Similarly,

LT(f_2) = x^2y = −(1/2)x·LT(f_4),

so we can eliminate f_2. Thus F = {x^2, xy, y^2 − (1/2)x} is a minimal Gröbner basis.
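The Gröbner basis of Examples 2.4.17 and 2.4.19 can be recomputed with a computer algebra system. A hedged SymPy sketch follows (SymPy's groebner normalizes to the reduced basis, so the output should agree, up to this normalization, with the minimal basis found above):

from sympy import symbols, groebner

x, y = symbols('x y')

f1 = x**3 - 2*x*y
f2 = x**2*y - 2*y**2 + x

G = groebner([f1, f2], x, y, order='grlex')
print(G.exprs)     # expected: [x**2, x*y, y**2 - x/2], the reduced grlex Groebner basis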


Definition 2.4.20. Fix a monomial ordering on K[x_1, . . . , x_n]. A Gröbner basis G = {g_1, . . . , g_t} for a non-zero ideal I is a reduced Gröbner basis if

1. each g_i has a monic leading term, and

2. no term of g_j is divisible by LT(g_i) for any i ≠ j. [2]

Chapter 3

Primary Decomposition Of Ideals In Noetherian Rings

3.1 Prime, Maximal, Radical & Primary Ideals

Before going to primary decomposition, let us define some basic terms which occur frequently in this chapter.

Definition 3.1.1. In a commutative ring R, a non-zero element a of R is said to be a divisor of zero (or zero divisor) if there exists a non-zero element b of R such that a·b = 0.

Example 3.1.2. In the ring Z_6, the elements 2, 3 and 4 are zero divisors, since 2 ·_6 3 = 0 and 3 ·_6 4 = 0.

Definition 3.1.3. In a commutative ring R, a proper ideal P of R is said to be a "prime ideal" if, for any ideals I and J of R,

IJ ⊆ P ⇒ I ⊆ P or J ⊆ P.

Example 3.1.4. The prime ideals of the ring Z are the ideals pZ, where p is 0 or a prime.

Definition 3.1.5. Let R be a commutative ring. An ideal M ≠ R is said to be a "maximal ideal" if for any ideal U of R satisfying M ⊆ U ⊆ R, either U = M or U = R.

Example 3.1.6.
1. The ideal pZ in the ring Z is a maximal ideal if and only if p is prime.

2. Every maximal ideal in a commutative ring with unity is a prime ideal.


Definition 3.1.7. Let R be a ring and a ∈ R. Then a is called "nilpotent" if a^n = 0 for some positive integer n.

Example 3.1.8. 6 is a nilpotent element of the ring Z_8, since

6^3 = (6 ·_8 6) ·_8 6 = 4 ·_8 6 = 0.

Definition 3.1.9. Let I be an ideal in a commutative ring R.
1. The radical of I, denoted √I, is the collection of elements of R some power of which lies in I, i.e.

√I = {a ∈ R : a^k ∈ I for some k ≥ 1}.

2. The radical of the zero ideal is called the "nilradical" of R.
3. An ideal I is called a radical ideal if √I = I.

Note that: if I is an ideal in a commutative ring R, then the radical of I is the intersection of all prime ideals of R containing I.

Example 3.1.10. In the ring Z of integers, let n = p_1^{a_1} p_2^{a_2} ··· p_r^{a_r}, where p_1, p_2, . . . , p_r are distinct primes and a_1, a_2, . . . , a_r are positive integers. Then p_1Z, p_2Z, . . . , p_rZ are all the prime ideals containing nZ, and hence

√(nZ) = (p_1Z) ∩ (p_2Z) ∩ ··· ∩ (p_rZ) = (p_1p_2 ··· p_r)Z.

As a concrete illustration of this, we have

√(24Z) = 6Z (since 24 = 2^3 · 3) and

√(100Z) = 10Z (since 100 = 2^2 · 5^2).
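Since √(nZ) = (p_1p_2 ··· p_r)Z, a generator of the radical is the product of the distinct primes dividing n. A tiny sketch using SymPy's primefactors (my illustration) reproduces the two cases above:

from math import prod
from sympy import primefactors

def radical_generator(n):
    """Product of the distinct primes dividing n, i.e. a generator of the radical of nZ."""
    return prod(primefactors(n))

print(radical_generator(24))    # 6,  since 24 = 2^3 * 3
print(radical_generator(100))   # 10, since 100 = 2^2 * 5^2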

Definition 3.1.11. A proper ideal Q in a commutative ring R is called a primary ideal if whenever ab ∈ Q and a ∉ Q, then b^n ∈ Q for some positive integer n. Equivalently, if ab ∈ Q and a ∉ Q, then b ∈ √Q.


Proposition 3.1.12. Let R be a commutative ring with unity,

1. Prime ideals are primary.

2. The ideal Q is primary if and only if every zero divisor in R/Q is nilpotent. √ 3. If Q is primary, then Q is a prime ideal and is the unique smallest prime ideal containing Q.

4. If Q is an ideal whose radical is a maximal ideal, then Q is primary ideal.

5. Suppose M is a maximal ideal and Q is an ideal with M n ⊆ Q ⊆ M for √ some n ≥ 1. Then Q is primary ideal with Q = M.

Proof. The first two statements are immediate from the definition of primary ideals.
For (3), suppose ab ∈ √Q. Then a^mb^m = (ab)^m ∈ Q for some m ≥ 1. Since Q is primary, either a^m ∈ Q, in which case a ∈ √Q, or (b^m)^n ∈ Q for some positive integer n, in which case b ∈ √Q. This proves that √Q is a prime ideal, and it follows that √Q is the smallest prime ideal containing Q.
To prove (4), we pass to the quotient ring R/Q; by (2) it suffices to show that every zero divisor in the quotient ring is nilpotent. We are reduced to the situation where Q = ⟨0⟩ and M = √Q = √(0), which is the nilradical, is a maximal ideal. Since the nilradical is contained in every prime ideal, it follows that M is the unique prime ideal, so also the unique maximal ideal. If d were a zero divisor, then the ideal ⟨d⟩ would be a proper ideal, hence contained in a maximal ideal. This implies that d ∈ M, hence every zero divisor is nilpotent.
Finally, for (5), suppose M^n ⊆ Q ⊆ M for some n ≥ 1, where M is a maximal ideal. Then Q ⊆ M, so √Q ⊆ √M = M. Conversely, M^n ⊆ Q shows that M ⊆ √Q. So √Q = M is a maximal ideal, and Q is primary by (4).

Definition 3.1.13. Let Q be a primary ideal. Then the prime ideal P = √Q is called the "associated prime of Q", and Q is said to belong to P (or to be P-primary).


Example 3.1.14.
1. The primary ideals in Z are 0 and ⟨p^m⟩ for p a prime and m ≥ 1.

2. For any field K, the ideal ⟨x⟩ in K[x, y] is primary since it is a prime ideal. For any n ≥ 1, the ideal ⟨x, y⟩^n is primary since it is a power of the maximal ideal ⟨x, y⟩.

3. If R is Noetherian and Q is a primary ideal belonging to the prime ideal P, then

P^m ⊆ Q ⊆ P for some m ≥ 1 (by Proposition 3.1.12).

If P is a maximal ideal, then the last statement of Proposition 3.1.12 shows that the converse also holds. This is not necessarily true if P is a prime ideal that is not maximal.

3.2 Primary Decomposition

Definition 3.2.1.
1. An ideal I in R has a primary decomposition if it may be written as a finite intersection of primary ideals:

I = ⋂_{i=1}^{m} Q_i,  Q_i a primary ideal.

2. The primary decomposition above is "minimal", and the Q_i are called the primary components of I, if

• no primary ideal contains the intersection of the remaining primary ideals, i.e. Q_i ⊉ ⋂_{j≠i} Q_j for all i, and

• the associated prime ideals are all distinct, i.e. √Q_i ≠ √Q_j for i ≠ j.

We now prove that in a Noetherian ring every proper ideal has a minimal primary decomposition. This result is often called the Lasker-Noether Decomposition Theorem.

Definition 3.2.2. A proper ideal I in a commutative ring R is said to be irreducible if I cannot be written as the intersection of two other ideals, i.e. if I = J ∩ K with ideals J, K implies that I = J or I = K.


Proposition 3.2.3. Let R be a Noetherian ring. Then

1. every irreducible ideal is primary, and

2. every proper ideal in R is a finite intersection of irreducible ideals.

Proof. To prove (1), let Q be an irreducible ideal and suppose that ab ∈ Q and b ∉ Q. Then for any fixed n the set of elements x ∈ R with a^nx ∈ Q is an ideal A_n ⊆ R. Clearly A_1 ⊆ A_2 ⊆ ..., and since R is Noetherian this ascending chain of ideals must stabilize, i.e. A_n = A_{n+1} = ··· for some n > 0. Consider the two ideals I = ⟨a^n⟩ + Q and J = ⟨b⟩ + Q of R, each containing Q. If y ∈ I ∩ J, then y = a^nz + q for some z ∈ R and q ∈ Q. Since ab ∈ Q, it follows that aJ ⊆ Q, and in particular ay ∈ Q. Then a^{n+1}z = ay − aq ∈ Q, so z ∈ A_{n+1} = A_n. But z ∈ A_n means that a^nz ∈ Q, so y ∈ Q. It follows that I ∩ J = Q. Since Q is irreducible and ⟨b⟩ + Q ≠ Q (because b ∉ Q), we must have ⟨a^n⟩ + Q = Q, i.e. a^n ∈ Q, which shows that Q is primary.

To prove (2), let S be the collection of proper ideals of R that cannot be written as a finite intersection of irreducible ideals. If S is non-empty, then since R is Noetherian there is a maximal element I in S. Then I is not irreducible, so I = J ∩ K for some ideals J and K distinct from I. Then I ⊊ J and I ⊊ K, and the maximality of I implies that neither J nor K is in S. But this means that both J and K can be written as finite intersections of irreducible ideals, hence the same would be true for I. This is a contradiction, so S = ∅, which completes the proof of the proposition.

Theorem 3.2.4. (Primary Decomposition Theorem) Let R be a Noetherian ring. Then every proper ideal I in R has a minimal primary decomposition. If

I = ⋂_{i=1}^{m} Q_i = ⋂_{i=1}^{n} Q′_i

are two minimal primary decompositions for I, then the sets of associated primes in the two decompositions are the same:

{√Q_1, √Q_2, . . . , √Q_m} = {√Q′_1, √Q′_2, . . . , √Q′_n}.

Moreover, the primary components Qi belonging to the minimal elements in this set of associated primes are uniquely determined by I.

Chapter 4

Affine Varieties

There are two categories of algebraic varieties

1. Affine varieties

2. Projective varieties

4.1 Affine Spaces

Definition 4.1.1. Given a field K and n ∈ Z+, we define the n-dimensional affine space over a field K to be the set

K^n = {(a_1, . . . , a_n) : a_1, . . . , a_n ∈ K}.

Example 4.1.2. For K = R, we have the familiar space R^n. We call K^1 = K the affine line, and K^2 the affine plane.

Next we see how polynomials are related to affine spaces. The core idea is that a polynomial

f = ∑_α a_α x^α

gives a function f : K^n → K, as follows. Given (a_1, . . . , a_n) ∈ K^n, replace each x_i by a_i; then f(a_1, . . . , a_n) ∈ K, since each a_i ∈ K.

Proposition 4.1.3. Let K be an infinite field and f ∈ K[x_1, . . . , x_n]. Then f = 0 in K[x_1, . . . , x_n] if and only if f : K^n → K is the zero function.


Proof. One direction of the proof is obvious since the zero polynomial clearly gives the zero function.

To prove the converse, we need to show that if f(a_1, . . . , a_n) = 0 for all (a_1, . . . , a_n) ∈ K^n, then f is the zero polynomial. We will use induction on the number of variables n. When n = 1, it is well known that a non-zero polynomial in K[x] of degree m has at most m distinct roots. For our particular f ∈ K[x], we are assuming f(a) = 0 for all a ∈ K. Since K is infinite, this means that f has infinitely many roots, and hence f must be the zero polynomial.

Now assume that the converse is true for n − 1, and let f ∈ K[x_1, . . . , x_n] be a polynomial that vanishes at all points of K^n. By collecting the various powers of x_n, we can write f in the form

f = ∑_{i=0}^{N} g_i(x_1, . . . , x_{n−1}) x_n^i,

where g_i ∈ K[x_1, . . . , x_{n−1}]. We will show that each g_i is the zero polynomial in n − 1 variables, which will force f to be the zero polynomial in K[x_1, . . . , x_n].

If we fix (a_1, . . . , a_{n−1}) ∈ K^{n−1}, we get the polynomial f(a_1, . . . , a_{n−1}, x_n) ∈ K[x_n]. By our hypothesis on f, this vanishes for every a_n ∈ K. It follows from the case n = 1 that f(a_1, . . . , a_{n−1}, x_n) is the zero polynomial in K[x_n]. Using the above formula for f, we see that the coefficients of f(a_1, . . . , a_{n−1}, x_n) are g_i(a_1, . . . , a_{n−1}), and thus g_i(a_1, . . . , a_{n−1}) = 0 for all i. Since (a_1, . . . , a_{n−1}) was arbitrarily chosen from K^{n−1}, our inductive assumption then implies that each g_i is the zero polynomial in K[x_1, . . . , x_{n−1}]. This forces f to be the zero polynomial in K[x_1, . . . , x_n] and completes the proof of the proposition.

Corollary 4.1.4. Let K be an infinite field and f, g ∈ K[x_1, . . . , x_n]. Then f = g in K[x_1, . . . , x_n] if and only if f : K^n → K and g : K^n → K are the same function.

Proof. (⇒) If f = g, clearly f : K^n → K and g : K^n → K are the same function.

(⇐) To prove the nontrivial direction, suppose f, g ∈ K[x_1, . . . , x_n] give the same function on K^n. By hypothesis, the polynomial f − g vanishes at all points of K^n. Proposition 4.1.3 then implies that f − g is the zero polynomial. This proves that f = g in K[x_1, . . . , x_n].


4.2 Affine Varieties

Now we define the basic geometric object called ”Affine variety” (or Affine algebraic set).

Definition 4.2.1. Let K be a field and f_1, . . . , f_s be polynomials in K[x_1, . . . , x_n]. Then we set

V(f_1, . . . , f_s) = {(a_1, . . . , a_n) ∈ K^n : f_i(a_1, . . . , a_n) = 0 for all 1 ≤ i ≤ s}.

We call V(f_1, . . . , f_s) the affine variety defined by f_1, . . . , f_s; i.e. V(f_1, . . . , f_s) ⊆ K^n is the common zero set of the system of polynomial equations

f_1 = f_2 = ··· = f_s = 0.

Example 4.2.2.
1. In R^2, the variety V(x^2 + y^2 − 1) is the circle of radius 1 centered at the origin.

♣. The conic-sections (circles, parabolas, ellipses and hyperbolas) are affine varieties. Likewise, graphs of polynomial functions are affine varieties [the graph of y = f(x) is V (y −f(x))]. Although not as obvious, graphs of rational functions are affine varieties.


For example, consider the graph of

y = (x^3 − 1)/x,

which is the affine variety V(xy − x^3 + 1).

3. In R^3 a nice affine variety is the paraboloid of revolution V(z − x^2 − y^2), which is obtained by rotating the parabola z = x^2 about the z-axis.


4. The familiar cone is the affine variety V(z^2 − x^2 − y^2).

5. An interesting example of a curve in R^3 is the twisted cubic, which is the variety V(y − x^2, z − x^3). For simplicity, we confine ourselves to the portion that lies in the first octant. To begin, consider the surfaces y = x^2 and z = x^3 separately.


Then their intersection gives the twisted cubic.

We next give some examples of varieties in higher dimensions. A familiar case comes from linear algebra. Namely, fix a field K, and consider a system of m linear equations in n unknowns x1, . . . , xn with coefficients in K:

a_{11}x_1 + ··· + a_{1n}x_n = b_1,
⋮                                   (1)
a_{m1}x_1 + ··· + a_{mn}x_n = b_m.

The solutions of these equations form an affine variety in Kn, which we will call a linear variety. Thus, lines and planes are linear varieties, and there are examples of arbitrarily large dimensions. Linear varieties relate nicely to our discussion of dimension. Namely, if V ⊆ Kn is the linear variety defined by (1), then V need not have dimension n − m even though V is defined by m equations. In fact, when V is nonempty, then V has


dimension n − r, where r is the rank of the matrix (aij). So for linear varieties, the dimension is determined by the number of independent equations.

4.2.1 Properties Of Affine Varieties

Lemma 4.2.3. If V,W ⊆ Kn are affine varieties, then so are V ∪W and V ∩W .

Proof. Suppose that V = V (f1, . . . , fs) and W = V (g1, . . . , gt). Then we claim that

V ∩ W = V (f1, . . . , fs, g1, . . . , gt),

V ∪ W = V (figj : 1 ≤ i ≤ s, 1 ≤ j ≤ t).

The first equality is trivial to prove: being in V ∩ W means that both f1, . . . , fs and g1, . . . , gt vanish, which is the same as f1, . . . , fs, g1, . . . , gt vanishing.

The second equality takes a little more work. If (a1, . . . , an) ∈ V , then all of the fi’s vanish at this point, which implies that all of the figj’s also vanish at

(a1, . . . , an). Thus, V ⊆ V (figj) and W ⊆ V (figj) follows similarly.

This proves that V ∪ W ⊆ V (figj).

Going the other way, choose (a_1, . . . , a_n) ∈ V(f_ig_j). If this point lies in V, then we are done; if not, then f_{i_0}(a_1, . . . , a_n) ≠ 0 for some i_0. Since f_{i_0}g_j vanishes at (a_1, . . . , a_n) for all j, the g_j's must vanish at this point, proving that (a_1, . . . , a_n) ∈ W.

This shows that V (figj) ⊆ V ∪ W.

4.3 Parametrization Of Affine Varieties

In this section, we will discuss the problem of describing the points of affine varieties V(f_1, . . . , f_s). This reduces to asking whether there is a way to "write down" the solutions of the polynomial equations f_1 = f_2 = ··· = f_s = 0. When there are finitely many solutions, the goal is simply to list them all. But what do we do when there are infinitely many? As we will see, this question leads to the notion of parametrizing an affine variety.


Example 4.3.1. To get started, let us look at an example from linear algebra. Let the field be R, and consider the system of equations,

x + y + z = 1,
x + 2y − z = 3.   (1)

Geometrically, this represents the line in R^3 which is the intersection of the planes x + y + z = 1 and x + 2y − z = 3. It follows that there are infinitely many solutions. To describe the solutions, we use row operations on the equations (1) to obtain the equivalent equations

x + 3z = −1, y − 2z = 2.

Letting z = t, where t is arbitrary, this implies that all solutions of (1) are given by

x = −1 − 3t, y = 2 + 2t, z = t     (2)

as t varies over R. We call t a parameter, and (2) is thus a parametrization of the solutions of (1).

Example 4.3.2. Let us consider the unit circle

x2 + y2 = 1. (3)

A common way to parametrize the circle is using trigonometric functions:

x = cos(t), y = sin(t).

There is also a more algebraic way to parametrize this circle:

x = (1 − t^2)/(1 + t^2),
y = 2t/(1 + t^2).     (4)
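One can verify symbolically that the rational parametrization (4) really does land on the circle (3); a minimal sketch, assuming the Python library sympy:

    from sympy import symbols, simplify, Rational

    t = symbols('t')
    x = (1 - t**2) / (1 + t**2)
    y = 2*t / (1 + t**2)

    # The parametrized point should satisfy the implicit equation x^2 + y^2 = 1.
    print(simplify(x**2 + y**2 - 1))   # 0

    # A sample parameter value, t = 1/2, gives the rational point (3/5, 4/5).
    print(x.subs(t, Rational(1, 2)), y.subs(t, Rational(1, 2)))

Note that this parametrization misses the single point (−1, 0), which is only approached as t grows without bound.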


Note that: In the previous examples, equations (1) and (3) are called implicit representations of varieties, whereas (2) and (4) are parametric. The desirability of having both types of representations leads to the following two questions:

• (Parametrization) Does every affine variety have a rational parametric representation?

• (Implicitization) Given a parametric representation of an affine variety, can we find the defining equations (i.e., can we find an implicit representation)?

The answer to the first question is no. In fact, most affine varieties cannot be parametrized in the sense described here. Those that can are called unirational. The situation for the second question is much nicer. The answer is yes: given a parametric representation, we can always find the defining equations.
Example 4.3.3. Let us look at an example of how implicitization works. Consider the parametric representation:

x = 1 + t, y = 1 + t2. (5)

This describes a curve in the plane, but at this point we cannot be sure that it lies on an affine variety. To find the equation we are looking for, notice that we can solve the first equation for t to obtain

t = x − 1.

Substituting this into the second equation yields

y = 1 + (x − 1)2 = x2 − 2x + 2.

Hence, the parametric equation (5) describes the affine variety V (y −x2 +2x−2).
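Implicitization can also be done mechanically by eliminating t with a Gröbner basis computed in a lex order that places t first. This elimination-ordering trick is standard but not developed in this thesis, so the following is only an illustrative sketch, assuming the Python library sympy:

    from sympy import symbols, groebner

    t, x, y = symbols('t x y')

    # Implicitize x = 1 + t, y = 1 + t^2 by eliminating t.  With a lex order that
    # puts t first, the t-free elements of the Groebner basis generate the
    # elimination ideal, i.e. the implicit equation of the curve.
    G = groebner([x - 1 - t, y - 1 - t**2], t, x, y, order='lex')
    implicit = [g for g in G.exprs if not g.has(t)]

    print(implicit)   # expected, up to sign and ordering: [x**2 - 2*x - y + 2]

Up to sign this is exactly the defining polynomial y − x^2 + 2x − 2 found above.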

4.4 Ideals and Affine Varieties

The goal of this section is to introduce some naturally occurring ideals and to see how ideals relate to affine varieties. The real importance of ideals is that they will give us language for computing with affine varieties. In chapter two we have seen


that every ideal in K[x1, . . . , xn] is generated by a finite number of polynomials.

The ideal ⟨f1, . . . , fs⟩ has a nice interpretation in terms of polynomial equations.

Given f1, . . . , fs ∈ K[x1, . . . , xn], we get the system of equations

f1 = 0,
⋮
fs = 0.

From these equations, one can derive others using algebra. For example, if we multiply the first equation by h1 ∈ K[x1, . . . , xn], the second by h2 ∈ K[x1, . . . , xn], etc., and add the resulting equations, we obtain

h1f1 + h2f2 + · · · + hsfs = 0, which is a consequence of our original system. Notice that the left-hand side of this equation is exactly an element of the ideal ⟨f1, . . . , fs⟩. Thus, we can think of ⟨f1, . . . , fs⟩ as consisting of all "polynomial consequences" of the equations f1 = f2 = · · · = fs = 0. To see this, consider the following example:
Example 4.4.1. Recall from example 4.3.3 the parametric representation

x = 1 + t, y = 1 + t^2, from which we eliminated t to obtain

y = x^2 − 2x + 2. To see this as a "polynomial consequence", we write the equations as:

x − 1 − t = 0, y − 1 − t2 = 0. (1)

To cancel the t terms, we multiply the first equation by x-1+t and the second by -1:

(x − 1)^2 − t^2 = 0, −y + 1 + t^2 = 0,

and then add to obtain

(x − 1)2 − y + 1 = x2 − 2x + 2 − y = 0.

In terms of the ideal generated by equation (1), we can write this as

x^2 − 2x + 2 − y = (x − 1 + t)(x − 1 − t) + (−1)(y − 1 − t^2) ∈ ⟨x − 1 − t, y − 1 − t^2⟩.
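The displayed identity is a purely formal computation in K[x, y, t] and can be checked by expanding both sides; a minimal sketch, assuming the Python library sympy:

    from sympy import symbols, expand

    x, y, t = symbols('x y t')

    lhs = x**2 - 2*x + 2 - y
    rhs = (x - 1 + t)*(x - 1 - t) + (-1)*(y - 1 - t**2)

    print(expand(lhs - rhs))   # 0, so the identity holds in K[x, y, t]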

Proposition 4.4.2. If f1, . . . , fs and g1, . . . , gt are bases of the same ideal in

K[x1, . . . , xn], so that ⟨f1, . . . , fs⟩ = ⟨g1, . . . , gt⟩, then V(f1, . . . , fs) = V(g1, . . . , gt).
Definition 4.4.3. Let V ⊆ K^n be an affine variety. Then we set

I(V ) = {f ∈ K[x1, . . . , xn]: f(a1, . . . , an) = 0 ∀ (a1, . . . , an) ∈ V }.

The crucial observation is that I(V) is an ideal.
Lemma 4.4.4. If V ⊆ K^n is an affine variety, then I(V) ⊆ K[x1, . . . , xn] is an ideal. We will call I(V) the ideal of V.

Proof. It is obvious that 0 ∈ I(V ) since the zero polynomial vanishes on all of Kn, and so, in particular it vanishes on V .

Next, suppose that f, g ∈ I(V ) and h ∈ K[x1, . . . , xn].

Let (a1, . . . , an) be an arbitrary point of V . Then

f(a1, . . . , an) + g(a1, . . . , an) = 0 + 0 = 0,

h(a1, . . . , an)f(a1, . . . , an) = h(a1, . . . , an) · 0 = 0, and it follows that I(V) is an ideal.

Example 4.4.5. Let K be a field. i. V(0) = K^n, where 0 denotes the zero polynomial. ii. Consider the variety {(0, 0)} consisting of the origin in K^2. Then its ideal I({(0, 0)}) consists of all polynomials that vanish at the origin, and we claim that I({(0, 0)}) = ⟨x, y⟩.

Proof. i. V(0) = K^n is clear, since every point a = (a1, . . . , an) of K^n is a zero of the zero polynomial.

ii. One direction of the second claim is trivial, since any polynomial of the form A(x, y)x + B(x, y)y obviously vanishes at the origin. Going the other way, suppose that f = Σ_{i,j} aij x^i y^j vanishes at the origin. Then a00 = f(0, 0) = 0 and, consequently,

f = a00 + Σ_{(i,j)≠(0,0)} aij x^i y^j = 0 + ( Σ_{i>0, j} aij x^(i−1) y^j ) x + ( Σ_{j>0} a0j y^(j−1) ) y ∈ ⟨x, y⟩.
Our claim is proved.

Note that: We have the following correspondence between polynomials, varieties and ideals. We started with polynomials, used them to define an affine variety, took all functions vanishing on the variety, and got back the ideal generated by these polynomials. It is natural to wonder whether this happens in general. So take f1, . . . , fs ∈ K[x1, . . . , xn]. This gives us:

polynomials → variety → ideal
f1, . . . , fs → V(f1, . . . , fs) → I(V(f1, . . . , fs)),
and the natural question to ask is whether I(V(f1, . . . , fs)) = ⟨f1, . . . , fs⟩. The answer, unfortunately, is not always yes. Here is the best answer we can give at this point.

Lemma 4.4.6. If f1, . . . , fs ∈ K[x1, . . . , xn], then ⟨f1, . . . , fs⟩ ⊆ I(V(f1, . . . , fs)), although equality need not occur.

Proof. Let f ∈ ⟨f1, . . . , fs⟩, which means that f = Σ_{i=1}^{s} hi fi for some polynomials h1, . . . , hs ∈ K[x1, . . . , xn]. Since f1, . . . , fs vanish on V(f1, . . . , fs), so must Σ_{i=1}^{s} hi fi. Thus, f vanishes on V(f1, . . . , fs), which proves f ∈ I(V(f1, . . . , fs)).

Proposition 4.4.7. Let V and W be affine varieties in Kn and I and J be ideals in K[x1, . . . , xn]. Then: 1. V ⊆ W if and only if I(V ) ⊇ I(W ).

2. If I ⊆ J, then V (J) ⊆ V (I).

3. V = W if and only if I(V ) = I(W ).


Proof. 1. First suppose that V ⊆ W. Then any polynomial vanishing on W must vanish on V, which proves I(W) ⊆ I(V). Next, assume that I(W) ⊆ I(V). We know that W is the affine variety defined by some polynomials g1, . . . , gt ∈ K[x1, . . . , xn]. Then g1, . . . , gt ∈ I(W) ⊆ I(V), and hence the gi's vanish on V. Since W consists of all common zeros of the gi's, it follows that V ⊆ W.
2. Let I ⊆ J, and let x ∈ V(J). Then g(x) = 0 for all g ∈ J. Since I ⊆ J, it follows in particular that f(x) = 0 for all f ∈ I, so x ∈ V(I). Hence, V(J) ⊆ V(I).
3. (⇒) Suppose V = W. Then, by definition of equality of sets, we have V ⊆ W and W ⊆ V. By part 1 of this proposition, I(V) ⊇ I(W) and I(W) ⊇ I(V). Hence, I(V) = I(W).
(⇐) Conversely, suppose I(V) = I(W). Then I(V) ⊆ I(W) and I(W) ⊆ I(V) by definition of equality of sets. By part 1 of this proposition, W ⊇ V and V ⊇ W respectively. Hence, V = W.

Proposition 4.4.8. If x = (a1, . . . , an) ∈ K^n, then

I({x}) = ⟨x1 − a1, . . . , xn − an⟩.

Proof. The right side is contained in the left side because each xi − ai vanishes at the point x. Also, the result holds for n = 1. Assume n > 1, and let f = br x1^r + · · · + b1 x1 + b0 ∈ I({x}), where the bi are polynomials in x2, . . . , xn and br ≠ 0. By the division algorithm we have

f = (x1 − a1)g(x1, . . . , xn) + h(x2, . . . , xn), and since f vanishes at x, h vanishes at (a2, . . . , an). By the induction hypothesis, h ∈ ⟨x2 − a2, . . . , xn − an⟩, hence f ∈ ⟨x1 − a1, . . . , xn − an⟩.

Lemma 4.4.9. If a = (a1, . . . , an) ∈ K^n, then I = ⟨x1 − a1, . . . , xn − an⟩ is a maximal ideal.


Proof. Suppose I is properly contained in an ideal J, with f ∈ J \ I. Applying the division algorithm n times we obtain

f = A1(x1 − a1) + · · · + An(xn − an) + b, where A1 ∈ K[x1, . . . , xn], A2 ∈ K[x2, . . . , xn], . . . , An ∈ K[xn], and b ∈ K. Note that b ≠ 0, since f ∉ I; so by solving the above equation for b we see that b ∈ J.

Hence 1 = (1/b)·b ∈ J. Consequently, J = K[x1, . . . , xn].

4.4.1 Hilbert’s Nullstellensatz

Theorem 4.4.10. [Weak Nullstellensatz]

If I is a proper ideal in K[x1, . . . , xn], then V(I) ≠ ∅, where K is an algebraically closed field.

Proof. We may assume that I is a maximal ideal, for there is a maximal ideal J containing I, i.e., I ⊆ J, and then V(J) ⊆ V(I), so it suffices to show V(J) ≠ ∅.

Let L = K[x1, . . . , xn]/I be the quotient ring. Since I is maximal, L is a field, and K may be regarded as a subfield of L.

Suppose we knew that L = K (this is the key point, and it is where the hypothesis that K is algebraically closed is needed). Then for each i there is an ai ∈ K such that the residue of xi is ai, i.e., xi − ai ∈ I.

But ⟨x1 − a1, . . . , xn − an⟩ ⊆ I, and ⟨x1 − a1, . . . , xn − an⟩ is a maximal ideal by lemma 4.4.9, so

I = ⟨x1 − a1, . . . , xn − an⟩,

and hence

V(I) = {(a1, . . . , an)} ≠ ∅.

Theorem 4.4.11. [Hilbert Nullstellensatz ]

Let K be an algebraically closed field. If f, f1, . . . , fs ∈ K[x1, . . . , xn] are such that f ∈ I(V(f1, . . . , fs)), then there exists a positive integer m such that

f^m ∈ ⟨f1, . . . , fs⟩, and conversely.


Proof. Given a nonzero polynomial f which vanishes at every common zero of f1, . . . , fs, we must show that there is a positive integer m and polynomials A1, . . . , As such that

f^m = Σ_{i=1}^{s} Ai fi.

To see this, consider the ideal

Ǐ = ⟨f1, . . . , fs, 1 − yf⟩ ⊆ K[x1, . . . , xn, y].     (1)

We claim that V(Ǐ) = ∅. To see this, let (a1, . . . , an, an+1) ∈ K^(n+1). Either

• (a1, . . . , an) is a common zero of f1, . . . , fs, or

• (a1, . . . , an) is not a common zero of f1, . . . , fs.

In the first case, f(a1, . . . , an) = 0, since f vanishes at any common zero of f1, . . . , fs.

Thus, the polynomial 1 − yf takes the value 1 − an+1 f(a1, . . . , an) = 1 ≠ 0 at the point (a1, . . . , an, an+1), so (a1, . . . , an, an+1) ∉ V(Ǐ).

In the second case, for some i, 1 ≤ i ≤ s, we must have fi(a1, . . . , an) ≠ 0.

Thinking of fi as a function of n+1 variables which does not depend on the last variable, we have fi(a1, . . . , an, an+1) ≠ 0, so (a1, . . . , an, an+1) ∉ V(Ǐ).
Since (a1, . . . , an, an+1) ∈ K^(n+1) was arbitrary, we conclude that V(Ǐ) = ∅, as claimed. Now apply the Weak Nullstellensatz to conclude that 1 ∈ Ǐ. That is,

1 = Σ_{i=1}^{s} pi(x1, . . . , xn, y) fi + q(x1, . . . , xn, y)(1 − yf)     (2)

for some pi, q ∈ K[x1, . . . , xn, y]. Now set y = 1/f(x1, . . . , xn). Then relation (2) implies that

1 = Σ_{i=1}^{s} pi(x1, . . . , xn, 1/f) · fi.     (3)

Multiplying both sides of this equation by f^m, where m is chosen sufficiently large to clear all the denominators, yields

f^m = Σ_{i=1}^{s} Ai fi,     (4)


for some polynomials Ai ∈ K[x1, . . . , xn], which is what we want to show.

4.4.2 Radical Ideals and the Ideal-Variety Correspondence

In this section we explore the relationship between ideals and varieties.
Lemma 4.4.12. Let V be a variety. If f^m ∈ I(V), then f ∈ I(V).

Proof. Let x ∈ V. If f^m ∈ I(V), then (f(x))^m = 0. But this can happen only if f(x) = 0. Since x ∈ V was arbitrary, we must have f ∈ I(V).

Thus, an ideal consisting of all polynomials which vanish on a variety V has the property that if some power of a polynomial belongs to the ideal, then the polynomial itself belongs to the ideal.
Note that: An ideal I is a radical ideal if f^m ∈ I for some m ≥ 1 implies f ∈ I.
Corollary 4.4.13. I(V) is a radical ideal.

Proof. Let f^m ∈ I(V). Then by lemma 4.4.12 we have f ∈ I(V). Hence I(V) is a radical ideal by the definition of radical ideal.

Theorem 4.4.14. [The Strong Hilbert Nullstellensatz]

Let K be an algebraically closed field. If I is an ideal in K[x1, . . . , xn], then I(V(I)) = √I.

Proof. We certainly have √I ⊆ I(V(I)), because f ∈ √I ⇒ f^m ∈ I for some m. Hence f^m vanishes on V(I), so f vanishes on V(I); thus f ∈ I(V(I)).
Conversely, suppose f ∈ I(V(I)). Then, by definition, f vanishes on V(I). By the Hilbert Nullstellensatz, there is a positive integer m such that f^m ∈ I. But this means that f ∈ √I. Since f was arbitrary, I(V(I)) ⊆ √I.


Hence, I(V(I)) = √I.

Theorem 4.4.15. (The Ideal-Variety Correspondence) Let K be an arbitrary field.
i. The maps

I : affine varieties → ideals

and

V : ideals → affine varieties

are inclusion-reversing, i.e., if I1 ⊆ I2 are ideals, then V(I1) ⊇ V(I2), and similarly, if V1 ⊆ V2 are varieties, then I(V1) ⊇ I(V2). Furthermore, for any variety V, we have

V(I(V)) = V,

so that I is always one-to-one.
ii. If K is an algebraically closed field and we restrict to radical ideals, then the maps

I : affine varieties → radical ideals

and

V : radical ideals → affine varieties

are inclusion-reversing bijections which are inverses of each other.

Proof. (i). We have already shown that

I1 ⊆ I2 ⇒ V (I1) ⊇ V (I2) and

V1 ⊆ V2 ⇒ I(V1) ⊇ I(V2).


It remains to show that V(I(V)) = V.

When V = V(f1, . . . , fs) is a subvariety of K^n, the inclusion V ⊆ V(I(V)) follows directly from the definition of I(V).

To show the reverse inclusion, note that f1, . . . , fs ∈ I(V) by definition, and thus

⟨f1, . . . , fs⟩ ⊆ I(V). Since V is inclusion-reversing, it follows that

V(I(V)) ⊆ V(⟨f1, . . . , fs⟩) = V, and consequently I is one-to-one, since it has a left inverse.
(ii). Since I(V) is a radical ideal (by corollary 4.4.13), we can think of I as a function which takes varieties to radical ideals. Furthermore, for any variety V, we have V(I(V)) = V by (i). It remains to prove that I(V(I)) = I whenever I is a radical ideal. By the Nullstellensatz, I(V(I)) = √I, and since I is a radical ideal, √I = I. This gives the desired equality. Hence, V and I are inverses of each other, and thus define bijections between the set of radical ideals and the set of affine varieties.

Proposition 4.4.16. [Radical Membership]
Let K be an arbitrary field and let I = ⟨f1, . . . , fs⟩ ⊆ K[x1, . . . , xn] be an ideal. Then f ∈ √I if and only if the constant polynomial 1 belongs to the ideal
Ǐ = ⟨f1, . . . , fs, 1 − yf⟩ ⊆ K[x1, . . . , xn, y]
(in which case Ǐ = K[x1, . . . , xn, y]).

Proof. In the proof of Hilbert’s Nullsstellesnatz we see that 1 ∈ Iˇ implies f m ∈ I √ for some m. Which in turn implies f ∈ I. Going to the other way, √ suppose that f ∈ I, ⇒ f m ∈ I ⊆ Iˇ for some m. But we also have 1 − yf ∈ Iˇ, and consequently,

1 = y^m f^m + (1 − y^m f^m) = y^m f^m + (1 − yf)(1 + yf + · · · + y^(m−1) f^(m−1)) ∈ Ǐ, as desired.
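Proposition 4.4.16 gives a practical test: to decide whether f ∈ √I, adjoin the extra polynomial 1 − yf and ask whether the enlarged ideal is the whole ring, which can be read off from a Gröbner basis. The following is only a hedged sketch, assuming the Python library sympy, on the hypothetical example I = ⟨x^2, y^2⟩ and f = x + y (the extra variable is called u below to avoid clashing with y):

    from sympy import symbols, groebner

    x, y, u = symbols('x y u')   # u plays the role of the extra variable in the proposition

    # Hypothetical test: is f = x + y in the radical of I = <x^2, y^2> ?
    f = x + y
    I_gens = [x**2, y**2]

    G = groebner(I_gens + [1 - u*f], x, y, u, order='lex')
    print(G.exprs)   # [1] : the enlarged ideal is the whole ring, so f lies in the radical

    # Indeed (x + y)^3 = x^3 + 3x^2*y + 3x*y^2 + y^3 already lies in <x^2, y^2>.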


4.4.3 Sums, Products and Intersections Of Ideals

Definition 4.4.17. Let I and J be ideals in K[x1, . . . , xn]. Then their sum, denoted by I + J, is the set

I + J = {f + g : f ∈ I and g ∈ J}.

Proposition 4.4.18. If I and J are ideals in K[x1, . . . , xn], then I + J is the smallest ideal containing I and J. Furthermore, if

I = ⟨f1, . . . , fr⟩ and J = ⟨g1, . . . , gs⟩, then

I + J = ⟨f1, . . . , fr, g1, . . . , gs⟩.

Proof. (i). 0 = 0 + 0 ∈ I + J

(ii). Suppose h1, h2 ∈ I + J

⇒ by definition ∃f1, f2 ∈ I and g1, g2 ∈ J such that h1 = f1 +g1 and h2 = f2 +g2.

⇒ h1 + h2 = (f1 + f2) + (g1 + g2).

But f1 + f2 ∈ I, since I is an ideal. And g1 + g2 ∈ J, since J is an ideal.

⇒ h1 + h2 ∈ I + J.

(iii).Suppose h ∈ I + J and l ∈ K[x1, . . . , xn]. ⇒ ∃ f ∈ I and g ∈ J such that h = f + g.

⇒ lh = l(f + g) = lf + lg.

But lf ∈ I, since I is an ideal, and lg ∈ J, since J is an ideal. Hence lh ∈ I + J. Therefore I + J is an ideal. Next, if H is an ideal containing I and J, then H must contain every f ∈ I and every g ∈ J. Since H is an ideal, it must contain every sum f + g. In particular,

H ⊇ I + J.

Therefore, every ideal containing I and J must contain I + J, and thus I + J must be the smallest such ideal.

Finally, if I = ⟨f1, . . . , fr⟩ and J = ⟨g1, . . . , gs⟩, then ⟨f1, . . . , fr, g1, . . . , gs⟩ must contain I and J, so that

I + J ⊆ ⟨f1, . . . , fr, g1, . . . , gs⟩.

The reverse inclusion is obvious, and hence

I + J = ⟨f1, . . . , fr, g1, . . . , gs⟩.

Theorem 4.4.19. If I and J are ideals in K[x1, . . . , xn], then

V (I + J) = V (I) ∩ V (J).

Proof. If x ∈ V (I + J), ⇒ x ∈ V (I), since I ⊆ I + J. Similarly, x ∈ V (J), since J ⊆ I + J.

⇒ x ∈ V (I) ∩ V (J). ⇒ V (I + J) ⊆ V (I) ∩ V (J).

For the reverse inclusion, suppose x ∈ V (I) ∩ V (J) and h ∈ I + J. ⇒ ∃ f ∈ I and g ∈ J such that h = f + g. We have f(x) = 0, since x ∈ V (I), and g(x) = 0, since x ∈ V (J). Thus,

h(x) = f(x) + g(x) = 0 + 0 = 0.

Since h is arbitrary, we conclude that x ∈ V(I + J). Hence,

V (I) ∩ V (J) ⊆ V (I + J).

Therefore, V (I) ∩ V (J) = V (I + J).
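For a concrete, hypothetical illustration of V(I + J) = V(I) ∩ V(J): with I = ⟨y − x^2⟩ and J = ⟨y⟩ in K[x, y], the sum is ⟨y − x^2, y⟩, and its variety is the single point where the parabola meets the x-axis. A minimal sketch, assuming the Python library sympy:

    from sympy import symbols, solve

    x, y = symbols('x y')

    # Hypothetical ideals in K[x, y]: I = <y - x^2> (a parabola) and J = <y> (the x-axis).
    I_gens = [y - x**2]
    J_gens = [y]

    # V(I + J) is cut out by the generators of both ideals taken together.
    common = solve(I_gens + J_gens, [x, y], dict=True)
    print(common)   # [{x: 0, y: 0}] : the parabola meets the x-axis only at the origin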

Product of Ideals
We have seen that
♣. V(f1, . . . , fr) ∪ V(g1, . . . , gs) = V(fi gj : 1 ≤ i ≤ r, 1 ≤ j ≤ s).


Definition 4.4.20. Let I and J be ideals in K[x1, . . . , xn]. Then their product, denoted I·J, is defined to be the ideal generated by all polynomials f·g, where f ∈ I and g ∈ J. Thus, the product I·J of I and J is the set

I·J = {f1 g1 + · · · + fr gr : f1, . . . , fr ∈ I, g1, . . . , gr ∈ J, r a positive integer}.

Note that:

Clearly I.J is an ideal in K[x1, . . . , xn].

Proposition 4.4.21. Let I = hf1, . . . , fri and J = hg1, . . . , gsi. Then I.J is generated by the set of all products of generators of I and J :

I·J = ⟨fi gj : 1 ≤ i ≤ r, 1 ≤ j ≤ s⟩.

Proof. Clearly the ideal generated by the products fi gj of the generators is contained in I·J. Note that any polynomial in I·J is a sum of polynomials of the form f·g with f ∈ I and g ∈ J. But f = a1 f1 + · · · + ar fr and g = b1 g1 + · · · + bs gs for appropriate polynomials a1, . . . , ar, b1, . . . , bs. Thus, any such sum can be written in the form Σ_{i,j} Cij fi gj, where Cij ∈ K[x1, . . . , xn].

Theorem 4.4.22. Let I and J be ideals in K[x1, . . . , xn], then

V (I.J) = V (I) ∪ V (J).

Proof. Let x ∈ V(I·J) ⇒ g(x)·h(x) = 0 ∀ g ∈ I, h ∈ J. If g(x) = 0 ∀ g ∈ I, then x ∈ V(I). If g(x) ≠ 0 for some g ∈ I, then h(x) = 0 ∀ h ∈ J. In either case, x ∈ V(I) ∪ V(J).

⇒ V (I.J) ⊆ V (I) ∪ V (J).

Conversely, Suppose x ∈ V (I) ∪ V (J) ⇒ either g(x) = 0 ∀ g ∈ I, or h(x) = 0 ∀ h ∈ J. ⇒ g(x).h(x) = 0 ∀ g ∈ I and ∀ h ∈ J.


Thus, f(x) = 0 for every f ∈ I·J, and hence x ∈ V(I·J).

⇒ V (I) ∪ V (J) ⊆ V (I.J).

Therefore, V (I.J) = V (I) ∪ V (J).

Intersections of Ideals

Definition 4.4.23. The intersection I ∩ J of two ideals I and J of K[x1, . . . , xn] is the set of polynomials belonging to both I and J.

Proposition 4.4.24. If I and J are ideals in K[x1, . . . , xn], then I ∩ J is also an ideal in K[x1, . . . , xn].

Proof. (i). Since 0 ∈ I and 0 ∈ J, we have 0 ∈ I ∩ J. (ii). Let f, g ∈ I ∩ J, then f + g ∈ I, since f, g ∈ I, and f + g ∈ J, since f, g ∈ J. Hence, f + g ∈ I ∩ J.

(iii). Let f ∈ I ∩ J and h ∈ K[x1, . . . , xn].

Since f ∈ I and I is an ideal in K[x1, . . . , xn], we have hf ∈ I.

Similarly, f ∈ J and J is an ideal in K[x1, . . . , xn], we have hf ∈ J. Thus, hf ∈ I ∩ J. Hence, I ∩ J is an ideal.

Theorem 4.4.25. If I and J are ideals in K[x1, . . . , xn], then

V (I ∩ J) = V (I) ∪ V (J).

Proof. Let x ∈ V (I) ∪ V (J). ⇒ x ∈ V (I) or x ∈ V (J). ⇒ f(x) = 0 ∀ f ∈ I or g(x) = 0 ∀ g ∈ J. Thus, certainly f(x) = 0 ∀ f ∈ I ∩ J. ⇒ x ∈ V (I ∩ J).

⇒ V (I) ∪ V (J) ⊆ V (I ∩ J).


To show the reverse inclusion, note that:

I.J ⊆ I ∩ J.

⇒ V (I ∩ J) ⊆ V (IJ).

But, V (IJ) = V (I) ∪ V (J), by theorem 4.4.22.

⇒ V (I ∩ J) ⊆ V (I.J) = V (I) ∪ V (J)

⇒ V (I ∩ J) ⊆ V (I) ∪ V (J).

Hence, V (I ∩ J) = V (I) ∪ V (J).

4.4.4 Zariski Closure and Quotient Ideals

We have seen that if a set S ⊆ Kn is an affine variety, then the set

I(S) = {f ∈ K[x1, . . . , xn] : f(a) = 0 ∀ a ∈ S} is an ideal; in fact, it is a radical ideal. By the ideal-variety correspondence, V(I(S)) is a variety. The following proposition states that this variety is the smallest variety that contains S.
Proposition 4.4.26. If S ⊆ K^n, the affine variety V(I(S)) is the smallest affine variety that contains S [in the sense that if W ⊆ K^n is any affine variety containing S, then V(I(S)) ⊆ W].

Proof. If W ⊇ S, then I(W) ⊆ I(S), because I is inclusion-reversing. But then V(I(W)) ⊇ V(I(S)), because V is inclusion-reversing. Since W is an affine variety, V(I(W)) = W (by theorem 4.4.15), and the result follows.

This proposition leads to the following definition.


Definition 4.4.27. The Zariski closure of a subset of affine space is the smallest affine variety containing the set. If S ⊆ K^n, then the Zariski closure of S is denoted by S¯ and equals V(I(S)).

Note that: I(S¯) = I(S). Since S ⊆ S¯, we have I(S) ⊇ I(S¯). Going the other way, f ∈ I(S) implies S ⊆ V(f). Then S ⊆ S¯ ⊆ V(f), since S¯ is the smallest variety containing S, so f ∈ I(S¯). Hence I(S) ⊆ I(S¯). Therefore I(S¯) = I(S).
Proposition 4.4.28. If V and W are affine varieties with V ⊆ W, then

W = V ∪ (W − V)¯.

Proof. Since W contains W − V and W is a variety, the smallest variety containing W − V must be contained in W. Hence, (W − V)¯ ⊆ W. Since V ⊆ W by hypothesis, we must have

V ∪ (W − V)¯ ⊆ W.

To get the reverse inclusion, note that if V ⊆ W, then W = V ∪ (W − V). Since W − V ⊆ (W − V)¯, the desired inclusion

W ⊆ V ∪ (W − V)¯

follows immediately. Therefore, W = V ∪ (W − V)¯.

Definition 4.4.29. If I and J are ideals in K[x1, . . . , xn], then I : J is the set

I : J = {f ∈ K[x1, . . . , xn]: fg ∈ I ∀g ∈ J}, and is called the ideal quotient (or colon ideal) of I by J.


Example 4.4.30. In K[x, y, z] we have

⟨xz, yz⟩ : ⟨z⟩ = {f ∈ K[x, y, z] : fz ∈ ⟨xz, yz⟩}
             = {f ∈ K[x, y, z] : fz = Axz + Byz for some A, B ∈ K[x, y, z]}
             = {f ∈ K[x, y, z] : f = Ax + By for some A, B ∈ K[x, y, z]}
             = ⟨x, y⟩.
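The ideal quotient in this example can also be computed mechanically. One standard route, not developed in this thesis, uses I : ⟨g⟩ = (1/g)(I ∩ ⟨g⟩) and computes the intersection by eliminating an auxiliary variable t from the ideal generated by tI and (1 − t)⟨g⟩. The following is only a sketch of that approach, assuming the Python library sympy:

    from sympy import symbols, groebner, div

    x, y, z, t = symbols('x y z t')

    # Ideal quotient <xz, yz> : <z>, via
    #   I : <g> = (1/g) * (I intersected with <g>),
    # where the intersection is found by eliminating the auxiliary variable t.
    I_gens = [x*z, y*z]
    g = z

    aux = [t*f for f in I_gens] + [(1 - t)*g]
    G = groebner(aux, t, x, y, z, order='lex')           # lex with t first eliminates t
    intersection = [p for p in G.exprs if not p.has(t)]  # generators of I ∩ <g>

    quotient_gens = []
    for p in intersection:
        q, r = div(p, g, x, y, z)                        # divide each generator by g = z
        assert r == 0
        quotient_gens.append(q)

    print(quotient_gens)   # expected, up to ordering and scaling: [x, y]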

Proposition 4.4.31. If I and J are ideals in K[x1, . . . , xn], then I : J is an ideal in K[x1, . . . , xn] and I : J contains I.

Proof. To show I : J contains I, note that, because I is an ideal, if f ∈ I, then fg ∈ I ∀ g ∈ K[x1, . . . , xn], and, hence, certainly fg ∈ I, ∀g ∈ J. To show I : J is an ideal, first note that (i). 0 ∈ I : J, because 0 ∈ I.

(ii). Let f1, f2 ∈ I : J, then f1g ∈ I, and f2g ∈ I ∀g ∈ J.

Since I is an ideal, (f1 + f2)g = f1g + f2g ∈ I, ∀g ∈ J.

Thus, f1 + f2 ∈ I : J.

(iii). If f ∈ I : J with h ∈ K[x1, . . . , xn], then fg ∈ I, and since I is an ideal, hfg ∈ I ∀g ∈ J, which means that hf ∈ I : J. Hence, I : J is an ideal.

Theorem 4.4.32. Let I and J be ideals in K[x1, . . . , xn]. Then,
1. V(I : J) ⊇ (V(I) − V(J))¯.

2. If K is algebraically closed and I is a radical ideal, then

V(I : J) = (V(I) − V(J))¯.

Proof. 1. We claim that

I : J ⊆ I(V (I) − V (J)).

For suppose that f ∈ I : J and x ∈ V(I) − V(J). Then fg ∈ I for all g ∈ J. Since x ∈ V(I), we have f(x)g(x) = 0 for all g ∈ J. Since x ∉ V(J), there is some g ∈ J such that g(x) ≠ 0. Hence f(x) = 0, and since x was an arbitrary point of V(I) − V(J), f vanishes on V(I) − V(J).


Therefore, f ∈ I(V(I) − V(J)), which proves our claim. Since V is inclusion-reversing, we have V(I : J) ⊇ V(I(V(I) − V(J))) = (V(I) − V(J))¯, which gives part 1.
2. Suppose that K is algebraically closed and that I = √I. Let x ∈ V(I : J). Equivalently,

if hg ∈ I, for all g ∈ J, then h(x) = 0. (1)

Now, let h ∈ I(V(I) − V(J)). If g ∈ J, then hg vanishes on V(I), because h vanishes on V(I) − V(J) and g on V(J). Thus, by the Nullstellensatz, hg ∈ √I. By assumption, √I = I, and hence hg ∈ I for all g ∈ J. By (1), we have h(x) = 0. Thus, x ∈ V(I(V(I) − V(J))). This establishes that

V(I : J) ⊆ V(I(V(I) − V(J))) = (V(I) − V(J))¯,

which, combined with part 1, completes the proof of the theorem.

Corollary 4.4.33. Let V and W be varieties in Kn. Then,

I(V ): I(W ) = I(V − W ).

Proof. In the proof of theorem 4.4.32 we showed that

I : J ⊆ I(V (I) − V (J)).

If we apply this to I = I(V) and J = I(W), we obtain I(V) : I(W) ⊆ I(V(I(V)) − V(I(W))) = I(V − W), using V(I(V)) = V and V(I(W)) = W. For the opposite containment, suppose f ∈ I(V − W) and g ∈ I(W). Then fg vanishes on V − W (because f does) and on V ∩ W (because g does), hence on all of V; thus fg ∈ I(V), which shows f ∈ I(V) : I(W).

4.5 Irreducible Varieties

We have already seen that the union of two affine varieties is again an affine variety. In this section, we are going to study irreducible varieties.


Definition 4.5.1. An affine variety V ⊆ K^n is said to be irreducible if whenever V is written in the form V = V1 ∪ V2, where V1 and V2 are affine varieties, then either V1 = V or V2 = V.
Example 4.5.2. 1. The variety V(xz, yz) = V(z) ∪ V(x, y) is not irreducible.

2. It is clear that a point, a line and a plane ought to be irreducible.

The following proposition gives the correspondence between irreducible varieties and prime ideals.
Proposition 4.5.3. Let V ⊆ K^n be an affine variety. Then V is irreducible if and only if I(V) is a prime ideal.

Proof. (⇒): Let V be irreducible and let fg ∈ I(V ).

Set V1 = V ∩ V (f) and V2 = V ∩ V (g); these are varieties, since the intersection of affine varieties is again an affine variety.

Since fg vanishes at every point of V, each point of V lies in V(f) or in V(g); thus fg ∈ I(V) implies that V = V1 ∪ V2.

Since V is irreducible, we have either V1 = V or V2 = V . Say the former holds, so that

V = V1 = V ∩ V (f). This implies that f vanishes on V , so that f ∈ I(V ). Thus, I(V ) is prime. (⇐):

Assume that I(V) is a prime ideal and let V = V1 ∪ V2. Suppose V1 ≠ V.

We claim that I(V) = I(V2).

Since V2 ⊆ V , we have

I(V ) ⊆ I(V2).

For the reverse inclusion, first note that I(V) ⊊ I(V1), since V1 ⊊ V.

Thus, we can pick f ∈ I(V1) − I(V ).

Now, let g ∈ I(V2) be arbitrary. Since V = V1 ∪ V2, it follows that fg vanishes on V , hence, fg ∈ I(V ). Since I(V ) is prime, we have either f ∈ I(V ) or g ∈ I(V ). But we know that f∈ / I(V ), and thus, g ∈ I(V ).

This proves that I(V) = I(V2), whence V = V2, since I is one-to-one. Therefore V is an irreducible variety.


Corollary 4.5.4. When K is algebraically closed, the functions I and V induce a one-to-one correspondence between irreducible varieties in K^n and prime ideals in K[x1, . . . , xn].
Proposition 4.5.5. If K is an infinite field and V ⊆ K^n is a variety defined parametrically by

x1 = f1(t1, . . . , tm),
⋮
xn = fn(t1, . . . , tm),

where f1, . . . , fn are polynomials in K[t1, . . . , tm], then V is irreducible.

Proof. Let F : Km → Kn be defined by:

F(t1, . . . , tm) = ( f1(t1, . . . , tm), . . . , fn(t1, . . . , tm) ).

Saying V is defined parametrically by the above equation means that V is the Zariski closure of F (Km). In particular, I(V ) = I(F (Km)).

For any g ∈ K[x1, . . . , xn], the function g ◦ F is a polynomial in K[t1 . . . , tm]: i.e

g ∘ F = g( f1(t1, . . . , tm), . . . , fn(t1, . . . , tm) ).

Because K is infinite, I(V ) = I(F (Km)) is the set of polynomials whose composition with F is the zero polynomial in K[t1, . . . , tm]:

I(V ) = {g ∈ K[x1, . . . , xn]: g ◦ F = 0}.

Now, suppose that gh ∈ I(V ), then

(gh) ◦ F = (g ◦ F )(h ◦ F ) = 0.

But if the product of two polynomials in K[t1, . . . , tm] is the zero polynomial, then one of them must be the zero polynomial. Hence, either g ∘ F = 0 or h ∘ F = 0. This means that either g ∈ I(V) or h ∈ I(V). This shows that I(V) is a prime ideal, and therefore that V is irreducible.
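The criterion I(V) = {g : g ∘ F = 0} used in this proof is easy to test on the twisted cubic from section 4.2, which is parametrized by F(t) = (t, t^2, t^3). A minimal sketch, assuming the Python library sympy:

    from sympy import symbols, expand

    t, x, y, z = symbols('t x y z')

    # The twisted cubic of section 4.2 is parametrized by F(t) = (t, t^2, t^3).
    F = {x: t, y: t**2, z: t**3}

    # A polynomial g lies in I(V) exactly when g o F is the zero polynomial in K[t]
    # (K infinite), as in the proof above.
    for g in (y - x**2, z - x**3, x*z - y**2, x + y):
        print(g, '->', expand(g.subs(F)))
    # y - x**2, z - x**3 and x*z - y**2 all compose to 0, so they lie in I(V);
    # x + y composes to t**2 + t, which is not the zero polynomial.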

The following proposition shows that any variety defined by a rational parametrization is irreducible.


Proposition 4.5.6. Let K be an infinite field and V be a variety defined by the rational parametrization

x1 = f1(t1, . . . , tm) / g1(t1, . . . , tm),
⋮
xn = fn(t1, . . . , tm) / gn(t1, . . . , tm),

where f1, . . . , fn, g1, . . . , gn ∈ K[t1, . . . , tm], then V is irreducible.

Proof. Set W = V(g1 g2 · · · gn) and let F : K^m − W → K^n be defined by

F(t1, . . . , tm) = ( f1(t1, . . . , tm)/g1(t1, . . . , tm), · · · , fn(t1, . . . , tm)/gn(t1, . . . , tm) ).

Then V is the Zariski closure of F(K^m − W), which implies that I(V) is the set of h ∈ K[x1, . . . , xn] such that the function h ∘ F is zero for all (t1, . . . , tm) ∈ K^m − W. The difficulty is that h ∘ F need not be a polynomial, and we thus cannot directly apply the argument in the latter part of proposition 4.5.5. We can get around this difficulty as follows.

Let h ∈ K[x1, . . . , xn]. Since

g1(t1, . . . , tm) · g2(t1, . . . , tm) · · · gn(t1, . . . , tm) ≠ 0

for any (t1, . . . , tm) ∈ K^m − W, the function (g1 g2 · · · gn)^N (h ∘ F) is equal to zero at precisely those values of (t1, . . . , tm) ∈ K^m − W for which h ∘ F is equal to zero. Moreover, if we let N be the total degree of h ∈ K[x1, . . . , xn], then (g1 g2 · · · gn)^N (h ∘ F) is a polynomial in K[t1, . . . , tm]. We deduce that h ∈ I(V) if and only if (g1 g2 · · · gn)^N (h ∘ F) is zero for all t ∈ K^m − W; but, since K is infinite, this happens if and only if (g1 g2 · · · gn)^N (h ∘ F) is the zero polynomial in K[t1, . . . , tm]. Thus, we have shown that

h ∈ I(V) if and only if (g1 g2 · · · gn)^N (h ∘ F) = 0 in K[t1, . . . , tm].

Now we can continue with our proof that I(V) is a prime ideal.

To see this, suppose p, q ∈ K[x1, . . . , xn] are such that p·q ∈ I(V). If the total degrees of p and q are M and N respectively, then the total degree of p·q is M + N. Thus,

(g1 g2 · · · gn)^(M+N) (p ∘ F)(q ∘ F) = 0.

But the left-hand side is a product of the polynomials (g1 g2 · · · gn)^M (p ∘ F) and (g1 g2 · · · gn)^N (q ∘ F) in K[t1, . . . , tm].


Hence, one of them must be the zero polynomial. In particular, either p ∈ I(V ) or q ∈ I(V ). This shows that I(V ) is a prime ideal, and, therefore, that V is an irreducible variety.

4.6 Decomposition of Affine Varieties into Irreducibles

In the previous section we saw that irreducible varieties arise naturally in many contexts. It is natural to ask whether an arbitrary variety can be built up out of irreducibles. In this section we explore this and related questions. We begin by translating the Ascending Chain Condition (ACC) for ideals into the language of varieties.

Proposition 4.6.1. [The Descending Chain Condition (DCC)] Any descending chain of varieties

V1 ⊇ V2 ⊇ V3 ⊇ · · · in K^n must stabilize. That is, there exists a natural number N such that

VN = VN+1 = ···

Proof. Passing to the corresponding ideals gives an ascending chain of ideals

I(V1) ⊆ I(V2) ⊆ I(V3) ⊆ · · ·

By ascending chain condition of ideals, there exists a positive integer N such that

I(VN ) = I(VN+1) = ··· .

Since V (I(V )) = V for any variety, we have

VN = VN+1 = ··· .


We can use the above proposition to prove the following basic result about the structure of affine varieties. Theorem 4.6.2. Let V ⊆ Kn be an affine variety, then V can be written as a finite union

V = V1 ∪ V2 ∪ · · · ∪ Vm, where each Vi is an irreducible variety.

Proof. Assume that V is an affine variety which cannot be written as a finite union of irreducibles. Then V is not irreducible, so that V = V1 ∪ V1′, where V ≠ V1 and V ≠ V1′. Further, one of V1 or V1′ must not be a finite union of irreducibles, say V1. Repeating the argument just given, we can write V1 = V2 ∪ V2′, where V1 ≠ V2 and V1 ≠ V2′, and

V2 is not a finite union of irreducibles. Continuing in this way gives us an infinite sequence of affine varieties

V ⊇ V1 ⊇ V2 ⊇ · · · with

V ≠ V1 ≠ V2 ≠ · · · .

This contradicts proposition 4.6.1.
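For a hypersurface V(f), the decomposition into irreducibles can be read off from the irreducible factors of f: over an algebraically closed field each factor cuts out an irreducible component, while general varieties need more machinery than this sketch provides. A hedged illustration on a hypothetical example, assuming the Python library sympy:

    from sympy import symbols, factor_list

    x, y, z = symbols('x y z')

    # Hypothetical hypersurface in K^3: f = (x^2 - y^2) * z.
    f = (x**2 - y**2) * z
    _, factors = factor_list(f)
    components = [g for g, multiplicity in factors]

    print(components)   # up to ordering: [z, x - y, x + y]
                        # so V(f) = V(z) ∪ V(x - y) ∪ V(x + y), a union of three planes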

Example 4.6.3. Consider the variety V(xz, yz), which is the union of a line (the z-axis) and a plane (the xy-plane), both of which are irreducible.
Definition 4.6.4. Let V ⊆ K^n be an affine variety. A decomposition

V = V1 ∪ V2 ∪ · · · ∪ Vm, where each Vi is an irreducible variety, is called a "minimal decomposition" (or sometimes an "irredundant union") if Vi ⊈ Vj for i ≠ j. With this definition, we can now prove the following uniqueness result.
Theorem 4.6.5. Let V ⊆ K^n be an affine variety. Then V has a minimal decomposition

V = V1 ∪ V2 ∪ · · · ∪ Vm.


(So each Vi is an irreducible variety and Vi ⊈ Vj for i ≠ j.) Furthermore, this minimal decomposition is unique up to the order in which

V1,...,Vm are written.

Proof. By theorem 4.6.2, V can be written in the form V = V1 ∪ V2 ∪ · · · ∪ Vm, where each Vi is irreducible. Further, if some Vi lies in some Vj for i ≠ j, we can drop Vi, and V will be the union of the remaining Vj's. Repeating this process leads to a minimal decomposition of V.
To show uniqueness, let V = V1′ ∪ V2′ ∪ · · · ∪ Vl′ be another minimal decomposition of V. Then for each Vi in the first decomposition we have

Vi = Vi ∩ V = Vi ∩ (V1′ ∪ · · · ∪ Vl′) = (Vi ∩ V1′) ∪ · · · ∪ (Vi ∩ Vl′).

Since Vi is irreducible, it follows that Vi = Vi ∩ Vj′ for some j, i.e., Vi ⊆ Vj′. Applying the same argument to Vj′ (using the Vi's to decompose V) shows that Vj′ ⊆ Vk for some k, and thus

Vi ⊆ Vj′ ⊆ Vk.

By minimality, i = k, and it follows that Vi = Vj′. Hence, every Vi appears in V = V1′ ∪ · · · ∪ Vl′, which implies m ≤ l. A similar argument proves l ≤ m, and m = l follows. Thus, the Vi′'s are just a permutation of the Vi's, and uniqueness is proved.

♣. We remark that the uniqueness part of theorem 4.6.5 is false if one does not insist that the union be finite.


SUMMARY

The following table summarizes the results of this chapter. In the table, it is supposed that all ideals are radical and that the field is algebraically closed.

ALGEBRA                              GEOMETRY
radical ideals                       varieties
  I                  →               V(I)
  I(V)               ←               V
addition of ideals                   intersection of varieties
  I + J              →               V(I) ∩ V(J)
  √(I(V) + I(W))     ←               V ∩ W
product of ideals                    union of varieties
  I·J                →               V(I) ∪ V(J)
  √(I(V)·I(W))       ←               V ∪ W
intersection of ideals               union of varieties
  I ∩ J              →               V(I) ∪ V(J)
  I(V) ∩ I(W)        ←               V ∪ W
quotient of ideals                   difference of varieties
  I : J              →               (V(I) − V(J))¯
  I(V) : I(W)        ←               V − W
prime ideals                         irreducible varieties
maximal ideals                       points of affine space
ascending chain condition            descending chain condition
