<<

Introduction to the Basics of Algebraic Geometry, Computational Tools and Geometric Applications

Janko Böhm, University of Saarland, 17.05.2004

Abstract This is the manuscript for a talk given in a seminar on computer aided geometric design at the University of Saarland. The aim of the talk was to introduce the basic concepts of algebraic geometry, the computational tools, i.e. resultants and Groebner bases, and their geometric applications.

Contents

1 Overview

2 Basics of algebra and geometry
  2.1 Affine varieties
  2.2 Varieties defined by ideals
  2.3 Hilbert Basis Theorem
  2.4 The ideal of a variety
  2.5 The Nullstellensatz
  2.6 Projection and elimination
  2.7 Some remarks on projective geometry
    2.7.1 Projective space and projective varieties
    2.7.2 The Theorem of Bezout

3 Computational tools
  3.1 Resultants
  3.2 Groebner bases

4 Geometric Applications
  4.1 Implicitation and birational geometry computations

1 Overview

The main topics:

• Basics of Algebra and Geometry: affine varieties, varieties defined by ideals, Hilbert Basis Theorem, the ideal of a variety, Nullstellensatz, projection and elimination, some remarks on projective geometry

• Computational Tools: resultants, Groebner bases

• Geometric Applications: implicitation and birational geometry computations, intersection computations, the genus of a curve, parametrization of curves and surfaces

2 Basics of algebra and geometry

2.1 Affine varieties Let k be a field.

Definition 1 An affine variety is the common zero locus of polynomials f1, ..., fr ∈ k [x1, ..., xn]

V (f1, ..., fr) = {f1 = 0, ..., fr = 0}

Example 2

• V (1) = ∅

• V (0) = k^n

• Linear Algebra: For linear fi this is the solution space of an inhomogeneous system of linear equations. Here we know how to decide whether V (f1, ..., fr) is empty and how to describe V (f1, ..., fr) by a parametrization, applying the Gauss algorithm.

• The graph of a function: For example the graph of y(x) = (x^3 − 1)/x:

is V (xy − x^3 + 1).

• Plane curves: Consider V (f) for one equation in the plane, e.g. f = y^2 − x^3 − x^2 + 2x − 1:

• Surfaces in k^3: Consider V (g) for one equation in 3-space (an algebraic set defined by 1 equation is called a hypersurface), e.g. g = f − z + z^2.

• Twisted cubic: Consider the curve C = V (y − x^2, z − x^3) ⊂ k^3 given by 2 equations in 3-space.

• There are also strange examples, e.g. V (xz, yz) = V (z) ∪ V (x, y) decomposes into the x-y-plane V (z) and the z-axis V (x, y).

Varieties which do not admit a further decomposition are called irreducible, otherwise reducible.

Example 3 Consider the following sets given by parametrizations:

• Bezier spline: The curve C ⊂ k^2 parametrized by

X(t) = x0 (1 − t)^3 + 3 x1 t (1 − t)^2 + 3 x2 t^2 (1 − t) + x3 t^3
Y(t) = y0 (1 − t)^3 + 3 y1 t (1 − t)^2 + 3 y2 t^2 (1 − t) + y3 t^3

with t ∈ k goes through the points (x0, y0), (x3, y3) ∈ k^2, and the tangent lines at these points go through (x1, y1) resp. (x2, y2).

See for example the standard vector graphics programs.

• Whitney umbrella: The surface S ⊂ k^3 given by the parametrization

X(s, t) = s · t
Y(s, t) = s
Z(s, t) = t^2

with (s, t) ∈ k^2.

C and S are also varieties. To see this, we have to describe them by implicit equations, i.e. as V (f1, ..., fr). Techniques to do this will be one of the topics. The implicit description would allow us, for example, to check easily whether a given point lies on the variety.
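Both parametrizations above are easy to evaluate numerically. The following minimal Python sketch uses hypothetical control points chosen only for illustration; it also checks a point of the Whitney umbrella against the implicit equation x^2 − y^2 z = 0, which one verifies directly by substituting the parametrization.

```python
def bezier(t, p0, p1, p2, p3):
    """Cubic Bezier curve: componentwise Bernstein form with control points p0..p3."""
    s = 1 - t
    return tuple(s**3*a + 3*t*s**2*b + 3*t**2*s*c + t**3*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def whitney(s, t):
    """Whitney umbrella parametrization (X, Y, Z) = (s*t, s, t^2)."""
    return (s*t, s, t**2)

# hypothetical control points, for illustration only
p0, p1, p2, p3 = (0, 0), (1, 2), (3, 2), (4, 0)
assert bezier(0, p0, p1, p2, p3) == p0   # the curve starts at (x0, y0)
assert bezier(1, p0, p1, p2, p3) == p3   # and ends at (x3, y3)

# every parametrized point of the umbrella satisfies x^2 - y^2*z = 0
X, Y, Z = whitney(2, 3)
assert X**2 - Y**2*Z == 0
```
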

2.2 Varieties defined by ideals

First recall that an ideal I ⊂ R inside a ring R is an additive subgroup of R s.t. R-multiples of elements in I again lie inside I. An important observation is that the common zero set of polynomials f1, ..., fr only depends on the ideal I = ⟨f1, ..., fr⟩ ⊂ k[x1, ..., xn] generated by f1, ..., fr. The reason is that if f1(p) = 0, ..., fr(p) = 0, then any polynomial linear combination also vanishes in p:

Σ_{i=1}^{r} si(p) fi(p) = 0  (each fi(p) = 0)

for all si ∈ k[x1, ..., xn]. Hence any different set of generators of I gives the same zero set, and we should define:

Definition 4 For an ideal I ⊂ k [x1, ..., xn] we define

V (I) = {x ∈ k^n | f(x) = 0 ∀ f ∈ I}

Example 5 Consider I = ⟨f1, f2⟩ with

f1 = 2x^2 − 3y^2 + 10
f2 = 3x^2 − y^2 + 1

Gaussian elimination applied to the linear system of equations (in X = x^2, Y = y^2)

2X − 3Y + 10 = 0
3X − Y + 1 = 0

shows that

I = ⟨x^2 − 1, y^2 − 4⟩

hence V (I) = {(1, 2), (−1, 2), (1, −2), (−1, −2)} consists of 4 points.

(Figure: the four points of V (I) plotted in the (x, y)-plane.)
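Example 5 can be verified with a computer algebra system; the following sketch assumes the sympy library:

```python
from sympy import symbols, groebner, solve

x, y = symbols('x y')
f1 = 2*x**2 - 3*y**2 + 10
f2 = 3*x**2 - y**2 + 1

# the reduced lex Groebner basis recovers the simpler generators x^2 - 1, y^2 - 4
G = groebner([f1, f2], x, y, order='lex')

# the four common zeros of f1 and f2
sols = solve([f1, f2], [x, y])
```
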

The definition of V (I) naturally raises the question whether every ideal in k[x1, ..., xn] has a finite set of generators, hence whether every V (I) can be written as V (f1, ..., fr) with some f1, ..., fr ∈ k[x1, ..., xn]. This is answered by the Hilbert Basis Theorem:

2.3 Hilbert Basis Theorem

In k[x] (a principal ideal domain) every ideal has a single generator, which one can find by successively using the Euclidean algorithm to compute the gcd. What about polynomials in several variables?

Definition 6 A ring R is called Noetherian if every ideal is finitely generated, or equivalently if R contains no infinite properly ascending chain of ideals I1 ⊊ I2 ⊊ I3 ⊊ ... (Exercise: recall the proof of the equivalence from your algebra lecture). You can imagine the analogous definition for modules.

Theorem 7 (Hilbert Basis Theorem) If R is Noetherian then so is R[x].

Proof (sketch). Let I ⊂ R[x] be an ideal. The lead coefficients of elements of I generate an ideal L ⊂ R, which is finitely generated by some g1, ..., gr ∈ R since R is Noetherian. By definition for each gi there is an mi ∈ N0 s.t.

I ∋ fi = gi x^mi + lower order terms

and let J = ⟨f1, ..., fr⟩ ⊂ R[x] and m = max{m1, ..., mr}. Modulo the generators of J we can reduce the degree of elements of I to degree < m. The R-module M generated by 1, x, ..., x^(m−1) is finitely generated, hence also M ∩ I is finitely generated by some h1, ..., hl ∈ R[x], and I = ⟨f1, ..., fr, h1, ..., hl⟩.

2.4 The ideal of a variety

Given an ideal I, we formed the set of common zeros V (I) of the elements of I. Conversely, given some set S ⊂ k^n, we can consider the set I (S) of polynomials vanishing on S, which is indeed an ideal (exercise: prove this).

Definition 8 For S ⊂ k^n let I (S) = {f ∈ k[x1, ..., xn] | f(x) = 0 ∀ x ∈ S}.

Obviously it holds that

Remark 9 J1 ⊂ J2 ⇒ V (J2) ⊂ V (J1) S1 ⊂ S2 ⇒ I (S2) ⊂ I (S1)

So one can start to study a given variety V (J) by studying the hypersurfaces V (f) in which it is contained, V (J) ⊂ V (f); e.g. the twisted cubic lies inside V (y − x^2) and V (z − x^3). By the above correspondence this naturally leads to the ideal membership problem: Given an ideal J ⊂ R, check whether a given f ∈ R lies inside J. For k[x] this is solved by division with remainder with respect to the single generator of J (found by the Euclidean algorithm). How do the two processes forming I (S) and V (J) relate to each other? One could expect that J = I (V (J)) for any ideal J, but this is not true: Consider for example the ideal J = ⟨x^2⟩ ⊂ k[x], where V (J) = {0} ⊂ k, so I (V (J)) = ⟨x⟩. At least we can note that if f ∈ J and p ∈ V (J), then f(p) = 0, so f ∈ I (V (J)), i.e.

Remark 10 For every ideal J

J ⊂ I (V (J))

What about the relation the other way around? Since every polynomial in I (S) vanishes on S, we note:

Remark 11 For any set S ⊂ kn

S ⊂ V (I (S))

If S = V (J) we can get the other inclusion by applying V to J ⊂ I (V (J)), hence

Remark 12 If S is a variety, i.e. S = V (J) for some ideal J, then S = V (I (S)).

By playing with V and I you can easily check:

Remark 13 V (I (S)) is the smallest variety containing S.

(If S is any set and V (J) some variety with S ⊂ V (J), then I (V (J)) ⊂ I (S), so V (I (S)) ⊂ V (I (V (J))) = V (J); since also S ⊂ V (I (S)) and V (I (S)) is a variety, it is the smallest variety containing S.)

Example 14 Take S = {(x, 0) ∈ R^2 | 0 < x < 1}. Then I (S) = ⟨y⟩ and V (I (S)) is the whole x-axis. For a small part S of a circle, V (I (S)) gives the whole circle.

The blue + cyan part is V (I (S)) for the blue Bezier spline S:

If you know about topology, then note that we can consider V (I (S)) as the closure of S in a suitable topology, the Zariski topology. You can imagine that as the closed sets we should take the affine varieties. To check that this indeed defines a topology one has to check that ∅, k^n are affine varieties and that finite unions and arbitrary intersections of varieties are again varieties. So you should prove the following remark (and also keep in mind that the polynomial ring is Noetherian): Given two ideals J1 and J2 we can form the sum J1 + J2, the product J1 · J2 and the intersection J1 ∩ J2, where

J1 + J2 = {f1 + f2 | fi ∈ Ji}
J1 · J2 = ⟨f1 · f2 | fi ∈ Ji⟩

Remark 15 The vanishing loci are

V (J1 + J2) = V (J1) ∩ V (J2)

V (J1 · J2) = V (J1 ∩ J2) = V (J1) ∪ V (J2) so unions and intersections of varieties are again varieties.

(Exercise: prove this).

Example 16 With J1 = ⟨y − x^2⟩ and J2 = ⟨z − x^3⟩ the twisted cubic given by J1 + J2 is the intersection of V (J1) and V (J2).

Definition 17 Affine n-space A^n(k) is k^n together with the Zariski topology.

Working with affine varieties, it is natural to ask what kind of maps we should consider between them. The good maps with respect to the Zariski topology are the ones (locally) given by polynomials. Given a variety S ⊂ k^n, any polynomial function on k^n induces a polynomial function on S, and two such functions f1, f2 are identical iff f1 − f2 ∈ I (S), so the polynomial functions on S are the elements of the coordinate ring

k [x1, ..., xn] /I (S)

Example 18 Consider the map (parametrization)

ϕ : A^1(R) → A^2(R), t ↦ (t^2 + 1, t^3 + t)

and let C = image(ϕ). Then I (C) = ⟨y^2 − x^3 + x^2⟩ (you can easily check the inclusion ⊃ by substituting the parametrization into the equation: (t^3 + t)^2 = (t^2 + 1)^3 − (t^2 + 1)^2). We observe that (0, 0) ∈ V (I (C)) but (0, 0) ∉ C. Actually one can prove that

the closure of C is V (I (C)) = C ∪ {(0, 0)}

2.5 The Nullstellensatz

If 1 ∈ J then V (J) = ∅ (in k[x] we can check 1 ∈ J easily by computing the single generator of J). What about the converse? In general this is false, e.g. consider V (x^2 + 1) ⊂ A^1(R), but if k is algebraically closed then it is true (since any polynomial decomposes into linear factors). It turns out to be true also in polynomial rings with more than one variable:

Theorem 19 (Weak Nullstellensatz) For J ⊂ k[x1, ..., xn] an ideal it holds: If k = k̄ and V (J) = ∅ then 1 ∈ J.

(Without proof.) This says that any system of equations generating an ideal strictly smaller than k[x1, ..., xn] has a common zero. So you can consider it as the fundamental theorem of algebra in several variables.
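Computationally, a Groebner basis (introduced in Section 3.2 below) detects whether 1 ∈ J. A sketch assuming the sympy library, for the inconsistent system xy = 1, x = 0:

```python
from sympy import symbols, groebner

x, y = symbols('x y')
# x = 0 contradicts xy = 1, so V(xy - 1, x) is empty and 1 lies in the ideal
G = groebner([x*y - 1, x], x, y, order='lex')
print(G.exprs)   # [1]
```
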

Definition 20 For an ideal J the radical ideal is

√J = {f ∈ R | ∃ n : f^n ∈ J}

Example 21 √⟨x^2 (x − 1)⟩ = ⟨x (x − 1)⟩. So think of the radical as turning multiple zeros into zeros of order 1.

Remark 22 √J ⊂ I (V (J)) and V (J) = V (√J).

Proof. If f ∈ √J, i.e. f^m ∈ J, then f^m(p) = 0 for all p ∈ V (J), hence f(p) = 0 for all p ∈ V (J), i.e. f ∈ I (V (J)).
If we apply V to the inclusion √J ⊂ I (V (J)) we get V (J) = V (I (V (J))) ⊂ V (√J).
For the other inclusion apply V to J ⊂ √J and get V (√J) ⊂ V (J).

Theorem 23 (Strong Nullstellensatz) For J ⊂ k[x1, ..., xn] it holds: If k = k̄ then

I (V (J)) = √J

Proof (trick of Rabinovich). Take generators of J,

J = ⟨f1, ..., fs⟩

and let f ∈ I (V (J)). Then for L = ⟨J, yf − 1⟩ ⊂ k[x1, ..., xn, y] we have V (L) = ∅ (since for all p ∈ V (J) we get y f(p) − 1 = −1 ≠ 0), hence by the weak Nullstellensatz there are ai and b such that

Σ_i ai fi + b (yf − 1) = 1

In particular, setting y = 1/f and multiplying by a high enough power of f to clear denominators, we get f^m ∈ J.

2.6 Projection and elimination

In linear algebra the Gauss algorithm parametrizes the solution space of a linear system of equations by a coordinate space. This can be viewed as a projection of the solution space. Solving nonlinear systems of equations, we can also apply projection. For any ideal I ⊂ R = k[x1, ..., xn] we consider the elimination ideal

Im = I ∩ k [xm+1, ..., xn] and the projection

πm : A^n(k) → A^(n−m)(k), πm(a1, ..., an) = (am+1, ..., an)

Example 24 Consider

I = ⟨x^2 − 1, y^2 − 4⟩
S = V (I) = {(1, 2), (−1, 2), (1, −2), (−1, −2)}

then π1(S) = {−2, 2} and I1 = ⟨y^2 − 4⟩.

So what do you think is the relation between them?

Example 25 If S is some variety, then in general the projection is not a variety, e.g. for the hyperbola S = V (xy − 1) we have π1(S) = A^1(k) \ {0}

hence the right question is how to describe the Zariski closure of the projection:

Theorem 26 If k = k̄ then

V (Im) = V (I (πm (V (I)))),

i.e. V (Im) is the Zariski closure of πm (V (I)).

Proof. For any (am+1, ..., an) = πm(a1, ..., an) ∈ πm(V (I)) we observe that

0 = f(a1, ..., an) = f(am+1, ..., an) for all f ∈ Im

(since f ∈ I), hence πm(V (I)) ⊂ V (Im); V (Im) being closed, this also holds for the closure of πm(V (I)).
Any g ∈ I (πm (V (I))) ⊂ k[xm+1, ..., xn] can be considered as an element of k[x1, ..., xn] and vanishes on V (I), so by the Nullstellensatz ∃ m : g^m ∈ I; but then g^m ∈ I ∩ k[xm+1, ..., xn] = Im, i.e. I (πm (V (I))) ⊂ √Im = I (V (Im)). Hence, applying V,

V (Im) = V (I (V (Im))) ⊂ V (I (πm (V (I)))) = the Zariski closure of πm (V (I))

We will see soon how to compute Im.

2.7 Some remarks on projective geometry

2.7.1 Projective space and projective varieties

In affine space A^2(k) we expect two lines to meet in one point, but there are special pairs of lines for which this does not hold, namely two parallel lines. We fix this problem by the following:

We define the projective n-space P^n(k) over k as

P^n(k) = (A^(n+1)(k) \ {0}) / ∼

with (a0, ..., an) ∼ (b0, ..., bn) ⇔ ∃ λ ∈ k^* with (a0, ..., an) = λ (b0, ..., bn), and denote the equivalence classes by (a0 : ... : an). So we identify all points lying on the same line through (0, ..., 0).

Example 27 We can think of P^2(R) as the half sphere with opposite points on the boundary identified. We observe that P^2(R) = A^2(R) ∪ P^1(R) is the union of A^2(R) with the line at infinity P^1(R), by stereographic projection.

So how to define varieties in projective space? All points on lines through the origin of A^(n+1)(k) are identified, so we have to define varieties by polynomials which satisfy

f (p) = 0 ⇒ f (λp) = 0 for all λ

If k is infinite, this is equivalent to f being homogeneous, i.e. all monomials of f have the same degree. For example x^2 y + x y^2 is homogeneous, but x^2 y + xy is not. In terms of ideals we should consider homogeneous ideals, i.e. ideals which have homogeneous generators. One of the reasons for considering projective space is:

2.7.2 The Theorem of Bezout

Theorem 28 If k is algebraically closed and f, g ∈ k[x, y, z] are homogeneous of degrees d and e with no common factor, then the curves C1 = V (f) and C2 = V (g) in P^2(k) intersect in d · e points, counted with multiplicity.

Example 29 The lines L1 = V (x) and L2 = V (x − z) in P^2 meet at the point (0 : 1 : 0), since x = 0 and z = 0.

The line L1 = V (x) and the parabola C = V (yz − x^2) meet at (0 : 0 : 1) and (0 : 1 : 0).

The line L2 = V (y) and the parabola C meet in (0 : 0 : 1) with multiplicity 2, as the parabola is tangent to L2.
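The intersection multiplicity in the tangent case can be seen from a resultant computation in the affine chart z = 1; a sketch assuming the sympy library:

```python
from sympy import symbols, resultant, roots

x, y = symbols('x y')
parabola = y - x**2   # yz - x^2 in the chart z = 1
line = y              # the line L2 = V(y)

# eliminate y; the roots of r are the x-coordinates of the intersections
r = resultant(parabola, line, y)
print(roots(r, x))    # {0: 2}: one intersection point, counted with multiplicity 2
```
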

3 Computational tools

Working with varieties, the two key computational tools are resultants (fast, applicable in special situations) and Groebner bases (the universal tool).

3.1 Resultants

Resultants are used to solve systems of polynomial equations and to determine whether or not a solution exists. They produce elements in the elimination ideals. We first give some results in K[x], keeping in mind that we can take K = k(x2, ..., xn), the field of rational functions in the remaining variables.

Given f, g ∈ K[x] of positive degree l = deg f and m = deg g, write f = f0 x^l + f1 x^(l−1) + ... + fl and g = g0 x^m + g1 x^(m−1) + ... + gm. If f and g have a common factor h of nonzero degree, i.e. f = p h and g = q h, then with A = q and B = −p we get

Af + Bg = 0 with deg A ≤ m − 1 and deg B ≤ l − 1

Conversely, given this, assume B ≠ 0 and that f and g do not have a common factor of positive degree. The extended Euclidean algorithm gives a, b ∈ K[x] with af + bg = 1, so

B = Baf + Bbg = (Ba − bA) f

(using Bg = −Af), i.e. deg B ≥ l, a contradiction. So we proved:

Proposition 30 For f, g ∈ K[x] of positive degree deg f = l and deg g = m the following are equivalent:

1. f and g have a common factor of positive degree.

2. There are 0 ≠ A, B ∈ K[x] with Af + Bg = 0, deg A ≤ m − 1 and deg B ≤ l − 1.

The equation Af + Bg = 0 is equivalent to a linear system of equations in the coefficients of A = a0 x^(m−1) + ... + am−1 and B = b0 x^(l−1) + ... + bl−1:

Syl(f, g, x) · (a0, ..., am−1, b0, ..., bl−1)^T = 0

with the (l + m) × (l + m) Sylvester matrix Syl(f, g, x): its first m columns contain the coefficients f0, ..., fl of f, each column shifted down by one more row, and its last l columns contain the coefficients g0, ..., gm of g in the same staggered way (a concrete 4 × 4 instance appears in Example 33 below). E.g. the coefficient of the lead term x^(l+m−1) is given by f0 a0 + g0 b0, and the constant coefficient is given by fl am−1 + gm bl−1. So defining the Sylvester resultant Res(f, g, x) = det Syl(f, g, x) we get:

Proposition 31 For f, g ∈ K[x] of positive degree deg f = l and deg g = m the following are equivalent:

1. f and g have a common factor of positive degree.

2. There are 0 ≠ A, B ∈ K[x] with Af + Bg = 0, deg A ≤ m − 1 and deg B ≤ l − 1.

3. Res (f, g, x) = 0.

Furthermore there are A, B ∈ K[x], polynomial expressions in the coefficients of f and g, with Af + Bg = Res(f, g, x).

For the second part we modify the right-hand side of the linear system of equations in the proof of 2 ⇔ 3. Given f, g ∈ k[x1, ..., xn] of positive degree in x1, we can compute the resultant Res(f, g, x1) ∈ k[x2, ..., xn] by computing in k(x2, ..., xn)[x1] and noting that we do not have to invert coefficients in the course of the computation.

Proposition 32 For f, g ∈ R = k[x1, ..., xn] and I = ⟨f, g⟩, we get

1. Res (f, g, x1) ∈ I1 = I ∩ k [x2, ..., xn] is in the first elimination ideal.

2. Res (f, g, x1) = 0 ⇔ f and g have a common factor of positive degree.

Proof.

1. We can write Res(f, g, x1) = Af + Bg with A, B ∈ R, since Res(f, g, x1) is an integer polynomial expression in the coefficients of f, g with respect to x1.

2. By the above Proposition and Gauss's lemma we note:

Res(f, g, x1) = 0
⇔ f, g have a common factor of positive degree in k(x2, ..., xn)[x1]
⇔ f, g have a common factor of positive degree in k[x2, ..., xn][x1]

Example 33 Consider I = ⟨f1, f2⟩ with

f1 = x + 2xy + y^2
f2 = x^2 − y^2

so

Res(f1, f2, y) = det Syl(f1, f2, y) = det
( 1    0    −1    0   )
( 2x   1    0     −1  )
( x    2x   x^2   0   )
( 0    x    0     x^2 )
= −x^2 (x − 1)(3x + 1)

i.e. the x-coordinates of the solutions have to satisfy this equation. Inserting x1 = 0, x2 = 1 and x3 = −1/3 in f2 gives y1 = 0, y2 = ±1 and y3 = ±1/3. Checking these tuples with f1 gives the solutions

(x, y) = (0, 0), (1, −1), (−1/3, −1/3)
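Example 33 can be reproduced with a computer algebra system; a sketch assuming the sympy library:

```python
from sympy import symbols, resultant, factor, expand

x, y = symbols('x y')
f1 = x + 2*x*y + y**2
f2 = x**2 - y**2

# Res(f1, f2, y) lies in the first elimination ideal I ∩ k[x]
r = resultant(f1, f2, y)
assert expand(r + x**2*(x - 1)*(3*x + 1)) == 0   # r = -x^2*(x - 1)*(3x + 1)
print(factor(r))
```
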

3.2 Groebner bases

Now we will consider the algorithm which allows us to do the computations in polynomial rings, e.g. to treat the ideal membership problem. In the case of k[x] we can find the generator of an ideal by applying the Euclidean algorithm, and check ideal membership using division with remainder. Here we successively divide by the lead term of the generator. The lead term is given by ordering the monomials with respect to the degree.

Example 34 Test whether V (x^2 + x + 1) ⊂ V (x^4 + x^2 + 1):

(x^4       + x^2     + 1) : (x^2 + x + 1) = x^2 − x + 1
 x^4 + x^3 + x^2
      −x^3           + 1
      −x^3 − x^2 − x
             x^2 + x + 1
             x^2 + x + 1
                       0

so the remainder is zero, hence the answer is yes. In the multivariate case we do not know how to find the lead term (e.g. consider xy^2 + x^2y), which we would need for a division algorithm and an analogue of the Euclidean algorithm. So we need:

Definition 35 (monomial order) To any monomial x^α = x1^α1 · ... · xn^αn we can associate the exponent vector α = (α1, ..., αn). A monomial order is a total ordering > on the set of exponent vectors (here total means: for any 2 elements it holds α < β, α = β or α > β) with the following properties:

1. Multiplication of monomials is respected, i.e. for all γ we have α > β ⇒ α + γ > β + γ.

2. Any nonempty set has a smallest element.

There are various monomial orders. Here we will only consider:

Example 36 (Lexicographic order) α > β ⇔ the leftmost nonzero entry of α − β is positive. For example in k[x, y, z]:

x = (1, 0, 0) > y = (0, 1, 0) > z = (0, 0, 1)
xy^2 = (1, 2, 0) > (0, 3, 4) = y^3 z^4
x^3 y^2 z^4 = (3, 2, 4) > (3, 2, 1) = x^3 y^2 z

With respect to a given monomial ordering, for any polynomial f we denote by in(f) the largest monomial appearing in f and by lt(f) the corresponding term in f (i.e. the product of in(f) by its coefficient).

Algorithm 37 (Division with remainder) Given f ∈ R = k[x1, ..., xn] and g1, ..., gm ∈ R we can write

f = Σ ai gi + r

where no monomial of r is divisible by any in(gi), by the following algorithm:

a=(0..0); remainder=0;
while f!=0 do (
  if exist i with in(g#i) divides in(f) then (
    c=lt(f)/lt(g#i);
    a#i=a#i+c; f=f-c*g#i;
  ) else (
    remainder=remainder+lt(f);
    f=f-lt(f);
  );
);

The process terminates, since in every step the initial term decreases, and this has to stop since every set of monomials has a smallest element. Note that in general in any step we have different possible choices of i such that in(gi) divides the initial term, hence the output is not unique.

Example 38 Using lex, divide x^2y + xy^2 + y^2 by xy − 1 and y^2 − 1:

x^2y + xy^2 + y^2 = x · (xy − 1) + y · (xy − 1) + 1 · (y^2 − 1) + (x + y + 1)

  x^2y + xy^2 + y^2
  x^2y        − x
         xy^2 + x + y^2
         xy^2       − y
                x + y^2 + y    (x goes to the remainder)
                    y^2 + y
                    y^2     − 1
                          y + 1    (goes to the remainder)

so the remainder is x + y + 1. Note that, unlike in the one-variable case, we have to put terms into the remainder also in the intermediate steps.
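The same division can be carried out with sympy's reduced (a tooling assumption; note that, as remarked above, the result depends on the order of the divisors):

```python
from sympy import symbols, reduced, expand

x, y = symbols('x y')
f = x**2*y + x*y**2 + y**2

# divide f by (xy - 1, y^2 - 1) with respect to lex x > y
q, r = reduced(f, [x*y - 1, y**2 - 1], x, y, order='lex')

# the division identity f = q1*(xy - 1) + q2*(y^2 - 1) + r holds exactly
assert expand(q[0]*(x*y - 1) + q[1]*(y**2 - 1) + r - f) == 0
print(r)   # x + y + 1
```
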

Using lex we divide x^2 − y^2 by x^2 + y and xy + x:

x^2 − y^2 = 1 · (x^2 + y) + (−y^2 − y)

and −y^2 − y becomes the remainder, since y^2 is divisible by neither x^2 nor xy. This seems to be nonsense, as

x^2 − y^2 = −y (x^2 + y) + x (xy + x)

so x^2 − y^2 ∈ ⟨x^2 + y, xy + x⟩ and the division algorithm does not return 0. The problem lies in the fact that in this equation the two lead terms cancel. So we can repair the situation by adding to the list of divisors all polynomials which we can build by cancelling lead terms between divisors. This leads to the notion of a Groebner basis of an ideal:

Definition 39 g1, ..., gr ∈ I are called a Groebner basis of I if in (g1) , ..., in (gr) generate the ideal generated by all initial monomials of elements in I.

Remark 40 (Ideal membership) If g1, ..., gr is a Groebner basis of I then f ∈ I iff division by g1, ..., gr gives 0.

Proof. If f ∈ I then division with remainder gives

f = Σ ai gi + r

where no monomial of r is divisible by any in(gi). Since r ∈ I and g1, ..., gr is a Groebner basis, in(r) ∈ ⟨in(g1), ..., in(gr)⟩, so there is an i such that in(gi) | in(r), hence r = 0.

In the example above, a Groebner basis of I = ⟨x^2 + y, xy + x⟩ is given by

G = {y^2 + y, x^2 + y, xy + x}

Since we can obtain x^2 − y^2 by cancelling lead terms, and division of x^2 − y^2 by x^2 + y and xy + x yields −y^2 − y, this element has to be contained in the Groebner basis. We need a criterion telling us at which point we have reached a Groebner basis. Given two polynomials, the syzygy polynomial cancels their lead terms:

Definition 41 (Syzygy polynomial) For f, g ∈ R the syzygy polynomial is

S(f, g) = (lcm(in(f), in(g)) / lt(f)) · f − (lcm(in(f), in(g)) / lt(g)) · g

Theorem 42 g1, ..., gr ∈ I are a Groebner basis iff S(gi, gj) divided by g1, ..., gr gives remainder zero for all i, j.

Proof. ⇒: S(gi, gj) ∈ I, so see the remark on the ideal membership question. ⇐: Let f ∈ I. Since all syzygy polynomials reduce to 0 modulo g1, ..., gr, we can write

f = Σ_i ai gi

We have to show that in(f) ∈ ⟨in(g1), ..., in(gr)⟩. If in(f) = in(ai gi) for some i then this is the case; otherwise in the sum Σ_i ai gi some lead terms cancel. In this case by assumption we can replace any syzygy pair in the sum by an R-combination of g1, ..., gr with smaller lead terms, and iterate. Exercise: Use the theorem to check in the above example that G is indeed a Groebner basis.

Algorithm 43 (Buchberger) By the above theorem the following algorithm computes a Groebner basis:

G={g1,...,gr};
repeat (
  G'=G;
  for all tuples p,q in G do (
    S=divide(S(p,q),G);
    if S!=0 then G=append(G,S);
  );
) until G'==G;

By the Noetherian property of R (the ideal generated by the initial terms of G strictly increases in every round) the algorithm terminates.
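A naive transcription of Algorithm 43 into Python, using sympy's reduced for the division step (a sketch for illustration; real implementations add Buchberger's criteria to discard useless pairs):

```python
from itertools import combinations
from sympy import symbols, expand, lcm, LT, reduced

def s_poly(f, g, gens, order='lex'):
    """Syzygy polynomial S(f, g): cancel the lead terms of f and g."""
    lt_f = LT(f, *gens, order=order)
    lt_g = LT(g, *gens, order=order)
    m = lcm(lt_f, lt_g)
    return expand(m/lt_f*f - m/lt_g*g)

def buchberger(polys, gens, order='lex'):
    """Add remainders of syzygy polynomials until all of them reduce to 0."""
    G = list(polys)
    pairs = list(combinations(range(len(G)), 2))
    while pairs:
        i, j = pairs.pop()
        s = s_poly(G[i], G[j], gens, order)
        if s == 0:
            continue
        _, r = reduced(s, G, *gens, order=order)
        if r != 0:
            pairs.extend((k, len(G)) for k in range(len(G)))
            G.append(r)
    return G

x, y = symbols('x y')
G = buchberger([x**2 + y, x*y + x], (x, y))
# reproduces the example above: the element y^2 + y is added to x^2 + y, xy + x
```
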

Example 44 In k[t, z, y, x] with respect to the lexicographic order t > z > y > x we compute a Groebner basis of

I = ⟨t^2 − x, t^3 − y, t^4 − z⟩

The essential steps are as follows; in each step we form a syzygy polynomial of two basis elements and divide with remainder by the current basis:

S(t^2 − x, t^3 − y) = t (t^2 − x) − (t^3 − y) = −tx + y, already a remainder; add tx − y
S(t^2 − x, t^4 − z) = t^2 (t^2 − x) − (t^4 − z) = −t^2 x + z, which reduces modulo t^2 − x to z − x^2; add z − x^2
S(t^3 − y, t^4 − z) = t (t^3 − y) − (t^4 − z) = −ty + z, which reduces modulo z − x^2 to −ty + x^2; add ty − x^2
S(tx − y, ty − x^2) = y (tx − y) − x (ty − x^2) = x^3 − y^2; add y^2 − x^3

You can check as an exercise that all other syzygy polynomials reduce to 0. Hence a Groebner basis of I is given by y^2 − x^3, z − x^2, tx − y, ty − x^2, t^2 − x. Note that we do not need t^3 − y and t^4 − z, since each appears in a syzygy polynomial with coefficient 1; dropping them we still get a Groebner basis, which is minimal with respect to the following notion:

Definition 45 A Groebner basis g1, ..., gr is called minimal, if removing a generator from hin (g1) , ..., in (gr)i gives a strictly smaller ideal.

4 Geometric Applications

4.1 Implicitation and birational geometry computations

Computing Im with Groebner bases:

Theorem 46 If G = {g1, ..., gr} is a Groebner basis of I ⊂ R with respect to lex, then Gm := G ∩ k [xm+1, ..., xn] is a Groebner basis of Im ⊂ k [xm+1, ..., xn] with respect to lex.

Proof. We first show Im = ⟨Gm⟩: Obviously Gm ⊂ Im. On the other hand take f ∈ Im ⊂ I and do division with remainder by g1, ..., gr. In the first step

f = aj1 gj1 + r1

with in(gj1) | in(f), so in(gj1) ∈ k[xm+1, ..., xn]. Any term of gj1 containing one of x1, ..., xm would be bigger than in(gj1) in the lex ordering, so gj1 ∈ k[xm+1, ..., xn] ∩ G = Gm. As a consequence r1 ∈ Im, so inductively we get

f = aj1 gj1 + ... + ajs gjs

with gji ∈ Gm, the remainder being 0 because f ∈ I and g1, ..., gr is a Groebner basis of I. So we proved that f ∈ ⟨Gm⟩. We skip the proof that Gm is a Groebner basis.

Projection allows us to describe the image of polynomial maps:

Example 47 Consider the curve C = ϕ(A^1(k)) with

ϕ : A^1(k) → A^3(k), t ↦ (t^2, t^3, t^4)

The graph Γ(ϕ) = {(t, ϕ(t)) | t ∈ A^1(k)} ⊂ A^1(k) × A^3(k) has projections to the source and, via π1 (forgetting t), to the target of ϕ, and image(ϕ) = π1(Γ(ϕ)). So in order to get the image of ϕ we have to compute the first elimination ideal of

I(Γ(ϕ)) = ⟨x − t^2, y − t^3, z − t^4⟩

By the above example a Groebner basis of I(Γ(ϕ)) in k[t, z, y, x] with respect to t > z > y > x is given by y^2 − x^3, z − x^2, tx − y, ty − x^2, t^2 − x, hence

image(ϕ) = V (y^2 − x^3, z − x^2)

(Note that in this case image(ϕ) is closed; in general elimination only computes the closure.)
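The elimination step of Example 47 can be reproduced in sympy (a tooling assumption); with lex order and t first, the basis elements free of t cut out the image:

```python
from sympy import symbols, groebner

t, z, y, x = symbols('t z y x')
G = groebner([x - t**2, y - t**3, z - t**4], t, z, y, x, order='lex')

# the generators not involving t generate the first elimination ideal
elim = [g for g in G.exprs if not g.has(t)]
print(elim)   # z - x**2 and y**2 - x**3, up to ordering
```
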

This is the key computational ingredient of the large area of birational geometry, where more generally the maps are given by rational functions; think for example of the parametrization of the circle.

Proposition 48 Suppose k = k̄ and we are given a rational map

ϕ : A^m(k) \ Z → A^n(k)
t = (t1, ..., tm) ↦ (f1(t)/g1(t), ..., fn(t)/gn(t))

with fi, gi ∈ k[t1, ..., tm] and Z = V (g), g = g1 · ... · gn, the zero locus of the denominators. Then the Zariski closure of image(ϕ) is V (Im+1) with

I = ⟨g1 x1 − f1, ..., gn xn − fn, 1 − gs⟩ ⊂ k[s, t1, ..., tm, x1, ..., xn]

Note that this stays true for any infinite field.

We skip the proof, as the idea is simple: To describe the graph, we multiply the equations xi = fi(t)/gi(t) by the denominators, and we add an additional variable s and the equation 1 − gs to remove the solutions (x, t) with some gi(t) = 0.

Example 49 Consider the parametrization of the circle:

ϕ : A^1(k) \ Z → A^2(k), t ↦ (x(t), y(t))

x(t) = (1 − t^2)/(t^2 + 1), y(t) = 2t/(t^2 + 1)

with Z = V (t^2 + 1), given by the second intersection point of the line y = t (x + 1) with the circle.

A Groebner basis of I = ⟨(t^2 + 1) x − (1 − t^2), (t^2 + 1) y − 2t, 1 − (t^2 + 1) s⟩ (here both denominators agree, so we may take g = t^2 + 1) with respect to lex s > t > x > y is given by

{x^2 + y^2 − 1, ty + x − 1, tx + t − y, s − (1/2) x − 1/2}

and hence I2 = ⟨x^2 + y^2 − 1⟩. What kind of information do the second and third equations give?
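Example 49 can be reproduced the same way (again assuming sympy); eliminating s and t recovers the circle:

```python
from sympy import symbols, groebner

s, t, x, y = symbols('s t x y')
I = [(t**2 + 1)*x - (1 - t**2),
     (t**2 + 1)*y - 2*t,
     1 - (t**2 + 1)*s]          # 1 - g*s with g = t^2 + 1

G = groebner(I, s, t, x, y, order='lex')
circle = [g for g in G.exprs if not g.has(s, t)]
print(circle)   # [x**2 + y**2 - 1]
```
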

The following proposition describes the relationship between resultants and rational implicitation:

Proposition 50 Let k = k̄. Given f1, f2 ∈ k[t, x1, ..., xn],

f1 = aw t^w + ... + a0
f2 = bl t^l + ... + b0

with ai, bi ∈ k[x1, ..., xn], for any zero c = (c1, ..., cn) of Res(f1, f2, t) with aw(c) ≠ 0 and bl(c) ≠ 0 there is a t0 with f1(t0, c) = 0 and f2(t0, c) = 0.

Proof. If aw(c) ≠ 0 and bl(c) ≠ 0 then

0 = Res(f1, f2, t)(c) = Res(f1(t, c), f2(t, c), t)

hence f1(t, c) and f2(t, c) have a common factor, so (k being algebraically closed) a common zero. Explore this in the example of the circle:

Example 51 Computing the resultant of f1 = (t^2 + 1) x − (1 − t^2) and f2 = (t^2 + 1) y − 2t with respect to t, we get

Res(f1, f2, t) = det
( x + 1   0       y    0  )
( 0       x + 1   −2   y  )
( x − 1   0       y    −2 )
( 0       x − 1   0    y  )
= 4x^2 + 4y^2 − 4 ∈ I2

The proposition tells us that any c ∈ A^2(k) with Res(f1, f2, t)(c) = 0 is a point in the image of the parametrization ϕ, or is a root of

a2 = x + 1 or b2 = y

i.e. c ∈ {(−1, 0), (1, 0)}, so

V (Res(f1, f2, t)) = image(ϕ) ∪ {(−1, 0)}, the Zariski closure of image(ϕ)

The point (−1, 0) is called a base point (i.e. a common zero) of the linear system lt = {y − t (x + 1)} used to parametrize the circle.
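Finally, the resultant computation of Example 51, assuming sympy:

```python
from sympy import symbols, resultant, factor, expand

t, x, y = symbols('t x y')
f1 = (t**2 + 1)*x - (1 - t**2)
f2 = (t**2 + 1)*y - 2*t

r = resultant(f1, f2, t)
print(factor(r))   # 4*(x**2 + y**2 - 1)
```
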

References

[1] Cox, Little, O'Shea: Ideals, Varieties, and Algorithms. UTM, Springer (1996).

[2] Decker, Schreyer: Varieties, Groebner Bases, and Algebraic Curves. Manuscript.

[3] Schenck: Computational Algebraic Geometry. LMSST 58, Cambridge University Press (2003).
