Appendix A Some Concepts from Algebra

This appendix contains precise statements of various algebraic facts and definitions used in the text. For students who have had a course in abstract algebra, much of this material will be familiar. For students seeing these terms for the first time, keep in mind that the abstract concepts defined here are used in the text in very concrete situations.

§1. Fields and Rings

We first give a precise definition of a field.

Definition 1. A field consists of a set k and two binary operations "·" and "+" defined on k for which the following conditions are satisfied:
(i) (a + b) + c = a + (b + c) and (a · b) · c = a · (b · c) for all a, b, c ∈ k (associative).
(ii) a + b = b + a and a · b = b · a for all a, b ∈ k (commutative).
(iii) a · (b + c) = a · b + a · c for all a, b, c ∈ k (distributive).
(iv) There are 0, 1 ∈ k such that a + 0 = a · 1 = a for all a ∈ k (identities).
(v) Given a ∈ k, there is b ∈ k such that a + b = 0 (additive inverses).
(vi) Given a ∈ k, a ≠ 0, there is c ∈ k such that a · c = 1 (multiplicative inverses).
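For example, fields can be finite: the set {0, 1}, with addition and multiplication computed modulo 2, satisfies conditions (i)-(vi):

   0 + 0 = 1 + 1 = 0,   0 + 1 = 1 + 0 = 1,   0 · 0 = 0 · 1 = 1 · 0 = 0,   1 · 1 = 1.

Here 0 and 1 are the identities, each element is its own additive inverse, and 1 is its own multiplicative inverse. Finite fields appear again in Appendix C in connection with REDUCE, Macaulay, and CoCoA.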

The fields most commonly used in the text are ℚ, ℝ, and ℂ.

Definition 2. A commutative ring consists of a set R and two binary operations "·" and "+" defined on R for which the following conditions are satisfied:
(i) (a + b) + c = a + (b + c) and (a · b) · c = a · (b · c) for all a, b, c ∈ R (associative).
(ii) a + b = b + a and a · b = b · a for all a, b ∈ R (commutative).
(iii) a · (b + c) = a · b + a · c for all a, b, c ∈ R (distributive).
(iv) There are 0, 1 ∈ R such that a + 0 = a · 1 = a for all a ∈ R (identities).
(v) Given a ∈ R, there is b ∈ R such that a + b = 0 (additive inverses).

Note that any field is obviously a commutative ring. Other examples of commutative rings are the integers ℤ and the ring k[x1, ..., xn]. The latter is the most commonly used ring in the book. In Chapter 5, we construct two other commutative rings: the coordinate ring k[V] of polynomial functions on an affine variety V and the quotient ring k[x1, ..., xn]/I, where I is an ideal of k[x1, ..., xn]. A special class of commutative rings consists of the integral domains.

Definition 3. A commutative ring R is an integral domain if whenever a, b ∈ R and a · b = 0, then either a = 0 or b = 0.
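A commutative ring need not be an integral domain. For instance, in the quotient ring k[x, y]/⟨xy⟩ mentioned above, the classes of x and y are both nonzero, yet their product vanishes:

   [x] · [y] = [xy] = [0] in k[x, y]/⟨xy⟩,

so this quotient ring is not an integral domain.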

Any field is an integral domain, and the polynomial ring k[x1, ..., xn] is an integral domain. In Chapter 5, we prove that the coordinate ring k[V] of a variety V is an integral domain if and only if V is irreducible. Finally, we note that the concept of ideal can be defined for any ring.

Definition 4. Let R be a commutative ring. A subset I ⊂ R is an ideal if it satisfies:
(i) 0 ∈ I.
(ii) If a, b ∈ I, then a + b ∈ I.
(iii) If a ∈ I and b ∈ R, then b · a ∈ I.
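For example, the set of even integers is an ideal of ℤ; checking the three conditions is immediate:

   2ℤ = {2k : k ∈ ℤ}:   0 = 2 · 0 ∈ 2ℤ,   2k + 2l = 2(k + l) ∈ 2ℤ,   b · (2k) = 2(bk) ∈ 2ℤ for all b ∈ ℤ.

More generally, for any f in a commutative ring R, the set ⟨f⟩ = {b · f : b ∈ R} of all multiples of f is an ideal of R.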

Note how this generalizes the definition of ideal given in §4 of Chapter 1.

§2. Groups

A group can be defined as follows.

Definition 1. A group consists of a set G and a binary operation "·" defined on G for which the following conditions are satisfied:
(i) (a · b) · c = a · (b · c) for all a, b, c ∈ G (associative).
(ii) There is 1 ∈ G such that 1 · a = a · 1 = a for all a ∈ G (identity).
(iii) Given a ∈ G, there is b ∈ G such that a · b = b · a = 1 (inverses).

A simple example of a group is given by the integers ℤ under addition. Note that ℤ is not a group under multiplication. A more interesting example comes from linear algebra. Let k be a field and define

GL(n, k) = {A : A is an invertible n × n matrix with entries in k}.

From linear algebra, we know that the product AB of two invertible matrices A and B is again invertible. Thus, matrix multiplication defines a binary operation on GL(n, k), and it is easy to verify that all of the group axioms are satisfied. For a final example of a group, let n be a positive integer and consider the set

S_n = {σ : {1, ..., n} → {1, ..., n} : σ is one-to-one and onto}.

Then composition of functions turns S_n into a group. Since elements σ ∈ S_n can be regarded as permutations of the numbers 1 through n, we call S_n the permutation group. Note that S_n has n! elements.
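To make this concrete, here is a short Python sketch that lists the elements of S_3 and composes two of them (the helper name compose and the convention σ · τ = σ ∘ τ are ours):

    from itertools import permutations

    # each sigma in S_3 is stored as the tuple (sigma(1), sigma(2), sigma(3))
    S3 = list(permutations((1, 2, 3)))
    print(len(S3))                          # prints 6, that is, 3!

    def compose(sigma, tau):
        # the group operation: (sigma . tau)(i) = sigma(tau(i))
        return tuple(sigma[tau[i - 1] - 1] for i in range(1, len(sigma) + 1))

    print(compose((1, 3, 2), (2, 1, 3)))    # prints (3, 1, 2)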

Finally, we need the notion of a subgroup.

Definition 2. Let G be a group. A subset H ⊂ G is called a subgroup if it satisfies:
(i) 1 ∈ H.
(ii) If a, b ∈ H, then a · b ∈ H.
(iii) If a ∈ H, then a⁻¹ ∈ H.

In Chapter 7, we study finite subgroups of the group GL(n, k).

§3. Determinants

Our first goal is to give a formula for the determinant of an n × n matrix. We begin by defining the sign of a permutation. Recall that the group S_n was defined in §2 of this appendix.

Definition 1. If σ ∈ S_n, let P_σ be the matrix obtained by permuting the columns of the n × n identity matrix according to σ. Then the sign of σ, denoted sgn(σ), is defined by

sgn(σ) = det(P_σ).

Note that we can transform P_σ back to the identity matrix by successively switching columns two at a time. Since switching two columns of a determinant changes its sign, it follows that sgn(σ) equals ±1. Then one can prove that the determinant is given by the following formula.

Proposition 2. If A = (a_ij) is an n × n matrix, then

det(A) = Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} · · · a_{nσ(n)}.

Proof. A proof is given in Chapter 5, §2 of FINKBEINER (1978). □
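For concreteness, here is a short Python sketch of this permutation expansion (the function names sgn and det are ours; sgn counts inversions, which gives the same ±1 as Definition 1):

    from itertools import permutations
    from math import prod

    def sgn(sigma):
        # sign of a permutation of (0, 1, ..., n-1) given as a tuple:
        # +1 if the number of inversions is even, -1 if it is odd
        n = len(sigma)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        return -1 if inversions % 2 else 1

    def det(A):
        # determinant of a square matrix A (a list of rows), computed from
        # the permutation expansion of Proposition 2
        n = len(A)
        return sum(sgn(s) * prod(A[i][s[i]] for i in range(n))
                   for s in permutations(range(n)))

    print(det([[1, 2], [3, 4]]))   # prints -2

Since the sum has n! terms, this is only practical for very small matrices; in practice, determinants are computed by row reduction.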

This formula is used in a crucial way in our treatment of resultants (see Proposition 4 from Chapter 3, §5). A second fact we need concerns the solution of a linear system of n equations in n unknowns. In matrix form, the system is written

AX = B,
where A = (a_ij) is the n × n coefficient matrix, B is a column vector, and X is the column vector whose entries are the unknowns x_1, ..., x_n. When A is invertible, the system has the unique solution given by X = A⁻¹B. One can show that this leads to the following explicit formula for the solution.

Proposition 3 (Cramer's Rule). Suppose we have a system of equations AX = B. If A is invertible, then the unique solution is given by

x_i = det(M_i) / det(A),
where M_i is the matrix obtained from A by replacing its ith column with B.

Proof. A proof can be found in Chapter 5, §3 of FINKBEINER (1978). □

This proposition is used to prove some basic properties of resultants (see Proposition 5 from Chapter 3, §5).
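As a small illustration, here is a Python sketch of Cramer's Rule that reuses the det function from the determinant sketch above (any determinant routine could be substituted; the name cramer_solve is ours):

    def cramer_solve(A, B):
        # solve AX = B by Cramer's Rule; uses det() from the earlier sketch
        d = det(A)
        if d == 0:
            raise ValueError("A is not invertible")
        n = len(A)
        solution = []
        for i in range(n):
            # M_i is A with its ith column replaced by the column vector B
            M_i = [[B[r] if c == i else A[r][c] for c in range(n)]
                   for r in range(n)]
            solution.append(det(M_i) / d)
        return solution

    print(cramer_solve([[2, 1], [1, 3]], [3, 5]))   # prints [0.8, 1.4]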

Appendix B Pseudocode

Pseudocode is commonly used in mathematics and computer science to present algorithms. In this appendix, we will describe the pseudocode used in the text. If you have studied the programming language Pascal, you will see a marked similarity between our pseudocode and Pascal. This is no accident, since programming languages are also designed to express algorithms. Indeed, one of the forerunners of Pascal was a programming language named ALGOL, which is short for "ALGOrithmic Language". The syntax, or "grammatical rules," of our pseudocode will not be as rigid as that of a programming language since we do not require that it run on a computer. However, pseudocode serves much the same purpose as a programming language.

As indicated in the text, an algorithm is a specific set of instructions for performing a particular calculation with numerical or symbolic information. Algorithms have inputs (the information the algorithm will work with) and outputs (the information that the algorithm produces). At each step of an algorithm, the next operation to be performed must be completely determined by the current state of the algorithm. Finally, an algorithm must always terminate after a finite number of steps.

Whereas a simple algorithm may consist of a sequence of instructions to be performed one after the other, most algorithms also use the following special structures:
• Repetition structures, which allow a sequence of instructions to be repeated. These structures are also known as loops. The decision whether to repeat a group of instructions can be made in several ways, and our pseudocode includes different types of repetition structures adapted to different circumstances.
• Branching structures, which allow the possibility of performing different sequences of instructions under different circumstances that may arise as the algorithm is executed.
These structures, as well as the rest of the pseudocode, will be described in more detail in the following sections.

§1. Inputs, Outputs, Variables, and Constants

We always specify the inputs and outputs of our algorithms on two lines before the start of the algorithm proper. The inputs and outputs are given by symbolic names in usual mathematical notation. Sometimes, we do not identify what type of information is represented by the inputs and outputs. In this case, their meaning should be clear from the context of the discussion preceding the algorithm. Variables (information stored for use during execution of the algorithm) are also identified by symbolic names.

We freely introduce new variables in the course of an algorithm. Their types are determined by the context. For example, if a new variable called a appears in an instruction, and we set a equal to a polynomial, then a should be treated as a polynomial from that point on. Numerical constants are specified in usual mathematical notation. The two words true and false are used to represent the two possible truth values of an assertion. They behave like the Boolean constants true and false in Pascal.

§2. Assignment Statements

Since our algorithms are designed to describe mathematical operations, by far the most common type of instruction is the assignment instruction. The syntax is

variable := expression.

The symbol := is the same as the assignment operator in Pascal. The meaning of this instruction is as follows. First, we evaluate the expression on the right of the assignment operator, using the currently stored values for any variables that appear. Then the result is stored in the variable on the left-hand side. If there was a previously stored value in the variable on the left-hand side, the assignment erases it and replaces it with the computed value from the right-hand side. For example, if a variable called i has the numerical value 3, and we execute the instruction

i := i + 1,
the value 3 + 1 = 4 is computed and stored in i. After the instruction is executed, i will contain the value 4.

§3. Looping Structures

Three different types of repetition structures are used in the algorithms given in the text. They are similar to the ones used in Pascal. The most general and most frequently used repetition structure in our algorithms is the WHILE structure. The syntax is

WHILE condition DO action.

Here, action is a sequence of instructions. In a WHILE structure, the action is the group of statements to be repeated. We always indent this sequence of instructions. The end of the action is signalled by a return to the level of indentation used for the WHILE statement itself. The condition after the WHILE is an assertion about the values of variables, etc., that is either true or false at each step of the algorithm. For instance, the condition

i ≤ s AND divisionoccurred = false
appears in a WHILE loop in the division algorithm from Chapter 2, §3.

When we reach a WHILE structure in the execution of an algorithm, we determine whether the condition is true or false. If it is true, then the action is performed once, and we go back and test the condition again. If it is still true, we repeat the action once again. Continuing in the same way, the action will be repeated as long as the condition remains true. When the condition becomes false (at some point during the execution of the action), that iteration of the action will be completed, and then the loop will terminate. To summarize, in a WHILE loop, the condition is tested before each repetition, and that condition must be true for the repetition to go on.

A second repetition structure that we use on occasion is the REPEAT structure. A REPEAT loop has the syntax

REPEAT action UNTIL condition.

Reading this as an English sentence indicates its meaning. Unlike the condition in a WHILE, the condition in a REPEAT loop tells us when to stop. In other words, the action will be repeated as long as the condition is false. In addition, the action of a REPEAT loop is always performed at least once since we only test the condition after doing the sequence of instructions representing the action. As with a WHILE structure, the instructions in the action are indented.

The final repetition structure that we use is similar to the FOR loop of Pascal. We use the syntax
FOR each s in S DO action
to represent the instruction: "perform the indicated action for each element s ∈ S." Here S is a finite set of objects and the action to be performed will usually depend on which s we are considering. The order in which the elements of S are considered is not important. Unlike the previous repetition structures, the FOR structure will necessarily cause the action to be performed a fixed number of times (namely, the number of elements in S). The FOR loop in Pascal can be seen as a special case, where typically S is a set of consecutive integers, such as S = {1, ..., n}, and the action is performed once for each integer s between 1 and n.
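For readers who know a language other than Pascal, here are rough Python analogues of the three repetition structures (the variable names are invented for the illustration):

    # WHILE condition DO action: the condition is tested before each pass
    i = 0
    while i <= 5:
        i = i + 1

    # REPEAT action UNTIL condition: Python has no built-in REPEAT loop, so
    # the action runs at least once and the loop stops once the condition holds
    i = 0
    while True:
        i = i + 1
        if i > 5:
            break

    # FOR each s in S DO action: the action is performed once per element of S
    S = {1, 3, 5}
    total = 0
    for s in S:
        total = total + s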

§4. Branching Structures

We use only one type of branching structure, which is general enough for our purposes. The syntax is

IF condition THEN action1 ELSE action2.

The meaning is as follows. If the condition is true at the time the IF is reached, action1 is performed (once only). Otherwise (that is, if the condition was false), action2 is performed (again, once only). The instructions in action1 and action2 are indented, and the ELSE separates the two sequences of instructions. The end of action2 is signalled by a return to the level of indentation used for the IF and ELSE statements.

In this general branching structure, the truth or falsity of the condition selects which action to perform. In some cases, we omit the ELSE and action2. This form is equivalent to

IF condition THEN action1 ELSE do nothing.
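In Python terms (a rough analogue with invented variable names), the full and abbreviated forms look like this:

    remainder, count = 7, 0

    # IF condition THEN action1 ELSE action2
    if remainder != 0:
        count = count + 1
    else:
        count = 0

    # IF condition THEN action1 with the ELSE omitted: do nothing otherwise
    if remainder != 0:
        count = count + 1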

Appendix C Computer Algebra Systems

This appendix will discuss several computer algebra systems that can be used in conjunction with the text. We will describe Maple, Mathematica, and REDUCE in some detail, and then mention some other systems. These are all amazingly powerful programs, and our brief discussion will not do justice to their true capability. It is important to note that we will not give a general introduction to any of the computer algebra systems we will discuss. This is the responsibility of your course instructor. In particular, we will assume that you already know the following:
• How to enter and exit the program and how to enter commands and polynomials. Some systems require semicolons at the end of commands (such as Maple and REDUCE), whereas others do not. Also, some systems (such as Mathematica) are case sensitive, whereas others are not. Some systems require an asterisk for multiplication, whereas others do not.
• How to refer to previous commands and how to save results to a file. The latter can be important, especially when an answer fills more than one computer screen. You should be able to save the answer in a file and print it out for further study.
• How to work with lists. For example, in the Groebner basis command, the input contains a list of polynomials, and the output is another list which is a Groebner basis for the ideal generated by the polynomials in the input list. You should be able to find the length of a list and extract polynomials from a list.
• How to assign symbolic names to objects. In many computations, the best way to deal with complicated data is to use symbolic names for polynomials, lists of polynomials, lists of variables, etc.
If a course being taught from this book has a laboratory component, we would suggest that the instructor use the first lab meeting to cover the above aspects of the particular computer algebra system being used.

§1. Maple

Our discussion applies to Maple V. For us, the most important part of Maple is the Groebner basis package. To have access to the commands in this package, type:
   > with(grobner);
(here, > is the Maple prompt, and as usual, all Maple commands end with a semicolon). Once the Groebner basis package is loaded, you can perform the division algorithm, compute Groebner bases, and carry out a variety of other commands described below.

In Maple, a monomial ordering is called a termorder. Of the monomial orderings considered in Chapter 2, Maple knows lex and grevlex. Lex order is called plex (for "pure lexicographic") and grevlex order is called tdeg (for "total degree"). Be careful not to confuse tdeg with grlex. Since a monomial order depends also on how the variables are ordered, Maple needs to know both the termorder you want (plex or tdeg) and a list of variables. For example, to tell Maple to use lex order with variables x > y > z, you would need to input plex and [x, y, z] (remember that Maple encloses a list inside brackets [ ... ]). If you give no term order in the input, Maple will use tdeg (the default). There is no default ordering for the variables, so that the variable list must always be included.

The most commonly used commands in Maple's Groebner basis package are normalf, for doing the division algorithm, and gbasis, for computing a Groebner basis. The name normalf stands for "normal form," and the command has the following syntax:
   > normalf(f,polylist,varlist,termorder);
The output is the remainder of f on division by the polynomials in the list polylist using the monomial ordering specified by termorder and varlist. For example, to divide x³ + 3y² by x² + y and x + 2xy using grevlex order with x > y, one would enter:
   > normalf(x^3+3*y^2, [x^2+y,x+2*x*y], [x,y]);
We omitted the termorder since tdeg is the default. The base field here is the rational numbers ℚ. Note that normalf does not give the quotients in the division algorithm.

As you might expect, gbasis stands for "Groebner basis," and the syntax is as follows:
   > gbasis(polylist,varlist,termorder);
This computes a Groebner basis for the ideal generated by the polynomials in polylist with respect to the monomial ordering specified by termorder and varlist. The answer is a reduced Groebner basis (in the sense of Chapter 2, §7), except for clearing denominators. As an example of how gbasis works, consider the command:
   > gb := gbasis([x^2+y,2*x*y+y^2], [x,y],plex);
This computes a list (and gives it the symbolic name gb) which is a Groebner basis for the ideal ⟨x² + y, 2xy + y²⟩ ⊂ ℚ[x, y] using lex order with x > y.

If you use polynomials with integer or rational coefficients in normalf or gbasis, Maple will assume that you are working over the field ℚ. Note that there is no limitation on the size of the coefficients. Another possible coefficient field is the Gaussian rational numbers ℚ(i) = {a + bi : a, b ∈ ℚ}, where i = √−1. Maple can also work with coefficients that lie in rational function fields. To tell Maple that a certain variable is in the base field (a "parameter"), you simply omit it from the variable list in the input. Thus,
   > gbasis([v*x^2+y,u*x*y+y^2], [x,y],plex);
will compute a Groebner basis for ⟨vx² + y, uxy + y²⟩ ⊂ ℚ(u, v)[x, y] for lex order with x > y. The answer is reduced up to clearing denominators (so the leading coefficients of the Groebner basis are polynomials in u and v).

In older versions of Maple V, Groebner basis computations may not work when the polynomials have complex numbers as coefficients. To compute a Groebner basis in such a case, suppose that the variables are x1, ..., xn and introduce a new variable j. Then replace i with j in all generators of the ideal and add the new generator j² + 1. Now compute a Groebner basis G for a monomial order where each xi is greater than any power of j.
If you replace j by i in G, then it is a good exercise to show that this gives the desired Groebner basis.

Some other useful Maple commands in the Groebner basis package are:
• leadmon, which computes LC(f) and LM(f) for a polynomial f.
• spoly, which computes the S-polynomial S(f, g) of two polynomials.
• solvable, which uses the consistency algorithm from Chapter 4, §1 to determine if a system of polynomial equations has a solution over an algebraically closed field.
• finite, which uses the finiteness algorithm from Chapter 5, §3 to determine if a system of polynomial equations has finitely many solutions over an algebraically closed field.
There is also a solve command which attempts to find all solutions of a system of equations. Maple has an excellent on-line help system that should make it easy to master these (and other) Maple commands. One can also consult the Maple V Library Reference Manual by CHAR ET AL. (1991).

Finally, we should mention the existence of two Maple programs, written by Albert Lin of George Mason University, that extend the Groebner basis package. The first program defines a command that gives the quotients in the division algorithm, and the second program has a new Groebner basis command that computes a Groebner basis, together with a matrix telling how to express the Groebner basis in terms of the given polynomials. It also gives information on the number of nonzero remainders that occur. Copies of the programs can be obtained by writing David A. Cox, Department of Mathematics and Computer Science, Amherst College, Amherst MA 01002. To get a copy electronically, send email to [email protected].

§2. Mathematica

Our discussion applies to version 2.0 of Mathematica. The way Mathematica is structured, there is no special package to load in order to compute Groebner bases: The basic commands are part of the kernel. Of the monomial orderings considered in Chapter 2, Mathematica only knows lex order. However, since a monomial order also depends on how the variables are ordered, Mathematica still needs to know a list of variables in order to specify which lex order you want. For example, to tell Mathematica to use lex order with variables x > y > z, you would input {x, y, z} (remember that Mathematica encloses a list inside braces { ... }). If you give no variable list, Mathematica will order them according to its own internal order (which is roughly alphabetical).

For us, the most important Mathematica command is GroebnerBasis. The meaning of the name is evident and the syntax is as follows:
   In[1]:= GroebnerBasis[polylist,varlist]

(where In[1]:= is the Mathematica prompt). This computes a Groebner basis for the ideal generated by the polynomials in polylist with respect to lex order with the variables ordered according to varlist. The answer is a reduced Groebner basis (in the sense of Chapter 2, §7), except for clearing denominators. As an example of how GroebnerBasis works, consider:
   In[2]:= gb = GroebnerBasis[{x^2+y,2*x*y+y^2},{x,y}]
This computes a list (and gives it the symbolic name gb) which is a Groebner basis for the ideal ⟨x² + y, 2xy + y²⟩ ⊂ ℚ[x, y] using lex order with x > y.

Mathematica also knows the division algorithm, but the user does not have direct access to it. However, it is possible to compute the remainder on division with respect to a Groebner basis. To divide a polynomial f by a Groebner basis for the ideal generated by polynomials f1, ..., fs, using the lex order determined by varlist, one would proceed as follows to get the remainder:
   In[3]:= gb = AlgebraicRules[{f1 == 0, ..., fs == 0},varlist]
   In[4]:= f /. gb
(we have suppressed the output statements). At the end of this section, we will give Mathematica code for a procedure that automates this process.

If you use polynomials with integer or rational coefficients in GroebnerBasis or AlgebraicRules, Mathematica will assume that you are working over the field ℚ. Note that there is no limitation on the size of the coefficients. Another possible coefficient field is the Gaussian rational numbers ℚ(i) = {a + bi : a, b ∈ ℚ}, where i = √−1.

Mathematica is less successful at working with coefficients that lie in rational function fields. The strategy is that the variables in the base field (the "parameters") should be omitted from the variable list in the input. This will compute a Groebner basis in the polynomial ring with lex order where the parameters are less than all the other variables. One can prove that the result is a Groebner basis over the function field (this is a good exercise), but in general it will be neither reduced nor minimal (see Chapter 2, §7). In particular, there are usually too many polynomials in the Groebner basis. For example, the command:
   In[5]:= GroebnerBasis[{v*x^2+y,u*x*y+y^2},{x,y}]
will compute a nonminimal Groebner basis for ⟨vx² + y, uxy + y²⟩ ⊂ ℚ(u, v)[x, y] for lex order with x > y. The answer also clears denominators (so the leading coefficients of the Groebner basis are polynomials in u and v).

Here are two other useful Mathematica commands:
• Eliminate, which uses the Elimination Theorem of Chapter 3, §1 to eliminate variables from a system of polynomial equations.
• Solve, which attempts to find all solutions of a system of equations.
For further descriptions and examples, consult Mathematica by WOLFRAM (1991).

As we promised earlier, here is the Mathematica code for finding the remainder with respect to a Groebner basis:
   In[6]:= GroebnerReduce[poly_, polylist_, varlist_] :=
             Block[{`Private`v},
               `Private`v = AlgebraicRules[(#1 == 0 &) /@ polylist, varlist];
               poly /. `Private`v]
As an example of how this works,

   In[7]:= GroebnerReduce[x^2 y + x y^2 + y^2, {x y - 1, y^2 - 1}, {x, y}]
finds the remainder of x²y + xy² + y² on division by a Groebner basis for ⟨xy − 1, y² − 1⟩ with respect to lex order with x > y.

Finally, we should mention the existence of a Mathematica package, written by Susan Goldstine of Amherst College, which includes many commands relevant to the book. Using this package, students can use lex, grlex, or grevlex order to do the division algorithm (with both quotients and remainders) and compute Groebner bases (with information about the number of nonzero remainders that occur). It will also find reduced Groebner bases over rational function fields, and other algorithms from the book are included, such as ideal membership, radical membership, and finiteness of solutions. This package is slow compared to the GroebnerBasis command, but it can be used for most of the simpler examples in the text. Copies of the package can be obtained by writing David A. Cox, Department of Mathematics and Computer Science, Amherst College, Amherst MA 01002. To get a copy electronically, send email to [email protected].

§3. REDUCE

Our discussion applies to version 3.4 of REDUCE. For us, the most important part of REDUCE is the Groebner basis package. To have access to the commands in this package, type:
   1: load groebner;
(here, 1: is the REDUCE prompt, and as usual, all REDUCE commands end with a semicolon). Once the Groebner basis package is loaded, you can perform the division algorithm, compute Groebner bases, and carry out a variety of other commands described below.

In REDUCE, a monomial ordering is called a term order. Of the monomial orderings considered in Chapter 2, REDUCE knows most of them, including lex, grlex, and grevlex. Lex order is called lex, grlex is called gradlex, and grevlex is called revgradlex. REDUCE can also work with product orders (see Exercise 10 of Chapter 2, §4) and weight orders (see Exercise 12 of Chapter 2, §4; note that weight orders in REDUCE always use lex order to break ties). These other term orders are described in detail in §3.9 of Groebner: A Package for Calculating with Groebner Bases by MELENK, MÖLLER, and NEUN (1991).

In REDUCE, a term order is specified by means of the torder command. Thus, to change the term order to revgradlex, you would type:
   2: torder revgradlex;
In response, REDUCE will print out the previous term order. When the Groebner basis package is first loaded, the term order is lex.

Since a monomial order depends also on how the variables are ordered, REDUCE needs to know both the term order and a list of variables. As we just explained, the term order is set by torder. To indicate how the variables are ordered in a REDUCE command, you include a list of variables in the input.

For example, to use grlex with variables x > y > z, you would need to change to gradlex using torder, and then use {x, y, z} as part of the input of the command (remember that REDUCE encloses a list inside braces { ... }). WARNING: When you use revgradlex, the variables are not ordered in the usual way. It is a good exercise to figure out how the variables are ordered. (Hint: See the gsort command mentioned below.) If you give no variable list, REDUCE will order them according to its own internal order (which may be unpredictable).

The most commonly used commands in the REDUCE Groebner basis package are preduce, for doing the division algorithm, and groebner, for computing a Groebner basis. The name preduce stands for "polynomial reduce," and the command has the following syntax:
   3: preduce(f,polylist,varlist);
The output is the remainder of f on division by the polynomials in the list polylist using the monomial ordering specified by torder and varlist. For example, to divide x³ + 3y² by x² + y and x + 2xy using grlex order with x > y, one would enter:
   4: preduce(x^3+3*y^2,{x^2+y,x+2*x*y},{x,y});
(this assumes we have already used torder to set the term order to gradlex). We would get the same answer if we omitted the variable list. In this example, the base field is the rational numbers ℚ. Note that preduce does not give the quotients in the division algorithm.

As you might expect, groebner stands for "Groebner basis," and the syntax is:
   5: groebner(polylist,varlist);
This computes a Groebner basis for the ideal generated by the polynomials in polylist with respect to the monomial ordering specified by torder and varlist. The answer is a reduced Groebner basis (in the sense of Chapter 2, §7), except for clearing denominators. As an example of how groebner works, consider the command:

   6: gb := groebner({x^2+y,2*x*y+y^2},{x,y});
This computes a list (and gives it the symbolic name gb) which is a Groebner basis for the ideal ⟨x² + y, 2xy + y²⟩ ⊂ ℚ[x, y] using the order specified by torder with x > y.

If you use polynomials with integer or rational coefficients in preduce or groebner, REDUCE will assume that you are working over the field ℚ. Note that there is no limitation on the size of the coefficients. Another possible coefficient field is the Gaussian rational numbers ℚ(i) = {a + bi : a, b ∈ ℚ}, where i = √−1. To work over ℚ(i), you need to issue the command:
   7: on complex;
before computing the Groebner basis. Similarly, to compute a Groebner basis over a finite field with p elements (where p is a prime number), you first need to issue the command
   8: on modular; setmod p;
REDUCE can also work with coefficients that lie in rational function fields. To tell REDUCE that a certain variable is in the base field (a "parameter"), you simply omit it from the variable list in the input. Thus,

   9: groebner({v*x^2+y,u*x*y+y^2},{x,y});
will compute a Groebner basis for ⟨vx² + y, uxy + y²⟩ ⊂ ℚ(u, v)[x, y] for the current term order with x > y. The answer is reduced up to clearing denominators (so the leading coefficients of the Groebner basis are polynomials in u and v).

Some other useful REDUCE commands in the Groebner basis package are:
• gsplit, which computes LT(f) and f - LT(f).
• gsort, which prints out the terms of a polynomial according to the term order.
• gspoly, which computes the S-polynomial S(f, g).
• greduce, which computes the remainder on division by the Groebner basis of the ideal generated by the input polynomials.
• preducet, which can be used to find the quotients in the division algorithm.
• gzerodim?, which tests a Groebner basis (using the methods of Chapter 5, §3) to see if the equations have finitely many solutions over an algebraically closed field.
• groesolve, which attempts to find all solutions of a system of polynomial equations.
• idealquotient, which computes an ideal quotient I : f (using an algorithm more efficient than the one described in Chapter 4, §4).
• hilbertpolynomial, which computes the affine Hilbert polynomial of an ideal (as defined in Chapter 9, §3).
These (and many other) commands are described in detail in Groebner: A Package for Calculating Groebner Bases by MELENK, MÖLLER, and NEUN (1991). This document comes with all copies of REDUCE. Of the three computer algebra systems discussed so far, REDUCE has the fastest implementation of the Groebner basis algorithm.

§4. Other Systems

Two other important computer algebra systems are MACSYMA and SCRATCHPAD. Both are as powerful as Maple, Mathematica, and REDUCE, and they can also compute Groebner bases. Unfortunately, we did not have access to MACSYMA or SCRATCHPAD in writing this book, so that we are not able to describe their exact capabilities in this area.

Besides the general computer algebra systems we have been discussing, there are two more specialized programs, Macaulay and CoCoA, that should be mentioned. These programs were designed primarily for researchers in algebraic geometry and commutative algebra, but less sophisticated users can make effective use of either program. One of their most attractive features is that they are free.

It is a little more complicated to get started with Macaulay or CoCoA. For example, you have to tell the program in advance what the variables are and what field you are working over. The variables also have weights (which for us are usually all 1). Macaulay will only accept homogeneous polynomials as input, and in some versions, it is not easy to specify lex order. This makes it more difficult for a novice to use Macaulay. Nevertheless, with proper guidance, beginning users should be able to work quite successfully with either Macaulay or CoCoA.

Macaulay always works over a finite field, and CoCoA gives you a choice of working over ℚ or a finite field. Over a finite field, some computations go considerably faster. As long as the coefficient size does not exceed the characteristic of the field (which is usually the case in simple examples), there is no problem. However, one must exercise some care in dealing with more complicated problems. This drawback must be weighed against the fact that such problems are often difficult to carry out on Mathematica, Maple, or REDUCE because of the extremely large amount of memory that may be required.

For more advanced users, Macaulay and CoCoA offer a wonderful assortment of sophisticated mathematical objects to work with. Many researchers make frequent use of these programs to compute syzygies and free resolutions of modules. Macaulay also includes scripts for computing blow-ups, cohomology, cotangent sheaves, dual varieties, normal cones, radicals, and many other useful objects in algebraic geometry. Both programs are under continuous development, and one can expect further improvements in their computational power and user interface in the coming years.

Appendix D Independent Projects

Unlike the rest of the book, this appendix is addressed to the instructor of the course. We will discuss several ideas for computer projects or research papers based on topics introduced in the text.

§1. General Comments

Independent projects can be valuable for a variety of reasons:
• The projects get the students to actively understand and apply the ideas presented in the course.
• The projects expose students to the next steps in subjects discussed in the text.
• The projects give students more experience and sophistication as users of computer algebra systems.
Projects of this type are also excellent opportunities for small groups of two or three students to work together and learn collaboratively. Some of the projects given below have a large computer component, whereas others are more theoretical. The list is in no way definitive or exhaustive, and users of the text are encouraged to contact the authors with comments or suggestions concerning these or other projects they have used. The description we give for each project is rather brief. Although references are provided, some of the descriptions would need to be expanded a bit before being given to the student.

§2. Suggested Projects

1. Implementing the Division Algorithm in k[x1, ..., xn]. Many computer algebra systems (including REDUCE and Maple) have some sort of "normal form" or "reduce" command that performs a form of the division algorithm from Chapter 2. However, those commands usually display only the remainder. Furthermore, in some cases, only certain monomial orders are allowed. The assignment here would be for the students to implement the general division algorithm, with input a polynomial f, a list of divisors F, a list of variables X, and a monomial ordering. The output would be the quotients and the remainder. This project would probably be done within a computer algebra system such as Maple or Mathematica.

2. Implementing Buchberger's Algorithm. Many computer algebra systems have commands that compute a reduced Groebner basis of an ideal ⟨f1, ..., fs⟩. This project would involve implementing the algorithm in a way that produces more information and (possibly) allows more monomial orderings to be used. Namely, given the input of a list of polynomials F, a list of variables X, and a monomial order in k[x1, ..., xn], the program should produce a reduced Groebner basis G for the ideal generated by F, together with a matrix of polynomials A expressing the elements of the Groebner basis in terms of the original generators: G = AF. As with the previous project, this would be done within a computer algebra system. The program could also give additional information, such as the number of remainders computed at each stage of the algorithm.

3. The Complexity of the Ideal Membership Problem. In §9 of Chapter 2, we briefly discussed some of the worst-case complexity results concerning the computation of Groebner bases and solving the ideal membership problem. The purpose of this project would be to have the students learn about the Mayr and Meyer examples, and understand the double exponential growth of degree bounds for the ideal membership problem. A suggested reference here is BAYER and STILLMAN (1988), which gives a nice exposition of these results. With some guidance, this paper is accessible reading for strong undergraduate students.

4. Solving Polynomial Equations. For students with some exposure to numerical techniques for solving polynomial equations, an excellent project would be to implement the criterion given in Theorem 6 of Chapter 5, §3 to determine whether a system of polynomial equations has only finitely many solutions over ℂ. If so, the program should determine all the solutions to some specified precision. This would be done by using numerical techniques to solve for one variable at a time from a lexicographic Groebner basis. A comparison between this method and more standard methods such as the multivariable Newton's Method could also be made. As of this writing, very little theoretical work comparing the complexity of these approaches has been done.

5. Groebner Basis Conversion for Zero-Dimensional Ideals. As in the previous project, to solve systems of equations, lexicographic Groebner bases are often the most useful bases because of their desirable elimination properties. However, lexicographic Groebner bases are often more difficult to compute than Groebner bases for other monomial orderings. For zero-dimensional ideals (i.e., I ⊂ ℂ[x1, ..., xn] such that V(I) is a finite set), there are methods known for converting a Groebner basis with respect to some other order into a lexicographic Groebner basis. For this project, students would learn about these methods, and possibly implement them. There is a good introductory discussion of these ideas in HOFFMANN (1989). The original reference is FAUGÈRE, GIANNI, LAZARD, and MORA (1989).

6. Curve Singularities. A multitude of project topics can be derived from the general topic of curve singularities, which we mentioned briefly in the text. Implementing an algorithm for finding the singular points of a curve V(f(x, y)) ⊂ ℝ² or ℂ² could be a first part of such a project. The focus of the project would be for students to learn some of the theoretical tools needed for a more complete understanding of curve singularities: the Newton polygon, Puiseux expansions, resolutions by quadratic transformations, etc.

A good general reference for this would be BRIESKORN and KNÖRRER (1986). There are numerous other treatments in texts on algebraic curves as well. Some of this material is also discussed from the practical point of view of "curve tracing" in HOFFMANN (1989).

7. Surface Intersections. The focus of this project would be algorithms for obtaining equations for plane projections of the intersection curve of two surfaces V(f1(x, y, z)), V(f2(x, y, z)) in ℝ³. This is a very important topic in geometric modeling. One method, based on finding a "simple" surface in the pencil defined by the two given surfaces and which uses the projective closures of the two surfaces, is sketched in HOFFMANN (1989). Another method is discussed in GARRITY and WARREN (1989).

8. Bézier Splines. The Bézier cubics introduced in Chapter 1, §3 are typically used to describe shapes in geometric modeling as follows. To model a curved shape, we divide it into some number of smaller segments, then use a Bézier cubic to match each smaller segment as closely as possible. The result is a piecewise Bézier curve, or Bézier spline. For this project, the goal would be to implement a system that would allow a user to input some number of control points describing the shape of the curve desired and to see the corresponding Bézier spline curve displayed. Another interesting portion of this assignment would be to implement an algorithm to determine the intersection points of two Bézier splines. Some references can be found on p. xvi of FARIN (1990). We note that there has also been some recent theoretical work by BILLERA and ROSE (1989) that applies Groebner basis methods to the problem of determining the vector space dimension of multivariate polynomial splines of a given degree on a given polyhedral decomposition of a region in ℝⁿ.

9. The General Version of Wu's Method. In our discussion of Wu's method in geometric theorem proving in Chapter 6, we did not introduce the general algebraic techniques (characteristic sets, Ritt's decomposition algorithm) that are needed for a general theorem-prover. This project would involve researching and presenting these methods. Implementing them in a computer algebra system would also be a possibility. See CHOU (1988) and WU (1983).

10. Molien's Theorem. An interesting project could be built around Molien's Theorem in invariant theory, which is mentioned in §3 of Chapter 7. The algorithm given in STURMFELS (1991) could be implemented to find a set of generators for k[x1, ..., xn]^G. This could be applied to find the invariants of some larger groups, such as the rotation group of the cube in ℝ³. Molien's theorem is also discussed in Chapter 7 of BENSON and GROVE (1985).

11. Groebner Bases over More General Fields. For students who know some field theory, a good project would be to compute Groebner bases over fields other than ℚ. In the discussion of Maple in §1 of Appendix C, we explain how to compute Groebner bases for polynomials with coefficients in ℚ(i) using only the equation i² + 1 = 0. More generally, if ℚ(α) is any finite extension of ℚ, the same method works provided one knows the minimal polynomial of α over ℚ. The needed field theory may be found in Sections 5.1, 5.3, and 5.5 of HERSTEIN (1975). The more advanced version of this project would discuss Groebner bases over finite extensions of ℚ(u1, ..., um). In this way, one could compute Groebner bases over any finitely generated extension of ℚ.

12. Computer Graphics. In §1 of Chapter 8, we used certain kinds of projections when we discussed how to draw a picture of a 3-dimensional object. These ideas are very important in computer graphics. The student could describe the various projections that are commonly used in computer graphics and explain what they have to do with projective space. If you look at the formulas in Chapter 6 of FOLEY, VAN DAM, FEINER and HUGHES (1990), you will see certain 4 × 4 matrices. This is because points in ℙ³ have four homogeneous coordinates!

References

M. F. Atiyah and I. G. MacDonald (1969), Introduction to Commutative Algebra, Addison-Wesley, Reading, Massachusetts.
A. A. Ball (1987), The parametric representation of curves and surfaces using rational polynomial functions, in The Mathematics of Surfaces, II, edited by R. R. Martin, Clarendon Press, Oxford, pp. 39-61.
J. Baillieul et al. (1990), Robotics, Proceedings of Symposia in Applied Mathematics 41, American Mathematical Society, Providence, Rhode Island.
D. Bayer and M. Stillman (1987a), A criterion for detecting m-regularity, Invent. Math. 87, 1-11.
D. Bayer and M. Stillman (1987b), A theorem on refining division orders by the reverse lexicographic order, Duke J. Math. 55, 321-328.
D. Bayer and M. Stillman (1988), On the complexity of computing syzygies, in Computational Aspects of Commutative Algebra, edited by L. Robbiano, Academic Press, New York, pp. 1-13.
C. T. Benson and L. C. Grove (1985), Finite Reflection Groups, Second Edition, Springer-Verlag, New York-Berlin-Heidelberg.
L. Billera and L. Rose (1989), Gröbner basis methods for multivariate splines, in Mathematical Methods in Computer Aided Geometric Design, edited by T. Lyche and L. Schumacher, Academic Press, New York, pp. 93-104.
E. Brieskorn and H. Knörrer (1986), Plane Algebraic Curves, Birkhäuser, Basel-Boston-Stuttgart.
J. W. Bruce and P. J. Giblin (1984), Curves and Singularities, Cambridge University Press, Cambridge.
B. Buchberger (1985), Groebner bases: an algorithmic method in polynomial ideal theory, in Multidimensional Systems Theory, edited by N. K. Bose, D. Reidel Publishing Company, Dordrecht, pp. 184-232.
B. Char, K. Geddes, G. Gonnet, B. Leong, M. Monagan, and S. Watt (1991), Maple V Library Reference Manual, Springer-Verlag, New York-Berlin-Heidelberg.
S.-C. Chou (1988), Mechanical Geometry Theorem Proving, D. Reidel Publishing Company, Dordrecht.
H. S. M. Coxeter (1973), Regular Polytopes, Third Edition, Dover, New York.
J. H. Davenport, Y. Siret, and E. Tournier (1988), Computer Algebra, Academic Press, New York.
T. W. Dubé (1990), The structure of polynomial ideals and Gröbner bases, preprint.
D. Eisenbud, C. Huneke, and W. Vasconcelos (1990), Direct methods for primary decomposition, Invent. Math., to appear.

G. Farin (1990), Curves and Surfaces for Computer Aided Geometric Design, Second Edition, Academic Press, New York.
J. Faugère, P. Gianni, D. Lazard, and T. Mora (1989), Efficient change of ordering for Gröbner bases of zero-dimensional ideals, preprint.
D. T. Finkbeiner (1978), Introduction to Matrices and Linear Transformations, Third Edition, W. H. Freeman and Co., San Francisco.
J. Foley, A. van Dam, S. Feiner, and J. Hughes (1990), Computer Graphics: Principles and Practice, Second Edition, Addison-Wesley, Reading, Massachusetts.
T. Garrity and J. Warren (1989), On computing the intersection of a pair of algebraic surfaces, Computer Aided Geometric Design 6, 137-153.
C. F. Gauss (1876), Werke, Volume III, Königlichen Gesellschaft der Wissenschaften zu Göttingen, Göttingen.
R. Gebauer and H. M. Möller (1988), On an installation of Buchberger's algorithm, in Computational Aspects of Commutative Algebra, edited by L. Robbiano, Academic Press, New York, pp. 141-152.
P. Gianni, B. Trager and G. Zacharias (1988), Gröbner bases and primary decomposition of polynomial ideals, in Computational Aspects of Commutative Algebra, edited by L. Robbiano, Academic Press, New York, pp. 15-33.
P. Gritzmann and B. Sturmfels (1990), Minkowski addition of polytopes: computational complexity and applications to Gröbner bases, SIAM Journal of Discrete Mathematics, to appear.
G. Hermann (1926), Die Frage der endlich vielen Schritte in der Theorie der Polynomideale, Math. Annalen 95, 736-788.
I. N. Herstein (1975), Topics in Algebra, Second Edition, John Wiley & Sons, New York.
D. Hilbert (1890), Über die Theorie der algebraischen Formen, Math. Annalen 36, pp. 473-534. Reprinted in Gesammelte Abhandlungen, Volume II, Chelsea, New York, 1965.
W. V. D. Hodge and D. Pedoe (1968), Methods of Algebraic Geometry, Volumes I and II, Cambridge University Press, Cambridge.
C. Hoffmann (1989), Geometric and Solid Modeling: An Introduction, Morgan Kaufmann Publishers, San Mateo, California.
K. Kendig (1977), Elementary Algebraic Geometry, Springer-Verlag, New York-Berlin-Heidelberg.
F. Klein (1884), Vorlesungen über das Ikosaeder und die Auflösung der Gleichungen vom fünften Grade, Teubner, Leipzig. English translation, Lectures on the Ikosahedron and the Solution of Equations of the Fifth Degree, Trübner, London, 1888. Reprinted by Dover, New York, 1956.
S. Lang (1965), Algebra, Addison-Wesley, Reading, Massachusetts.
D. Lazard (1983), Gröbner bases, Gaussian elimination and resolution of systems of algebraic equations, in Computer Algebra: EUROCAL 83, edited by J. A. van Hulzen, Lecture Notes in Computer Science 162, Springer-Verlag, New York-Berlin-Heidelberg, pp. 146-156.
H. Matsumura (1986), Commutative Ring Theory, Cambridge University Press, Cambridge.

E. Mayr and A. Meyer (1982), The complexity of the word problem for commutative semigroups and polynomial ideals, Adv. Math. 46, 305-329.
H. Melenk, H. M. Möller, and W. Neun (1991), Groebner: A Package for Calculating Groebner Bases, Konrad-Zuse-Zentrum für Informationstechnik, Berlin.
R. Mines, F. Richman, and W. Ruitenburg (1988), A Course in Constructive Algebra, Springer-Verlag, New York-Berlin-Heidelberg.
H. M. Möller and F. Mora (1984), Upper and lower bounds for the degree of Groebner bases, in EUROSAM 1984, edited by J. Fitch, Lecture Notes in Computer Science 174, Springer-Verlag, New York-Berlin-Heidelberg, pp. 172-183.
D. Mumford (1976), Algebraic Geometry I: Complex Projective Varieties, Springer-Verlag, New York-Berlin-Heidelberg.
R. Paul (1981), Robot Manipulators: Mathematics, Programming and Control, MIT Press, Cambridge, Massachusetts.
L. Robbiano (1986), On the theory of graded structures, J. Symbolic Comp. 2, 139-170.
L. Roth and J. G. Semple (1949), Introduction to Algebraic Geometry, Clarendon Press, Oxford.
A. Seidenberg (1974), Constructions in algebra, Trans. Amer. Math. Soc. 197, 273-313.
A. Seidenberg (1984), On the Lasker-Noether decomposition theorem, Am. J. Math. 106, 611-638.
I. R. Shafarevich (1974), Basic Algebraic Geometry, Springer-Verlag, New York-Berlin-Heidelberg.
B. Sturmfels (1989), Computing final polynomials and final syzygies using Buchberger's Gröbner bases method, Results Math. 15, 351-360.
B. Sturmfels (1991), Algorithms in Invariant Theory, RISC Series in Symbolic Computation, Springer-Verlag, New York-Berlin-Heidelberg, to appear.
F. Winkler (1984), On the complexity of the Gröbner bases algorithm over K[x, y, z], in EUROSAM 1984, edited by J. Fitch, Lecture Notes in Computer Science 174, Springer-Verlag, New York-Berlin-Heidelberg, pp. 184-194.
S. Wolfram (1991), Mathematica: A System for Doing Mathematics by Computer, Second Edition, Addison-Wesley, Reading, Massachusetts.
W.-T. Wu (1983), On the decision problem and the mechanization of theorem-proving in elementary geometry, in Automated Theorem Proving: After 25 Years, edited by W. Bledsoe and D. Loveland, Contemporary Mathematics 29, American Mathematical Society, Providence, Rhode Island, pp. 213-234.


irreducibility question, 207 Lang, S., 128, 452 irreducible Lasker-Noether Theorem, 209, 210 ideal, see ideal, irreducible Lazard, D., 110, 112, 496ff polynomial, see polynomial, irre• leading coefficient, 58 ducible leading monomial, 58 variety, see variety, irreducible leading term, 37, 58 irredundant intersection of ideals, 205 leading terms, ideal of, 74 irredundant union of varieties, 204 least common multiple (LCM), 82, 187 isomorphic level set, 218 rings, 223 lexicographic order, see monomial varieties, 218, 237, 338, 449 ordering, lexicographic Isomorphism Theorem, 227, 335 line isotropy subgroup, 343 affine, 3, 349 at infinity, 349 J limit of, 469ff projective, 346, 348, 359, 402 Jacobian matrix, see matrix, Jacobian secant, 469ff joints (of robots) tangent, 136, 138 ball, 257, 260 local property, 455 helical, 257, 260 prismatic, 256 revolute, 256 M "spin," 268 joint space, 258 manifold, 462 mapping dominating, 453 K polynomial, 214 k[Xl, ... , xn], 2 pullback, 239 k[fl, ... ,1m], 325 projection, 121, 214, 381,450 k[V], 215, 222, 236, 444, 447, 450 rational, 249 k(V), 246, 444, 450, 454 regular, 214 Kendig, K., 444, 461, 462 Segre, 380, 401 kernel, 190, 227 stereographic projection, 253 kinematic problems of robotics Macaulay, F.S., 428 forward, 258 Macaulay (program), see computer inverse, 258 algebra systems kinematic redundancy, 280 MacDonald, LG., 210 kinematic singularities, 272, 274 MACSYMA, see computer algebra Klein, F. 318 systems Klein four-group, 321 Maple, see computer algebra systems Knorrer, H., 497 Mathematica, see computer algebra systems matrix L echelon form, 50, 77, 93, 408 Lagrange multipliers, 10, 13, 10 I group, 316 Index 509

permutation, 317 Noether's Theorem, 327, 332 row-reduced echelon form, 50, 77, nons in gular 93, 408 point, see point, nonsingular Jacobian, 273, 461 quadric, see quadric, nonsingular Sylvester, 152 Normal Form for Quadrics, 397 Matsumura, H., 473 Nullstellensatz, 4, 34, 36,46, 123, 193, Mayr, E., 110 200, 232, 291, 378, 432 Melenk, H., 491 Hilbert's, 172, 192 Meyer, A., 11 0 in k[V], 237 Mines, R., 150, 177,207 Projective Strong, 371,434 minimal basis, see basis, minimal Projective Weak, 370 mixed order, see monomial ordering, mixed Strong, 175, 200 Molien's Theorem, 329, 497 Weak, 169, 200, 232 Moller, H.M., 109, 110,491 Mora, F., 110, 112, 496ff o monomial, I monomial ideal, see ideal, monomial octahedron, 323 monomial ordering, 54, 71, 335, 390, operational space, 258 467 orbit elimination, 74, 120 G-,339 graded, 375, 378,428, 443, 448 of a point, 339 graded lexicographic (grlex), 56, space, 339 467 ordering, see monomial ordering graded reverse lexicographic (grevlex), order (of a group), 319 57 orthocenter, 293 inverse lexicographic (invlex), 58 lexicographic (lex), 55, 95ff, 114, p 128,132,187,195,291,311, 384,467 Pappus's Theorem, 293, 352 mixed, 73 parametric representation product, 73 polynomial, 16, 127, 197,238,337 weight, 73 rational, 16, 198 multidegree (multideg), 58 parametrization question, 17 multinomial coefficient, 331 partial solution, 116, 122 multiple root, 136 Paul, R., 276 multiplicity of intersection, 136 Pedoe, D., 406 Mumford, D. 388, 394, 462, 472, 475 pencil of hypersurfaces, 365 N of lines, 355 of surfaces, 238 Neun, w., 491 of varieties, 238, 365 nilpotent, 224, 227 permutation, 481 Newton identities, 312 sign of, 481 Noether, E., 327 perspective, 346, 350 510 Index

PGL(n+l, k), see group, projective gen• product order, see monomial ordering, erallinear product plane projective affine, 3 closure, see closure, projective Euclidean, 280ff elimination ideal, see ideal, pro• projective, 346 jective elimination Pliicker coordinates, see coordinates, Pliicker equivalence, see equivalence, pro• point jective critical, 99 Extension Theorem, see Extension of inflection, 145 Theorem, projective nonsingular, 138 line, see line, projective singular, 7, 135, 138, 244, 407, plane, see plane, projective 460,465,496 space, see space, projective smooth, 244, 460, 473 variety, see variety, projective vanishing, 346 pseudocode, 37, 483ff polyhedron pseudodivision, 297 duality of, 323 successive, 301 regular, 323 pseudoquotient, 297 polynomial, 2 pseudoremainder, 297 affine Hilbert, 429, 435, 443, 449 pullback mapping, see mapping, pull• bihomogeneous, 391 back elementary symmetric, 307 Pythagorean Theorem, 283 Hilbert, 433, 439, 440 pyramid of rays, 350 homogeneous, 312, 358 homogeneous component of, 312 integer, 152 Q invariant, 319 irreducible, 147ff, 178 quadric hypersurface, 248, 359, 395ff, linear part of, 456 401 Newton-Gregory interpolating, 426 quotient(s) 60ff partially homogeneous, 383 quotient reduced (square-free), 46,179,458 field, see field, of fractions So, 82ff, 104 ring, see ring, quotient symmetric, 306 vector space, 427 weighted homogeneous, 391 Polynomial Implicitization Theorem, 128, R 337 polynomial mapping, see mapping, poly- radical of an ideal, 175, 369 nomial radical ideal question, 177,372,374 polynomial ring (k[Xl, ... ,x,,]), 2 radical membership question, 177 PostScript, 22 radical generators question, 177 power sums, 312 rank primality question, 206 deficient, 274 primary decomposition question, 210 of a matrix, 273ff, 462 principal ideal domain (PID), 40, 225 maximal, 274 Index 511

of a quadric, 399 Seidenberg, A., 177,207 Rational Implicitization Theorem, 132 Shafarevich, I., 444, 460, 461 rational secant line, see line, secant function, see function, rational Segre map, see mapping, Segre mapping, see mapping, rational Segre variety, see variety, Segre variety, see variety, rational Semple, J.G., 406 real projective plane, 346 sign, of permutation, see permutation, REDUCE, see computer algebra sign of systems singular reduction of a polynomial, 46, 179, 458 point, see point, singular regularity, index of, 430 quadric, see quadric, singular regular mapping, see mapping regular singular locus, 460 remainder on division, 60, 82, 89ff, 94, Siret, Y., 39, 42, 150, 189 229ff solving equations, 48, 95 resultant, 152ff space generalized, 162 affine, 3 reverse lexicographic order, see mono• configuration (of a robot), 258 mial ordering, graded reverse• joint (of a robot), 258 lexicographic orbit, 339 Reynolds operator, 325 projective, 356 Richman, E, 150, 177,207 quotient vector, 427 Riemann sphere, 357 tangent, to a variety, 456, 473 ring, 216, 319, 479 specialization of Groebner bases, 270ff cordinate, of a variety (k[V)), 235, S-polynomial, see polynomial, S• 337,444,447,450 stabilizer, 343 homomorphism, 174,223 Stillman, M., 74, 111, 120 isomorphism, 223, 337 Sturmfels, B., 111, 291, 329 of invariants, 319 subgroup, 481 quotient (k[Xl,' .• ,xn]Il), 221, 334, subring, 320 440 subspace Robbiano, L., 74 coordinate 410, 416, 448 robotics, 10, 13-14, 255ff subvariety, 236 root, multiple, 136 surface Roth, L., 406 Enneper, 133 row-reduced echelon form matrix, 50, hyperboloid of one sheet, 248 77,93,408 ruled, 100, 402 R-sequence,445 tangent, of the twisted cubic, 19, Ruitenberg, w., 150, 177,207 99,126,128-9,132,213,245 ruled surface, see surface, ruled Veronese, 219, 380, 391 Whitney umbrella, 133 symmetric polynomial, see polynomial, s symmetric syzygy, 35, 104, 111 SCRATCHPAD, see computer algebra homogeneous, 105 systems ideal, 334 512 Index

T twisted cubic curve, see curve, twisted tangent cubic cone, see cone, tangent tangent surface of, see surface, curve to a family, 141 tangent line to a curve, see line, tangent space to a variety, see space, u tangent unique factorization of polynomials, 150 Taylor's Formula, 456, 474 uniqueness question in invariant theory, term, 2 322, 333, 337 tetrahedron, 323 Theorem v Affine Dimension, 431 van Dam, A., 498 Circle, of Apollonius, 284, 300ff vanishing point, see point, vanishing Classification of Quadrics, 400 variety Closure, 123, 192, 392, 450 affine, 5 Dimension, 434 dual, 344 Elimination, 114, 162, 182, 187, irreducible, 196,202,204,216,337, 387 369, 374, 443, 450 Extension, of elimination theory, irreducible component of, 288, 443, 117, 388 462 Fermat's Last, 13 linear, 9, 359 Fundamental, of Algebra, 4 of an ideal (V(l)), 78, 368 Fundamental, of Symmetric Poly- projective, 358 nomials, 307 rational, 251 Geometric Extension, 122,381,390 reducible, 216 Hilbert Basis, 14, 31, 75ff, 205, Segre,380 206,225 subvariety of, 236 Implicit Function, 280, 462 unirational, 17 Isomorphism, 227, 335 Vasconcelos, W., 177, 207 Lasker-Noether, 209, 210 Veronese surface, see surface, Molien's, 329, 497 Veronese Noether's, 327, 332 Normal Form for Quadrics, 397 w Pappus's, 293, 352 Warren, J., 497 Polynomial Implicitization, 128, 337 weight order, see monomial ordering, Projective Extension, 388 weight Rational Implicitization, 132 weights, 391 Toumier, E., 39, 42, 150, 189 weighted homogeneous polynomial, see Trager, B., 177,207 polynomial, weighted homo• transformation geneous affine, 267 well-ordering, 54, 71 projective linear, 395 Whitney umbrella, see surface, Whitney transcendence degree, 452 umbrella triangular form system of equations, 298 Winkler, E, 11 0 Index 513

Wu's Method, 296ff, 497 Wu, w.-T., 296, 304 z Zacharias, G., 177,207 Zariski closure of a set, 123, 192, 198 dense set, 450 Undergraduate Texts in Mathematics
