
CHAPTER II

AFFINE GEOMETRY

In the previous chapter we indicated how several basic ideas from geometry have natural interpretations in terms of vector spaces. This chapter continues the process of formulating basic geometric concepts in such terms. It begins with standard material, moves on to consider topics not covered in most courses on classical deductive geometry, and it concludes by giving an abstract formulation of the concept of geometrical incidence and closely related issues.

1. Synthetic affine geometry

In this section we shall consider some properties of Euclidean spaces which depend only upon the notions of incidence and parallelism.

Definition. A three-dimensional incidence space is a triple (S, L, P) consisting of a nonempty set S (whose elements are called points) and two nonempty disjoint families of proper subsets of S denoted by L (lines) and P (planes) respectively, which satisfy the following conditions:

(I – 1) Every line (element of L) contains at least two points, and every plane (element of P) contains at least three points.

(I – 2) If x and y are distinct points of S, then there is a unique line L such that x, y ∈ L.

Notation. The line given by (I – 2) is denoted by xy.

(I – 3) If x, y and z are distinct points of S and z ∉ xy, then there is a unique plane P such that x, y, z ∈ P.

(I – 4) If a plane P contains the distinct points x and y, then it also contains the line xy.

(I – 5) If P and Q are planes with a nonempty intersection, then P ∩ Q contains at least two points.

Of course, the standard example is R3 with lines and planes defined by the formulas in Chapter I (we shall verify a more general statement later in this chapter). A list of other simple examples appears in Prenowitz and Jordan, Basic Concepts of Geometry, pp. 141–146.

A few basic theorems are true for every three-dimensional incidence space. The proofs of these results provide an easy introduction to the synthetic techniques of these notes. In the first six results, the triple (S, L, P) denotes a fixed three-dimensional incidence space.

Definition. A set B of points in S is collinear if there is some line L in S such that B ⊂ L, and it is noncollinear otherwise. A set A of points in S is coplanar if there is some plane P in S such that A ⊂ P , and it is noncoplanar otherwise. — Frequently we say that the points x, y, · · · (etc.) are collinear or coplanar if the set with these elements is collinear or coplanar respectively.


Theorem II.1. Let x, y and z be distinct points of S such that z ∉ xy. Then { x, y, z } is a noncollinear set.

Proof. Suppose that L is a line containing the given three points. Since x and y are distinct, by (I – 2) we know that L = xy. By our assumption on L it follows that z ∈ L; however, this contradicts the hypothesis z 6∈ xy. Therefore there is no line containing x, y and z.

Theorem II.2. There is a subset of four noncoplanar points in S.

Proof. Let P be a plane in S. We claim that P contains three noncollinear points. By (I – 1) we know that P contains three distinct points a, b, c′. If these three points are noncollinear, let c = c′. If they are collinear, then the line L containing them is a subset of P by (I – 4), and since the families L and P are disjoint we have L ≠ P, so that L must be a proper subset of P; therefore there is some c ∈ P such that c ∉ L, and by the preceding result the set { a, b, c } is noncollinear. Thus in any case we know that P contains three noncollinear points.

Since P is a proper subset of S, there is a point d ∈ S such that d ∉ P. We claim that { a, b, c, d } is noncoplanar. For if Q were a plane containing all four points, then Q would contain the noncollinear points a, b, c, so that (I – 3) would imply P = Q; this contradicts our choice of d ∉ P.

Theorem II.3. The intersection of two distinct lines in S is either a point or the empty set.

Proof. Suppose that x ≠ y and both points belong to L ∩ M for some lines L and M. By property (I – 2) we must have L = M. Thus the intersection of distinct lines must consist of at most one point.

Theorem II.4. The intersection of two distinct planes in S is either a line or the empty set.

Proof. Suppose that P and Q are distinct planes in S with a nonempty intersection, and let x ∈ P ∩ Q. By (I – 5) there is a second point y ∈ P ∩ Q. If L is the line xy, then L ⊂ P and L ⊂ Q by two applications of (I – 4); hence we have L ⊂ P ∩ Q. If there is a point z ∈ P ∩ Q with z ∉ L, then the points x, y and z are noncollinear but contained in both of the planes P and Q. By (I – 3) we must have P = Q. On the other hand, by assumption we know P ≠ Q, so we have reached a contradiction. The source of this contradiction is our hypothesis that P ∩ Q strictly contains L, and therefore it follows that P ∩ Q = L.

Theorem II.5. Let L and M be distinct lines, and assume that L ∩ M ≠ ∅. Then there is a unique plane P such that L ⊂ P and M ⊂ P.

In less formal terms, given two intersecting lines there is a unique plane containing them.

Proof. Let x ∈ L ∩ M be the unique common point (it is unique by Theorem 3). By (I – 1) there exist points y ∈ L and z ∈ M, each of which is distinct from x. The points x, y and z are noncollinear because L = xy and z ∈ M − {x} = M − L. By (I – 3) there is a unique plane P such that x, y, z ∈ P, and by (I – 4) we know that L ⊂ P and M ⊂ P. This proves the existence of a plane containing both L and M. To see this plane is unique, observe that every plane Q containing both lines must contain x, y and z. By (I – 3) there is only one such plane, and therefore we must have Q = P.

Theorem II.6. Given a line L and a point z not on L, there is a unique plane P such that L ⊂ P and z ∈ P .

Proof. Let x and y be distinct points of L, so that L = xy. Since z ∉ L, Theorem 1 implies that the set { x, y, z } is noncollinear, and hence there is a unique plane P containing these points. By (I – 4) we know that L ⊂ P and z ∈ P. Conversely, if Q is an arbitrary plane containing L and z, then Q contains the three noncollinear points x, y and z, and hence by (I – 3) we know that Q = P.

Notation. We shall denote the unique plane in the preceding result by Lz.

Of course, all the theorems above are quite simple; their conclusions are probably very clear intuitively, and their proofs are fairly straightforward arguments. One must add Hilbert's Axioms of Order or the Euclidean Parallel Postulate to obtain something more substantial. Since our aim is to introduce the latter postulate at an early point, we might as well do so now (a thorough treatment of geometric theorems derivable from the Axioms of Incidence and Order appears in Chapter 12 of Coxeter, Introduction to Geometry; we shall discuss the Axioms of Order in Section VI.6 of these notes).

Definition. Two lines in a three-dimensional incidence space S are parallel if they are disjoint and coplanar (note particularly the second condition). If L and L′ are parallel, we shall write L||L′ and denote their common plane by LL′. Note that if L||M then M||L because the conditions in the definition of parallelism are symmetric in the two lines.

Affine three-dimensional incidence spaces

Definition. A three-dimensional incidence space (S, L, P) is an affine three-space if the following holds:

(EPP) For each line L in S and each point x ∉ L there is a unique line L′ ⊂ Lx such that x ∈ L′ and L ∩ L′ = ∅ (in other words, there is a unique line L′ which contains x and is parallel to L).

This property is often called the Euclidean Parallelism Property, the Euclidean Parallel Postulate or Playfair's Postulate (see the previously cited online reference for background on these names). A discussion of the origin of the term “affine” appears in Section II.5 of the following online site:

http://math.ucr.edu/~res/math133/geomnotes2b.pdf

Many nontrivial results in Euclidean geometry can be proved for arbitrary affine three-spaces. We shall limit ourselves to two examples here and leave others as exercises. In Theorems 7 and 8 below, the triple (S, L, P) will denote an arbitrary affine three-dimensional incidence space.

Theorem II.7. Two distinct lines which are parallel to a third line are parallel to each other.

Proof. There are two cases, depending on whether or not all three lines lie in a single plane; to see that the three lines need not be coplanar in ordinary 3-dimensional coordinate geometry, consider the three lines in R3 given by the z-axis and the lines joining (1, 0, 0) and (0, 1, 0) to (1, 0, 1) and (0, 1, 1) respectively.
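The noncoplanar example above can be checked by a short computation. The following sketch (our own illustrative check, using exact rational arithmetic and a small hand-rolled rank function rather than any external library) confirms that the three lines are pairwise parallel and disjoint but lie in no common plane.

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals; returns the rank of the rows.
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def diff(a, b):
    return tuple(p - q for p, q in zip(a, b))

# Base points of the z-axis and of the two vertical lines in the text.
p1, p2, p3 = (0, 0, 0), (1, 0, 0), (0, 1, 0)
d = (0, 0, 1)   # all three lines share this direction vector

# Each pair is parallel and disjoint: same direction, and the difference of
# base points is not a multiple of d (rank 2), so the translates differ.
pairwise_disjoint = all(rank([d, diff(q, p)]) == 2
                        for p, q in [(p1, p2), (p1, p3), (p2, p3)])

# A plane containing all three lines would force d, p2-p1, p3-p1 to span a
# subspace of dimension at most 2; here they span all of R^3.
coplanar = rank([d, diff(p2, p1), diff(p3, p1)]) <= 2

print(pairwise_disjoint, coplanar)   # True False
```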

THE COPLANAR CASE. Suppose that we have three distinct lines L, M, N in a plane P such that L||M and N||M; we want to show that L||N. If L is not parallel to N, then there is some point x ∈ L ∩ N, and it follows that L and N are distinct lines through x, each of which is parallel to M. However, this contradicts the Euclidean Parallel Postulate. Therefore the lines L and N cannot have any points in common.

THE NONCOPLANAR CASE. Let α be the plane containing L and M, and let β be the plane containing M and N. By the basic assumption in this case we have α ≠ β. We need to show that L ∩ N = ∅ but L and N are coplanar.

The lines L and N are disjoint. Assume that L and N have a common point that we shall call x. Let γ be the plane determined by x and M (since L||M and x ∈ L, clearly x ∉ M). Since x ∈ L ⊂ α and M ⊂ α, Theorem 6 implies that α = γ. A similar argument shows that β = γ and hence α = β; the latter contradicts our basic stipulation that α ≠ β, and therefore it follows that L and N cannot have any points in common.

Figure II.1

The lines L and N are coplanar. Let y ∈ N, and consider the plane Ly. Now L cannot be contained in β, because β ≠ α = LM and M ⊂ β. By construction the planes Ly and β have the point y in common, and therefore by Theorem 4 we know that Ly meets β in some line K. Since L and K are coplanar, it will suffice to show that N = K. Since N and K both contain y and all three lines M, N and K are contained in β, it will suffice to show that K||M (for then K = N by the uniqueness part of EPP). Suppose the lines K and M are not parallel, and let z ∈ K ∩ M. Since L||M it follows that z ∉ L. Furthermore, L ∪ K ⊂ Ly implies that z ∈ Ly, and hence Ly = Lz. Since z ∈ M and L and M are coplanar, it follows that M ⊂ Lz. Thus M is contained in Ly ∩ β, and since the latter is the line K, this shows that M = K. On the other hand, by construction we know that M ∩ N = ∅ and K ∩ N ≠ ∅, so that M and K are obviously distinct. This contradiction implies that K||M must hold.

The next result is an analog of the Parallel Postulate for parallel planes.

Theorem II.8. If P is a plane and x ∉ P, then there is a unique plane Q such that x ∈ Q and P ∩ Q = ∅.

Proof. Let a, b, c ∈ P be noncollinear points (these exist by the argument proving Theorem 2), and consider the lines A′, B′ through x which are parallel to A = bc and B = ac. Let Q be the plane determined by A′ and B′, so that x ∈ Q by construction. We claim that P ∩ Q = ∅.

Assume the contrary; since x ∈ Q and x ∉ P we have P ≠ Q, so by Theorem 4 the intersection P ∩ Q is a line we shall call L.

Figure II.2

Step 1. We shall show that L ≠ A, B. The proofs that L ≠ A and L ≠ B are similar, so we shall only show L ≠ A and leave the other part as an exercise. — If L = A, then A ⊂ Q. Now A′ is the unique line through x in Q which is parallel to A, and B′ is a different line through x in Q (also, B′ ≠ A because x ∈ B′ but x ∉ P), so the coplanar lines B′ and A are not parallel, and there must be a point u ∈ B′ ∩ A. Consider the plane B′c. Since c ∈ B and B||B′, this plane is just BB′, and since it contains the distinct points u and c of A, it also contains A by (I – 4). Hence B′c is a plane containing A and B. The only plane which satisfies these conditions is P, and hence B′ ⊂ P. But x ∈ B′ and x ∉ P, so we have a contradiction. Therefore we must have L ≠ A.

Step 2. We claim that either A′ ∩ L and A ∩ L are both nonempty or else B′ ∩ L and B ∩ L are both nonempty. — We shall only show that if either A′ ∩ L or A ∩ L is empty then B′ ∩ L and B ∩ L are both nonempty, since the other case follows by reversing the roles of A and B. Since L and A both lie in the plane P, the condition A ∩ L = ∅ implies A||L. Since A||A′, by Theorem 7 we know that A′||L. Since B ≠ A is a line through the point c ∈ A, either B = L or B ∩ L ≠ ∅ holds by the Parallel Postulate (in fact, B ≠ L by Step 1). Likewise, A′ and B′ are lines through x in the plane Q and L ⊂ Q, so that A′||L and the Parallel Postulate imply B′ ∩ L ≠ ∅.

Step 3. There are two cases, depending upon whether A′ ∩ L and A ∩ L are both nonempty or B′ ∩ L and B ∩ L are both nonempty. Only the latter will be considered, since the former follows by a similar argument. Let y ∈ B ∩ L and z ∈ B′ ∩ L; since B ∩ B′ = ∅, it follows that y ≠ z and hence L = yz. Let β be the plane BB′. Then L ⊂ β since z, y ∈ β. Since L ≠ B, the plane β is the one determined by L and B. But L, B ⊂ P by assumption, and hence β = P. In other words, B′ is contained in P. But x ∈ B′ and x ∉ P, a contradiction which shows that the line L cannot exist.

Following standard terminology, we shall say that the plane Q is parallel to P or that it is the plane parallel to P which passes through x.

Corresponding definitions for incidence planes and affine planes exist, and analogs of Theorems 1, 2, 3 and 7 hold for these objects. However, incidence planes have far fewer interesting properties than their three-dimensional counterparts, and affine planes are best studied using the methods of linear algebra that are developed in later sections of these notes.

EXERCISES

Definition. A line and a plane in a three-dimensional incidence space are parallel if they are disjoint.

Exercises 1–4 are to be proved for arbitrary 3-dimensional incidence spaces.

1. Suppose that each of two intersecting lines is parallel to a third line. Prove that the three lines are coplanar.

2. Suppose that the lines L and L′ are coplanar, and there is a line M not in this plane such that L||M and L′||M. Prove that L||L′.

3. Let P and Q be planes, and assume that each line in P is parallel to a line in Q. Prove that P is parallel to Q.

4. Suppose that the line L is contained in the plane P, and suppose that L||L′. Prove that L′||P or L′ ⊂ P.

In exercises 5–6, assume the incidence space is affine.

5. Let P and Q be parallel planes, and let L be any line which contains a point of Q and is parallel to a line in P . Prove that L is contained in Q. [Hint: Let M be the line in P , and let x ∈ L ∩ Q. Prove that L = Mx ∩ Q.]

6. Two lines are said to be skew lines if they are not coplanar. Suppose that L and M are skew lines. Prove that there is a unique plane P such that L ⊂ P and P is parallel to M. [Hint: Let x ∈ L, let M′ be a line parallel to M which contains x, and consider the plane LM′.]

2. Affine subspaces of vector spaces

Let F be a field, and let V be a vector space over F (in fact, everything in this section goes through if we take F to be a skew-field as described in Section 1 of Appendix A). Motivated by Section I.3, we define lines and planes in V to be translates of 1- and 2-dimensional vector subspaces of V. Denote these families of lines and planes by LV and PV respectively. If dim V ≥ 3 we shall prove that (V, LV , PV ) satisfies all the conditions in the definition of an affine incidence 3-space except perhaps for the incidence axiom I – 5, and we shall show that the latter also holds if dim V = 3.

Theorem II.9. If V , etc. are as above and dim V ≥ 3, then LV and PV are nonempty disjoint families of proper subsets of V .

Proof. Since dim V ≥ 3 there are 1- and 2-dimensional vector subspaces of V , and therefore the families LV and PV are both nonempty. If we have Y ∈ LV ∩ PV , then we may write

Y = x + W1 = y + W2 , where dim Wi = i. By Theorem I.4 we know that y ∈ x + W1 implies the identity x + W1 = y + W1, and therefore Theorem I.3 implies

W1 = −y + (y + W1) = −y + (y + W2) = W2 .

Since dim W1 ≠ dim W2 this is impossible. Therefore the families LV and PV must be disjoint. To see that an element of either family is a proper subset of V, suppose to the contrary that x + W = V, where dim W = 1 or 2. Since dim W < 3 ≤ dim V, it follows that W is a proper subset of V; let v ∈ V be such that v ∉ W. By our hypothesis, we must have x + v ∈ x + W, and thus we also have v = −x + (x + v) ∈ −x + (x + W) = W, which contradicts our choice of v. The contradiction arises from our assumption that x + W = V, and therefore this must be false; hence the sets in LV and PV are all proper subsets of V.

Theorem II.10. Every line in V contains at least two points, and every plane contains at least three points.

Proof. Let x + W be a line or plane in V, and let { w1 } or { w1, w2 } be a basis for W, depending upon whether dim W equals 1 or 2. Take the subsets { x, x + w1 } or { x, x + w1, x + w2 } in these respective cases; the listed points are distinct because the wi are linearly independent.

Theorem II.11. Given two distinct points in V , there is a unique line containing them.

Proof. Let x ≠ y be distinct points in V, and let L0 be the 1-dimensional vector subspace spanned by the nonzero vector y − x. Then L = x + L0 is a line containing x and y. Suppose now that M is an arbitrary line containing x and y. Write M = z + W where dim W = 1. Then Theorem I.4 and x ∈ M = z + W imply that M = x + W. Furthermore, y ∈ M = x + W then implies that y − x ∈ W, and since the latter vector spans L0 it follows that L0 ⊂ W. However, dim L0 = dim W, and therefore L0 = W (see Theorem A.8). Thus the line M = z + W must be equal to x + L0 = L.

Theorem II.12. Given three points in V that are not collinear, there is a unique plane containing them.

Proof. Let x, y and z be the noncollinear points. If y − x and z − x were linearly dependent, then there would be a 1-dimensional vector subspace containing them, and hence the original three points would all lie on a line. Therefore we know that y − x and z − x are linearly independent, and thus the vector subspace W they span is 2-dimensional. If P = x + W, then it follows immediately that P is a plane containing x, y and z. To prove uniqueness, suppose that v + U is an arbitrary plane containing all three points. As before, we must have v + U = x + U since x ∈ v + U, and since we also have y, z ∈ v + U = x + U it also follows as in earlier arguments that y − x and z − x lie in U. Once again, since the two vectors in question span the subspace W, it follows that W ⊂ U, and since the dimensions are equal it follows that W = U. Thus we have v + U = x + W, and hence there is only one plane containing the original three points.
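The construction in this proof also yields an explicit membership test: p lies in the plane x + W exactly when p − x lies in W. The following sketch (a hypothetical example over the rationals; the rank helper is ours, not from the text) carries this out for the plane through the three standard unit vectors of Q3.

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals; returns the rank of the rows.
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def diff(a, b):
    return tuple(Fraction(p) - Fraction(q) for p, q in zip(a, b))

x, y, z = (1, 0, 0), (0, 1, 0), (0, 0, 1)   # three noncollinear points
basis = [diff(y, x), diff(z, x)]            # spans the 2-dim subspace W

assert rank(basis) == 2   # y - x and z - x are linearly independent

def in_plane(p):
    # p lies in P = x + W exactly when p - x lies in span(basis).
    return rank(basis + [diff(p, x)]) == 2

inside = in_plane((Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)))
outside = in_plane((1, 1, 1))
print(inside, outside)   # True False
```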

Theorem II.13. If P is a plane in V and x, y ∈ P , then the unique line containing x and y is a subset of P .

Proof. As before we may write P = x + W , where W is a 2-dimensional subspace; we also know that the unique line joining x and y has the form L = x + L0, where L0 is spanned by y − x. The condition y ∈ P implies that y − x ∈ W , and since W is a vector subspace it follows that L0 ⊂ W . But this immediately implies that L = x + L0 ⊂ x + W = P , which is what we wanted to prove.

Theorem II.14. (Euclidean Parallelism Property) Let L be a line in V, and let y ∉ L. Then there is a unique line M such that (i) y ∈ M, (ii) L ∩ M = ∅, (iii) L and M are coplanar.

Proof. Write L = x+L0 where dim L0 = 1, and consider the line M = y+L0. Then M clearly satisfies the first condition. To see it satisfies the second condition, suppose to the contrary that there is some common point z ∈ L ∩ M. Then the identities

z ∈ L = x + L0 and z ∈ M = y + L0 imply that L = x + L0 = z + L0 = y + L0 = M, which contradicts the basic conditions that y ∈ M but y ∉ L. Therefore L ∩ M = ∅. Finally, to see that M also satisfies the third condition, let W be the subspace spanned by L0 and y − x; the latter vector does not belong to L0 because y ∉ L, and therefore W must be a 2-dimensional vector subspace of V. If we now take P = x + W, it follows immediately that L ⊂ P and also

M = y + L0 = x + (y − x) + L0 ⊂ x + W = P , so that L and M are coplanar. Therefore M satisfies all three of the conditions in the theorem.

To complete the proof, we need to show that there is only one line which satisfies all three conditions in the theorem. — In any case, we know there is only one plane which contains L and y, and hence it must be the plane P = x + W from the preceding paragraph. Thus if N is a line satisfying all three conditions in the theorem, it must be contained in P. Suppose then that N = y + L1 is an arbitrary line in P with the required properties. Since y + L1 ⊂ x + W = y + W, it follows that L1 ⊂ W. Therefore, if 0 ≠ u ∈ L1 we can write u = s(y − x) + tv, where v is a

nonzero vector in L0 and s, t ∈ F. The assumption that L ∩ N = ∅ implies there are no scalars p and q such that x + pv = y + qu. Substituting for u in this equation, we may rewrite it in the form pv = (y − x) + qu = (1 + sq)(y − x) + qtv, and hence by equating coefficients we see that there are no scalars p and q such that p = qt and 1 + sq = 0. Now if s ≠ 0, then these two equations have the solution q = −s⁻¹ and p = −s⁻¹t. Therefore, since there is no solution, we must have s = 0. The latter in turn implies that u = tv and hence L1 = L0, so that N = y + L1 = y + L0 = M.
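The existence half of this argument is easy to instantiate. In the sketch below (a made-up example in Q3, with our own helper functions) we form M = y + L0 for a line L = x + L0 and a point y off L, then verify that sample points of both lines lie in the plane x + span{v, y − x}.

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals; returns the rank of the rows.
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def diff(a, b):
    return tuple(Fraction(p) - Fraction(q) for p, q in zip(a, b))

x, v = (0, 0, 0), (1, 2, 0)   # L = x + L0, with L0 spanned by v
y = (0, 0, 1)                 # a point not on L

off_line = rank([v, diff(y, x)]) == 2   # y - x is not in L0, so y is off L

# Both L and M = y + L0 lie in the plane P = x + W, W = span{v, y - x}.
W = [tuple(Fraction(c) for c in v), diff(y, x)]

def in_plane(p):
    return rank(W + [diff(p, x)]) == 2

on_L = [tuple(t * Fraction(c) for c in v) for t in (0, 1, Fraction(1, 2))]
on_M = [tuple(Fraction(a) + t * Fraction(c) for a, c in zip(y, v))
        for t in (0, 1, -1)]
both_in_plane = all(in_plane(p) for p in on_L + on_M)
print(off_line, both_in_plane)   # True True
```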

Theorem II.15. If dim V = 3 and two planes in V have a nonempty intersection, then their intersection contains a line.

Proof. Let P and Q be planes, and let x ∈ P ∩ Q. Write P = x + W and Q = x + U, where dim W = dim U = 2. Then P ∩ Q = (x + W) ∩ (x + U) clearly contains x + (W ∩ U), so it suffices to show that the vector subspace W ∩ U contains a 1-dimensional vector subspace. However, we have dim(W ∩ U) = dim W + dim U − dim(W + U) = 4 − dim(W + U), and since dim(W + U) ≤ dim V = 3, the displayed equation immediately implies dim(W ∩ U) ≥ 4 − 3 = 1. Hence the intersection of the vector subspaces is nonzero, and as such it contains a nonzero vector z as well as the 1-dimensional subspace spanned by z.
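The dimension count in this proof can be checked on a concrete pair of planes. A minimal sketch (our own example, the xy- and xz-planes in Q3, with a hand-rolled rank function):

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals; returns the rank of the rows.
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

W_basis = [(1, 0, 0), (0, 1, 0)]   # the xy-plane, a 2-dim subspace W
U_basis = [(1, 0, 0), (0, 0, 1)]   # the xz-plane, a 2-dim subspace U

dim_sum = rank(W_basis + U_basis)                 # dim of W + U
dim_int = len(W_basis) + len(U_basis) - dim_sum   # dim of intersection
print(dim_sum, dim_int)   # 3 1: the planes meet along the x-axis
```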

The preceding results imply that Theorems 7 and 8 from Section 1, and the conclusions of Exercises 5 and 6 from that section, are all true for the models (V, LV , PV ) described above provided dim V = 3. In some contexts it is useful to interpret the conclusions of the theorems or exercises in terms of the vector space structure on V. For example, in Theorem 8 if P is the plane x + W, then the parallel plane Q will be y + W. Another example is discussed in Exercise 5 below.

Generalizing incidence to higher dimensions

The characterization of lines and planes as translates of 1- and 2-dimensional subspaces suggests a simple method for generalizing incidence structures to dimensions greater than three.1 Namely, define a k-plane in a vector space V to be a translate of a k-dimensional vector subspace. The following quotation from Winger, Introduction to Projective Geometry,2 may help the reader understand the reasons for formulating the concepts of affine geometry in arbitrary dimensions.3

1 The explicit mathematical study of higher dimensional geometry began around the middle of the 19th century, particularly in the work of L. Schläfli (1814–1895). Many ideas in his work were independently discovered by others with the development of linear algebra during the second half of that century.
2 See page 15 of that book.
3 Actually, spaces of higher dimensions play an important role in theoretical physics. Einstein's use of a four-dimensional space-time is of course well-known, but the use of spaces with dimensions ≥ 4 in physics was at least implicit during much of the 19th century. In particular, 6-dimensional configuration spaces were implicit in work on celestial mechanics, and spaces of assorted other dimensions were widely used in classical dynamics.

The timid may console themselves with the reflection that the geometry of four and higher dimensions is, if not a necessity, certainly a convenience of language — a of the algebra — and let the philosophers ponder the metaphysical questions involved in the idea of a point set of higher dimensions.

We shall conclude this section with a characterization of k-planes in V, where V is finite-dimensional and 1 + 1 ≠ 0 in F; in particular, the result below applies to the case V = Rn. An extension of this characterization to all fields except the field Z2 with two elements is given in Exercise 1 below.

Definition. Let V be a vector space over the field F, and let P ⊂ V. We shall say P is a flat subset of V if for each pair of distinct points x, y ∈ P the line xy is contained in P.

Theorem II.16. Let V be a finite-dimensional vector space over a field F in which 1 + 1 ≠ 0. Then a nonempty set P ⊂ V is a flat subset if and only if it is a k-plane for some integer k satisfying 0 ≤ k ≤ dim V.

Definition. A subset S ⊂ V is said to be an affine subspace if it can be written as x + W, where x ∈ V and W is a vector subspace of V. With this terminology, we can restate the theorem to say that if 1 + 1 ≠ 0 in F, then a nonempty subset of V is an affine subspace if and only if it is a flat subset.

Proof. We split the proof into the “if” and “only if” parts.

Every k-plane is a flat subset. Suppose that W is a k-dimensional vector subspace and x ∈ V. Let y, z ∈ x + W be distinct. Then we may write y = x + u and z = x + v for some distinct vectors u, v ∈ W. A typical point on the line yz has the form y + t(z − y) for some scalar t, and we have y + t(z − y) = x + u + t(v − u) ∈ x + W, which shows that x + W is a flat subset. Note that this implication does not require any assumption about the nontriviality of 1 + 1.

Every flat subset has the form x + W for some vector subspace W . If we know this, we also know that k = dim W is less than or equal to dim V . — Suppose that x ∈ P , and let W = (−x) + P ; we need to show that W is a vector subspace. To see that W is closed under scalar multiplication, note first that w ∈ W implies x + w ∈ P , so that flatness implies every point on the line x(x + w) is contained in P . For each scalar t we know that x + tw lies on this line, and thus each point of this type lies in P = x + W . If we subtract x we see that tw ∈ W and hence W is closed under scalar multiplication.

To see that W is closed under vector addition, suppose that w1 and w2 are in W. By the previous paragraph we know that 2w1 and 2w2 also belong to W, so that u1 = x + 2w1 and u2 = x + 2w2 are in P. Our hypothesis on F implies that F contains an element 1/2 = (1 + 1)⁻¹, so by flatness we also know that
u1 + (1/2) ( u2 − u1 ) = (1/2) ( u1 + u2 ) ∈ P .
If we now expand and simplify the displayed vector, we see that it is equal to x + w1 + w2. Therefore it follows that w1 + w2 ∈ W, and hence W is a vector subspace of V.
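The expansion at the end of this step can be verified directly. The sketch below (illustrative vectors of our own choosing, over Q where 1/2 exists) reproduces the computation u1 + (1/2)(u2 − u1) = x + w1 + w2.

```python
from fractions import Fraction

def add(a, b):
    return tuple(p + q for p, q in zip(a, b))

def scale(t, a):
    return tuple(t * p for p in a)

x  = (Fraction(1), Fraction(2), Fraction(3))
w1 = (Fraction(1), Fraction(0), Fraction(-1))
w2 = (Fraction(0), Fraction(2), Fraction(5))

u1 = add(x, scale(2, w1))   # x + 2*w1, in P by closure under scalars
u2 = add(x, scale(2, w2))   # x + 2*w2, likewise
half = Fraction(1, 2)       # exists because 1 + 1 is invertible in Q

mid = add(u1, scale(half, tuple(p - q for p, q in zip(u2, u1))))
matches = (mid == add(x, add(w1, w2)))
print(mid, matches)   # the midpoint is exactly x + (w1 + w2)
```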

Not surprisingly, if S is an affine subspace of the finite-dimensional vector space V, then we define its dimension by dim S = dim W, where W is a vector subspace of V such that S = x + W.

— This number is well defined because S = x + W = y + U implies y ∈ x + W , so that y + W = x + W = y + U and hence W = −y + (y + W ) = −y + (y + U) = U .

Hyperplanes

One particularly important family of affine subspaces in a finite-dimensional vector space V is the set of all hyperplanes in V. We shall conclude this section by defining such objects and proving a crucial fact about them.

Definition. Let n be a positive integer, and let V be an n-dimensional vector space over the field F. A subset H ⊂ V is called a hyperplane in V if H is the translate of an (n − 1)-dimensional subspace. — In particular, if dim V = 3 then a hyperplane is just a plane, and if dim V = 2 then a hyperplane is just a line.

One reason for the importance of hyperplanes is that if k < dim V then every k-plane is an intersection of finitely many hyperplanes (see Exercise II.5.4).

We have seen that planes in R3 and lines in R2 are describable as the sets of all points x which satisfy a nontrivial first degree equation in the coordinates (x1, x2, x3) or (x1, x2) respectively (see Theorems I.1, I.5 and I.7). The final result of this section is a generalization of these facts to arbitrary hyperplanes in Fn, where F is an arbitrary field.

Theorem II.17. In the notation of the previous paragraph, let H be a nonempty subset of Fn. Then H is a hyperplane if and only if there exist scalars c1, · · · , cn, not all zero, and b ∈ F such that H is the set of all x ∈ Fn whose coordinates x1, · · · , xn satisfy the equation
c1 x1 + · · · + cn xn = b .

Proof. Suppose that H is defined by the equation above. If we choose j such that cj ≠ 0 and let ej be the unit vector whose jth coordinate is 1 and whose other coordinates are zero, then we have cj⁻¹ b ej ∈ H, and hence the latter is nonempty. Set W equal to (−z) + H, where z ∈ H is fixed. As in Chapter I, it follows that y ∈ W if and only if its coordinates y1, · · · , yn satisfy the equation c1 y1 + · · · + cn yn = 0. Since the coefficients ci are not all zero, it follows from Theorem A.10 that W is an (n − 1)-dimensional vector subspace of Fn, and therefore H = z + W is a hyperplane.

Conversely, suppose H is a hyperplane, and write H = x + W for a suitable vector x and (n − 1)-dimensional subspace W. Let { w1, · · · , wn−1 } be a basis for W, and write these vectors out in coordinate form:
wi = ( wi,1, · · · , wi,n ) .
If B is the matrix whose rows are the vectors wi, then the rank of B is equal to (n − 1) by construction. Therefore, by Theorem A.10 the set Y of all y = ( y1, · · · , yn ) which solve the system
wi,1 y1 + · · · + wi,n yn = 0 ( 1 ≤ i ≤ n − 1 )
is a 1-dimensional vector subspace of Fn. Let a be a nonzero (hence spanning) vector in Y.

We claim that z ∈ W if and only if a1 z1 + · · · + an zn = 0. By construction, W is contained in the subspace S of vectors whose coordinates satisfy this equation. By Theorem A.10 we know that dim S = (n − 1), which is equal to dim W by our choice of the latter; therefore Theorem A.9 implies that W = S, and it follows immediately that H is the set of all z ∈ Fn whose coordinates z1, · · · , zn satisfy the nontrivial linear equation
a1 z1 + · · · + an zn = a1 x1 + · · · + an xn
(where x = ( x1, · · · , xn ) is the fixed vector with H = x + W chosen as in the preceding paragraph).
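For n = 3 the spanning vector a of Y can be obtained from a cross product, since the conditions y·w1 = y·w2 = 0 characterize the multiples of w1 × w2. A minimal sketch of the construction (with an example plane of our own choosing):

```python
from fractions import Fraction

def cross(u, v):
    # u x v is orthogonal to both u and v in 3-space.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

x = (1, 0, 0)                    # H = x + W
w1, w2 = (1, 1, 0), (0, 1, 1)    # a basis of the 2-dim subspace W

a = cross(w1, w2)                # spans Y = { y : y.w1 = y.w2 = 0 }
b = dot(a, x)                    # H should then be { z : a.z = b }

# Check the equation on a grid of sample points x + s*w1 + t*w2 of H.
samples = [tuple(x[i] + s * w1[i] + t * w2[i] for i in range(3))
           for s in (-1, 0, 2) for t in (0, 1, Fraction(1, 2))]
on_hyperplane = all(dot(a, z) == b for z in samples)
print(a, b, on_hyperplane)   # (1, -1, 1) 1 True
```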

EXERCISES

1. Prove that Theorem 16 remains true for every field except Z2. Give an example of a flat subset of (Z2)3 which is not a k-plane for any k.

2. Let F be a field. Prove that the lines in F2 defined by the equations ax + by + c = 0 and a′x + b′y + c′ = 0 (compare Theorem I.7 and Theorem 17 above) are parallel or identical if and only if ab′ − ba′ = 0.

3. Find the equation of the hyperplane in R4 passing through the (noncoplanar) points (1, 0, 1, 0), (0, 1, 0, 1), (0, 1, 1, 0), and (1, 0, 0, 1).

4. Suppose that x1 + W1 and x2 + W2 are k1- and k2-planes in a vector space V such that (x1 + W1) ∩ (x2 + W2) ≠ ∅. Let z be a common point of these subsets. Prove that their intersection is equal to z + (W1 ∩ W2), and generalize this result to arbitrary finite intersections.

5. Let V be a 3-dimensional vector space over F, and suppose we are given the configuration of Exercise II.1.6 (L and M are skew lines, and P is a plane containing L but parallel to M). Suppose that the skew lines are given by x + W and y + U. Prove that the plane P is equal to x + (U + W ) [Hint: Show that the latter contains L and is disjoint from M.].

6. Suppose that dim V = n and H = x + W is a hyperplane in V. Suppose that y ∈ V but y ∉ H. Prove that H′ = y + W is the unique hyperplane K such that y ∈ K and H ∩ K = ∅. [Hints: If z ∈ H ∩ H′ then H = x + W = z + W = y + W = H′. If K = y + U where U is some (n − 1)-dimensional vector subspace different from W, explain why dim(W ∩ U) = n − 2. Choose a basis A of (n − 2) vectors for this intersection, and let u0 ∈ U, w0 ∈ W be such that u0, w0 ∉ U ∩ W. Show that A ∪ {u0, w0} is a basis for V, write y − x in terms of this basis, and use this equation to find a vector which lies in H ∩ K = (x + W) ∩ (y + U).]

3. Affine bases

We shall need analogs of linear independence and spanning that apply to arbitrary k-planes and not just k-dimensional vector subspaces. The starting points are two basic observations.

Theorem II.18. Suppose that S is a k-plane in a vector space V over the field F. Given a1, · · · , ar ∈ S, let t1, · · · , tr ∈ F be such that ∑j tj = 1. Then ∑j tj aj ∈ S.

Proof. Write S = x + W , where x ∈ S and W is a k-dimensional vector subspace, and for each j write aj = x + wj where wj ∈ W . Then

∑j tjaj = ∑j tj(x + wj) = ( ∑j tj ) x + ∑j tjwj = x + ∑j tjwj

where the last equation holds because ∑j tj = 1. Since W is a vector subspace, we know that ∑j tjwj ∈ W , and therefore it follows that ∑j tjaj ∈ x + W = S.

Theorem II.19. If V is as above and T ⊂ V is an arbitrary subset, define the affine hull of T by

H(T ) = { x ∈ V | x = ∑j tjvj , where vj ∈ T for all j and ∑j tj = 1 }

(note that each sum is finite, but T need not be finite). Then H(T ) is an affine subspace of V .

Sometimes we shall also say that H(T ) is the affine span of T , and we shall say that T affinely spans an affine subspace S if S = H(T ).

Proof. Suppose that x, y ∈ H(T ), and write

x = ∑i siui ,  y = ∑j tjvj

where ui ∈ T and vj ∈ T for all i and j, and the coefficients satisfy ∑i si = ∑j tj = 1. We need to show that x + c(y − x) ∈ H(T ). But

x + c(y − x) = ∑i siui + c · ( ∑j tjvj − ∑i siui ) = (1 − c) · ∑i siui + c · ∑j tjvj = ∑i si(1 − c)ui + ∑j tjcvj .

We have no a priori way of knowing whether any of the vectors ui are equal to any of the vectors vj, but in any case we can combine like terms to rewrite the last expression as ∑q rqwq, where the vectors wq run through all the vectors ui and vj, and the coefficients rq are given accordingly; by construction, we then have

∑q rq = ∑i si(1 − c) + ∑j tjc = (1 − c) · ∑i si + c · ∑j tj = (1 − c) · 1 + c · 1 = 1

and therefore it follows that the point x + c(y − x) belongs to H(T ).

Thus a linear combination of points in an affine subspace will also lie in the subspace provided the coefficients add up to 1, and by Theorem 19 this is the most general type of linear combination one can expect to lie in the subspace.

Definition. A vector x is an affine combination of the vectors x0, · · · , xn if x = ∑j tjxj, where ∑j tj = 1. Thus the affine hull H(T ) of a set T is the set of all (finite) affine combinations of vectors in T .
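For readers who like to experiment, here is a small Python sketch (our own illustration, not part of the notes) of the definition over F = R: an affine combination of points on the line y = 1 in R2 again lies on that line, even when some coefficients are negative.

```python
def affine_combination(points, coeffs):
    """Return the combination sum_j t_j x_j, checking that sum_j t_j = 1."""
    assert abs(sum(coeffs) - 1) < 1e-9, "coefficients must sum to 1"
    dim = len(points[0])
    return [sum(t, 0) for t in ([c * p[i] for c, p in zip(coeffs, points)] for i in range(dim))]

x0, x1 = [0, 1], [2, 1]                          # two points on the line y = 1
p = affine_combination([x0, x1], [0.25, 0.75])   # between x0 and x1
q = affine_combination([x0, x1], [-1, 2])        # outside the segment
# in both cases the second coordinate is still 1
```

Allowing negative coefficients is what distinguishes the affine hull of two points (a whole line) from the segment between them.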

AFFINE VERSUS LINEAR COMBINATIONS. If a vector y is a linear combination of the vectors v1, · · · , vn, then it is automatically an affine combination of 0, v1, · · · , vn, for if y = ∑i tivi then

y = ( 1 − ∑i ti ) · 0 + ∑i tivi .

Definition. Let V be a vector space over a field F, and let X ⊂ V be the set {x0, · · · , xn}. We shall say that X is affinely dependent if one element of X is expressible as an affine combination of the others, and affinely independent otherwise. By convention, one-point subsets are affinely independent.

The next result gives the fundamental relationship between affine dependence and linear dependence (and, by taking negations, it also gives the fundamental relationship between affine independence and linear independence).

Theorem II.20. In the setting above, the finite set X = {x0, · · · , xn} ⊂ V is affinely dependent if and only if the set X′ = {x1 − x0, · · · , xn − x0} is linearly dependent. Likewise, the finite set X = {x0, · · · , xn} ⊂ V is affinely independent if and only if the set X′ = {x1 − x0, · · · , xn − x0} is linearly independent.

Proof. Since affine dependence and affine independence are the negations of each other, and similarly for linear dependence and linear independence, it will suffice to prove the first conclusion in the theorem.

Proof that X is affinely dependent if X′ is linearly dependent. By linear dependence there is some k > 0 such that

xk − x0 = ∑i≠0,k ci(xi − x0)

and therefore we also have

xk = x0 + ∑i≠0,k cixi − ∑i≠0,k cix0 = ( 1 − ∑i≠0,k ci ) x0 + ∑i≠0,k cixi .

Therefore xk is also an affine combination of all the xj such that j ≠ k.

Proof that X′ is linearly dependent if X is affinely dependent. By assumption there is some k such that

xk = ∑j≠k cjxj

where ∑j≠k cj = 1. Therefore we also have

xk − x0 = ∑j≠k cjxj − x0 = ∑j≠k cj(xj − x0) .

Note that we can take the summation on the right hand side to run over all j such that j ≠ k, because x0 − x0 = 0.

There are now two cases depending on whether k > 0 or k = 0. In the first case, we have obtained an expression for xk − x0 in terms of the other vectors in X′, and therefore X′ is linearly dependent. Suppose now that k = 0, so that the preceding equation reduces to

0 = ∑j>0 cj(xj − x0) .

Since ∑j>0 cj = 1 it follows that cm ≠ 0 for some m > 0, and this now implies

xm − x0 = ∑j≠m,0 (−cj/cm) (xj − x0)

which shows that X′ is linearly dependent.
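Over F = R, Theorem II.20 gives a practical test: affine independence of X is just linear independence of the difference vectors, which can be checked with a rank computation. The following NumPy sketch is our own illustration, not part of the notes.

```python
import numpy as np

def affinely_independent(points):
    """Test X = {x_0, ..., x_n} via the rank of {x_1 - x_0, ..., x_n - x_0}."""
    pts = np.asarray(points, dtype=float)
    diffs = pts[1:] - pts[0]
    return np.linalg.matrix_rank(diffs) == len(diffs)

print(affinely_independent([[0, 0], [1, 0], [0, 1]]))   # True  (noncollinear)
print(affinely_independent([[0, 0], [1, 1], [2, 2]]))   # False (collinear)
```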

One important characterization of linear independence for a set Y is that an arbitrary vector has at most one expression as a linear combination of the vectors in Y . There is a similar characterization of affine independence.

Theorem II.21. A (finite) set X of vectors in a given vector space V is affinely independent if and only if every vector in V has at most one expression as an affine combination of vectors in X.

Proof. Suppose X is affinely independent and that

y = ∑j tjvj = ∑j sjvj

where vj runs through the vectors in X and

∑j tj = ∑j sj = 1 .

Then we have

∑j tjvj − v0 = ∑j sjvj − v0

which in turn implies

∑j tjvj − ( ∑j tj ) v0 = ∑j sjvj − ( ∑j sj ) v0

so that we have

∑j>0 tj(vj − v0) = ∑j>0 sj(vj − v0) .

Since the vectors vj − v0 (where j > 0) are linearly independent by Theorem 20, it follows that tj = sj for all j > 0. Once we know this, we can also conclude that t0 = 1 − ∑j>0 tj is equal to 1 − ∑j>0 sj = s0, and therefore all the corresponding coefficients in the two expressions are equal.

Conversely, suppose X satisfies the uniqueness condition for affine combinations, and suppose that we have

∑j>0 cj(vj − v0) = 0 .

We then need to show that cj = 0 for all j > 0. But the equation above implies that

v0 = ∑j>0 cj(vj − v0) + v0

and if we simplify the right hand side we obtain the equation

v0 = ( 1 − ∑j>0 cj ) v0 + ∑j>0 cjvj .

The coefficients on both sides add up to 1, so by the uniqueness assumption we must have cj = 0 for all j > 0; but this means that the vectors vj − v0 (where j > 0) are linearly independent, and hence X is affinely independent by Theorem 20.

Definition. If S is an affine subspace of the finite-dimensional vector space V and T ⊂ S is a finite subset, then T is said to be an affine basis for S if T is affinely independent and affinely spans S.

There is a fundamental analog of the preceding results which relates affine bases of affine sub- spaces and vector space bases of vector subspaces.

Theorem II.22. Let V be a finite-dimensional vector space, let S be an affine subspace, and suppose that S = z + W for a suitable vector z and vector subspace W . Then the finite set X = {x0, · · · , xm} ⊂ S is an affine basis for S if and only if the set X′ = {x1 − x0, · · · , xm − x0} is a linear basis for W .

Proof. First of all, since x0 ∈ S we may write S = x0 + W and forget about the vector z.

Suppose that X = {x0, · · · , xm} is an affine basis for S, and let y ∈ W . Since x0 + y ∈ x0 + W = S, there exist s0, · · · , sm ∈ F such that ∑i si = 1 and x0 + y = ∑i sixi. Subtracting x0 from both sides and using the equation ∑i si = 1, we see that

y = ∑i>0 si(xi − x0)

and hence X′ spans W . Since X′ is linearly independent by Theorem 20, it follows that X′ is a basis for W .

Conversely, suppose that X′ = {x1 − x0, · · · , xm − x0} is a linear basis for W . Since X′ is linearly independent, by Theorem 20 we also know that X is affinely independent. To see that X affinely spans S, let u ∈ S, and write u = x0 + v, where v ∈ W . Since X′ spans W we know that

u = x0 + ∑i>0 si(xi − x0)

for appropriate scalars si, and if we set s0 = 1 − ∑i>0 si, then we may rewrite the right hand side of the preceding equation as ∑i≥0 sixi, where by construction we have ∑i≥0 si = 1. Therefore X affinely spans S, and it follows that X must be an affine basis for S.

Definition. Suppose that we are in the setting of the theorem and X is an affine basis for S, so that each y ∈ S can be written uniquely as an affine combination ∑i tixi, where ∑i ti = 1. Then the unique coefficients ti are called the barycentric coordinates of y with respect to X. The physical motivation for this name is simple: Suppose that F is the real numbers and we place weights wi > 0 at the vectors xi in S such that the total weight is w units. Let ti = wi/w be the normalized weight at xi; then ∑i tixi is the center of gravity for the resulting physical system (a version of this is true even if one allows some of the coefficients ti to be negative).

In analogy with linear bases of vector subspaces, the number of elements in an affine basis for an affine subspace S depends only on S itself. However, as illustrated by the final result of this section, there is a crucial difference in the formula relating the dimension of S to the number of elements in an affine basis.
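As a concrete illustration over F = R (our own sketch, not part of the notes), barycentric coordinates with respect to an affine basis of Rn can be computed by solving the square linear system provided by Theorem II.22:

```python
import numpy as np

def barycentric_coordinates(basis, y):
    """Coordinates (t_0, ..., t_n) with sum(t_i) = 1 and sum(t_i x_i) = y.
    Assumes basis is an affine basis for the whole space R^n."""
    pts = np.asarray(basis, dtype=float)
    A = (pts[1:] - pts[0]).T                  # columns are x_i - x_0
    t_rest = np.linalg.solve(A, np.asarray(y, dtype=float) - pts[0])
    return np.concatenate(([1 - t_rest.sum()], t_rest))

# the centroid of a triangle has barycentric coordinates (1/3, 1/3, 1/3)
t = barycentric_coordinates([[0, 0], [3, 0], [0, 3]], [1, 1])
```

Placing equal weights at the three vertices, the center of gravity is the point with coordinates (1/3, 1/3, 1/3), matching the physical description above.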

Theorem II.23. If V is a finite-dimensional vector space and S is an affine subspace, then S has an affine basis. Furthermore, if we write S = y + W for suitable y and W , then every affine basis for S has exactly dim W + 1 elements.

Proof. If X = {x0, · · · , xm} ⊂ S is an affine basis for S = y + W = x0 + W , then X′ = {x1 − x0, · · · , xm − x0} is a linear basis for W by Theorem 22, and conversely. Therefore the existence of an affine basis for S follows from Theorem 22 and the existence of a linear basis for W . Furthermore, since every linear basis for W contains exactly dim W elements, by Theorem 22 we know that every affine basis for S contains exactly dim W + 1 elements.

EXERCISES

In the exercises below, assume that all vectors lie in a fixed finite-dimensional vector space V over a field F.

1. Let a, b, c ∈ V (a vector space over some field) be noncollinear, and for i = 1, 2, 3 let xi = tia + uib + vic, where ti + ui + vi = 1. Prove that the points x1, x2, x3 are collinear if and only if

| t1 u1 v1 |
| t2 u2 v2 | = 0
| t3 u3 v3 |

where the left hand side is a 3 × 3 determinant.

2. Prove the Theorem of Menelaus:4 Let a, b, c ∈ V (a vector space over some field) be noncollinear, and suppose we have points p ∈ bc, q ∈ ca, and r ∈ ab. Write these three vectors as

p = b + t(c − b) = tc + (1 − t)b
q = c + u(a − c) = ua + (1 − u)c
r = a + v(b − a) = vb + (1 − v)a

where t, u, v are appropriate scalars. Then p, q and r are collinear if and only if

t/(1 − t) · u/(1 − u) · v/(1 − v) = −1 .

4Menelaus of Alexandria (c. 70 A.D.–c. 130 A.D.) worked in geometry and astronomy, and he is particularly given credit for making numerous contributions to spherical geometry and trigonometry.

3. Prove the Theorem of Ceva:5 In the setting of the preceding exercise, the lines ap, bq and cr are concurrent (there is a point which lies on all three lines) if and only if

t/(1 − t) · u/(1 − u) · v/(1 − v) = +1 .

4. Let a, b, c ∈ V (as above) be noncollinear, and suppose we have points x ∈ ba and y ∈ bc which are distinct from a, b, c and which may be written as

x = b + t(a − b) = ta + (1 − t)b
y = b + u(c − b) = uc + (1 − u)b

where t, u are appropriate scalars (neither of which is 0 or 1). Prove that the lines ay and cx have a point in common if and only if ut ≠ 1. [Hint: Explain why the lines have no points in common if and only if y − a and x − c are linearly dependent. Write both of these vectors as linear combinations of a − b and c − b, and show that if z and w are linearly independent, then pz + qw and rz + sw are linearly dependent if and only if sp = rq. Finally, compare the two conclusions in the preceding sentence.]

5. Let V be a finite-dimensional vector space over the field F, let W ⊂ V be a vector subspace, suppose that dim W = k, and let X = {x1, · · · , xm} be a finite subset of W . Prove that X is a basis for W in the sense of linear algebra if and only if X ∪ {0} is an affine basis for W = 0 + W if the latter is viewed as a k-plane.

5Giovanni Ceva (1647–1734) is known for the result bearing his name, his rediscovery of Menelaus' Theorem, and his work on hydraulics.

4. Affine bases

In this section we shall generalize certain classical theorems of Euclidean geometry to affine planes over fields in which 1 + 1 ≠ 0. Similar material is often presented in many courses as proofs of geometric theorems using vectors. In fact, the uses of vectors in geometry go far beyond yielding alternate proofs for some basic results in classical Greek geometry; they are often the method of choice for studying all sorts of geometrical problems, ranging from purely theoretical questions to carrying out the computations needed to create high quality computer graphics. We shall illustrate the uses of vector algebra in geometry further by proving some nonclassical theorems that figure significantly in the next two chapters.

Let F be a field in which 1 + 1 ≠ 0, and set 1/2 equal to (1 + 1)⁻¹. Given a vector space V over F and two distinct vectors a, b ∈ V , the midpoint of a and b is the vector (1/2)(a + b).

Theorem II.24. Let V and F be as above, and let a, b, c be noncollinear points in V . Then the line joining x = midpoint(a, b) and y = midpoint(a, c) is parallel to bc.

In ordinary Euclidean geometry one also knows that the length of the segment joining x to y is half the length of the segment joining b to c (the length of a segment is just the distance between its endpoints). We do not include such a conclusion because our setting does not include a method for defining distances (in particular, an arbitrary field has no a priori notion of distance).

Figure II.3

Proof. Let W be the subspace spanned by c − b; it follows that bc = b + W . On the other hand, the line joining the midpoints is given by (1/2)(a + b) + U , where U is the subspace spanned by

(1/2)(a + c) − (1/2)(a + b) = (1/2)(c − b) .

Since there is a vector w such that W is spanned by w and U is spanned by (1/2)w, clearly W = U , and therefore the line joining the midpoints is parallel to bc by the construction employed to prove Theorem 14.
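Over F = R the computation in the proof can be checked numerically; the sketch below (our own illustration, not part of the notes) picks an arbitrary noncollinear triple and verifies that the vector joining the two midpoints is exactly half of c − b.

```python
import numpy as np

a, b, c = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
x = (a + b) / 2          # midpoint(a, b)
y = (a + c) / 2          # midpoint(a, c)

# y - x is a scalar multiple of c - b, so the line xy is parallel to bc
assert np.allclose(y - x, (c - b) / 2)
```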

Definition. Let V and F be as above, and let a, b, c be noncollinear points in V . The affine triangle ∆abc is given by ab ∪ ac ∪ bc, and the medians of this affine triangle are the lines joining a to midpoint(b, c), b to midpoint(a, c), and c to midpoint(a, b).

Theorem II.25. Let V , F, a, b, c be as above. Then the medians of the affine triangle ∆abc are concurrent (pass through a single point) if 1 + 1 + 1 ≠ 0, and parallel in pairs if 1 + 1 + 1 = 0.

Figure II.4

Proof. First case. Suppose that 1 + 1 + 1 ≠ 0, and let 1/3 = (1 + 1 + 1)⁻¹. Assume that the point x lies on the line joining a to (1/2)(b + c) and also on the line joining b to (1/2)(a + c). Then there exist s, t ∈ F such that

sa + (1 − s) · (1/2)(b + c) = x = tb + (1 − t) · (1/2)(a + c) .

Since a, b and c are affinely independent and in both expansions for x the coefficients of a, b and c add up to 1, we may equate the barycentric coordinates in the two expansions for x. In particular, this implies s = (1/2)(1 − t) and t = (1/2)(1 − s). If we solve these equations, we find that s = t = 1/3, so that

x = (1/3)(a + b + c) .

A routine computation shows that this point does lie on both lines.

In a similar fashion one can show that the lines joining a to (1/2)(b + c) and c to (1/2)(a + b) also meet at the point (1/3)(a + b + c), and therefore we conclude that the latter point lies on all three medians.

Second case. Suppose that 1 + 1 + 1 = 0; in this case it follows that 1/2 = −1. The line joining a to (1/2)(b + c) is then given by a + W , where W is spanned by

(1/2)(b + c) − a = − (a + b + c) .

Similar computations show that the other two medians are given by b + W and c + W . To complete the proof, we need to show that no two of these lines are equal.

However, if, say, we had b + W = c + W , then it would follow that c, (1/2)(a + b), b, and (1/2)(a + c) would all be collinear. Since the line joining the second and third of these points contains a by construction, it would then follow that a ∈ bc, contradicting our noncollinearity assumption. Thus b + W ≠ c + W ; similar considerations show that a + W ≠ c + W and a + W ≠ b + W , and therefore the three medians are distinct (and by the preceding paragraph are parallel to each other).
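The first case of the theorem is easy to check numerically over F = R (our own illustration, not part of the notes): the point (1/3)(a + b + c) lies two-thirds of the way from each vertex to the opposite midpoint, and hence on all three medians.

```python
import numpy as np

a, b, c = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
x = (a + b + c) / 3      # the claimed common point of the three medians

for vertex, mid in [(a, (b + c) / 2), (b, (a + c) / 2), (c, (a + b) / 2)]:
    # x = vertex + (2/3) * (opposite midpoint - vertex) on each median
    assert np.allclose(vertex + (2 / 3) * (mid - vertex), x)
```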

HYPOTHESIS FOR THE REMAINDER OF THIS SECTION. For the rest of this section, the vector space V is assumed to be two-dimensional.

Definition. Let a, b, c, d ∈ V be four ordered points, no three of which are collinear. If ab || cd and ad || bc, we shall call the union ab ∪ bc ∪ cd ∪ da the affine parallelogram determined by a, b, c and d, and we shall write it abcd. The diagonals of the parallelogram are the lines ac and bd.

Theorem II.26. Suppose that a, b, c, d ∈ V as above are the vertices of an affine parallelogram. Then the diagonals ac and bd have a point in common, and this point is equal to both midpoint(a, c) and midpoint(b, d).

If we take F = R and V = R2, this result reduces to the classical geometric result that the diagonals of a parallelogram bisect each other.

Proof. The classical parallelogram law for vector addition states that (c − a) = (d − a) + (b − a) (see the accompanying figure).

Figure II.5

In fact, it is trivial to verify that d + b − a lies on both the parallel to ad through b and the parallel to ab through d, and hence this point must be c. Therefore the common point of ac and bd satisfies the equations

tb + (1 − t)d = sc + (1 − s)a = s(d + b − a) + (1 − s)a = sd + sb + (1 − 2s)a .

Since a, b and d are affinely independent, if we equate barycentric coordinates we find that s = t = 1/2. But this implies that the two lines meet at a point which is equal to both midpoint(a, c) and midpoint(b, d).
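A quick numeric check over F = R (our own illustration, not part of the notes): choosing a, b, d freely and taking c = d + b − a as in the proof, the two diagonals indeed share a common midpoint.

```python
import numpy as np

a, b, d = np.array([0.0, 0.0]), np.array([3.0, 1.0]), np.array([1.0, 2.0])
c = d + b - a            # fourth vertex, by the parallelogram law

# midpoint(a, c) and midpoint(b, d) coincide, so the diagonals bisect
assert np.allclose((a + c) / 2, (b + d) / 2)
```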

We include the following two theorems because they help motivate the construction of projective spaces in Chapter III.

Theorem II.27. Let v, a, b, c be four points, no three of which are collinear (but all coplanar). Let a′ ∈ va, b′ ∈ vb, and c′ ∈ vc be distinct from a, b, c such that ab || a′b′ and ac || a′c′. Then bc || b′c′.

Figure II.6

Proof. Since a′ ∈ va and b′ ∈ vb, we may write a′ − v = s(a − v) and b′ − v = t(b − v) for suitable scalars s and t. Since ab || a′b′, it follows that b′ − a′ = k(b − a) for some scalar k. But

k(b − v) − k(a − v) = k(b − a) = b′ − a′ = (b′ − v) − (a′ − v) = t(b − v) − s(a − v) .

Since b − v and a − v are linearly independent, it follows that s = t = k. For similar reasons we also have c′ − v = t(c − v).

To prove bc || b′c′, it suffices to note that

c′ − b′ = (c′ − v) − (b′ − v) = t(c − v) − t(b − v) = t(c − b)

by the preceding paragraph, and hence bc || b′c′ follows.

Here is a similar result with slightly different hypotheses:

Theorem II.28. Let v, a, b, c be four points, no three of which are collinear (but all coplanar). Let a′ ∈ va, b′ ∈ vb, and c′ ∈ vc be distinct from a, b, c such that ab || a′b′ but ac and a′c′ meet in some point x. Then bc and b′c′ also meet in some point y, and we have xy || ab.

Figure II.7

Proof. As in Theorem 27 we may write a′ − v = t(a − v) and b′ − v = t(b − v); however, c′ − v = s(c − v) for some s ≠ t (otherwise ac || a′c′). Expressions for the point x may be computed starting with the equation

x = ra + (1 − r)c = qa′ + (1 − q)c′

which immediately implies that

x − v = r(a − v) + (1 − r)(c − v) = q(a′ − v) + (1 − q)(c′ − v) = qt(a − v) + (1 − q)s(c − v) .

Since a − v and c − v are linearly independent, we find that r = qt and 1 − r = (1 − q)s. These equations determine r completely as a function of s and t:

r(s, t) = t(1 − s)/(t − s)

A similar calculation shows that any common point of bc and b′c′ has the form

r(s, t)b + ( 1 − r(s, t) )c

and a reversal of the previous argument shows that this point is common to bc and b′c′. Therefore

y − x = r(s, t) (b − a)

which shows that xy || ab.
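The formula for r(s, t) can be sanity-checked numerically over F = R; the sketch below (our own illustration, not part of the notes) verifies that the point r·a + (1 − r)·c also lies on the line a′c′.

```python
import numpy as np

v, a, c = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
s, t = 3.0, 2.0                       # c' - v = s(c - v), a' - v = t(a - v)
a_p, c_p = v + t * (a - v), v + s * (c - v)

r = t * (1 - s) / (t - s)             # the formula derived in the proof
x = r * a + (1 - r) * c               # candidate intersection point on ac

q = r / t                             # corresponding parameter on a'c'
assert np.allclose(q * a_p + (1 - q) * c_p, x)   # x lies on a'c' too
```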

There is a third result of a similar type; its proof is left as an exercise (we should note that all these results will be improved upon later).

Theorem II.29. In the notation of the preceding theorems, assume that the three pairs of lines { ab, a′b′ }, { ac, a′c′ }, and { bc, b′c′ } all have points of intersection. Then the three intersection points are collinear.

EXERCISES

Definition. Let a, b, c, d ∈ V be four ordered points, no three of which are collinear (with no parallelism assumptions). The union ab ∪ bc ∪ cd ∪ da is called the affine quadrilateral determined by a, b, c and d, and as before we shall write it abcd. The diagonals of the quadrilateral are the lines ac and bd. The sides of the quadrilateral are the four lines whose union forms the affine quadrilateral.

In the exercises below, assume that all vectors lie in the vector space F2, where F is a field in which 1 + 1 ≠ 0.

1. Prove that an affine quadrilateral is a parallelogram if and only if its diagonals bisect each other (in the sense of Theorem 26).

2. Suppose we are given an affine parallelogram. Prove that a line joining the midpoints of a pair of parallel sides contains the intersection point of the diagonals.

Figure II.8

3. In the figure above, assume we are given a parallelogram abcd such that e = midpoint(a, b) and 1 + 1 + 1 ≠ 0 in F. Prove that

g = (1/3)c + (2/3)a = (1/3)d + (2/3)e .

Definition. An affine quadrilateral abcd is said to be an affine trapezoid if either ab || cd or bc || ad but not both (and generally the points are labeled so that the first condition is true). The two parallel sides are called the bases.

4. Suppose we are given affine trapezoid abcd as above with bases ab and cd. Prove that the line joining midpoint(a, d) and midpoint(b, c) is parallel to the bases.

5. In the same setting as in the previous exercise, prove that the line joining midpoint(a, c) and midpoint(b, d) is parallel to the bases.

6. In the same setting as in the previous exercises, prove that the line joining midpoint(a, d) and midpoint(b, c) is equal to the line joining midpoint(a, c) and midpoint(b, d).

7. Prove Theorem 29. [Hint: This can be done using the Theorem of Menelaus.]

In the next exercise, assume that all vectors lie in a vector space V over a field F in which 1 + 1 = 0; the most basic example is the field Z2 which has exactly two elements.

8. Suppose that 1 + 1 = 0 in F, and aside from this we are in the setting of Theorem II.26: Specifically, let a, b, c, d ∈ V be four points, no three of which are collinear, which are the vertices of an affine parallelogram. Prove that in this case the diagonal lines ac and bd are parallel. Is the converse also true? Give reasons for your answer.

5. Generalized geometrical incidence

Our geometry is an abstract geometry. The reasoning could be followed by a disembodied spirit who had no understanding of a physical point, just as a man blind from birth could understand the Electromagnetic Theory of Light. — H. G. Forder (1889–1981)

Although we have not yet defined a geometrical incidence space of arbitrary dimension, it is clear that the families of k-planes in Fn should define an n-dimensional incidence space structure on Fn. Given this, it is not difficult to guess what the correct definitions should be.

Definition. A geometrical incidence space is a triple (S, Π, d) consisting of a set S, a family of subsets Π (the geometrical subspaces), and a function d from Π to the positive integers (the dimension), satisfying the following conditions:

(G-1) : If x0, · · · , xn are distinct points of S such that no P ∈ Π with d(P ) < n contains them all, then there is a unique P ∈ Π such that d(P ) = n and xi ∈ P for all i.

Notation. We denote the geometrical subspace P in the preceding statement by x0 · · · xn. The condition on the xi is expressed by saying that the set { x0, · · · , xn } is (geometrically) independent.

(G-2) : If P ∈ Π and { x0, · · · , xm } is a set of geometrically independent points in P , then the geometrical subspace x0 · · · xm is contained in P .

(G-3) : If P ∈ Π, then P contains at least d(P ) + 1 points.

If P ∈ Π and d(P ) = k, then we shall say that P is a k-plane; the set of all k-planes is denoted by Πk. By convention, a 1-plane is often called a line and a 2-plane is often simply called a plane. Note that the defining conditions do not give any a priori information about whether or not there are any k-planes in S (however, if S contains at least two elements, one can prove that Π1 must be nonempty).

For most of the examples that we shall consider, the whole space S is one of the geometrical subspaces. If this is the case and dim S = n, then we shall say that the system (S, Π, d) is an abstract n-dimensional geometrical incidence space. When n = 2 we shall also say the system is an abstract incidence plane.

If we are given a geometrical incidence space and the explicit data Π and d are either clear from the context or are not really needed in a discussion, we shall often simply say that “S is a geometrical incidence space.”

EXAMPLE 1. A three-dimensional incidence space (as defined above) is a geometrical incidence space if we take Π1 = L, Π2 = P , and Π3 = {S}.

EXAMPLE 2. A trivial example: Let Π be the family of all finite subsets of a set S with at least two elements, and for each such subset Q let d(Q) = #(Q) − 1, where #(Q) is the number of elements in Q. The figure below illustrates the special case in which S has three elements {a, b, c} and d(S) = 2.

Figure II.9

EXAMPLE 3. An important class of examples mentioned at the beginning of this section: Let V be an n-dimensional vector space over a field F (where n > 0), and define the affine incidence structure on V such that for each positive integer k the set Πk is the set of k-planes considered in earlier sections of this chapter. The properties (G-1)–(G-3) may be verified as follows:

Proof of (G-1). If a set of vectors { v0, · · · , vk } is not geometrically independent, then the set is also affinely dependent, for if { v0, · · · , vk } is contained in a q-plane y + W for some q < k, then we have y + W = v0 + W and the k vectors v1 − v0, · · · , vk − v0 ∈ W must be linearly dependent because dim W < k. Hence the original vectors are affinely dependent as claimed. Taking negations, we see that if { v0, · · · , vk } is geometrically independent, then the set is also affinely independent.

Let Q be the affine span of v0, · · · , vk; then Q = v0 + W where W is the linear span of v1 − v0, · · · , vk − v0, and W is k-dimensional because these vectors are linearly independent. Therefore Q is a k-plane containing the vectors vi. Conversely, if Q′ is an arbitrary k-plane containing the vectors vi, then we may write Q′ = v0 + U where U is a vector subspace of dimension k which contains the difference vectors v1 − v0, · · · , vk − v0; it follows that U contains the k-dimensional vector subspace W described above, and since dim U = dim W it follows that W = U , so that Q = Q′.

Proof of (G-2). Since a k-plane is closed under forming affine combinations, if v0, · · · , vm are contained in P then the affine span of v0, · · · , vm is also contained in P .

Proof of (G-3). Given a k-plane P , express it as v0 + W , where W is a k-dimensional vector subspace, and let w1, · · · , wk be a linear basis for W . Then the set

{ v0, v0 + w1, · · · , v0 + wk }

forms an affine basis for P by Theorem 22. Hence P contains at least (k + 1) points.

We have not yet described an analog of the axiom implying that two planes in 3-space intersect in a line if they are not disjoint. Before formulating an appropriate generalization of this,6 we derive some consequences of (G-1)–(G-3):

Theorem II.30. Let x0, · · · , xm be geometrically independent points in a geometrical incidence space S, and suppose y ∉ x0 · · · xm. Then the set {x0, · · · , xm, y} is geometrically independent.

6The definition follows the proof of Theorem 35 below.

Proof. If the points are not geometrically independent then for some k ≤ m there is a k-plane P which contains all of them. Since {x0, · · · , xm} is geometrically independent, it follows that d(P ) ≥ m, so that d(P ) = m and P = x0 · · · xm. But this contradicts the hypothesis.

Theorem II.31. Let P be an m-plane in the geometrical incidence space S. Then there is a set of geometrically independent points {x0, · · · , xm} such that P = x0 · · · xm.

Proof. Let F be the family of finite geometrically independent subsets of P . No element in this family contains more than m + 1 elements, for every subset of P with more points will be geometrically dependent (all of its points lie in the m-plane P ). Let k be the largest integer for which some element of F has k + 1 elements; by the preceding sentence, we must have k ≤ m. We claim that k = m.

Assume the contrary, so that k < m. Let { x0, · · · , xk } ∈ F. Then the k-plane Q = x0 · · · xk is contained in P by (G-2), and k < m implies that Q is a proper subset of P . Let xk+1 be a point which is in P but not in Q. Then Theorem 30 implies that { x0, · · · , xk, xk+1 } is geometrically independent. But this contradicts the defining condition for k, which is that there are no geometrically independent subsets of P with k + 2 elements. Therefore we must have k = m.

Before proceeding, we shall adopt some standard conventions for a geometrical incidence space (S, Π, d).

(G-0.a) The empty set is a geometrical subspace whose dimension is −1.
(G-0.b) Each one point subset of S is a geometrical subspace whose dimension is zero.
(G-0.c) One point subsets of S and the empty set are geometrically independent.

These conventions allow us to state the next result in a relatively concise fashion.

Theorem II.32. Every finite subset X of the geometrical incidence space S contains a maximal independent subset Y = {y0, · · · , yk}. Furthermore, X is contained in y0 · · · yk, and the latter is the unique minimal geometrical subspace containing X.

Proof. Let Y ⊂ X be an independent subset with a maximum number of elements, let Q be the k-plane determined by Y , and let y ∈ X be an arbitrary element not in Y . Since y ∉ Q would imply that Y ∪ {y} is independent (by Theorem 30), it follows that y ∈ Q. Thus X ⊂ Q as claimed. Suppose that Q′ is another geometrical subspace containing X; then Q ⊂ Q′ by (G-2), and hence every geometrical subspace that contains X must also contain Q.

When we work with vector subspaces of a vector space, it is often useful to deal with their intersections and sums. The next two results show that similar constructions hold for geometrical subspaces:

Theorem II.33. The intersection of a nonempty family of geometrical subspaces of S is a geometrical subspace (with the conventions for 0- and (−1)-dimensional subspaces preceding Theorem 32).

Proof. Clearly it suffices to consider the case where the intersection contains at least two points. Let { Pα | α ∈ A } be the family of subspaces, and let F be the set of all finite independent subsets of ∩α Pα. As before, the number of elements in a member of F is at most d(Pα) + 1 for all α. Thus there is a member of F with a maximal number of elements; call this subset {x0, · · · , xk}. By (G-2) we know that x0 · · · xk is contained in each Pα and hence in ∩α Pα. If it were a proper subset and y is a point in the intersection which does not belong to x0 · · · xk, then {x0, · · · , xk, y} would be an independent subset of the intersection with more elements than {x0, · · · , xk}. This contradiction means that x0 · · · xk = ∩α Pα.

Although the union of two geometrical subspaces P and Q is usually not a geometrical subspace, the next result shows that there is always a minimal geometrical subspace containing both P and Q; this is analogous to the concept of sum for vector subspaces of a vector space.

Theorem II.34. If P and Q are geometrical subspaces of the geometrical incidence space S, then there is a unique minimal geometrical subspace containing them both.

Proof. By Theorem 31 we may write P = x0 · · · xm and Q = y0 · · · yn. Let A = {x0, · · · , xm, y0, · · · , yn}, and let T be the smallest subspace containing A (which exists by Theorem 32). Then P, Q ⊂ T , and if T′ is an arbitrary geometrical subspace containing P and Q, then it also contains A, so that T′ must contain T as well.

The subspace given in the preceding result is called the join of P and Q, and it is denoted by P ⋆ Q. Motivation for this definition is given by Exercise III.4.17 and Appendix B.

Theorem II.35. If P and Q are geometrical subspaces of S, then d(P ⋆ Q) ≤ d(P ) + d(Q) − d(P ∩ Q).

It is easy to find examples of geometrical subspaces in Rn (even for n = 2 or 3) in which one has strict inequality. For example, suppose that L and M are parallel lines in R3; then the left hand side is equal to 2 but the right hand side is equal to 3 (recall that the empty set is (−1)-dimensional). Similarly, one has strict inequality in R3 if P is a plane and Q is a line or plane which is parallel to P .

Proof. Let P ∩ Q = x0 · · · xm. By a generalization of the argument proving Theorem 31 (see Exercise 1 below), there exist independent points y0, · · · , yp ∈ P and z0, · · · , zq ∈ Q such that

P = x0 · · · xmy0 · · · yp , Q = x0 · · · xmz0 · · · zq .

Let X = {x0, · · · , xm, y0, · · · , yp, z0, · · · , zq}, and let T be the unique smallest geometrical subspace containing X. It is immediate that P ⊂ T and Q ⊂ T , so that P ⋆ Q ⊂ T . On the other hand, if a geometrical subspace B contains P ⋆ Q, then it automatically contains X and hence automatically contains T . Therefore we have T = P ⋆ Q.

It follows from Theorem 32 that d(P ⋆ Q) = d(T ) ≤ #(X) − 1, and therefore we have d(P ⋆ Q) ≤ m + p + q + 2 = (m + p + 1) + (m + q + 1) − m = d(P ) + d(Q) − d(P ∩ Q), which is what we wanted to prove.
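The strict inequality discussed before the proof can be checked numerically. The following Python sketch is our own illustration (not part of the original notes): the helper `affine_dim`, which computes the dimension of the affine hull of a finite point set as the rank of its difference vectors, is an assumed name, and real arithmetic via numpy stands in for exact computation.

```python
import numpy as np

def affine_dim(points):
    """Dimension of the affine hull of a finite point set: the rank of the
    difference vectors relative to the first point ((-1) for the empty set)."""
    pts = np.asarray(points, dtype=float)
    if len(pts) == 0:
        return -1
    if len(pts) == 1:
        return 0
    return int(np.linalg.matrix_rank(pts[1:] - pts[0]))

# Two parallel lines in R^3, each given by two of its points.
L = [(0, 0, 0), (1, 0, 0)]
M = [(0, 1, 0), (1, 1, 0)]

d_join = affine_dim(L + M)                            # join = the plane z = 0
rhs = affine_dim(L) + affine_dim(M) - affine_dim([])  # 1 + 1 - (-1) = 3

print(d_join, rhs)   # 2 3, so the inequality of Theorem II.35 is strict here
```

Here d(L ⋆ M) = 2 while d(L) + d(M) − d(L ∩ M) = 3, matching the parallel-lines example in the text.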

The following definition contains an abstract version of the 3-dimensional incidence assumption about two planes intersecting in a line or not at all.

Definition. A geometrical incidence space is regular if the following holds:

(G-4) : If P and Q are geometrical subspaces such that P ∩ Q ≠ ∅, then d(P ⋆ Q) = d(P ) + d(Q) − d(P ∩ Q).

EXAMPLE 1. Ordinary 3-dimensional incidence spaces as defined in Section II.1 are regular. The only nontrivial case of the formula arises when P and Q are distinct planes, so that P ⋆ Q = S.

EXAMPLE 2. The trivial incidence spaces described in the previous section (in which Π consists of all finite subsets and the dimension of a finite subset P is #(P ) − 1) are regular. The formula in this case follows immediately from the standard inclusion-exclusion identity for counting the elements in finite sets: #(P ∪ Q) = #(P ) + #(Q) − #(P ∩ Q) .

EXAMPLE 3. Here is an example which is not regular. Let F be a field, take the standard notions of lines and planes for F4, and let d(F4) = 3. Then the planes V and W spanned by {e1, e2} and {e3, e4} have exactly one point in common, so that the right hand side of the formula in (G-4) is equal to 2 + 2 − 0 = 4, even though no geometrical subspace in this structure has dimension greater than 3.
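The vector space arithmetic behind this example is easy to verify directly. The short Python computation below is an illustrative sketch of ours; numpy's rank computation over the real numbers stands in for a general field F.

```python
import numpy as np

# Row bases for the planes V = span{e1, e2} and W = span{e3, e4} in F^4
# (real scalars stand in for a general field F).
V = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
W = np.array([[0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)

dim_sum = int(np.linalg.matrix_rank(np.vstack([V, W])))   # dim(V + W)
dim_int = V.shape[0] + W.shape[0] - dim_sum               # dim(V ∩ W)

# V and W meet only at the origin, so as planes d(V) = d(W) = 2 and
# d(V ∩ W) = 0; regularity would force d(V * W) = 2 + 2 - 0 = 4, which is
# impossible in a structure where the whole space is declared to have d = 3.
print(dim_sum, dim_int)   # 4 0
```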

Logical independence of the regularity condition. The preceding example shows that it is not possible to prove the assumption in the definition of regularity from the defining assumptions for a geometrical incidence space; for if it were possible to prove the regularity condition from the definition, then it would NOT be possible to construct an example of a geometrical incidence space that did not satisfy the regularity condition. The preceding remark illustrates the standard mathematical approach to concluding that one statement (say Q) cannot be derived as a logical consequence of other statements (say P1, · · · , Pn): It is only necessary to produce an example of an object which satisfies P1, · · · , Pn but does not satisfy Q.

EXAMPLE 4. The incidence structure associated to a finite-dimensional vector space V over a field F is regular. To prove this, we first establish the following.

Theorem II.36. Let P and Q be affine subspaces of the incidence space structure associated to a vector space V , and assume P ∩ Q ≠ ∅. Write P = x + W1 and Q = x + W2 where x ∈ P ∩ Q. Then P ⋆ Q = x + (W1 + W2).

REMARK. Since P ∩ Q = x + (W1 ∩ W2) is readily established (see Exercise II.2.4), Theorem 36 and the dimension formula for vector subspaces (Theorem A.9) imply the regularity of V .

Proof. The inclusion P ⋆ Q ⊂ x + (W1 + W2) is clear since the right hand side is a geometrical subspace containing P and Q. To see the reverse inclusion, first observe that P ⋆ Q = x + U for some vector subspace U ; since P, Q ⊂ x + U it follows that W1, W2 ⊂ U and hence W1 + W2 ⊂ U . The latter yields x + (W1 + W2) ⊂ P ⋆ Q, and therefore the two subsets are equal.
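Theorem II.36 and the remark above can be illustrated numerically. In the Python sketch below (our own illustration; the variable names are assumptions), P and Q are lines through a common point x in R3 with direction spaces W1 and W2, and the dimensions appearing in the regularity formula (G-4) are computed from matrix ranks.

```python
import numpy as np

# P = x + W1 and Q = x + W2: two lines through a common point x in R^3.
x  = np.array([1.0, 1.0, 1.0])
W1 = np.array([[1.0, 0.0, 0.0]])   # basis of the direction space of P
W2 = np.array([[0.0, 1.0, 0.0]])   # basis of the direction space of Q

d_P, d_Q = W1.shape[0], W2.shape[0]                       # d(P) = d(Q) = 1
d_sum = int(np.linalg.matrix_rank(np.vstack([W1, W2])))   # dim(W1 + W2)
d_int = d_P + d_Q - d_sum                                 # dim(W1 ∩ W2) = 0

# Theorem II.36 gives P * Q = x + (W1 + W2), and P ∩ Q = x + (W1 ∩ W2),
# so d(P * Q) = d_sum and d(P ∩ Q) = d_int; the regularity formula (G-4)
# is exactly the dimension identity for sums of vector subspaces.
assert d_sum == d_P + d_Q - d_int
print(d_sum, d_int)   # 2 0
```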

Finally, we introduce an assumption reflecting the Euclidean Parallel Postulate.

Definition. Two lines in a geometrical incidence space are parallel if they are coplanar but disjoint. A regular incidence space is said to be affine if given a line L and a point x 6∈ L, then there is a unique line M such that x ∈ M and M is parallel to L. — If S is an affine incidence space and d(S) = n, then we say that it is an affine n-space. If n = 2, it is also called an affine plane.

One can use the argument proving Theorem 14 to verify that the incidence space associated to a vector space is affine in the sense defined above.

Equivalent mathematical structures

Our discussion of geometrical incidence fits closely with the main themes of these notes, but the formulation is definitely nonstandard. Normally such incidence structures are viewed in equivalent but more abstract terms. One approach involves specifying a sequence of dependence relations on finite subsets of a given set S; in this formulation, there is a class of such subsets that are called independent and satisfy a few suitable properties. As one might expect, a finite subset of k + 1 points is said to be independent if and only if there is no (k − 1)-dimensional subspace containing them; the resulting structure is called a matroid. Details of abstract dependence theory and matroids are described in the paper by H. Whitney listed in the bibliography, and a more recent account of matroid theory is given in the following online reference: http://home.gwu.edu/~jbonin/survey.ps

The matroid approach to independence leads naturally to another interpretation in terms of partial orderings on sets. Every matroid has a family of subsets which satisfies the conditions for a geometrical incidence space, and the associated ordered family of subsets satisfies the conditions for a partially ordered set to be a geometric lattice. A classic reference for the theory of lattices is the book by G. Birkhoff (Lattice Theory) cited in the bibliography.

EXERCISES

1. Let P be a geometrical subspace of S, and suppose that {x0, · · · , xk} is an independent subset of P . Prove that there is a (possibly empty) set of points xk+1, · · · , xm ∈ P such that {x0, · · · , xm} is independent and P = x0 · · · xm. [Hint: Imitate the proof of Theorem 31 using the family G ⊂ F of all subsets containing {x0, · · · , xk}.]

2. Prove that a subset of an independent set of points in a geometrical incidence space is independent.

3. (i) Let (S, Π, d) be a geometrical incidence space, let T ⊂ S be a geometrical subspace, and let ΠT be the set of all geometrical subspaces in Π which are contained in T . Prove that (T, ΠT , dT ) is a geometrical incidence space, where dT is the restriction of d to ΠT . (ii) Prove that a geometrical subspace of a regular geometrical incidence space is again regular. Is the analog true for affine spaces? Give reasons for your answer.

4. Let S be a geometrical incidence space with d(S) = n. A hyperplane in S is an (n − 1)-plane. Prove that for every k < n, every k-plane in S is an intersection of (n − k) distinct hyperplanes. [Hint: If T is a k-plane and T = x0 · · · xk, choose y1, · · · , yn−k such that {x0, · · · , xk, y1, · · · , yn−k} is independent. Let Pi be the hyperplane determined by all these points except yi.]

5. Prove that the join construction on geometrical subspaces of a geometrical incidence space satisfies the associativity condition (P ⋆ Q) ⋆ R = P ⋆ (Q ⋆ R) for geometrical subspaces P, Q, R ⊂ S. [Hint: Show that both of the subspaces in the equation are equal to the smallest geometrical subspace containing P ∪ Q ∪ R.]

6. Show that the Euclidean parallelism property is not a logical consequence of the defining conditions for a regular n-dimensional geometrical incidence space. [Hint: Look at the so-called trivial examples from the previous section in which Π is the collection of all finite subsets and if P is finite then d(P ) = #(P ) − 1. Why do standard counting formulas imply that every pair of coplanar lines in this system will have a common point?]

Remark. For affine planes and the examples considered in the previous section, the following modification of the Euclidean Parallelism Property is valid: Given a line L and a point x ∉ L, then there is at most one line M such that x ∈ M and M is parallel to L. It is also not possible to prove this from the conditions defining a regular incidence space, and the following Beltrami-Klein incidence plane is an example: Let P be the set of all points in R2 which are interior to the unit circle; this set is defined by the inequality x2 + y2 < 1 (see the figure below). The lines in P are taken to be the open chords of the unit circle with equation x2 + y2 = 1, or equivalently all nonempty intersections of ordinary lines in R2 with P . As the figure suggests, this example has the following property: Given a line L and a point x ∉ L, then there are at least two lines M such that x ∈ M and M is parallel to L; in fact, there are infinitely many such lines.

Figure II.10

The plane is the shaded region inside the circle, with the boundary excluded. The open chord L has no points in common with the open chords determined by M and N.
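The parallelism failure in the Beltrami-Klein plane is easy to test computationally: two chords are disjoint in this model precisely when the ordinary lines containing them either are parallel in R2 or meet at a point outside the open disk. The following Python sketch is our own illustration (the helper name `chords_parallel` is an assumption); it checks two distinct chords through the point (0, 1/2), both parallel to the chord along the x-axis.

```python
import numpy as np

def chords_parallel(p1, d1, p2, d2):
    """True if the open chords through p_i with direction d_i are disjoint
    inside the open unit disk: the full lines are either parallel in R^2
    or meet at a point that is not interior to the disk."""
    d1, d2 = np.asarray(d1, dtype=float), np.asarray(d2, dtype=float)
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    A = np.column_stack([d1, -d2])
    if abs(np.linalg.det(A)) < 1e-12:
        return True                    # parallel as ordinary lines in R^2
    t, _ = np.linalg.solve(A, p2 - p1)
    meet = p1 + t * d1                 # intersection of the full lines
    return bool(meet @ meet >= 1.0)    # disjoint chords iff meet is outside

L_point, L_dir = [0.0, 0.0], [1.0, 0.0]   # the chord along the x-axis
x = [0.0, 0.5]                            # a point of the disk not on L

# Two distinct chords through x, both "parallel" to L in this model:
print(chords_parallel(L_point, L_dir, x, [1.0, 0.0]))   # True
print(chords_parallel(L_point, L_dir, x, [1.0, 0.4]))   # True
```

Any direction through x whose full line meets y = 0 outside the disk works, which is why there are infinitely many such parallels.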

7. (Further examples as above) Define a finitely supportable convex body in Rn to be a nonempty set D defined by finitely many strict inequalities fi(x) > 0, where each fi is a linear polynomial in the coordinates of x. Let ΠD denote the set of all nonempty intersections P ∩ D, where P is a geometrical subspace of Rn, and set the dimension dD(P ∩ D) equal to d(P ). Prove that this structure makes D into a regular geometrical incidence n-space. [Hint: Try to do this first for n = 2, then for n = 3.]

8. Assume we are in the setting of the preceding exercise. (i) Let D be the open square in R2 defined by x > 0, y > 0, −x > −1 and −y > −1 (hence the coordinates lie between 0 and 1). Prove that D has the parallelism properties that were stated but not proved for the Beltrami-Klein plane described above. (ii) Let D be the upper half plane in R2 defined by y > 0. Prove that for some lines L and external points x there is a unique parallel to L through x, but for others there are infinitely many such parallels.

6. Isomorphisms and automorphisms

Although the mathematical concepts of isomorphism and automorphism are stated abstractly, they reflect basic concepts that arise very quickly in elementary geometry. Congruent triangles are fundamental examples of isomorphic objects: The statement ∆ ABC ≅ ∆ DEF means that the obvious 1–1 correspondence of vertices from {A, B, C} to {D, E, F } preserves the basic structural measurements, so that the distances between the vertices satisfy d(A, B) = d(D, E), d(B, C) = d(E, F ), d(A, C) = d(D, F ) and the (degree or radian) measurements of the angles satisfy similar equations: measure(∠ ABC) = measure(∠ DEF ), measure(∠ ACB) = measure(∠ DF E), measure(∠ CAB) = measure(∠ F DE).

Suppose now that we have an isosceles triangle ∆ ABC in which d(A, B) = d(A, C). In such cases one has a natural symmetry of the triangle with respect to the line joining A to the midpoint of B and C, and one aspect of this symmetry is a nontrivial congruence of the isosceles triangle with itself; specifically, ∆ ABC ≅ ∆ ACB. In mathematics, an isomorphism from an object to itself is called an automorphism. The identity map from the object to itself is a trivial example of an automorphism, and the isosceles triangle example indicates that some structures may have other automorphisms.

The word “isomorphic” means “identical” or “same form,” and in mathematics it means that one has a rule for passing between two objects that preserves all the mathematical structure that is essential in a given context. In particular, if two objects are isomorphic, a statement about the structure of the first object is true if and only if the corresponding statement about the second object is true. Such principles can be extremely useful, especially when one of the objects is relatively easy to work with and the other is less so.

The precise mathematical definitions of isomorphisms and automorphisms vary, and the details depend upon the sort of objects being considered. Also, there may be several different notions of isomorphism or automorphism, depending upon the amount of structure that remains unchanged. For example, if we are working with triangles, it is often useful to consider triangles that might not be congruent but are similar; in other words, one still has a 1–1 correspondence of vertices from {A, B, C} to {D, E, F } as before such that the angle measure equations are satisfied, but now we only know that there is a positive number r (the ratio of similitude) such that d(A, B) = r · d(D, E), d(B, C) = r · d(E, F ), d(A, C) = r · d(D, F ) . Of course, congruence is the special case of similarity for which r = 1.

Here is the appropriate definition for geometrical incidence spaces:

Definition. Let (S, Π, d) and (S′, Π′, d′) be geometrical incidence spaces. An isomorphism of geometrical incidence spaces from (S, Π, d) to (S′, Π′, d′) is a 1–1 correspondence f : S → S′ such that if P ⊂ S, then P ∈ Π if and only if its image f[P ] belongs to Π′, and in this case d(P ) = d′( f[P ] ). In other words, P is a k-plane in S if and only if f[P ] is a k-plane in S′.7

7As noted in the Prerequisites, we use f[P ] to denote the image of P under the mapping f.

It is standard to write A ≅ B when two mathematical systems are isomorphic, and we shall do so throughout these notes.

The first theorem of this section implies that isomorphism of geometrical incidence spaces is an equivalence relation.

Theorem II.37. (i) If f : S → S′ is an isomorphism of geometrical incidence spaces, then so is its inverse f −1 : S′ → S. (ii) If f : S → S′ and g : S′ → S″ are isomorphisms of geometrical incidence spaces, then so is their composite g ∘ f : S → S″.

Proof. (i) If f is 1–1 and onto, then it has an inverse map f −1 which is also 1–1 and onto. If Q ⊂ S′ is a k-plane, then the identity Q = f[ f −1[Q] ] implies that f −1[Q] is a k-plane in S. Similarly, if f −1[Q] is a k-plane in S then so is Q = f[ f −1[Q] ].

(ii) If f and g are 1–1 onto, then so is g ∘ f. If P ⊂ S is a k-plane, then so is f[P ] because f is an isomorphism of geometrical incidence spaces, and since g is also an isomorphism of geometrical incidence spaces then (g ∘ f)[P ] = g[ f[P ] ] is also a k-plane. Conversely, if the latter is a k-plane, then so is f[P ] since g is an isomorphism of geometrical incidence spaces, and therefore P is too because f is an isomorphism of geometrical incidence spaces.

The next result illustrates earlier assertions that isomorphisms preserve the basic properties of mathematical structures like abstract geometrical incidence spaces.

Theorem II.38. Let f : S → S′ be an isomorphism of geometrical incidence spaces, and let X = {x0, · · · , xm} be a finite subset of S. Then X is a geometrically independent subset of S if and only if f[X] is a geometrically independent subset of S′.

Proof. Suppose that X is independent, and assume that f[X] is not. Then there is some k-plane Q ⊂ S′ such that k < m and f[X] ⊂ Q. Let P = f −1[Q]. By the definition of isomorphism, it follows that P is also a k-plane, where k < m, and X = f −1[ f[X] ] is contained in P . Therefore X is not independent, which contradicts our choice of X. Thus it follows that f[X] is geometrically independent if X is. The converse statement follows by applying the preceding argument to the inverse isomorphism f −1.

If G is a family of geometrical incidence spaces, then a classifying family for G is a subfamily C such that each object in G is isomorphic to a unique space in C. The standard coordinate affine spaces Fn (where F is some field) come relatively close to being a classifying family for n-dimensional affine incidence spaces for n ≥ 3. It is only necessary to add the standard coordinate affine n-spaces over skew-fields (see the second paragraph of Appendix A). The only difference between fields and skew-fields is that multiplication is not necessarily commutative in the latter; the standard nontrivial example is given by the quaternions, which are discussed in Appendix A. If F is a skew-field, let Fn be the right vector space of ordered n-tuples as defined in Appendix A. As indicated there, all of the linear algebra used to prove that Fn is an affine n-space goes through if F is a skew-field. In particular, the right vector space Fn is an affine n-space. The classification of n-dimensional affine spaces (where n ≥ 3) is then expressible as follows:

Theorem II.39. Let (S, etc.) be an affine n-space, where n ≥ 3. Then there is a skew-field F such that S is isomorphic to Fn as a geometrical incidence space. Furthermore, if E and F are skew-fields such that S is isomorphic to both En and Fn, then E and F are algebraically isomorphic.

The proof of this result involves several concepts we have not yet introduced and is postponed to Remark 3 following the proof of Theorem IV.19. Specifically, it will be a fairly simple application of the projective coordinatization theorem (Theorem IV.18) to the synthetic projective extension of S (which is defined in the Addendum to Chapter III).

Automorphisms and symmetry

A standard dictionary definition of the word symmetry is “regularity in form and arrangement,” and the significance of automorphisms in mathematics is that they often yield important patterns of regularity in a mathematical system. This is apparent in our previous example involving isosceles triangles. Of course, the amount of regularity can vary, and an equilateral triangle has a more regular structure than, say, an isosceles right triangle; generally speaking, the more regular an object is, the more automorphisms it has.

Definition. An automorphism of a geometrical incidence space (S, Π, d) is a 1–1 onto map from S to itself which is an isomorphism of geometrical incidence spaces from (S, Π, d) to itself. The identity map of a mathematical object is normally an example of an automorphism, and it is a routine exercise to check that for each geometrical incidence space (S, Π, d), the identity map of S defines an automorphism of (S, Π, d).

The following result is an immediate consequence of Theorem 37 and the preceding sentence.

Theorem II.40. The set of all automorphisms of a geometrical incidence space (S, Π, d) forms a group with respect to composition of mappings.

Notation. This group is called the geometric symmetry group of (S, Π, d) and is denoted by ΣΣ(S, Π, d).

EXAMPLE 1. Let (S, Π, d) be the affine incidence space associated to a vector space V . If T is an invertible linear self-map of V , then T also defines an incidence space automorphism. This is true because the condition Kernel(T ) = {0} implies that T maps each k-dimensional vector subspace W to another subspace T [W ] of the same dimension (see Theorem A.14.(iv)). Furthermore, if S is an arbitrary subset and x ∈ V , then it is easy to check that T [x + S] = T (x) + T [S] (this is left to the reader as an exercise).

EXAMPLE 2. If V is as above and x ∈ V , define the mapping Tx : V → V (translation by x) to be the mapping Tx(v) = x + v. Then Tx is clearly 1–1 and onto. Furthermore, it defines a geometrical space automorphism because Tx[y + W ] = (x + y) + W shows that P is a k-plane in V if and only if Tx[P ] is (again, filling in the details is left to the reader).

The preceding observations show that affine incidence spaces associated to vector spaces generally have many geometric symmetries. Define the affine group of V , denoted by Aff(V ), to be the set of all symmetries expressible as a composite T ∘ S, where T is a translation and S is an invertible linear transformation. We shall say that the elements of Aff(V ) are affine transformations. The terminology suggests that the affine transformations form a subgroup, and we shall verify this in the next result.

Theorem II.41. The set Aff(V ) is a subgroup of the group of all geometric symmetries of V . It contains the groups of linear automorphisms and translations as subgroups.

Proof. First of all, the identity map is a linear transformation and it is also translation by 0, so clearly the identity belongs to Aff(V ). If A is an arbitrary affine transformation, then for each x ∈ V we have A(x) = S(x) + y, where y ∈ V and S is an invertible linear transformation. If A′ is another such transformation, write A′(x) = S′(x) + y′ similarly. Then we have A′ ∘ A(x) = A′( S(x) + y ) = S′( S(x) + y ) + y′ = S′S(x) + ( S′(y) + y′ ), showing that A′ ∘ A is also affine; hence Aff(V ) is closed under multiplication. To see that this family is closed under taking inverses, let A be as above, and consider the transformation B(x) = S−1(x) − S−1(y). By the above formula, B ∘ A and A ∘ B are both equal to the identity, so that A−1 is the affine transformation B. The second statement in the theorem follows from Examples 1 and 2 above.
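The composition and inversion formulas in this proof can be checked with a small numerical experiment. The Python sketch below is our own illustration (the concrete matrices are arbitrary choices); numpy matrices represent the linear parts.

```python
import numpy as np

# Two affine transformations A(x) = S(x) + y and A'(x) = S'(x) + y'
# with invertible linear parts (concrete matrices chosen for illustration).
S,  y  = np.array([[2.0, 1.0], [1.0, 1.0]]), np.array([3.0, -1.0])
S2, y2 = np.array([[0.0, 1.0], [1.0, 0.0]]), np.array([1.0, 2.0])

A  = lambda v: S  @ v + y
A2 = lambda v: S2 @ v + y2

x = np.array([0.7, -2.3])

# Closure: A' o A(x) = S'S(x) + (S'(y) + y'), again of affine form.
comp = (S2 @ S) @ x + (S2 @ y + y2)
assert np.allclose(A2(A(x)), comp)

# Inverses: B(x) = S^{-1}(x) - S^{-1}(y) satisfies B o A = A o B = identity.
Sinv = np.linalg.inv(S)
B = lambda v: Sinv @ v - Sinv @ y
assert np.allclose(B(A(x)), x) and np.allclose(A(B(x)), x)
print("ok")
```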

In general, Aff(V ) is not the full group of geometric symmetries. For example, if F = C (the complex numbers), the coordinatewise conjugation map χ : Cn → Cn taking (z1, · · · , zn) to its coordinatewise complex conjugate (z̄1, · · · , z̄n) is a geometrical symmetry of Cn which does not lie in Aff(Cn).8 However, if F is the integers mod p (where p is prime), the rational numbers, or the real numbers, then Aff(V ) does turn out to be the entire group of geometric symmetries of Fn.

Historical note. Although the concept of geometric symmetry was not formulated explicitly until the early 19th century, special symmetries of Rn (for at least n = 2, 3) known as rigid motions or isometries are implicit in classical Greek geometry, particularly in attempts at “proof by superposition” (a rigid motion is defined to be a 1–1 correspondence T from Rn to itself that preserves distances, i.e., d(x, y) = d( T (x), T (y) ) for all x, y ∈ Rn; such maps also preserve angles).9 More precisely, superposition phrases of the form, “Place figure A so that points B coincide with points C,” may be interpreted as saying, “Find a rigid motion of Rn that maps points B to points C.” Indeed, it seems that the inability of classical geometry to justify the notion of superposition resulted from the lack of a precise definition for rigid motions.

EXERCISES

1. Give a detailed verification of the assertion T [x + S] = T (x) + T [S] which appears in Example 1 above.

2. If V and W are vector spaces over a field F, a map T : V → W is called affine if it has the form T (v) = S(v) + w0, where S is linear and w0 ∈ W . Prove that T ( tx + (1 − t)y ) = tT (x) + (1 − t)T (y) for all x, y ∈ V and t ∈ F.
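Although the exercise asks for a proof, a numerical sanity check of the identity is easy to run. The Python sketch below is our own illustration (the random choice of the linear part S and translation w0 is an assumption); it evaluates both sides for several values of t.

```python
import numpy as np

rng = np.random.default_rng(1)

S  = rng.standard_normal((3, 3))   # an arbitrary linear map S : V -> W
w0 = rng.standard_normal(3)        # translation part
T  = lambda v: S @ v + w0          # the affine map T(v) = S(v) + w0

x, y = rng.standard_normal(3), rng.standard_normal(3)
for t in (-1.0, 0.0, 0.3, 1.0, 2.5):
    assert np.allclose(T(t * x + (1 - t) * y), t * T(x) + (1 - t) * T(y))
print("ok")   # T preserves all affine combinations tx + (1 - t)y
```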

8See Exercise V.2.5 for a full explanation. 9The fact that rigid motions are geometrical symmetries follows because they all lie in the group Aff(V ) defined previously. A proof is given in the Addendum to Appendix A.

3. Suppose that {v0, · · · , vn } and {w0, · · · , wn } are affine bases for the vector space V . Prove that there is a unique T ∈ Aff(V ) such that T (vi) = wi for i = 0, · · · , n. [Hint: The sets {v1 − v0, · · · , vn − v0 } and {w1 − w0, · · · , wn − w0 } are bases. Apply Theorem A.13(v).]

4. Suppose F ≠ Z2 is a field, V is a vector space over F, and T : V → V is a 1–1 onto map satisfying the condition in Exercise 2: T ( tx + (1 − t)y ) = tT (x) + (1 − t)T (y). Prove that T is an affine transformation. [Hint: Set S(v) = T (v) − T (0) and prove S is linear. Observe that S maps 0 to itself. If the problem seems too hard as stated, try to prove it for fields in which 1 + 1 ≠ 0.]

5. Let f : (S, Π, d) → (S′, Π′, d′) be an isomorphism of geometrical incidence spaces, and let P and Q be nonempty geometrical subspaces of S in the extended sense (i.e., we include the possibility that either consists of one point). If “⋆” is the join construction described in the notes, prove that f[P ⋆ Q] = f[P ] ⋆ f[Q]. [Hint: Recall that the join of two geometrical subspaces is the unique smallest geometrical subspace containing both of them.]

6. Let f : (S, Π, d) → (S′, Π′, d′) be an isomorphism of geometrical incidence spaces. (i) Prove that (S, Π, d) is regular if and only if (S′, Π′, d′) is.

(ii) Prove that (S, Π, d) is an affine n-space if and only if (S′, Π′, d′) is.

(iii) Let m > 2 be an integer. Prove that every line in (S, Π, d) contains exactly m points if and only if the same is true for (S′, Π′, d′). Explain why a similar conclusion holds if we substitute “infinitely many” for “exactly m” in the statement.