
ALGEBRA - LECTURE V

1. Bilinear forms

Let R be a commutative ring with 1, and M and N two R-modules. A map T : M → N is a homomorphism of R-modules if
(i) T(v + u) = T(v) + T(u) for any two u, v in M.
(ii) T(rv) = rT(v) for any r in R and v in M.
Of course, if R is a field then this is simply the definition of a linear transformation. The set of all such homomorphisms is denoted by Hom_R(M, N). This set is an R-module itself, under the addition (T1 + T2)(v) = T1(v) + T2(v) and the scalar multiplication (rT)(v) = r(T(v)). For example, if M = R^m and N = R^n, then Hom_R(M, N) is simply the set of all n × m matrices with coefficients in R.
Let M, N and L be three R-modules. A bilinear form on M × N with values in L is a map B : M × N → L which is linear in each variable, that is,
(i) B(v + u, w) = B(v, w) + B(u, w) for any v, u in M and w in N.
(ii) B(v, u + w) = B(v, u) + B(v, w) for any v in M and u, w in N.
(iii) B(rv, u) = rB(v, u) = B(v, ru) for any v in M, u in N and r in R.

The set of all such bilinear forms on M × N is denoted by Bil_R(M × N, L). It is an R-module with respect to the obvious operations. Assume that M = R^m with a basis e1, . . . , em and N = R^n with a basis f1, . . . , fn. Then we can write any v in M as v = x1e1 + ··· + xmem and any w in N as w = y1f1 + ··· + ynfn. Thus, we can identify v with an m × 1 matrix x and w with an n × 1 matrix y. If B is a bilinear form, then the axioms imply that

B(v, w) = Σ_{i,j} xi yj B(ei, fj) = x^T A y,

where A is the m × n matrix with entries A_{ij} = B(ei, fj). Thus we can identify Bil_R(M × N, L) with M_{m×n}(L), the set of m × n matrices with coefficients in L. If M = N = R^n then A is a square matrix. In this case B is said to be symmetric if B(v, w) = B(w, v). This is equivalent to A = A^T.
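For readers who like to experiment, here is a minimal numpy sketch of this identification (the Gram matrix A and the vectors below are arbitrary made-up examples, with R = L = Z): the form B(v, w) = x^T A y is linear in each slot, and symmetry of B corresponds to A = A^T.

    import numpy as np

    # Gram matrix of a bilinear form on Z^3 x Z^3; its (i, j) entry is B(e_i, e_j).
    A = np.array([[2, 1, 0],
                  [1, 0, 3],
                  [0, 3, -1]])

    def B(x, y):
        # B(v, w) = x^T A y in coordinates
        return x @ A @ y

    x = np.array([1, -2, 0])
    y = np.array([3, 1, 1])
    z = np.array([0, 4, -1])
    r = 7

    assert B(x + z, y) == B(x, y) + B(z, y)                # linear in the first slot
    assert B(x, y + z) == B(x, y) + B(x, z)                # linear in the second slot
    assert B(r * x, y) == r * B(x, y) == B(x, r * y)       # compatibility with scalars
    assert np.array_equal(A, A.T) and B(x, y) == B(y, x)   # symmetric form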

2. Tensor products

In this section we shall define the tensor product M ⊗_R N of two R-modules M and N. This operation is closely related to bilinear forms on M × N. The tensor product of M and N is defined as follows. Let F be the free R-module generated by all pairs (v, w) where v is in M and w in N. (In other words, F is a free module with basis given by the elements of M × N.) Let I ⊆ F be the submodule of F generated by the elements
(i) (v + u, w) − (v, w) − (u, w) for any v, u in M and w in N,
(ii) (v, u + w) − (v, u) − (v, w) for any v in M and u, w in N,

(iii) (rv, u) − r(v, u) and (v, ru) − r(v, u) for any v in M, u in N and r in R.

Then M ⊗_R N is defined to be the quotient F/I. The image of a pair (v, w) in M ⊗_R N is denoted by v ⊗ w. These elements are called pure tensors and they generate the tensor product. Due to the relations contained in I we have relations between pure tensors such as (v + u) ⊗ w = v ⊗ w + u ⊗ w and (rv) ⊗ u = r(v ⊗ u) = v ⊗ (ru) in M ⊗_R N. It is important to notice that the identity element for addition in M ⊗_R N is equal to the pure tensor 0 ⊗ 0. In order to see this we need to show that the pair (0_M, 0_N) ∈ F is contained in the submodule I. (I am just for a moment using subscripts to differentiate among the 0's of different modules.) This is easy to see. Substituting 0_M = 0_R · 0_M and subtracting 0_F = 0_R · (0_M, 0_N) we get

(0_M, 0_N) = (0_R · 0_M, 0_N) − 0_R · (0_M, 0_N) ∈ I.

As the first example of a tensor product, we shall compute Z/nZ ⊗_Z Z/mZ assuming that m and n are relatively prime. Consider a pure tensor x ⊗ y. Since n is invertible modulo m, we can write y = nz for some z modulo m. Then x ⊗ y = x ⊗ nz = n(x ⊗ z) = nx ⊗ z = 0 ⊗ z = 0 ⊗ 0, where the last identity is derived using 0 = 0 · 0 for the first factor. We have shown that every pure tensor is trivial. It follows that

Z/nZ ⊗_Z Z/mZ = 0.

As this example shows, tensor products are a tricky matter. However, the following is clear. If M is generated by e1, . . . , em and N by f1, . . . , fn, as R-modules, then M ⊗_R N is generated by the pure tensors ei ⊗ fj. In the case of free modules this can be sharpened as follows.

Proposition 2.1. R^m ⊗_R R^n ≅ R^{mn}.

Proof. Let ei and fj be the standard basis vectors in R^m and R^n, respectively. Then R^m ⊗_R R^n is generated by the pure tensors ei ⊗ fj. In order to prove the proposition we have to show that the ei ⊗ fj are linearly independent. This is accomplished as follows. Let M_{m,n}(R) be the R-module of m × n matrices with coefficients in R. Consider the map S : F → M_{m,n}(R) defined on basis elements (u, v) in F by S(u, v) = uv^T. Here uv^T is the usual product of an m × 1 matrix u and a 1 × n matrix v^T. By the distributive property of matrix multiplication, the submodule I is in the kernel of S. Thus the map descends to a map from R^m ⊗_R R^n to M_{m,n}(R). Under this map the pure tensor ei ⊗ fj goes to the matrix with 1 at the (i, j) position and 0 elsewhere. This shows that the ei ⊗ fj are linearly independent. □

If R = k is a field, then the last proposition shows that the tensor product k^m ⊗_k k^n can be identified with the set of m × n matrices. Under this identification pure tensors correspond to matrices of rank at most one. Thus a general element of a tensor product cannot be written as a pure tensor; assuming that it can is a common mistake for people with little experience with tensor products.
We finish this section by showing a tautological relationship between tensor products and bilinear forms. More precisely, an R-homomorphism from M ⊗_R N = F/I to an R-module L can be composed with the natural projection from F onto F/I to obtain a homomorphism from F to L. Clearly, a homomorphism from F to L corresponds to a homomorphism from

F/I to L if and only if it is trivial on I. Since F is freely generated by the pairs (v, u), in order to define an element B in Hom_R(F, L) one picks, freely, a value B(v, u) in L for any pair (v, u) in M × N. Next, note that B is equal to 0 on the submodule I if and only if B is a bilinear form on M × N. Thus we have shown that

Hom_R(M ⊗_R N, L) ≅ Bil_R(M × N, L).

This relationship is, in essence, what we used to show that R^m ⊗_R R^n ≅ R^{mn}. Indeed, we picked L = M_{m,n}(R) and defined, using matrix multiplication, a bilinear map from R^m × R^n to M_{m,n}(R). In general, this is likely the only way to show that an element in M ⊗_R N is non-trivial. That is, one needs to find a module L and a bilinear form with values in L such that the corresponding homomorphism is non-zero when evaluated on the element in question.
As an example, we shall show that Q ⊗_Z Q ≅ Q. First, using the usual properties of tensor products, a pure tensor in Q ⊗_Z Q can be reduced to

a/b ⊗ c/d = ad/(bd) ⊗ c/d = ac/(bd) ⊗ 1.

Here a/b and c/d are usual fractions, and we used the property of tensor products to move the integers d and c across ⊗. Next, if r1, . . . , rn are rational numbers, then

Σ_{i=1}^n ri ⊗ 1 = (Σ_{i=1}^n ri) ⊗ 1.

This shows that any element in Q ⊗_Z Q is represented by a pure tensor of the form r ⊗ 1.
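The reduction is mechanical enough to be mirrored in a few lines of Python (a sketch only; the function names reduce_pure_tensor and reduce_sum are made up for this illustration): a finite sum of pure tensors in Q ⊗_Z Q collapses to r ⊗ 1, where r is the sum of the products of the entries.

    from fractions import Fraction

    def reduce_pure_tensor(x, y):
        # In Q tensor_Z Q one has x (x) y = (x*y) (x) 1, by moving integers across
        # the tensor sign exactly as in the computation a/b (x) c/d = ac/(bd) (x) 1.
        return x * y          # the rational number r with x (x) y = r (x) 1

    def reduce_sum(pure_tensors):
        # A sum of pure tensors sum x_i (x) y_i equals (sum x_i * y_i) (x) 1.
        return sum(reduce_pure_tensor(x, y) for x, y in pure_tensors)

    # Example: 1/2 (x) 2/3 + 3/5 (x) 1/7 is represented by r (x) 1 with
    element = [(Fraction(1, 2), Fraction(2, 3)), (Fraction(3, 5), Fraction(1, 7))]
    print(reduce_sum(element))   # 1/3 + 3/35 = 44/105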

Finally, we use a Z-bilinear form B : Q × Q → Q

defined by (r, s) ↦ r · s. Then B defines a homomorphism (of Z-modules) from Q ⊗_Z Q to Q where r ⊗ 1 maps to r. In particular, we have constructed a natural isomorphism between Q ⊗_Z Q and Q.

3. Quadratic forms

Assume that R = k is a field of characteristic ≠ 2 and V a vector space of dimension n. Let B be a symmetric bilinear form on V with values in k. Then the function Q(x) = B(x, x) is a quadratic form attached to B. For example, if V = k^n and B corresponds to the identity matrix, then Q((x1, . . . , xn)) = x1^2 + ··· + xn^2. The bilinear form B can be in turn recovered from the quadratic form Q by

B(v, w) = (1/4)[Q(v + w) − Q(v − w)].

(Note that dividing by 4 does not make sense for a general ring or for a field of characteristic 2.) Hence it makes sense to introduce the following axioms. A quadratic form is a map Q : V → k such that
(i) Q(λv) = λ^2 Q(v).

(ii) B(v, w) = (1/4)[Q(v + w) − Q(v − w)] defines a bilinear (necessarily symmetric) form on V.
A quadratic space is a pair (V, Q) consisting of a vector space V and a quadratic form Q on V. An isometry between two quadratic spaces (V, Q) and (V', Q') is a linear transformation ρ : V → V' such that Q(v) = Q'(ρ(v)) for any v in V. Of course, if ρ is invertible, then the two quadratic spaces are said to be isometric. The problem of classifying quadratic spaces is an interesting problem which has a long history.
Fix a quadratic space (V, Q). Let B be the corresponding symmetric bilinear form. The quadratic space V is regular (non-degenerate) if

{v ∈ V | B(v, u) = 0 for all u ∈ V} = 0.

I claim that if (V, Q) is regular then any isometry ρ : V → V' must be one-to-one. To see this, let v be in the kernel of ρ. Then ρ(v + u) = ρ(u) = −ρ(v − u) for any u in V and, since ρ is an isometry, Q(v + u) = Q(ρ(u)) = Q(v − u). It follows that

B(v, u) = (1/4)[Q(v + u) − Q(v − u)] = 0

for all u and therefore v = 0 by regularity of V. This proves the claim.
The form B defines a linear map T : V → V*, where V* is the dual space of V, by

T(v)(u) = B(v, u) for all u ∈ V.

Proposition 3.1. Let V be a quadratic space and T defined as above. The following three statements are equivalent:
(i) The quadratic space V is regular.
(ii) The map T : V → V* is an isomorphism.
(iii) If A is the matrix of B with respect to a basis e1, . . . , en then det(A) ≠ 0.

Proof. The first two statements are equivalent since the kernel of T consists of all v in V such that B(v, u) = 0 for all u in V. Furthermore, given the basis e1, . . . , en in V we can pick a dual basis e1*, . . . , en* (i.e. ei*(ej) = δ_{ij}). Then T is given by the matrix A. The equivalence of the last two statements follows. □

Assume that V is a regular quadratic space. For every subspace U ⊆ V define

U^⊥ = {v ∈ V | B(v, u) = 0 for all u ∈ U}.

Since U ⊆ V, the restriction of linear functionals from V to U gives a surjection P : V* → U*. Then U^⊥ is the kernel of the composite P ◦ T. Since T is an isomorphism, P ◦ T is surjective, so the dimension of the kernel of P ◦ T is the difference of the dimensions of V and U*. This gives

dim(U) + dim(U^⊥) = dim(V).

Proposition 3.2. Assume that V is a regular quadratic space. Then for every subspace U, we have (U^⊥)^⊥ = U.

Proof. Note that U ⊆ (U^⊥)^⊥. Thus, it suffices to show that their dimensions coincide. Since

dim(U) + dim(U^⊥) = dim(V)

and, likewise,

dim(U^⊥) + dim((U^⊥)^⊥) = dim(V),

it follows that dim(U) = dim((U^⊥)^⊥). The proposition follows. □
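For a concrete feel for U^⊥ and the dimension formula, here is a numpy sketch over k = R (the Gram matrix, the subspace and the helper null_space below are made-up illustrations, and exact arithmetic is replaced by floating point): U^⊥ is the null space of the matrix U^T A, whose rows are the functionals B(ui, ·) for a basis ui of U.

    import numpy as np

    def null_space(M, tol=1e-10):
        # Orthonormal basis (as columns) of the null space of M, via the SVD.
        u, s, vh = np.linalg.svd(M)
        rank = int((s > tol).sum())
        return vh[rank:].T

    def perp(A, U):
        # Columns of U span a subspace of V = R^n with Gram matrix A.
        # v lies in U^perp iff U^T A v = 0, so U^perp is the null space of U^T A.
        return null_space(U.T @ A)

    A = np.diag([1.0, 1.0, -1.0])            # Q(x) = x1^2 + x2^2 - x3^2, a regular form
    U = np.array([[1.0], [0.0], [1.0]])      # the (isotropic) line spanned by (1, 0, 1)

    Uperp = perp(A, U)
    print(U.shape[1] + Uperp.shape[1] == 3)  # True: dim(U) + dim(U^perp) = dim(V)
    W = perp(A, Uperp)                       # a basis of (U^perp)^perp
    print(np.linalg.matrix_rank(np.hstack([W, U])) == 1)  # True: (U^perp)^perp = U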

If (V1,Q1) and (V2,Q2) are two quadratic spaces, then one can define a direct sum (V1 ⊕ V2,Q) to be, as a vector space, a direct sum of the two spaces with the quadratic form

Q((v1, v2)) = Q1(v1) + Q2(v2).

Notice that V1 and V2 can be viewed as subspaces of V, and V is regular if and only if both V1 and V2 are regular. If that is the case then V1^⊥ = V2 and V2^⊥ = V1.

Proposition 3.3. Let V be a regular quadratic space and U a subspace of V. Then U is also a quadratic space with respect to the restriction of the bilinear form B to U, and U is regular if and only if V = U ⊕ U^⊥.

Proof. Assume that U is regular. Let v be in V. Then u ↦ B(v, u) defines a functional on U. Since U is regular, there exists w in U such that B(v, u) = B(w, u) for all u in U. This means that v − w is in U^⊥. Since v = w + (v − w), we have shown that any element of V can be written as a sum of elements in U and U^⊥. Since the dimensions of U and U^⊥ add up to the dimension of V, the space V is indeed a direct sum of U and U^⊥.
In the other direction, assume that V = U ⊕ U^⊥. If we pick a basis in U and a basis in U^⊥, the union of the two is a basis of V. The matrix A of the bilinear form is a block diagonal matrix, with two diagonal blocks corresponding to the two summands. Thus, if det(A) ≠ 0, the same holds for the blocks. It follows that U is regular, as claimed. □
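The first half of this proof is constructive: w is the projection of v onto U with respect to B, found by solving a small linear system. A numpy sketch (the form and the subspace below are made-up illustrations over R):

    import numpy as np

    # Gram matrix of a regular form on R^3 and a regular subspace U (a plane).
    A = np.diag([1.0, 1.0, -1.0])
    U = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])              # U = span{e1, e2}; the restriction is regular

    def decompose(v):
        # Find w in U with B(w, u) = B(v, u) for all u in U, i.e. solve the
        # small linear system (U^T A U) c = U^T A v and put w = U c.
        c = np.linalg.solve(U.T @ A @ U, U.T @ A @ v)
        w = U @ c
        return w, v - w                     # v = w + (v - w), second part in U^perp

    v = np.array([2.0, -1.0, 5.0])
    w, vperp = decompose(v)
    print(np.allclose(U.T @ A @ vperp, 0))  # True: v - w lies in U^perp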

Corollary 3.4. Let V be a regular quadratic space. Then there exists a basis e1, . . . , en of V such that B(ei, ej) = 0 for all i ≠ j. Such a basis is called normal.

Proof. The proof is by induction on the dimension of V. If the dimension is one, there is nothing to prove. Since V is non-degenerate, there exists a vector e1 in V such that Q(e1) ≠ 0. (If Q were identically 0 then, by the formula recovering B from Q, the form B would be 0 as well.) Let U be the line spanned by e1. Then V = U ⊕ U^⊥ and dim(U^⊥) = dim(V) − 1. By the induction assumption there exists a normal basis e2, . . . , en of U^⊥. The combined basis e1, . . . , en is a normal basis of V. □

Let V be a one-dimensional regular quadratic space. If we pick a basis vector e, then the quadratic form Q is given by Q(x) = ax^2 for some non-zero a in k. We shall denote the pair (k, ax^2) by ⟨a⟩. Thus the above corollary shows that

V ≅ ⟨a1⟩ ⊕ · · · ⊕ ⟨an⟩

for some non-zero elements a1, . . . , an in k. If we replace the vector e by a multiple b · e (with b ≠ 0), then the form ax^2 is replaced by (ab^2)x^2. Thus

⟨a⟩ ≅ ⟨ab^2⟩.
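The proof of Corollary 3.4 is effectively an algorithm: pick a vector of non-zero length, split off the line it spans, and recurse on the orthogonal complement. Here is a sketch in Python/sympy over k = Q (the function name normal_basis and the sample matrix are made up for this illustration); it returns a basis in which the matrix of B is diagonal, assuming the input form is regular.

    import sympy as sp

    def normal_basis(A):
        # A: symmetric matrix of a regular bilinear form B over Q (char 0, so 2 != 0).
        # Returns vectors b1, ..., bn with B(bi, bj) = 0 for i != j.
        n = A.shape[0]
        vectors = [sp.eye(n)[:, i] for i in range(n)]      # start from e1, ..., en
        basis = []
        while vectors:
            # Pick a generator of non-zero length; if every generator has length 0,
            # some v + w works, since then Q(v + w) = 2 B(v, w) != 0 for a suitable pair.
            v = next((v for v in vectors if (v.T * A * v)[0] != 0), None)
            if v is None:
                v = next(v + w for v in vectors for w in vectors
                         if (v.T * A * w)[0] != 0)
            basis.append(v)
            q = (v.T * A * v)[0]
            # Replace the remaining generators by their components orthogonal to v.
            vectors = [w - ((v.T * A * w)[0] / q) * v for w in vectors]
            vectors = [w for w in vectors if any(x != 0 for x in w)]
        return basis

    A = sp.Matrix([[0, 1], [1, 0]])           # the hyperbolic plane H
    P = sp.Matrix.hstack(*normal_basis(A))
    print(P.T * A * P)                        # diagonal: Matrix([[2, 0], [0, -1/2]])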

In particular, if k = R, the field of real numbers, any regular quadratic space of dimension n is isomorphic to

⟨1⟩ ⊕ · · · ⊕ ⟨1⟩ ⊕ ⟨−1⟩ ⊕ · · · ⊕ ⟨−1⟩,

where we have p and q summands of the two types (p + q = n); indeed, pick a normal basis and rescale each basis vector, since every non-zero real number is a square up to sign. The difference p − q is called the signature of the real quadratic space. Two regular real quadratic spaces of the same dimension are isomorphic if and only if they have the same signature. This is a consequence of Witt’s lemma.
As the second example, consider H, a quadratic space of dimension 2 with a basis e1, e2 such that the matrix of the bilinear form is

A = ( 0 1 )
    ( 1 0 ).

This quadratic space is called a hyperbolic plane. Let U be the one-dimensional space spanned by e1. Note that U^⊥ = U in this case. However, if we consider a different basis

f1 = e1 + e2 and f2 = e1 − e2, then the lines U1 and U2 spanned by f1 and f2 respectively are perpendicular to each other. Since Q(f1) = 2 and Q(f2) = −2, we have shown that H ≅ ⟨2⟩ ⊕ ⟨−2⟩.
A quadratic space (V, Q) is called isotropic if there exists a non-zero vector v in V such that Q(v) = 0. Otherwise, the space is called anisotropic. An interesting property of a regular isotropic space is that for every a in k there is a vector u in V such that Q(u) = a. This is seen as follows. Fix a non-zero v such that Q(v) = 0. Since V is regular, there exists w in V such that B(v, w) = 1. Consider the line w + tv through w in the direction of v. Then, for t ∈ k,

Q(w + tv) = Q(w) + 2B(w, v)t + Q(v)t^2 = Q(w) + 2t.

In words, t ↦ Q(w + tv) is a linear function. It clearly takes all possible values in k. (Here we definitely need that 2 ≠ 0.)
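A quick sanity check of this argument in the hyperbolic plane over Q, as a Python sketch (the target value is an arbitrary made-up choice): with v = e1 and w = e2 we have Q(v) = 0 and B(v, w) = 1, so Q(w + tv) = 2t hits any prescribed a at t = a/2.

    from fractions import Fraction

    def Q(x1, x2):
        # The hyperbolic plane: Q(x1 e1 + x2 e2) = 2 x1 x2, Gram matrix ((0, 1), (1, 0)).
        return 2 * x1 * x2

    a = Fraction(1979)             # an arbitrary target value in k = Q
    t = a / 2                      # solve Q(w + t v) = Q(w) + 2t = 2t = a
    # with v = e1 and w = e2 the vector w + t v has coordinates (t, 1):
    print(Q(t, Fraction(1)) == a)  # True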

Proposition 3.5. If V is a regular isotropic space then V contains a subspace isomorphic to the hyperbolic plane.

Proof. Let v be a non-zero vector such that Q(v) = 0. Since V is regular, there exists a vector w such that B(v, w) = 1. Next, consider u = w + tv. Then

Q(w + tv) = Q(w) + 2B(w, v)t + Q(v)t^2 = Q(w) + 2t.

In particular, Q(w + tv) = 0 for t = −Q(w)/2. It follows that v and u = w − (Q(w)/2)v span a hyperbolic plane (indeed Q(v) = Q(u) = 0 and B(v, u) = 1). □

Corollary 3.6. For any a in k^×, the quadratic space ⟨a⟩ ⊕ ⟨−a⟩ is isometric to a hyperbolic plane.

Proof. This is clear since ⟨a⟩ ⊕ ⟨−a⟩ is regular of dimension 2 and isotropic: the sum of the two basis vectors has Q equal to a − a = 0. □

4. Witt’s lemma

Let V be a regular quadratic space, and v a vector such that Q(v) ≠ 0. Then one can define a map σ_v : V → V (the reflection about v) by

σ_v(u) = u − 2 (B(u, v)/B(v, v)) v.

Notice that σ_v(v) = −v and σ_v(u) = u for every u perpendicular to v. Since V = k·v ⊕ (k·v)^⊥, it is clear that σ_v is an isometry (and why it is called a reflection).
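In coordinates, if A is the matrix of B, then σ_v acts by the matrix I − (2/(v^T A v)) v v^T A. Here is a small numpy sketch (the form and the vectors below are made-up illustrations over R) checking the properties used below: σ_v(v) = −v, σ_v fixes v^⊥, and σ_v preserves B.

    import numpy as np

    def reflection_matrix(A, v):
        # sigma_v(u) = u - 2 B(u, v)/B(v, v) v, as a matrix acting on column vectors.
        return np.eye(len(v)) - (2.0 / (v @ A @ v)) * np.outer(v, v) @ A

    A = np.array([[2.0, 0.0], [0.0, -3.0]])   # Q(x) = 2 x1^2 - 3 x2^2
    v = np.array([1.0, 1.0])                  # Q(v) = -1 != 0, so sigma_v is defined
    S = reflection_matrix(A, v)

    print(np.allclose(S @ v, -v))             # sigma_v(v) = -v
    u = np.array([3.0, 2.0])                  # B(u, v) = 6 - 6 = 0, so u lies in v^perp
    print(np.allclose(S @ u, u))              # sigma_v fixes v^perp
    print(np.allclose(S.T @ A @ S, A))        # sigma_v preserves the form B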

Now we are ready to prove the main result of this section.

Theorem 4.1 (Witt’s Lemma). Let V be a regular space and V1 ⊆ V a regular subspace. Let ρ1 be an isometry from V1 into V. Then ρ1 can be extended to an isometry ρ from V to V.

Proof. Let v be a vector in V1 such that Q(v) ≠ 0. Let u = ρ1(v). Then Q(u) = Q(v). The quadratic form satisfies a parallelogram equality

Q(v + u) + Q(v − u) = 2(Q(v) + Q(u)) = 4Q(v) ≠ 0.

Thus Q(v + u) ≠ 0 or Q(v − u) ≠ 0. If Q(v − u) ≠ 0 then the reflection σ_{v−u} is well defined and σ_{v−u}(v) = u. If Q(v + u) ≠ 0 then σ_{v+u}(v) = −u and (σ_{v+u} ◦ σ_v)(v) = u.

Now we can proceed by induction. If dim(V1) = 1 then we are done. Otherwise, replace ρ1 by ρ1' = σ_{v−u} ◦ ρ1 (or ρ1' = σ_v ◦ σ_{v+u} ◦ ρ1). Then ρ1' is an isometry from V1 into V such that ρ1'(v) = v. Now let V' and V1' be the orthogonal complements of v in V and V1, respectively. Note that the restriction of ρ1' to V1' gives an isometry from V1' into V'. By the induction assumption, applied to the subspace V1' of V', it extends to an isometry ρ' from V' to V'; extend ρ' further to all of V by setting ρ'(v) = v. Then ρ = σ_{v−u} ◦ ρ' (or σ_{v+u} ◦ σ_v ◦ ρ') is the required extension of ρ1. The theorem is proved. □
The proof of Witt’s lemma shows that any isometry of an n-dimensional regular space can be written as a product of not more than 2n reflections.
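The key step of the proof, moving v to u = ρ1(v) by at most two reflections, is easy to test numerically. A numpy sketch over R (the form and the vectors are made-up illustrations; the helpers reflection and move_v_to_u are hypothetical names):

    import numpy as np

    def reflection(A, w):
        # Matrix of sigma_w in the standard basis (requires w^T A w != 0).
        return np.eye(len(w)) - (2.0 / (w @ A @ w)) * np.outer(w, w) @ A

    def move_v_to_u(A, v, u):
        # Given Q(v) = Q(u) != 0, return an isometry M with M v = u,
        # following the case split in the proof of Witt's lemma.
        if not np.isclose((v - u) @ A @ (v - u), 0.0):
            return reflection(A, v - u)                   # sigma_{v-u} sends v to u
        return reflection(A, v + u) @ reflection(A, v)    # sigma_{v+u} o sigma_v sends v to u

    A = np.diag([1.0, 1.0, -1.0])
    v = np.array([2.0, 1.0, 2.0])        # Q(v) = 4 + 1 - 4 = 1
    u = np.array([0.0, 1.0, 0.0])        # Q(u) = 1 as well (and here Q(v - u) = 0)
    M = move_v_to_u(A, v, u)

    print(np.allclose(M @ v, u))         # True: v is mapped to u
    print(np.allclose(M.T @ A @ M, A))   # True: M is an isometry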

Corollary 4.2. Let U1, U2 and U be three regular spaces. If U1 ⊕ U is isometric to U2 ⊕ U then U1 and U2 are isometric.

Proof. Let α be an isometry from U1 ⊕ U to U2 ⊕ U. Restrict α to U, the second summand of U1 ⊕ U. By Witt’s lemma, this restriction extends to an isometry β from U2 ⊕ U to U2 ⊕ U. Put γ = α^{−1} ◦ β. Then γ is an isometry from U2 ⊕ U to U1 ⊕ U which is the identity on U. It follows that γ restricts to an isometry from U2 to U1, since these are the orthogonal complements of U in the two spaces. □

Now we can show that real quadratic spaces are classified by the signature. Indeed, if two spaces of different signature were isometric, for example ⟨1⟩ ⊕ ⟨1⟩ ⊕ ⟨−1⟩ ≅ ⟨1⟩ ⊕ ⟨−1⟩ ⊕ ⟨−1⟩, then by the corollary we could cancel out isomorphic summands to reduce to an isometry ⟨1⟩ ≅ ⟨−1⟩. This is clearly impossible since Q ≥ 0 on the former and Q ≤ 0 on the latter.

5. Witt Ring

We start with an abstract construction of a group from a semigroup with cancellation. Let S be a commutative semigroup with a binary operation + such that the following additional property holds: a + b = a + c ⇒ b = c. A typical example is the set of natural numbers N = {1, 2, 3, . . .}. In order to extend S to a group G we define an equivalence relation on S × S by (a, b) ∼ (c, d) if a + d = b + c. The relation is obviously reflexive and symmetric. As for transitivity, assume that (a, b) ∼ (c, d) and (c, d) ∼ (e, f). Then a + d = b + c and c + f = d + e. We can add these two equations to get a + d + c + f = b + c + d + e. Next, we can cancel out d + c to get a + f = b + e, which means that (a, b) ∼ (e, f), as desired.

Proposition 5.1. Assume that S is a semigroup with the cancellation property. Let G be the set of equivalence classes with respect to ∼. Then G is a commutative group with respect to the addition (a, b) + (c, d) = (a + c, b + d).

Proof. There are, of course, several things to prove. The first is to show that this is well defined, in the sense that it does not depend on the choices of class representatives. I will skip that. However, notice that the identity element is the class of (a, a). Indeed, (a, a) + (b, c) = (a + b, a + c). Since (b, c) ∼ (a + b, a + c), (b, c) and (a + b, a + c) belong to the same class. The inverse of the class of (m, n) is the class of (n, m). □

For example, if S = N then G ≅ Z, the group of integers, under the isomorphism given by (m, n) ↦ m − n. The second example is given by quadratic spaces over a field k. Let S(k) be the set of equivalence classes (under isometries) of regular quadratic spaces over k. The set S(k) is a semigroup with respect to taking direct sums of quadratic spaces. By Witt’s lemma, the cancellation property holds! In particular, we can build the group G(k) which, in this case, is called the Grothendieck group of quadratic spaces over k. Note that there is a natural map deg : G(k) → Z given by deg(V, U) = dim(V) − dim(U). Moreover, the notation (V, U) for elements in G(k) can be replaced by V − U and, in view of the existence of a normal basis, elements of G(k) can be thought of as linear combinations of one-dimensional quadratic spaces ⟨a⟩ with integer coefficients.
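The construction in Proposition 5.1 can be mimicked directly in code. Below is a small Python sketch for the semigroup S = N (the class name Diff is made up for this illustration): pairs (a, b) with (a, b) ∼ (c, d) iff a + d = b + c, added componentwise; the class of (a, a) is the identity and (n, m) is the inverse of (m, n).

    class Diff:
        """Class of a pair (a, b) in S x S, for the semigroup S = N = {1, 2, 3, ...}."""

        def __init__(self, a, b):
            self.a, self.b = a, b

        def __eq__(self, other):
            # (a, b) ~ (c, d) iff a + d = b + c
            return self.a + other.b == self.b + other.a

        def __add__(self, other):
            # (a, b) + (c, d) = (a + c, b + d)
            return Diff(self.a + other.a, self.b + other.b)

    x = Diff(5, 2)            # represents the integer 5 - 2 = 3
    y = Diff(1, 4)            # represents 1 - 4 = -3
    zero = Diff(1, 1)         # the class of (a, a) is the identity

    print(x + y == zero)      # True: y is the inverse of x
    print(x + zero == x)      # True: (a, a) acts as the identity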

But this is not all. The set N has two operations, addition and multiplication. It is a semigroup with respect to both operations. Moreover, the number 1 is the identity element for multiplication, and the two operations are related by the distributive property. Now on the set of equivalence classes of pairs (a, b) we can define a multiplication by

(a, b) · (c, d) = (ac + bd, bc + ad).

This formula is derived by multiplying integers: (a − b)(c − d) = ac + bd − (bc + ad). This multiplication is well defined, on the level of classes, due to the distributive property. (Check this!)
The set S(k) of equivalence classes of quadratic spaces also has a multiplication operation. It is given by the tensor product! If V and U are two regular quadratic spaces with symmetric bilinear forms B_V and B_U then the tensor product V ⊗ U is also a quadratic space for the bilinear form

B(v ⊗ u, v' ⊗ u') = B_V(v, v') · B_U(u, u').

This looks a bit awkward, but it is easy to understand using normal bases. First of all, for one-dimensional quadratic spaces we have

⟨a⟩ ⊗ ⟨b⟩ ≅ ⟨ab⟩.

Thus, if V ≅ ⟨a1⟩ ⊕ · · · ⊕ ⟨am⟩ and U ≅ ⟨b1⟩ ⊕ · · · ⊕ ⟨bn⟩ then V ⊗ U is a direct sum of the ⟨aibj⟩ for all i and j. By analogy with integers, we can now define multiplication in G(k) by

(U, V) · (W, Z) = (U ⊗ W ⊕ V ⊗ Z, U ⊗ Z ⊕ V ⊗ W).

Since (V ⊕ U) ⊗ W ≅ (V ⊗ W) ⊕ (U ⊗ W), the distributive property holds and the group G(k) is a ring. The identity element is given by V = k with the quadratic form x^2. It is clear that the map (U, V) ↦ (dim(U), dim(V)), from G(k) to Z = N × N/∼, preserves the multiplication. Thus, the degree map is a ring homomorphism.
We are now ready to introduce the Witt ring. Let H be the hyperbolic plane. Then H can be viewed as an element in G(k) (by taking the class of (H, 0), to be precise). Let I be the principal ideal of G(k) generated by H. Every element in I is equal to (V − U)H = V ⊗ H − U ⊗ H. We want to understand this better.

Lemma 5.2. Let V be a regular quadratic space of dimension m. Then V ⊗ H is isometric to a direct sum of m hyperbolic planes.

Proof. Assume, as we always can, that V ≅ ⟨a1⟩ ⊕ · · · ⊕ ⟨am⟩. Since H ≅ ⟨1⟩ ⊕ ⟨−1⟩, we have

V ⊗ H ≅ ⟨a1⟩ ⊕ ⟨−a1⟩ ⊕ · · · ⊕ ⟨am⟩ ⊕ ⟨−am⟩ ≅ m · H. □

The lemma implies that (V − U)H = V ⊗ H − U ⊗ H = (dim(V) − dim(U))H, a multiple of H. Thus the ideal I consists of integer multiples of H. The Witt ring W(k) is the quotient of G(k) by the ideal I. Notice that W(k) ≠ 0 since the degree function takes even values on I, so it descends to a surjective map deg : W(k) → Z/2Z.
The main difference between the Witt ring and the Grothendieck ring is that elements of the Witt ring can be represented by honest quadratic spaces. To see this, let V − U represent an element in G(k). Assume that U ≅ ⟨b1⟩ ⊕ · · · ⊕ ⟨bn⟩ and define U^− = ⟨−b1⟩ ⊕ · · · ⊕ ⟨−bn⟩. Since ⟨a⟩ ⊕ ⟨−a⟩ ≅ H, it follows that U ⊕ U^− ≅ nH. Therefore, in G(k), we have

V − U = (V + U^−) − (U + U^−) = (V + U^−) − nH ≡ V + U^− (mod I).

Let’s try to understand the Witt ring in the special case of the field of real numbers. In this case we have a natural map from G(R) to Z given by the signature: the class of V − U maps to the difference of the signatures of V and U. Since the signature of the hyperbolic plane is 0, the map descends to a map from W(R) to Z.

Proposition 5.3. The signature homomorphism is an isomorphism between W(R) and Z.

Proof. It suffices to show that every element in W(R) is an integer multiple of ⟨1⟩; since the signature of n · ⟨1⟩ is n, the homomorphism is then a bijection. First, we know that every regular quadratic space over R is a direct sum of one-dimensional spaces isomorphic to ⟨1⟩ or ⟨−1⟩. Since ⟨1⟩ ⊕ ⟨−1⟩ ≅ H, we have ⟨−1⟩ ≡ −⟨1⟩ (mod I) in G(R). It follows that every element in W(R) is an integral multiple of ⟨1⟩, as claimed. □
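Since every regular real quadratic space is a sum of ⟨1⟩'s and ⟨−1⟩'s, an element of W(R) can be modeled by the list of signs appearing on a normal basis, with each pair ⟨1⟩ ⊕ ⟨−1⟩ ≅ H discarded. Here is a short Python sketch of the direct sum, the tensor product, and the reduction modulo hyperbolic planes (the function names are made up for this illustration); the surviving invariant is exactly the signature.

    # A regular real quadratic space in diagonal form is just a list of non-zero
    # reals a_1, ..., a_n, standing for <a_1> + ... + <a_n>.

    def direct_sum(V, U):
        return V + U                                  # concatenate the diagonals

    def tensor(V, U):
        return [a * b for a in V for b in U]          # <a_i b_j> for all i, j

    def witt_class(V):
        # In W(R) each pair <a>, <b> with opposite signs is a hyperbolic plane,
        # hence zero; what survives is |p - q| copies of <1> or <-1>.
        p = sum(1 for a in V if a > 0)
        q = len(V) - p
        return p - q                                  # the signature

    V = [3.0, -2.0, -1.0]                             # <3> + <-2> + <-1>, signature -1
    H = [1.0, -1.0]                                   # the hyperbolic plane

    print(witt_class(direct_sum(V, H)))               # -1: adding H does not change the class
    print(witt_class(tensor(V, H)))                   # 0: V (x) H is a sum of hyperbolic planes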

Exercises

1) Show that Q ⊗_Z Q ≅ Q and Q ⊗_Z (Z/nZ) = 0.

2) The ring Z[√2] is, naturally, a module for its subring Z[2√2]. Find a 2-torsion element in

Z[√2] ⊗_{Z[2√2]} Z[√2].

Hint: The idea is to exploit the fact that the factor 2√2 (but not √2) can be moved from one tensor factor to the other. The example I found was a difference of two pure tensors. The hard part was then to show that the element is non-zero. To that end, I exploited a Z[2√2]-bilinear map from Z[√2] × Z[√2] to Z[√2]/Z[2√2] defined by

(a + b√2, c + d√2) ↦ ad√2.

This example answers the second part of a question on the last exam. That is, a tensor product of two torsion-free modules can have torsion. Answer: √2 ⊗ 1 − 1 ⊗ √2.

3) Let V = Q^3 be a 3-dimensional rational vector space with the quadratic form Q(x1, x2, x3) = 3x1^2 − 2x2^2 − x3^2. Find a vector v in V such that Q(v) = 1979. Hint: notice that Q(1, 1, 1) = 0.

4) Let H be a hyperbolic plane over a field of characteristic ≠ 2. Find an explicit isometry between H and ⟨1⟩ ⊕ ⟨−1⟩.

5) Let k be a field and V = k^3, a 3-dimensional vector space, with the quadratic form Q(x1, x2, x3) = x1^2 + x2^2 − x3^2. Compute the matrix of the reflection σ_v for v = (1, 1, 1).