
SOME NECESSARY LINEAR ALGEBRA.

1. Bilinear forms

Let V be a vector space over a field k. A bilinear form h( , ) is a function V × V → k that is linear in both arguments. For any choice of a basis v1, . . . , vn in V we can write a matrix H = (hij) for the form h( , ) in this basis, by assigning hij = h(vi, vj). Then for two vectors u = ∑ aivi and w = ∑ bivi we have h(u, w) = h(∑ aivi, ∑ bjvj) = ∑ aibj h(vi, vj) = ∑ hij aibj.

If two bases {vi} and {vi′} are related by a linear transformation L, that is, vi′ = Lvi, and L has matrix A in the basis {vi}, then the matrices H and H′ for a bilinear form h in the bases {vi} and {vi′} respectively are related as follows: H′ = AᵀHA.

We call a bilinear form:
• degenerate if there is a u ≠ 0 s.t. h(u, v) = 0 for all v, or a v ≠ 0 s.t. h(u, v) = 0 for all u. We call a form non-degenerate otherwise.
• symmetric if h(u, v) = h(v, u).
• antisymmetric if h(u, v) = −h(v, u).
• a symplectic form if it is antisymmetric and non-degenerate.

Every bilinear form can be presented as a sum of a symmetric and an antisymmetric form: h(u, v) = hsym(u, v) + hasym(u, v). This presentation exists and is unique because then hsym(u, v) = (h(u, v) + h(v, u))/2 and hasym(u, v) = (h(u, v) − h(v, u))/2.

To every bilinear form h( , ) we can associate a quadratic form q : V → k by setting q(v) := h(v, v).

Lemma 1. The form q is identically zero on V iff h is antisymmetric.

Proof. Since h is bilinear, for all u, v we have h(u+v, u+v) = h(u, u) + h(u, v) + h(v, u) + h(v, v). If q is identically zero, then h(u + v, u + v) = h(u, u) = h(v, v) = 0, which implies h(u, v) + h(v, u) = 0, so h(u, v) = −h(v, u) for all u, v. Conversely, if h is antisymmetric, then q(v) = h(v, v) = −h(v, v), so 2q(v) = 0 and q is identically zero. □

If q(v) = h(v, v), you can recover the symmetric part hsym(u, v) from q via the following formula:
hsym(u, v) = (q(u + v) − q(u) − q(v))/2.

If the form h( , ) is either symmetric or antisymmetric, h(u, v) = 0 implies h(v, u) = 0, so we can say that u and v are orthogonal iff h(u, v) = h(v, u) = 0.

Theorem 1. Any symmetric bilinear form is diagonalizable, i.e. for any symmetric bilinear form h there is a basis in which the matrix H is diagonal.
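As a numerical sketch of the facts above (NumPy, with an arbitrary example form H and an arbitrary invertible change-of-basis matrix A, both chosen here purely for illustration), the rule H′ = AᵀHA, the symmetric/antisymmetric decomposition, and the polarization formula look like this:

```python
import numpy as np

# Matrix of an (arbitrary example) bilinear form h in the basis {v_i}
H = np.array([[1., 2., 0.],
              [4., 3., 1.],
              [0., 5., 2.]])
# An invertible change-of-basis matrix A (also just an example)
A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])

def h(u, w):
    # h(u, w) = sum_{i,j} h_ij a_i b_j for coordinate vectors u, w
    return u @ H @ w

# Matrix of the form in the new basis v_i' = L v_i
H_prime = A.T @ H @ A

# Unique decomposition into symmetric and antisymmetric parts
H_sym = (H + H.T) / 2
H_asym = (H - H.T) / 2

# Quadratic form and polarization: h_sym(u, w) = (q(u+w) - q(u) - q(w)) / 2
def q(v):
    return h(v, v)

u = np.array([1., 0., 2.])
w = np.array([0., 3., 1.])
```

Coordinate vectors a, b taken in the new basis correspond to the vectors A @ a and A @ b in the old one, which is where H′ = AᵀHA comes from.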

Proof. All we need is to find an orthogonal basis, i.e. a basis v1, . . . , vn with vi ⊥ vj for all i ≠ j. Let us do that by induction on the dimension of V. If dim V = 1, pick any vector as a basic vector. If dim V > 1, find a vector v such that h(v, v) ≠ 0. If there is no such vector, then by Lemma 1 the form h is antisymmetric, but since by assumption h is symmetric, it means that h is identically zero and we are done. Now

we can choose this vector v as the first basic vector v1. The space ⟨v1⟩⊥ of all vectors v such that h(v1, v) = 0 is the zero space of a non-trivial linear function, hence it has dimension n − 1. By induction, we can find an orthogonal basis v2, . . . , vn there. Then v1, v2, . . . , vn is an orthogonal basis of V. □

Corollary 1. If the field k is algebraically closed, for any non-degenerate symmetric form h there is a basis where H is the identity matrix.
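The inductive proof can be turned into an algorithm (a NumPy sketch over ℝ, assuming the pivot-fixing step below succeeds, as it does for generic symmetric input): repeatedly pick a basis vector with h(v, v) ≠ 0 and project the remaining basis vectors onto its h-orthogonal complement.

```python
import numpy as np

def congruence_diagonalize(H0, tol=1e-12):
    """Find B with B^T H0 B diagonal, for a symmetric matrix H0."""
    H = H0.astype(float).copy()
    n = H.shape[0]
    B = np.eye(n)                      # columns are the current basis vectors
    for i in range(n):
        # If h(v_i, v_i) = 0, try replacing v_i by v_i + v_j for some j
        # with h(v_i, v_j) != 0, which generically has nonzero square
        if abs(H[i, i]) < tol:
            for j in range(i + 1, n):
                if abs(H[i, j]) > tol:
                    B[:, i] += B[:, j]
                    H[i, :] += H[j, :]
                    H[:, i] += H[:, j]
                    break
        if abs(H[i, i]) < tol:
            continue                   # the form vanishes on this block
        # Project each v_j (j > i) onto the h-orthogonal complement of v_i
        for j in range(i + 1, n):
            c = H[i, j] / H[i, i]
            B[:, j] -= c * B[:, i]
            H[j, :] -= c * H[i, :]
            H[:, j] -= c * H[:, i]
    return B, H

H0 = np.array([[0., 1., 0.],
               [1., 0., 0.],
               [0., 0., 2.]])
B, D = congruence_diagonalize(H0)
```

Each elementary column operation on B is matched by the corresponding congruence on H, so D = BᵀH0B is maintained throughout.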

Proof. Let v1, . . . , vn be an orthogonal basis of V with respect to the form h. Denote h(vi, vi) by hi. Since the form is non-degenerate, hi ≠ 0 for all i. Let ξ1, . . . , ξn be numbers such that ξi² = hi (those exist since k is algebraically closed). Then v1/ξ1, . . . , vn/ξn is the required basis. □

If we choose a basis in V, thus establishing a system of coordinates on it, a quadratic form is a homogeneous quadratic polynomial in these coordinates.

Corollary 2. Any two hypersurfaces in Pⁿ given by non-degenerate quadratic forms are isomorphic (assuming the base field is algebraically closed). In particular, any two smooth quadratic curves in P² are isomorphic.

2. Tensor product and dual spaces

2.1. The definition of the tensor product. Let V, W be two vector spaces. Define the tensor product V ⊗ W as the quotient of the space freely generated by all expressions of the form v ⊗ w where v ∈ V, w ∈ W, by the subspace generated by all expressions of the form (av + bu) ⊗ w − a(v ⊗ w) − b(u ⊗ w) and v ⊗ (au + bw) − a(v ⊗ u) − b(v ⊗ w).
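Numerically, Corollary 1 is a rescaling over ℂ (an algebraically closed field): pick ξi with ξi² = hi and divide each basis vector by ξi. A sketch with arbitrarily chosen diagonal entries:

```python
import numpy as np

# Diagonal matrix of a non-degenerate symmetric form (example entries)
D = np.diag([2.0, -0.5, 3.0]).astype(complex)
xi = np.sqrt(np.diag(D))          # xi_i^2 = h_i; square roots exist in C
S = np.diag(1.0 / xi)             # change of basis v_i -> v_i / xi_i
I = S.T @ D @ S                   # matrix of the form in the rescaled basis
```

The negative entry −0.5 is why complex numbers are needed: over ℝ the best one can do is ±1 on the diagonal.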

Lemma 2. If {v1, . . . , vn} is a basis for V, and {w1, . . . , wm} is a basis for W, then {vi ⊗ wj} for all i, j is a basis for V ⊗ W. In particular, if V and W are finite dimensional, V ⊗ W also is, and dim V ⊗ W = dim V · dim W.

Proof. Any element of V ⊗ W is a (class of a) finite linear combination of elements of the form v ⊗ w, and every v ⊗ w can be presented as a linear combination of vi ⊗ wj: if v = ∑ aivi and w = ∑ bjwj, then v ⊗ w = ∑ aibj vi ⊗ wj by the relations that define the tensor product. Therefore, the elements vi ⊗ wj span V ⊗ W. On the other hand, every expression of the form (av + bu) ⊗ w − a(v ⊗ w) − b(u ⊗ w) or v ⊗ (au + bw) − a(v ⊗ u) − b(v ⊗ w) produces all zero coefficients when written as a linear combination of vi ⊗ wj as above, so no ∑ aij vi ⊗ wj with not all aij zero can be a linear combination of these expressions, so {vi ⊗ wj} are linearly independent. □

Tensoring by a given vector space U is a functor from the category of vector spaces to itself, meaning that not only can we produce a vector space U ⊗ V for every vector space V, but we can produce a map (denoted id ⊗ L) between U ⊗ V and U ⊗ W for every map L : V → W. This map takes every u ⊗ v to u ⊗ L(v).

The space V ⊗ W does not consist only of vectors of the form v ⊗ w, v ∈ V, w ∈ W, but it is spanned by such vectors, and we will be using this a lot in the subsequent proofs.

2.2. The definition of the dual space. Let V be a vector space. The dual vector space V∨ consists of all linear functions V → k. If v1, . . . , vn is a basis for V, then ψ1, . . . , ψn is a basis for V∨, where ψi is defined by ψi(vi) = 1, ψi(vj) = 0 for i ≠ j. Thus dim V∨ = dim V.
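In coordinates, the tensor product of two vectors is modeled by the Kronecker product of their coordinate vectors, which makes Lemma 2 easy to see numerically (the specific vectors below are arbitrary examples):

```python
import numpy as np

# dim V = 2, dim W = 3; the basis {v_i ⊗ w_j} of V ⊗ W has 2 * 3 = 6 elements
v1, v2 = np.eye(2)
w1, w2, w3 = np.eye(3)

v = 2 * v1 - v2
w = w1 + 4 * w3

# Coordinates of v ⊗ w in the basis {v_i ⊗ w_j} are the products a_i b_j
t = np.kron(v, w)
```

Bilinearity of ⊗ is exactly bilinearity of np.kron in its two arguments.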

For each linear transformation L : V → W we get a linear transformation L∨ : W∨ → V∨ (another common notation for this map is L*) by setting (L∨φ)(v) = φ(Lv), where φ ∈ W∨. Note that for the composition V −L→ W −M→ U the dual picture will be U∨ −M∨→ W∨ −L∨→ V∨, so (ML)∨ = L∨M∨. If we fix bases in V and W, and take their dual bases in V∨ and W∨, the matrix of L∨ will be the transpose of the matrix of L.

Note that V ⊗ W ≅ W ⊗ V (the isomorphism sends ∑ vi ⊗ wi to ∑ wi ⊗ vi), and (V ⊗ W)∨ ≅ V∨ ⊗ W∨ (for any linear functions φ on V and ψ on W, the function ∑ vi ⊗ wi ↦ ∑ φ(vi)ψ(wi) is linear on V ⊗ W).

2.3. Tensor products and Hom's. For every two vector spaces define the space Hom(V, W) as the vector space of all linear transformations V → W. We have the following canonical maps:
(1) Hom(U ⊗ V, W) → Hom(U, Hom(V, W)) given by φ ↦ [u ↦ [v ↦ φ(u ⊗ v)]], where φ : U ⊗ V → W, and
(2) Hom(U, V) ⊗ W → Hom(U, V ⊗ W) given by φ ⊗ w ↦ [u ↦ φ(u) ⊗ w] (extend to all of Hom(U, V) ⊗ W by linearity).
When U, V, and W are finite dimensional, these maps are isomorphisms.

If we fix a space U, the assignment V ↦ Hom(U, V) is functorial, meaning that every L : V → W induces a map Hom(U, V) → Hom(U, W) (it takes a map U → V and composes it with L, producing a map U → W). The assignment V ↦ Hom(V, U) is also functorial, but the functor is contravariant, meaning that for a linear map L : V → W we get a map Hom(W, U) → Hom(V, U) (again, it takes a map W → U and composes it with L, but this time the composition is in a different order). The maps above are natural transformations of functors (they commute with applying the corresponding functors to maps between arguments) when we view these spaces as functors of U, V, or W.

Lemma 3. We have k ⊗ V ≅ V, and Hom(k, V) ≅ V.

2.4. Viewing Hom(V, W) as W ⊗ V∨. There is a canonical map called the evaluation map Hom(U, V) ⊗ U → V given by ∑ Li ⊗ ui ↦ ∑ Li(ui). Since by definition V∨ = Hom(V, k), we have an evaluation map V∨ ⊗ V → k.
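In dual bases the dual map is the transpose matrix, and (ML)∨ = L∨M∨ becomes the familiar identity (ML)ᵀ = LᵀMᵀ. A quick NumPy check (all matrices and vectors below are arbitrary examples):

```python
import numpy as np

L = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])          # L : V -> W  (dim V = 2, dim W = 3)
M = np.array([[1., 0., 1.],
              [0., 2., 0.]])      # M : W -> U  (dim U = 2)

phi = np.array([1., 2.])          # a linear function on U, in the dual basis
v = np.array([2., 3.])            # a vector in V

# ((ML)∨ phi)(v) = phi(M L v): the dual map is precomposition with ML
lhs = ((M @ L).T @ phi) @ v
rhs = phi @ (M @ (L @ v))
```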
From now on, assume that all vector spaces are finite dimensional.

Lemma 4. We have Hom(V, W) ≅ W ⊗ V∨. The evaluation map Hom(V, W) ⊗ V → W coincides with the map W ⊗ V∨ ⊗ V −id⊗ev→ W ⊗ k ≅ W, where we denote the evaluation map V∨ ⊗ V → k by ev.

Proof. Hom(V, W) ⊗ V ≅ Hom(V, W ⊗ k) ⊗ V ≅ W ⊗ Hom(V, k) ⊗ V. Commutation with the evaluation maps can be checked directly on elements that have the form w ⊗ φ ⊗ v in the rightmost space, and extends to all of the space by linearity. □

Lemma 5. The composition map Hom(V, W) ⊗ Hom(U, V) → Hom(U, W) is the map id ⊗ ev ⊗ id : W ⊗ V∨ ⊗ V ⊗ U∨ → W ⊗ U∨.
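Lemmas 4 and 5 say that, in coordinates, evaluation and composition are contractions of a V∨ index against a V index; with matrices viewed as elements of W ⊗ V∨ this is exactly what np.einsum computes (the example matrices are arbitrary):

```python
import numpy as np

T = np.array([[1., 2.],
              [0., 3.]])          # an element of Hom(V, W) = W ⊗ V∨
S = np.array([[1., 1.],
              [2., 0.]])          # an element of Hom(U, V) = V ⊗ U∨
v = np.array([1., 4.])            # a vector in V

# Evaluation Hom(V, W) ⊗ V -> W: contract the V∨ index with V (id ⊗ ev)
ev = np.einsum('ij,j->i', T, v)

# Composition Hom(V, W) ⊗ Hom(U, V) -> Hom(U, W): id ⊗ ev ⊗ id
comp = np.einsum('ij,jk->ik', T, S)
```

Both contractions reduce to ordinary matrix-vector and matrix-matrix products, which is the content of the two lemmas.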

Proof. Similar to the previous lemma. □

3. Symmetric and wedge products

3.1. Symmetric and wedge products. Define the nth symmetric power SⁿV of V as the quotient of V⊗ⁿ by the relations v1 ⊗ ... ⊗ vn = vσ(1) ⊗ ... ⊗ vσ(n) for every permutation σ. The multiplication sign in SⁿV is usually omitted.

Define the nth exterior power ΛⁿV of V as the quotient of V⊗ⁿ by the relations v1 ⊗ ... ⊗ vn = sign(σ) vσ(1) ⊗ ... ⊗ vσ(n) for every permutation σ. The multiplication sign in ΛⁿV is usually denoted by ∧, so we write v1 ∧ ... ∧ vn.

There is a map SⁿV → V⊗ⁿ given by
(3) v1 . . . vn ↦ (1/n!) ∑σ vσ(1) ⊗ ... ⊗ vσ(n)
such that its composition with the canonical projection V⊗ⁿ → SⁿV is the identity. Similarly, there is a map ΛⁿV → V⊗ⁿ given by
(4) v1 ∧ ... ∧ vn ↦ (1/n!) ∑σ sign(σ) vσ(1) ⊗ ... ⊗ vσ(n)
such that its composition with the canonical projection V⊗ⁿ → ΛⁿV is the identity.

Lemma 6. These maps add up to an isomorphism S²V ⊕ Λ²V → V ⊗ V.

Caution: this is not the case for V⊗ⁿ with n > 2. Note also that there is no such thing as a symmetric or exterior product of two different vector spaces, for obvious reasons.

3.2. Bilinear forms. Just as linear transformations V → V can be viewed as the tensor product V ⊗ V∨, bilinear forms on V can be viewed as the tensor product V∨ ⊗ V∨. An element h = ∑ φi ⊗ ψi ∈ V∨ ⊗ V∨ produces a bilinear form by taking h(u, v) = ∑ φi(u)ψi(v) for u, v ∈ V. Symmetric forms correspond to S²V∨ ⊂ V∨ ⊗ V∨ and antisymmetric forms correspond to Λ²V∨ ⊂ V∨ ⊗ V∨.

3.3. Tensor, symmetric, and exterior algebras. Given a space V, we can produce the tensor algebra T•V = ⊕_{n≥0} V⊗ⁿ, the symmetric algebra S•V = ⊕_{n≥0} SⁿV, and the exterior algebra Λ•V = ⊕_{n≥0} ΛⁿV. In T•V, the product of u1 ⊗ ... ⊗ un ∈ V⊗ⁿ and v1 ⊗ ... ⊗ vm ∈ V⊗ᵐ is u1 ⊗ ... ⊗ un ⊗ v1 ⊗ ... ⊗ vm ∈ V⊗ⁿ⁺ᵐ (this multiplication extends to all of T•V by linearity). The multiplication on S•V and Λ•V is inherited from T•V.

If V = kⁿ, the coordinates x1, . . . , xn form a basis in V∨ that is the dual basis to the standard basis of kⁿ. Then there is an algebra isomorphism S•(V∨) ≅ k[x1, . . . , xn]. The space Sᵏ(V∨) is identified with homogeneous polynomials of degree k. In particular, S²(V∨) gets identified with homogeneous quadratic polynomials; this is the identification between symmetric bilinear forms and quadratic forms.
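The identification of S²(V∨) with homogeneous quadratic polynomials is concrete in coordinates: a symmetric matrix H gives the polynomial q(x) = xᵀHx, and the dimensions of the two summands of V ⊗ V add up as Lemma 6 predicts (a small check with an arbitrary example H):

```python
import numpy as np

n = 3
# dim S^2 V + dim Λ^2 V = n(n+1)/2 + n(n-1)/2 = n^2 = dim(V ⊗ V)
dim_sym, dim_wedge = n * (n + 1) // 2, n * (n - 1) // 2

# A symmetric H in S^2(V∨) read as the quadratic polynomial
# q = x1^2 + 4 x1 x2 + 5 x2^2 + 2 x2 x3 + 4 x3^2
H = np.array([[1., 2., 0.],
              [2., 5., 1.],
              [0., 1., 4.]])

def q(x):
    # the homogeneous quadratic polynomial in the coordinates x_1, ..., x_n
    return x @ H @ x

x = np.array([1., -1., 2.])
```

Off-diagonal entries hij contribute to the coefficient of xixj twice (as hij + hji), which is why the cross terms above carry even coefficients.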