
MATH 112: QUADRATIC AND BILINEAR FORMS

M. J. HOPKINS

November 24, 2015

1. Bilinear forms

1.1. Bilinear forms and matrices.

Definition 1.1. Suppose that F is a field and V is a vector space over F. A bilinear form on V is a map B : V × V → F having the property that for all w, x, y ∈ V and all λ ∈ F, the following hold:

  B(w, x + y) = B(w, x) + B(w, y)
  B(w, λx) = λB(w, x)
  B(w + x, y) = B(w, y) + B(x, y)
  B(λw, x) = λB(w, x).

Definition 1.2. Suppose that F is a field. A bilinear form over F is a pair B = (V, B) consisting of a finite-dimensional vector space V over F and a bilinear form B on V.

Example 1.3. Suppose that V = Fⁿ and for x = (x₁, …, xₙ), y = (y₁, …, yₙ) we define

  B(x, y) = x · y = x₁y₁ + ⋯ + xₙyₙ.

Then B is a bilinear form (this is the usual dot product).

Example 1.4. More generally, suppose we are given λ₁, …, λₙ ∈ F and we define

  B(x, y) = λ₁x₁y₁ + ⋯ + λₙxₙyₙ.

Then B is a bilinear form.

Example 1.5. Even more generally, suppose we are given an n × n matrix M = (λᵢⱼ). Then

(1.6)  B(x, y) = Σᵢⱼ λᵢⱼ xᵢ yⱼ

is a bilinear form.

Example 1.7. The most degenerate case of Example 1.5 is when the matrix M is zero. We will call this the zero form. There is one zero bilinear form of every dimension over F. We denote it by 0ₙ = (Fⁿ, 0).

We should probably check that (1.6) is actually bilinear. There's a relatively painless way to do this. Write x and y as column vectors. Then (1.6) can be rewritten as

  B(x, y) = xᵀ · M · y.

From this it is easy to check the conditions.

Now we will show that every bilinear form arises in this way from a matrix. Suppose that V is finite dimensional of dimension n, and that α = {v₁, …, vₙ} is an ordered basis of V. Define a matrix Bα by

  (Bα)ᵢⱼ = B(vᵢ, vⱼ).

By writing x ∈ V as x = x₁v₁ + ⋯ + xₙvₙ we can represent each x uniquely as a column vector

(1.8)  xα = (x₁, …, xₙ)ᵀ.

Then

(1.9)  B(x, y) = xαᵀ Bα yα.

We now have two structures in linear algebra that correspond to square matrices.
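As a quick sanity check on the matrix description above, here is a small computational sketch (plain Python; the helper name `bilinear` is ours, not the notes'). It evaluates (1.6) for a given matrix M and spot-checks bilinearity in the second argument:

```python
def bilinear(M, x, y):
    """Compute B(x, y) = sum_{i,j} M[i][j] * x[i] * y[j], as in (1.6)."""
    return sum(M[i][j] * x[i] * y[j]
               for i in range(len(x)) for j in range(len(y)))

# The dot product of Example 1.3 is the case M = identity.
I = [[1, 0], [0, 1]]
x, y = [3, 4], [1, 2]
assert bilinear(I, x, y) == 3*1 + 4*2  # = 11

# Spot-check bilinearity: B(x, y + z) = B(x, y) + B(x, z).
M = [[2, 1], [1, -3]]
z = [5, -1]
yz = [y[i] + z[i] for i in range(2)]
assert bilinear(M, x, yz) == bilinear(M, x, y) + bilinear(M, x, z)
```

Checking the other three axioms of Definition 1.1 is the same one-line computation with the roles of the arguments changed.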
For a linear transformation T : V → V, a choice of ordered basis α = {v₁, …, vₙ} of V allows us to identify V with Fⁿ and express T as a matrix [T]α. For another choice of ordered basis β = {w₁, …, wₙ} we get another matrix [T]β. The matrices [T]α and [T]β are related by

  [T]β = S⁻¹ [T]α S,

where S is the matrix constructed by solving

  wⱼ = Σᵢ sᵢⱼ vᵢ.

For a bilinear form B : V × V → F, a choice of ordered basis allows us to represent B by a matrix Bα (your book denotes this as α(B)). The bilinear form may then be computed as in (1.9). If α and β are two ordered bases, related by a matrix S as above, then

  Bβ = Sᵀ Bα S.

Two matrices M₁ and M₂ are similar if there is an invertible matrix S for which M₂ = S⁻¹M₁S. Thus linear transformations T : V → V correspond to matrices up to similarity. Two matrices M₁ and M₂ are congruent if there is an invertible matrix S for which M₂ = SᵀM₁S. Bilinear forms correspond to matrices up to congruence.

1.2. Symmetric bilinear forms.

Definition 1.10. A bilinear form B is symmetric if B(x, y) = B(y, x) for all x, y ∈ V.

Exercise 1.1. Show that B is symmetric if and only if for every ordered basis α, the matrix Bα is a symmetric matrix.

Definition 1.11. A bilinear form B is non-degenerate if for every 0 ≠ v ∈ V there exists w ∈ V such that B(v, w) ≠ 0.

Exercise 1.2. Show that a bilinear form on a finite-dimensional vector space V is non-degenerate if and only if for every ordered basis α = (v₁, …, vₙ) the matrix Bα is an invertible matrix.

Exercise 1.3. A bilinear form B on V gives a map B̃ : V → V* defined by

  B̃(x)(y) = B(x, y).

Show that B is non-degenerate if and only if B̃ is a monomorphism.

We will now restrict our attention to symmetric bilinear forms. When the characteristic of F is not equal to 2, it turns out that every symmetric bilinear form can be put into the diagonal form of Example 1.4. This fact is called the diagonalizability of quadratic forms (over fields of characteristic not equal to 2).
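The congruence rule Bβ = Sᵀ Bα S is easy to see in action. Here is a minimal sketch (the helper functions `mat_mul` and `transpose` are ours) with 2 × 2 integer matrices:

```python
def mat_mul(A, B):
    """Multiply matrices given as lists of rows."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

B_alpha = [[1, 2], [2, 5]]   # matrix of B in the ordered basis alpha
S = [[1, 1], [0, 1]]         # columns express the beta basis in terms of alpha
B_beta = mat_mul(transpose(S), mat_mul(B_alpha, S))

# B_beta = [[1, 3], [3, 10]]: congruence preserves symmetry,
# which an arbitrary similarity S^{-1} M S would not.
assert B_beta == transpose(B_beta)
```

This symmetry observation is exactly Exercise 1.1 in matrix form: if Bα is symmetric, so is every congruent matrix Sᵀ Bα S.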
Danny and I went through the proof in class, but it's worth thinking through some of the details.

Theorem 1.12. Suppose that B is a symmetric bilinear form on a finite-dimensional vector space V over a field F. If the characteristic of F is not equal to 2, then there is an ordered basis α = {v₁, …, vₙ} of V having the property that

(1.13)  B(vᵢ, vⱼ) = 0 if i ≠ j

(equivalently, the matrix Bα is diagonal).

Let's go through the proof. For the rest of this section we will assume that the characteristic of F is not 2.

Lemma 1.14. Suppose that B is a symmetric bilinear form on V. If B is non-zero then there is a vector v ∈ V for which

  B(v, v) ≠ 0.

Proof: If B is non-zero there are vectors x, y ∈ V for which B(x, y) ≠ 0. Using bilinearity and the fact that B is symmetric, we expand

  B(x + y, x + y) = B(x, x) + B(y, y) + 2B(x, y).

Since 2 ≠ 0 in F, the rightmost term 2B(x, y) is non-zero. It follows that the terms B(x, x), B(y, y), and B(x + y, x + y) cannot all be zero. We can choose v to be x, y, or x + y accordingly. □

It will also be useful to have some more terminology. Suppose that (V, B) is a symmetric bilinear form over F and W ⊂ V is a subspace. We can then define a symmetric bilinear form (W, BW) by setting

  BW(x, y) = B(x, y).

Definition 1.15. The restriction of B to W is the bilinear form (W, BW) constructed above.

Definition 1.16. Suppose that U ⊂ V is a subset of V. The B-orthogonal complement (or just orthogonal complement) of U is the set

  U⊥ = {v ∈ V | B(u, v) = 0 for all u ∈ U}.

Exercise 1.4. Show that U⊥ is always a subspace of V.

Proof of Theorem 1.12: We prove the result by induction on the dimension of V. The result is obvious when dim V = 1. Suppose then that dim V = n and we have proved the result for all symmetric bilinear forms on vector spaces of dimension less than n. If B(x, y) is zero for all x and y then any basis of V will satisfy (1.13). We may therefore suppose that B is non-zero. By Lemma 1.14 there is a v ∈ V for which B(v, v) ≠ 0. Let

  W = {v}⊥
    = {x ∈ V | B(v, x) = 0},

and let BW be the restriction of B to W, so that BW(x, y) = B(x, y). Note that W is the kernel of the linear transformation

  B(v, −) : V → F.

Since B(v, v) ≠ 0, this transformation is surjective, and so its kernel W has dimension n − 1. We may therefore employ the induction hypothesis and produce a basis {v₁, …, vₙ₋₁} of W satisfying

  BW(vᵢ, vⱼ) = B(vᵢ, vⱼ) = 0 if i ≠ j.

Now one easily checks that {v₁, …, vₙ₋₁, v} is a basis of V satisfying (1.13). □

Exercise 1.5. Prove the last two assertions in the above proof: that {v₁, …, vₙ₋₁, v} is indeed a basis of V and that it satisfies (1.13).

Exercise 1.6. With the notation of the proof of Theorem 1.12, show that if B is non-degenerate then so is BW.

Exercise 1.7. Suppose that the characteristic of F is not 2, and that B is a symmetric bilinear form on a vector space V of dimension n. Let {v₁, …, vₙ} be a basis of V satisfying (1.13), and let λᵢ = B(vᵢ, vᵢ). Is the set {λ₁, …, λₙ} determined by B? In other words, does another basis satisfying (1.13) lead to the same set of λᵢ's?

Exercise 1.8. Show that if F is the field of real numbers and B is non-degenerate, then one can find a basis {v₁, …, vₙ} of V for which B(vᵢ, vⱼ) = 0 if i ≠ j and for which B(vᵢ, vᵢ) = ±1 for i = 1, …, n.

Let p be the number of i for which B(vᵢ, vᵢ) = 1 and q the number of i for which B(vᵢ, vᵢ) = −1. The number p − q is called the signature of B and is independent of the choice of basis. The number n is called the rank of B. A symmetric bilinear form over R is thus determined by its rank and its signature.
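The induction in the proof of Theorem 1.12 is effectively an algorithm: find a vector v with B(v, v) ≠ 0 (via Lemma 1.14 when no current basis vector works), project the remaining vectors into {v}⊥, and recurse. The following is a sketch of that recursion over the rationals (the function names are ours, and it assumes a symmetric matrix of integers or Fractions); the signature can then be read off the diagonal values.

```python
from fractions import Fraction

def bilinear(M, x, y):
    return sum(M[i][j] * x[i] * y[j]
               for i in range(len(x)) for j in range(len(y)))

def diagonalize(M):
    """Return a basis in which the symmetric form M is diagonal.

    Mirrors the induction in Theorem 1.12 over Q (characteristic 0,
    so the char != 2 hypothesis holds).
    """
    n = len(M)
    basis = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    out = []
    while basis:
        # Find a non-isotropic v in the current span.
        v = next((b for b in basis if bilinear(M, b, b) != 0), None)
        if v is not None:
            basis.remove(v)
        else:
            # Lemma 1.14: all B(b, b) vanish, so if some B(x, y) != 0
            # then v = x + y has B(v, v) = 2 B(x, y) != 0.
            pair = next(((x, y) for x in basis for y in basis
                         if bilinear(M, x, y) != 0), None)
            if pair is None:          # form vanishes on the remaining span
                out.extend(basis)
                break
            x, y = pair
            v = [a + b for a, b in zip(x, y)]
            basis.remove(x)           # {v} with the rest still spans
        out.append(v)
        # Replace each remaining w by w - (B(v, w)/B(v, v)) v in {v}-perp.
        c = bilinear(M, v, v)
        basis = [[w[i] - bilinear(M, v, w) / c * v[i] for i in range(n)]
                 for w in basis]
    return out

M = [[0, 1], [1, 0]]                  # hyperbolic plane: e1, e2 both isotropic
vs = diagonalize(M)
diag = [bilinear(M, v, v) for v in vs]
assert all(bilinear(M, vs[i], vs[j]) == 0
           for i in range(len(vs)) for j in range(len(vs)) if i != j)
# One positive and one negative diagonal entry, so signature p - q = 0.
assert sum(1 for d in diag if d > 0) - sum(1 for d in diag if d < 0) == 0
```

Note that for M = [[0, 1], [1, 0]] the Lemma 1.14 branch is genuinely needed: both standard basis vectors are isotropic, yet the form is non-zero, and the algorithm takes v = e₁ + e₂.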