
1 Vector spaces, duals and endomorphisms

A real vector space $V$ is a set equipped with an additive operation which is commutative and associative, has a zero element $0$ and has an additive inverse $-v$ for any $v \in V$ (so $V$ is an abelian group under addition). Further there is an operation of multiplication of the reals on the vectors, $(r, v) \to rv \in V$, for each real $r$ and each vector $v \in V$, called scalar multiplication, which obeys, for any reals $r$ and $s$ and any vectors $v \in V$ and $w \in V$, the relations:
\[ 0v = 0, \quad 1v = v, \quad (-1)v = -v, \quad r(sv) = (rs)v, \quad (r + s)v = rv + sv, \quad r(v + w) = rv + rw. \]
The trivial vector space, said to be of dimension zero, is a vector space consisting of only the zero vector.

The basic family of non-trivial examples of a vector space are the spaces $\mathbb{R}^n$, $n \in \mathbb{N}$. Here $\mathbb{R}^n$ consists of all $n$-tuples $x = (x^1, x^2, \dots, x^n)$, with each $x^i$ real. The operations of $\mathbb{R}^n$, valid for any reals $x^i, y^i$, $i = 1, 2, \dots, n$, and any real $s$, are:
\[ x + y = (x^1, x^2, \dots, x^n) + (y^1, y^2, \dots, y^n) = (x^1 + y^1, x^2 + y^2, \dots, x^n + y^n), \qquad sx = (sx^1, sx^2, \dots, sx^n). \]
The zero element is $0 = (0, 0, \dots, 0)$ (often this element is just written $0$). The additive inverse of $x$ is $-x = (-x^1, -x^2, \dots, -x^n)$.

Given a pair of vector spaces $V$ and $W$, a map $f : V \to W$ is said to be linear if $f(rx + sy) = rf(x) + sf(y)$, for any reals $r$ and $s$ and any $x \in V$ and $y \in V$. In particular we have $f(0) = 0$ and $f(-v) = -f(v)$, for any $v \in V$. Denote the space of linear maps from $V$ to $W$ by $\mathrm{Hom}(V, W)$. If $f$ and $g$ are in $\mathrm{Hom}(V, W)$, their sum map is defined by $(f + g)(v) = f(v) + g(v)$, for any $v \in V$. Also we can multiply $f \in \mathrm{Hom}(V, W)$ by a real scalar $r$, giving the map $rf$, such that $(rf)(v) = rf(v)$ for any $v \in V$. Then $f + g$ and $rf$ each lie in $\mathrm{Hom}(V, W)$ and these operations give $\mathrm{Hom}(V, W)$ the natural structure of a vector space.

If $f \in \mathrm{Hom}(V, W)$ and $g \in \mathrm{Hom}(W, X)$, for vector spaces $V$, $W$ and $X$, then the composition $g \circ f : V \to X$ is well-defined and linear, so lies in $\mathrm{Hom}(V, X)$. Also the composition map $(f, g) \to g \circ f$ is linear in each argument.

A linear map $f \in \mathrm{Hom}(V, W)$ is said to be an epimorphism if $f$ is surjective, a monomorphism if $f$ is injective (equivalently, if and only if the equation $f(v) = 0$ has as its only solution the vector $v = 0$) and an isomorphism if $f$ is bijective, in which case $f$ has an inverse $f^{-1}$ such that $f \circ f^{-1} = \mathrm{id}_W$ and $f^{-1} \circ f = \mathrm{id}_V$ are the identity maps (each of the latter is an isomorphism, each its own inverse). All trivial vector spaces are isomorphic.

The space $\mathrm{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ is isomorphic to the space $\mathbb{R}^{mn}$. An element $f \in \mathrm{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ is given by the formula, for any $x \in \mathbb{R}^n$, $f(x) = y$, where $y^i = \sum_{j=1}^n f^i_j x^j$, $i = 1, 2, \dots, m$, for an $m$ by $n$ matrix $f^i_j$. Then $f$ is surjective if and only if the matrix $f^i_j$ has rank $m$, and is injective if and only if the matrix equation $f^i_j x^j = 0$ has no solutions except the solution $x^i = 0$, $i = 1, 2, \dots, n$. Then $f$ is an isomorphism if and only if $m = n$ and the equation $f(x) = 0$ has as its only solution the vector $x = 0$, if and only if $m = n$ and $f$ has rank $n$.

A linear map from $V$ to itself is called an endomorphism. The space $\mathrm{Hom}(V, V)$ of all endomorphisms of $V$ is an algebra, with associative multiplication (distributive over addition) given by composition.

The space $\mathrm{Hom}(\mathbb{R}, V)$ is naturally isomorphic to $V$ itself: simply map $f$ in $\mathrm{Hom}(\mathbb{R}, V)$ to $f(1) \in V$.
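For instance (a small worked example; the particular matrix is an illustrative choice, not taken from the text), consider the map $f : \mathbb{R}^3 \to \mathbb{R}^2$ given by
\[ f(x^1, x^2, x^3) = (x^1 + 2x^3,\; x^2 - x^3), \qquad (f^i_j) = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & -1 \end{pmatrix}. \]
The matrix has rank $2 = m$, so $f$ is surjective; $f$ is not injective, since $f(-2, 1, 1) = (0, 0)$ gives a non-zero solution of $f(x) = 0$. Since $m \neq n$, $f$ cannot be an isomorphism.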
The space $\mathrm{Hom}(V, \mathbb{R})$ is called the dual vector space of $V$ and is written $V^*$. If $f \in \mathrm{Hom}(V, W)$ and $\alpha \in \mathrm{Hom}(W, \mathbb{R})$, then $f^*(\alpha) = \alpha \circ f$ is an element of $\mathrm{Hom}(V, \mathbb{R})$. As $\alpha \in W^*$ varies, $f^*(\alpha)$ depends linearly on $\alpha$, so $f^*$ gives a linear map from $W^*$ to $V^*$. The map $f^* \in \mathrm{Hom}(W^*, V^*)$ is called the adjoint of the map $f$. Then the map $f \to f^*$ is linear and $(g \circ f)^* = f^* \circ g^*$, for any $f \in \mathrm{Hom}(V, W)$ and $g \in \mathrm{Hom}(W, X)$ and any vector spaces $V$, $W$ and $X$. The adjoint of an identity map is itself, so the adjoint of an isomorphism is an isomorphism.

2 Bases and finite dimensionality

A vector space $V$ is said to be finite dimensional if there is an isomorphism $f : V \to \mathbb{R}^n$, for some integer $n$. Any such isomorphism is called a basis for $V$. If $v \in V$ and $f$ is a basis, then $f(v) \in \mathbb{R}^n$ is called the co-ordinate vector of $v$ in the basis $f$. The standard basis elements of $\mathbb{R}^n$ are defined to be the $n$ vectors $\{e_i,\ i = 1, 2, \dots, n\}$, such that the $j$-th entry of $e_i$ is $\delta_i^j$, the Kronecker delta, so is $1$ if $j = i$ and zero otherwise.

Given a basis $f : V \to \mathbb{R}^n$ of a vector space $V$, the corresponding basis elements of $V$ are the vectors $\{f_i = f^{-1}(e_i),\ i = 1, 2, \dots, n\}$. Then for each $v \in V$, we have $f(v) = (v^1, v^2, \dots, v^n)$ if and only if $v = \sum_{i=1}^n v^i f_i$.

If $f : V \to \mathbb{R}^n$ and $g : V \to \mathbb{R}^m$ are bases for $V$, the map $f \circ g^{-1} : \mathbb{R}^m \to \mathbb{R}^n$ is an isomorphism (with inverse $g \circ f^{-1}$), so $m = n$. So the $n \in \mathbb{N}$ such that a basis $f : V \to \mathbb{R}^n$ exists is unique. It is called the dimension of $V$.

If $f : V \to \mathbb{R}^n$ is a basis, then the adjoint $f^* : (\mathbb{R}^n)^* \to V^*$ is an isomorphism, so $(f^*)^{-1} : V^* \to (\mathbb{R}^n)^*$ is an isomorphism. Now $(\mathbb{R}^n)^* = \mathrm{Hom}(\mathbb{R}^n, \mathbb{R})$, so is isomorphic to $\mathbb{R}^n$ itself. This isomorphism maps $t \in (\mathbb{R}^n)^*$ to the element $(t(e_1), t(e_2), \dots, t(e_n))$ of $\mathbb{R}^n$, where $e_1, e_2, \dots, e_n$ are the standard basis elements for $\mathbb{R}^n$. We call this isomorphism $T$. Then if $f : V \to \mathbb{R}^n$ is a basis, the map $f^T = T \circ (f^*)^{-1} : V^* \to \mathbb{R}^n$ is a basis for $V^*$, called the dual basis to that of $f$. Then $(f^T)^T = f$, under the natural identification of $(V^*)^*$ with $V$ described below. In particular $V$ and $V^*$ have the same dimension.

If $v$ is an element of a vector space $V$, then $v$ gives an element $v'$ of $(V^*)^* = \mathrm{Hom}(V^*, \mathbb{R})$ by the formula:
\[ v'(\alpha) = \alpha(v), \quad \text{for any } \alpha \in V^*. \]
Then the map $V \to (V^*)^*$, $v \to v'$, is an injection and is an isomorphism if $V$ is finite dimensional.

Let $V$ and $W$ be vector spaces of dimensions $n$ and $m$ respectively. Let $s : V \to \mathbb{R}^n$ and $t : W \to \mathbb{R}^m$ be bases. Then if $f \in \mathrm{Hom}(V, W)$, define $\mu(f) = t \circ f \circ s^{-1}$. Then $\mu(f) \in \mathrm{Hom}(\mathbb{R}^n, \mathbb{R}^m)$, so is represented by a matrix. If $\{s_i,\ i = 1, 2, \dots, n\}$ are the basis elements of $V$ for the basis $s$ and $\{t_j,\ j = 1, 2, \dots, m\}$ are the basis elements of $W$ for the basis $t$, then we have $f(s_i) = \sum_{j=1}^m f^j_i t_j$, where $f^j_i$ is the matrix of $\mu(f)$. If $v \in V$ has $s$-co-ordinate vector with entries $v^j$, then $f(v)$ has $t$-co-ordinate vector with entries $w^i = \sum_{j=1}^n f^i_j v^j$.
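As a small worked example for the constructions of this section (the particular basis is an illustrative choice, not taken from the text), take $V = \mathbb{R}^2$ with the basis $s : V \to \mathbb{R}^2$ whose basis elements are $s_1 = (1, 1)$ and $s_2 = (1, -1)$, so that $s(s_1) = e_1$ and $s(s_2) = e_2$. A vector $v = (x^1, x^2)$ has $s$-co-ordinate vector $s(v) = \big(\tfrac{1}{2}(x^1 + x^2),\ \tfrac{1}{2}(x^1 - x^2)\big)$, since
\[ (x^1, x^2) = \tfrac{1}{2}(x^1 + x^2)\, s_1 + \tfrac{1}{2}(x^1 - x^2)\, s_2. \]
The basis elements of $V^*$ for the dual basis $s^T$ are the functionals $\alpha^1, \alpha^2$ sending $v$ to its first and second $s$-co-ordinates respectively, so
\[ \alpha^1(x^1, x^2) = \tfrac{1}{2}(x^1 + x^2), \qquad \alpha^2(x^1, x^2) = \tfrac{1}{2}(x^1 - x^2), \]
and they satisfy $\alpha^i(s_j) = \delta^i_j$.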
3 Tensor algebras

Let $V$ be a real vector space. The tensor algebra of $V$, denoted $\mathcal{T}(V)$, is the associative algebra with identity over the reals, spanned by all the monomials of length $k$, $v_1 v_2 \dots v_k$, for all integers $k$, where each $v_i$ lies in $V$, subject to the relations of $V$:

• If $av + bw + cx = 0$, with $v$, $w$ and $x$ in $V$ and $a$, $b$ and $c$ real numbers, then $a\,\alpha v \beta + b\,\alpha w \beta + c\,\alpha x \beta = 0$, for any tensors $\alpha$ and $\beta$.

If a tensor is a linear combination of monomials all of the same length $k$, the tensor is said to be of type $k$. The vector space of tensors of type $k$ is denoted $\mathcal{T}^k(V)$. We allow $k = 0$, in which case the tensor is just a real number. The tensors of type one are naturally identified with the vector space $V$ itself.

If $\mu : W \to V$ is a homomorphism of vector spaces, then there is a unique algebra homomorphism $\mathcal{T}(\mu) : \mathcal{T}(W) \to \mathcal{T}(V)$ which reduces to $\mu$ when acting on $W$. Then $\mathcal{T}(\mu)$ maps each monomial $w_1 w_2 \dots w_k$ to $\mu(w_1)\mu(w_2)\dots\mu(w_k)$, for any $w_1, w_2, \dots, w_k$ in $W$ and any integer $k$. If $\lambda : X \to W$ is also a vector space homomorphism, then we have $\mathcal{T}(\mu \circ \lambda) = \mathcal{T}(\mu) \circ \mathcal{T}(\lambda)$.

Using the standard basis elements $\{e_i,\ i = 1, 2, \dots, n\}$ of $\mathbb{R}^n$, the tensor algebra $\mathcal{T}(\mathbb{R}^n)$ has a natural basis given by the monomials $e_{j_1} e_{j_2} \dots e_{j_k}$, where $1 \le j_r \le n$ for $r = 1, 2, \dots, k$ and any $k \in \mathbb{N}$, together with the number $1$, the natural basis for tensors of type zero.
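For instance (a small worked example, not from the text), in $\mathcal{T}(\mathbb{R}^2)$ the type-two tensors $\mathcal{T}^2(\mathbb{R}^2)$ have the basis $e_1 e_1$, $e_1 e_2$, $e_2 e_1$, $e_2 e_2$, so $\dim \mathcal{T}^2(\mathbb{R}^2) = 4$; more generally $\dim \mathcal{T}^k(\mathbb{R}^n) = n^k$, since each index $j_r$ in the monomial $e_{j_1} e_{j_2} \dots e_{j_k}$ may be chosen independently. Note that $e_1 e_2$ and $e_2 e_1$ are distinct basis elements: the multiplication is not commutative. The defining relation lets monomials be expanded multilinearly; for example, writing $v = e_1 + e_2$, the relation $v - e_1 - e_2 = 0$ gives
\[ v\, e_1 = e_1 e_1 + e_2 e_1 \quad \text{in } \mathcal{T}^2(\mathbb{R}^2). \]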