Vector spaces, duals and endomorphisms
A real vector space V is a set equipped with an additive operation which is commutative and associative, has a zero element 0 and has an additive inverse −v for any v ∈ V (so V is an abelian group under addition). Further there is an operation of multiplication of the reals on the vectors (r, v) → rv ∈ V, for each real r and each vector v ∈ V, called scalar multiplication, which obeys, for any reals r and s and any vectors v ∈ V and w ∈ V, the relations: 0v = 0, 1v = v, (−1)v = −v, r(sv) = (rs)v,
(r + s)v = rv + sv, r(v + w) = rv + rw. The trivial vector space, said to be of dimension zero, is a vector space consisting of only the zero vector. The basic family of non-trivial examples of vector spaces is the family of spaces Rn, n ∈ N. Here Rn consists of all n-tuples x = (x1, x2, . . . , xn), with each xi real. The operations of Rn, valid for any reals xi, yi, i = 1, 2, . . . , n and any real s, are:
x + y = (x1, x2, . . . , xn) + (y1, y2, . . . , yn) = (x1 + y1, x2 + y2, . . . , xn + yn),
sx = (sx1, sx2, . . . , sxn). The zero element is 0 = (0, 0, . . . , 0) (often this element is just written 0). The additive inverse of x is −x = (−x1, −x2, . . . , −xn).
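These componentwise operations can be sketched directly (a minimal illustration in Python, with tuples standing in for elements of Rn; the helper names add, smul and neg are ours, not from the text):

```python
# Componentwise operations on R^n, modelled as Python tuples.
def add(x, y):
    # x + y = (x1 + y1, ..., xn + yn)
    return tuple(xi + yi for xi, yi in zip(x, y))

def smul(s, x):
    # s x = (s x1, ..., s xn)
    return tuple(s * xi for xi in x)

def neg(x):
    # additive inverse -x = (-1) x
    return smul(-1, x)

x = (1.0, 2.0, 3.0)
y = (4.0, 5.0, 6.0)
zero = (0.0, 0.0, 0.0)

assert add(x, neg(x)) == zero                              # x + (-x) = 0
assert smul(2, add(x, y)) == add(smul(2, x), smul(2, y))   # r(v + w) = rv + rw
assert add(smul(2, x), smul(3, x)) == smul(5, x)           # (r + s)v = rv + sv
```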
Given a pair of vector spaces V and W a map f : V → W is said to be linear if f(rx + sy) = rf(x) + sf(y), for any reals r and s and any x ∈ V and y ∈ V. In particular we have f(0) = 0 and f(−v) = −f(v), for any v ∈ V.
Denote the space of linear maps from V to W by Hom(V, W). If f and g are in Hom(V, W), their sum map is defined by (f + g)(v) = f(v) + g(v), for any v ∈ V. Also we can multiply f ∈ Hom(V, W) by a real scalar r, giving the map rf, such that (rf)(v) = rf(v) for any v ∈ V. Then f + g and rf each lie in Hom(V, W) and these operations give Hom(V, W) the natural structure of a vector space. If f ∈ Hom(V, W) and g ∈ Hom(W, X), for vector spaces V, W and X, then the composition g ◦f : V → X is well-defined and linear, so lies in Hom(V, X). Also the composition map (f, g) → f ◦ g is linear in each argument.
A linear map f ∈ Hom(V, W) is said to be an epimorphism if f is surjective, a monomorphism if f is injective (equivalently, if and only if the equation f(v) = 0 has as its only solution the vector v = 0) and an isomorphism if f is bijective, in which case f has an inverse f^{-1} such that f ◦ f^{-1} = idW and f^{-1} ◦ f = idV are the identity maps (each of the latter is an isomorphism, each its own inverse). All trivial vector spaces are isomorphic. The space Hom(Rn, Rm) is isomorphic to the space Rmn. An element f ∈ Hom(Rn, Rm) is given by the formula, for any x ∈ Rn, f(x) = y, where y^i = Σ_{j=1}^n f^i_j x^j, i = 1, 2, . . . , m, for an m by n matrix f^i_j. Then f is surjective if and only if the matrix f^i_j has rank m, and is injective if and only if there are no solutions to the matrix equation f^i_j x^j = 0, except for the solution x^i = 0, i = 1, 2, . . . , n. Then f is an isomorphism if and only if m = n and the equation f(x) = 0 has as its only solution the vector x = 0, if and only if m = n and f has rank n.
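The rank criteria can be checked numerically. A sketch using numpy, with an arbitrary 2 by 3 example matrix f^i_j (so m = 2, n = 3):

```python
import numpy as np

# f : R^3 -> R^2 given by a 2-by-3 matrix f^i_j acting as y^i = sum_j f^i_j x^j.
f = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])
m, n = f.shape

rank = np.linalg.matrix_rank(f)
assert rank == m          # rank m: f is surjective (an epimorphism)
assert rank < n           # rank < n: the kernel is nonzero, so f is not injective

# A nonzero solution of f x = 0 witnesses the failure of injectivity:
x = np.array([-2.0, -3.0, 1.0])
assert np.allclose(f @ x, np.zeros(m))
```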
A linear map from V to itself is called an endomorphism. Then the space Hom(V, V) of all endomorphisms of V is an algebra, with associative multiplication (distributive over addition) given by composition.
The space Hom(R, V) is naturally isomorphic to V itself: simply map f in Hom(R, V) to f(1) ∈ V.
The space Hom(V, R) is called the dual vector space of V and is written V∗. If f ∈ Hom(V, W) and α ∈ Hom(W, R), then f∗(α) = α ◦ f is an element of Hom(V, R). As α ∈ W∗ varies, f∗(α) depends linearly on α, so f∗ gives a linear map from W∗ to V∗. The map f∗ ∈ Hom(W∗, V∗) is called the adjoint of the map f. Then the map f → f∗ is linear and (g ◦ f)∗ = f∗ ◦ g∗, for any f ∈ Hom(V, W) and g ∈ Hom(W, X) and any vector spaces V, W and X. The adjoint of an identity map is itself, so the adjoint of an isomorphism is an isomorphism.
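Identifying (Rn)∗ with Rn, the adjoint of a matrix map is given by the transposed matrix, and the rule (g ◦ f)∗ = f∗ ◦ g∗ becomes the familiar identity (GF)^T = F^T G^T. A small numpy sketch with arbitrary example matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 2))   # f : R^2 -> R^3
G = rng.standard_normal((4, 3))   # g : R^3 -> R^4

# Identifying (R^n)* with R^n, the adjoint f* acts by the transpose:
# f*(alpha)(v) = alpha(f(v)) = (F^T alpha) . v.
alpha = rng.standard_normal(3)    # a functional on R^3
v = rng.standard_normal(2)
assert np.isclose(alpha @ (F @ v), (F.T @ alpha) @ v)

# (g o f)* = f* o g*, i.e. (GF)^T = F^T G^T:
assert np.allclose((G @ F).T, F.T @ G.T)
```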
2 Bases and finite dimensionality
A vector space V is said to be finite dimensional if there is an isomorphism f : V → Rn, for some integer n. Any such isomorphism is called a basis for V. If v ∈ V and f is a basis, then f(v) ∈ Rn is called the co-ordinate vector of v in the basis f. The basis elements of Rn are defined to be the n-vectors {e_i, i = 1, 2, . . . , n}, such that the j-th entry of e_i is δ^j_i, the Kronecker delta, so is 1 if j = i and zero otherwise.
Given a basis f : V → Rn of a vector space V, the corresponding basis elements of V are the vectors {f_i = f^{-1}(e_i), i = 1, 2, . . . , n}. Then for each v ∈ V, we have f(v) = (v^1, v^2, . . . , v^n) if and only if v = Σ_{i=1}^n v^i f_i.
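For V = Rn a basis f can be encoded by an invertible matrix A, with f(v) = Av; the basis elements f_i = f^{-1}(e_i) are then the columns of A^{-1}. A numpy sketch (the matrix A is an arbitrary invertible example):

```python
import numpy as np

# A basis f : V -> R^2 for V = R^2, encoded by an invertible matrix A: f(v) = A v.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)
f1, f2 = A_inv[:, 0], A_inv[:, 1]   # basis elements f_i = f^{-1}(e_i)

v = np.array([3.0, -1.0])
coords = A @ v                      # co-ordinate vector f(v) = (v^1, v^2)

# v = v^1 f_1 + v^2 f_2:
assert np.allclose(coords[0] * f1 + coords[1] * f2, v)
```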
If f : V → Rn and g : V → Rm are bases for V, the map f ◦ g^{-1} : Rm → Rn is an isomorphism (with inverse g ◦ f^{-1}), so m = n. So the n ∈ N such that a basis f : V → Rn exists is unique. It is called the dimension of V.
If f : V → Rn is a basis, then the adjoint f∗ : (Rn)∗ → V∗ is an isomorphism, so (f∗)^{-1} : V∗ → (Rn)∗ is an isomorphism. Now (Rn)∗ = Hom(Rn, R), so is isomorphic to Rn itself. This isomorphism maps t ∈ (Rn)∗ to the element t = (t(e_1), t(e_2), . . . , t(e_n)) of Rn, where e_1, e_2, . . . , e_n are the standard basis elements for Rn. We call this isomorphism T. Then if f : V → Rn is a basis, the map f^T = T ◦ (f∗)^{-1} : V∗ → Rn is a basis for V∗, called the dual basis to that of f. Then (f^T)^T = f. In particular V and V∗ have the same dimension.
If v ∈ V, a vector space, then v gives an element v′ of (V∗)∗ = Hom(V∗, R) by the formula: v′(α) = α(v), for any α ∈ V∗. Then the map V → (V∗)∗, v → v′, is an injection and is an isomorphism if V is finite dimensional.
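The map v → v′ is easy to model directly (a minimal Python sketch; functionals on V = R^2 are represented by Python functions, and double_dual is our illustrative name, not from the text):

```python
# v' in (V*)* sends a functional alpha to the number alpha(v).
def double_dual(v):
    # v -> v', where v'(alpha) = alpha(v)
    return lambda alpha: alpha(v)

v = (3.0, 4.0)
alpha = lambda x: 2 * x[0] - x[1]   # an element of V* for V = R^2
vprime = double_dual(v)

assert vprime(alpha) == alpha(v) == 2.0
```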
Let V and W be vector spaces of dimensions n and m respectively. Let s : V → Rn and t : W → Rm be bases. Then if f ∈ Hom(V, W), define µ(f) = t ◦ f ◦ s^{-1}. Then µ(f) ∈ Hom(Rn, Rm), so is represented by a matrix. If {s_i, i = 1, 2, . . . , n} are the basis elements of V for the basis s and {t_j, j = 1, 2, . . . , m} are the basis elements of W for the basis t, then we have: f(s_i) = Σ_{j=1}^m f^j_i t_j, where f^j_i is the matrix of µ(f). If v ∈ V has s-co-ordinate vector v, then f(v) has t-co-ordinate vector w, where w^i = Σ_{j=1}^n f^i_j v^j.
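Encoding the bases s and t by invertible matrices S and T, and f by its standard matrix F, the matrix of µ(f) = t ◦ f ◦ s^{-1} is T F S^{-1}. A numpy sketch with arbitrary 2 by 2 example matrices:

```python
import numpy as np

# Bases s, t of V = W = R^2 encoded by invertible matrices: s(v) = S v, t(w) = T w.
S = np.array([[1.0, 1.0], [0.0, 1.0]])
T = np.array([[2.0, 0.0], [1.0, 1.0]])
F = np.array([[0.0, 1.0], [1.0, 0.0]])   # the standard matrix of f

M = T @ F @ np.linalg.inv(S)             # matrix f^i_j of mu(f) = t o f o s^{-1}

# If v has s-coordinates S v, then f(v) has t-coordinates M (S v):
v = np.array([3.0, 4.0])
assert np.allclose(M @ (S @ v), T @ (F @ v))
```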
3 Tensor algebras
Let V be a real vector space. The tensor algebra of V, denoted T(V), is the associative algebra with identity over the reals, spanned by all the monomials of length k, v_1 v_2 . . . v_k, for all non-negative integers k, where each v_i lies in V, subject to the relations of V:
• If av + bw + cx = 0, with v, w and x in V and a, b and c real numbers, then aαvβ + bαwβ + cαxβ = 0, for any tensors α and β.
If a tensor is a linear combination of monomials all of the same length k, the tensor is said to be of type k. The vector space of tensors of type k is denoted T^k(V). We allow k = 0, in which case the tensor is just a real number. The tensors of type one are naturally identified with the vector space V itself.
If µ : W → V is a homomorphism of vector spaces, then there is a unique algebra homomorphism T(µ) : T(W) → T(V), which reduces to µ when acting on W. Then T(µ) maps each monomial w_1 w_2 . . . w_k to µ(w_1)µ(w_2) . . . µ(w_k), for any w_1, w_2, . . . , w_k in W and any integer k. If λ : X → W is also a vector space homomorphism, then we have: T(µ ◦ λ) = T(µ) ◦ T(λ).
Using the standard basis elements {e_i, i = 1, 2, . . . , n} of Rn, the tensor algebra T(Rn) has a natural basis given by the monomials e_{j_1} e_{j_2} . . . e_{j_k}, where 1 ≤ j_r ≤ n, for r = 1, 2, . . . , k and any k ∈ N, together with the number 1, the natural basis for tensors of type zero. Then every tensor of type k in T(Rn) can be written uniquely:
T = T^{i_1 i_2 ... i_k} e_{i_1} . . . e_{i_k}. Here the Einstein summation convention is used: repeated indices are summed over.
If λ : V → Rn is a basis, so an isomorphism, then T(λ) gives an isomorphism of T(V) with T(Rn). In particular, every tensor T of type k in T(V) has a unique expression:
T = T^{i_1 i_2 ... i_k} f_{i_1} . . . f_{i_k}, where f_i = λ^{-1}(e_i), i = 1, 2, . . . , n. The vector space T^k(V) has dimension n^k, for each non-negative integer k.
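Concretely, a type-k tensor on Rn is an n × · · · × n array of components (n^k entries), and a monomial e_i e_j corresponds to an outer product. A numpy sketch for n = 2, k = 2:

```python
import numpy as np

n = 2
e = np.eye(n)                            # standard basis e_1, e_2 of R^2

# The monomial e_1 e_2 corresponds to the outer product, a type-2 tensor:
T = np.einsum('i,j->ij', e[0], e[1])
assert T.shape == (n, n) and T.size == n**2   # dimension n^k for k = 2

# A general type-2 tensor T = T^{ij} e_i e_j is recovered from its components:
Tij = np.array([[1.0, 2.0], [3.0, 4.0]])
rebuilt = sum(Tij[i, j] * np.einsum('a,b->ab', e[i], e[j])
              for i in range(n) for j in range(n))
assert np.allclose(rebuilt, Tij)
```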
Finally, it is sometimes necessary, for clarity, to use a notation for the tensor product operation: then TU is written T ⊗ U.
4 Covariant and contravariant tensors
Let V be a vector space of dimension n with dual space V∗. The full tensor algebra of V is the sub-algebra of the tensor algebra T(V ⊕ V∗) generated by monomials v_{i_1} v_{i_2} . . . v_{i_k} such that each v_i belongs either to V or to V∗. If a monomial is a product of p elements of V with q elements of V∗, then the tensor is said to be contravariant of type p, covariant of type q and of type (p, q). It is traditional to quotient out this tensor algebra by the relations:
TvαU = TαvU, for any tensors T and U, any v ∈ V and any α ∈ V∗.
So the relative ordering of elements of V vis-à-vis elements of V∗ in any tensor monomial is immaterial.
If λ : V → W is a homomorphism of vector spaces, then the adjoint of λ, denoted λ∗, is the map λ∗ : W∗ → V∗ given by the formula:
λ∗(β)(v) = β(λ(v)), for any β ∈ W∗ and any v ∈ V. If λ is an isomorphism, then so is λ∗ and then the sum λ ⊕ (λ∗)^{-1} is an isomorphism, so induces an isomorphism T(λ ⊕ (λ∗)^{-1}) of T(V ⊕ V∗) with T(W ⊕ W∗). In particular, if λ : V → Rn is a basis, so an isomorphism, every tensor T of T(V ⊕ V∗) of type (p, q) has a unique expression:
T = T^{i_1 i_2 ... i_p}_{j_1 j_2 ... j_q} f_{i_1} . . . f_{i_p} f^{j_1} . . . f^{j_q}.
Here the vectors f_i ∈ V, i = 1, 2, . . . , n are determined by the formula λ(f_i) = e_i, i = 1, 2, . . . , n and the dual vectors f^j ∈ V∗, j = 1, 2, . . . , n are determined by the relations f^j(v) = λ(v)^j, for any v ∈ V, or, equivalently, by the duality relations f^j(f_i) = δ^j_i. The quantities T^{i_1 i_2 ... i_p}_{j_1 j_2 ... j_q} are called the components of T. If U is another tensor, of type (r, s), with components U^{k_1 k_2 ... k_r}_{m_1 m_2 ... m_s}, then the tensor TU has components:
(TU)^{i_1 i_2 ... i_{p+r}}_{j_1 j_2 ... j_{q+s}} = T^{i_1 i_2 ... i_p}_{j_1 j_2 ... j_q} U^{i_{p+1} i_{p+2} ... i_{p+r}}_{j_{q+1} j_{q+2} ... j_{q+s}}.
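In components, the product TU is simply the outer product of the component arrays. A numpy sketch for two type-(1, 1) tensors on R^2 (the component arrays are arbitrary examples):

```python
import numpy as np

# T of type (1,1) with components T^i_j; U of type (1,1) with components U^k_m.
T = np.array([[1.0, 2.0], [3.0, 4.0]])
U = np.array([[0.0, 1.0], [5.0, 6.0]])

# (TU)^{i k}_{j m} = T^i_j U^k_m, a type-(2,2) tensor (upper indices first):
TU = np.einsum('ij,km->ikjm', T, U)
assert TU.shape == (2, 2, 2, 2)
assert np.isclose(TU[0, 1, 1, 0], T[0, 1] * U[1, 0])
```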
5 Tensor contraction and multilinear maps
If τ = v_1 v_2 . . . v_p α_1 α_2 . . . α_q is a tensor monomial in T(V ⊕ V∗) of type (p, q), with v_i, i = 1, 2, . . . , p in V and α_j, j = 1, 2, . . . , q in V∗, then the (k, l) contraction of τ, denoted C^k_l(τ), is the tensor:
C^k_l(τ) = α_l(v_k) v_1 v_2 . . . v_{k-1} v_{k+1} . . . v_p α_1 α_2 . . . α_{l-1} α_{l+1} . . . α_q.
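On components, the (k, l) contraction sums the k-th upper index against the l-th lower index, lowering the type from (p, q) to (p − 1, q − 1). For a type-(1, 1) tensor this is the trace. A numpy sketch:

```python
import numpy as np

# tau = v alpha, a type-(1,1) monomial with components tau^i_j = v^i alpha_j.
v = np.array([1.0, 2.0, 3.0])
alpha = np.array([4.0, 5.0, 6.0])
tau = np.einsum('i,j->ij', v, alpha)

# C^1_1(tau) = alpha(v), a scalar (type (0,0)); on components, the trace:
assert np.isclose(np.einsum('ii->', tau), alpha @ v)

# For a type-(2,1) tensor S^{i1 i2}_{j1}, the (2,1) contraction sums i2
# against j1, leaving a type-(1,0) tensor:
S = np.arange(27.0).reshape(3, 3, 3)   # axes ordered (i1, i2, j1)
C = np.einsum('iaa->i', S)
assert C.shape == (3,)
```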