
WHAT IS A TENSOR?

As we have seen in class, there are a lot of different ways to understand a tensor. The most fundamental way is to think of a tensor as a vector in a space constructed by taking the tensor product. Let V, W be vector spaces. The tensor product V ⊗ W is the set of elements

    Σ_{i=1}^n c_i v_i ⊗ w_i

such that the following hold:

(1) (v_1 + v_2) ⊗ w = v_1 ⊗ w + v_2 ⊗ w
(2) v ⊗ (w_1 + w_2) = v ⊗ w_1 + v ⊗ w_2
(3) (cv) ⊗ w = c(v ⊗ w) = v ⊗ (cw)

for all v_i ∈ V and w_i ∈ W. Given two vectors v, w, we can treat v ⊗ w as the Kronecker product:

    (v_1, v_2, v_3) ⊗ (w_1, w_2) = (v_1 w_1, v_1 w_2, v_2 w_1, v_2 w_2, v_3 w_1, v_3 w_2)

Exercise 1. Show that the vector η below is an element of R^2 ⊗ R^3 but that there do not exist vectors v ∈ R^2, w ∈ R^3 such that v ⊗ w = η.

    η = (0, 1, 1, 2, 0, 1)

Solution 1. Suppose the contrary. Because the first entry v_1 w_1 is zero and R is a field, either v_1 = 0 or w_1 = 0. The second entry (v_1 w_2 = 1) indicates that v_1 ≠ 0 and the fourth entry (v_2 w_1 = 2) indicates that w_1 ≠ 0, which is a contradiction.

Let {e_1, e_2, e_3} be a basis for V_1 and {u_1, u_2} be a basis for V_2. Then a basis for V_1 ⊗ V_2 is

    {e_1 ⊗ u_1, e_1 ⊗ u_2, e_2 ⊗ u_1, e_2 ⊗ u_2, e_3 ⊗ u_1, e_3 ⊗ u_2}.

Exercise 2. Consider the claim mentioned above. Let V_1 = R^3, V_2 = R^2 and let both e_i and u_i be standard basis vectors. Show that the basis

    {e_1 ⊗ u_1, e_1 ⊗ u_2, e_2 ⊗ u_1, e_2 ⊗ u_2, e_3 ⊗ u_1, e_3 ⊗ u_2}

is the standard basis for the space R^6.

Solution 2. Write out the Kronecker product for each one:

• e_1 ⊗ u_1 = (1, 0, 0, 0, 0, 0)
• e_1 ⊗ u_2 = (0, 1, 0, 0, 0, 0)
• e_2 ⊗ u_1 = (0, 0, 1, 0, 0, 0)
• e_2 ⊗ u_2 = (0, 0, 0, 1, 0, 0)
• e_3 ⊗ u_1 = (0, 0, 0, 0, 1, 0)
• e_3 ⊗ u_2 = (0, 0, 0, 0, 0, 1)

Remark 0.1. Given two vector spaces over the same field, k^m and k^n, their tensor product satisfies k^m ⊗ k^n ≅ k^{mn}. Similarly, k^m ⊗ k^n ≅ k^n ⊗ k^m. A lot of sloppiness in the treatment of tensors arises from this fact.

Remark 0.2. In many resources, the preferred definition of a tensor product is one based on the "universal property." A universal property is exactly what it sounds like: a property that must hold true regardless of the setting. With the rise of category theory in the 20th century, defining mathematical concepts by their universal properties has become the norm, especially in geometry. Such definitions avoid a choice of representation (e.g. a choice of basis) and, as a result, are often heralded as more elegant. While I agree with this view, I think such definitions can appear too mysterious to one just learning the subject. Despite the question of its helpfulness, I present the definition here:

The tensor product of vector spaces V and W is a vector space V ⊗ W equipped with a fixed surjective bilinear map B : V × W → V ⊗ W such that for any vector space S and bilinear map B̂ : V × W → S, there is a unique linear map T : V ⊗ W → S such that B̂ = T ∘ B. This is depicted as follows:

    V × W ──B──→ V ⊗ W
        \           │
         \ B̂        │ T
          ↘         ▼
              S

Exercise 3. Use the universal property to show that if the tensor product exists, it must be unique.

Solution 3. Let Y and Z both be tensor products of the vector spaces V, W, with bilinear maps B : V × W → Y and B̂ : V × W → Z. Then we can observe that the following two diagrams hold:

    V × W ──B──→ Y
        \          │
         \ B̂       │ T_1
          ↘        ▼
              Z

    V × W ──B̂──→ Z
        \          │
         \ B       │ T_2
          ↘        ▼
              Y

This means B̂ = T_1 ∘ B and B = T_2 ∘ B̂, and, hence, B̂ = T_1 ∘ T_2 ∘ B̂ and B = T_2 ∘ T_1 ∘ B. But the identity maps also satisfy B̂ = id ∘ B̂ and B = id ∘ B, so the uniqueness part of the universal property forces T_1 ∘ T_2 = id_Z and T_2 ∘ T_1 = id_Y. Thus, T_1 and T_2 are invertible maps and must be isomorphisms.
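The Kronecker-product picture above is easy to experiment with numerically. The following sketch, in Python with NumPy (my choice of tooling, not something the notes prescribe), checks the basis claim of Exercise 2 and re-derives Exercise 1: if η ∈ R^2 ⊗ R^3 were a pure tensor v ⊗ w, reshaping it into a 2 × 3 array would have to yield a matrix of rank 1.

```python
import numpy as np

# For coordinate vectors, v ⊗ w is exactly np.kron.
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0])
print(np.kron(v, w))  # [ 4.  5.  8. 10. 12. 15.] = (v1w1, v1w2, ..., v3w2)

# Exercise 2: e_i ⊗ u_j recovers the standard basis of R^6.
e, u = np.eye(3), np.eye(2)
basis = [np.kron(e[i], u[j]) for i in range(3) for j in range(2)]
print(np.allclose(np.array(basis), np.eye(6)))  # True

# Exercise 1: entry (i, j) of the reshaped array is v_i * w_j, so a
# pure tensor reshapes to a rank 1 matrix -- but η reshapes to rank 2.
eta = np.array([0.0, 1.0, 1.0, 2.0, 0.0, 1.0])
print(np.linalg.matrix_rank(eta.reshape(2, 3)))  # 2, so η is not pure
```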
0.1 Tensors as Maps.

Let us now pay close attention to the subscripts of the vectors. The tensor product written previously,

    (v_1, v_2, v_3) ⊗ (w_1, w_2) = (v_1 w_1, v_1 w_2, v_2 w_1, v_2 w_2, v_3 w_1, v_3 w_2),

could be reorganized as a matrix in the obvious way:

    [ v_1 w_1   v_1 w_2 ]
    [ v_2 w_1   v_2 w_2 ]
    [ v_3 w_1   v_3 w_2 ]

Notice that this is a linear map whose domain is R^2 and whose range is R^3. In this way, elements of R^3 ⊗ R^2 correspond to linear maps: R^3 ⊗ R^2 ≅ L(R^2, R^3).

Exercise 4. Explain why this matrix must be rank 1. Does that mean all of R^3 ⊗ R^2 corresponds only to rank 1 matrices?

Solution 4. Observe that the columns are multiples of each other. Elements of R^3 ⊗ R^2 are all finite sums of rank 1 tensors, which means matrices of higher rank are constructible; for example, e_1 ⊗ u_1 + e_2 ⊗ u_2 corresponds to a rank 2 matrix.

Remark 0.3. "Rank" is a dangerous word in algebra, especially when talking about tensors. Above, we are considering the matrix rank. In physics, "rank" is often used to denote the order of the tensor: a vector is "rank 1," a matrix is "rank 2," etc. So when you search for "tensor rank" on the internet, you'll likely find this definition. In algebraic geometry, the word "order" is used to denote this notion, and the tensor rank, or simply the rank, is the minimal number of (matrix) rank 1 tensors needed to construct a tensor. If a tensor M can be constructed from 3 and no fewer elements of the form v ⊗ w, then M has (tensor) rank 3. Unfortunately, the terminology gets even worse with the introduction of the generic rank, sometimes denoted "grank." We may discuss this later in the course.

There is a very nifty theorem from linear algebra called the Riesz Representation Theorem. It states that one can identify any vector x (from a finite dimensional space¹) with a map φ_x(·) : V → k via the inner product:

    φ_x(·) = ⟨x, ·⟩.

Moreover, every linear functional is of this form. In this sense, we can identify the linear operator represented by

    [ v_1 w_1   v_1 w_2 ]
    [ v_2 w_1   v_2 w_2 ]
    [ v_3 w_1   v_3 w_2 ]

with the map

    [ v_1 ]
    [ v_2 ] ⟨w, ·⟩.
    [ v_3 ]

Exercise 5. Let v = (2, 1, 3) and w = (−1, 10). Pick two vectors and demonstrate the above claim over R.

Solution 5. The inner product over R is the dot product. If v and w are as above, then the matrix is

    [ −2   20 ]
    [ −1   10 ]
    [ −3   30 ].

We pick our two vectors to be the standard basis vectors. Then

    [ −2   20 ]   [ 1 ]   [ −2 ]   [ 2 ]
    [ −1   10 ] · [ 0 ] = [ −1 ] = [ 1 ] (w · e_1)
    [ −3   30 ]           [ −3 ]   [ 3 ]

and

    [ −2   20 ]   [ 0 ]   [ 20 ]   [ 2 ]
    [ −1   10 ] · [ 1 ] = [ 10 ] = [ 1 ] (w · e_2).
    [ −3   30 ]           [ 30 ]   [ 3 ]

This observation is just a rephrasing of V_1 ⊗ V_2 ≅ L(V_2, V_1). In particular, it says that V_1 ⊗ V_2 ≅ V_1 ⊗ V_2^*.

Exercise 6. Prove that when both V_1 and V_2 are finite dimensional, V_1 ⊗ V_2 ≅ V_1 ⊗ V_2^*.

Solution 6. It is clear that V_1 ⊗ V_2 ≅ V_2 ⊗ V_1, since we can take each basis element e_j ⊗ u_i of V_1 ⊗ V_2 and map it to the basis element u_i ⊗ e_j of V_2 ⊗ V_1. By the above discussion, the Riesz map u_i ↦ φ_{u_i} sends the basis {u_i} of V_2 to the dual basis {φ_{u_i}} of V_2^*, so V_2 ≅ V_2^*. If we employ the same logic with the bases, sending each basis element e_j ⊗ u_i of V_1 ⊗ V_2 to the basis element e_j ⊗ φ_{u_i} of V_1 ⊗ V_2^*, we get V_1 ⊗ V_2 ≅ V_1 ⊗ V_2^*.

¹ This theorem applies in infinite dimensions as well; however, V ≅ V^* is guaranteed to hold when V is finite dimensional. Given the scope of this class, we restrict to this case.
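The identification R^3 ⊗ R^2 ≅ L(R^2, R^3) and the Riesz picture of Exercise 5 can also be checked numerically. Here is a short sketch in the same Python/NumPy setting as before (my illustration, not part of the notes): the matrix of v ⊗ w is the outer product of v and w, it has matrix rank 1, and it acts on any x as v ⟨w, x⟩.

```python
import numpy as np

# Exercise 5: the matrix of v ⊗ w is the outer product of v and w.
v = np.array([2.0, 1.0, 3.0])
w = np.array([-1.0, 10.0])
M = np.outer(v, w)               # [[-2, 20], [-1, 10], [-3, 30]]

# Its columns are multiples of each other (Exercise 4).
print(np.linalg.matrix_rank(M))  # 1

# Riesz picture: M x = v * <w, x> for every x in R^2, not just e_1, e_2.
for x in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([3.0, -7.0])):
    assert np.allclose(M @ x, v * np.dot(w, x))
print("M x = v <w, x> for all tested x")
```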
0.2 Composing Tensors.

Tensors can be seen as vectors, as multi-dimensional arrays (e.g. matrices), and as linear maps. Let us focus for a moment on how we can compose such maps. Let W be a vector space of dimension n, let {β_1, ..., β_n} be an orthonormal basis of W, and let {φ_{β_1}, ..., φ_{β_n}} be the corresponding basis of W^* such that

    φ_{β_i}(β_j) = 1 if i = j, and 0 if i ≠ j.

Exercise 7. Let W = R^3 and let w_i = e_i, the standard basis. Find φ_{w_i}. Show that the φ_{w_i} form a basis for W^*.

Solution 7. Let w = (w_1, w_2, w_3). Then φ_{e_1}(w) = e_1 · w = w_1. Similar results hold for e_2 and e_3. Now consider any linear functional f. By the Riesz representation theorem, there exists a vector x = a_1 e_1 + a_2 e_2 + a_3 e_3 ∈ W such that

    f(·) = ⟨x, ·⟩
         = ⟨a_1 e_1 + a_2 e_2 + a_3 e_3, ·⟩
         = a_1 ⟨e_1, ·⟩ + a_2 ⟨e_2, ·⟩ + a_3 ⟨e_3, ·⟩
         = a_1 φ_{e_1}(·) + a_2 φ_{e_2}(·) + a_3 φ_{e_3}(·).

Hence the φ_{e_i} span W^*, and since dim W^* = 3, they form a basis.

Based on the previous discussion, we can take any element in the space V ⊗ U ⊗ W and think of it as a map T : W → V ⊗ U,

    T(·) = Σ_{i=1}^n a_i (v_i ⊗ u_i) φ_{w_i}(·).

Similarly, we can think of another element of W ⊗ X ⊗ Y as a map S : X ⊗ Y → W,

    S(·) = Σ_{j=1}^n b_j w_j φ_{x_j ⊗ y_j}(·).

Hence, using the maps S and T, we can construct a map R = T ∘ S that can be identified as an element of V ⊗ U ⊗ X ⊗ Y:

    T ∘ S = (Σ_{i=1}^n a_i (v_i ⊗ u_i) φ_{w_i}(·)) ∘ (Σ_{j=1}^n b_j w_j φ_{x_j ⊗ y_j}(·))
          = Σ_{i=1}^n Σ_{j=1}^n a_i b_j (v_i ⊗ u_i) φ_{w_i}(w_j) φ_{x_j ⊗ y_j}(·)
          = Σ_{i=1}^n Σ_{j=1}^n c_{ij} (v_i ⊗ u_i) φ_{x_j ⊗ y_j}(·),

where c_{ij} = a_i b_j φ_{w_i}(w_j). (If the w_i in both expansions are the same orthonormal basis, then φ_{w_i}(w_j) = 0 unless i = j, and the double sum collapses to Σ_i a_i b_i (v_i ⊗ u_i) φ_{x_i ⊗ y_i}(·).)

Alternatively, we can write these two elements as matrices in (V ⊗ U) ⊗ W and W ⊗ (X ⊗ Y) by treating (V ⊗ U) and (X ⊗ Y) as their corresponding vector spaces.
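The composition R = T ∘ S above is nothing more than a contraction over the shared W index, which is easy to carry out on coordinate arrays. One last sketch in the same Python/NumPy vein; the dimensions and random entries here are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
dV, dU, dW, dX, dY = 2, 3, 4, 2, 2   # hypothetical dimensions

# An element of V ⊗ U ⊗ W viewed as a map T : W -> V ⊗ U,
# and an element of W ⊗ X ⊗ Y viewed as a map S : X ⊗ Y -> W.
T = rng.standard_normal((dV, dU, dW))
S = rng.standard_normal((dW, dX, dY))

# T ∘ S contracts the shared W index, leaving an element of V ⊗ U ⊗ X ⊗ Y.
R = np.einsum('vuw,wxy->vuxy', T, S)
print(R.shape)  # (2, 3, 2, 2)

# Matrix point of view from the last paragraph: flatten T into a matrix in
# (V ⊗ U) ⊗ W and S into one in W ⊗ (X ⊗ Y); the contraction over W is then
# an ordinary matrix product.
T_mat = T.reshape(dV * dU, dW)
S_mat = S.reshape(dW, dX * dY)
print(np.allclose(R.reshape(dV * dU, dX * dY), T_mat @ S_mat))  # True
```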