Tensors and Tensor Products for Physicists

Nadir Jeevanjee

NOTE: This is a rough draft and I would appreciate feedback. Please send any and all questions, comments, criticisms, or suggestions to [email protected]. Last updated 1/10/2007.

Copyright © 2006 Nadir Jeevanjee

Contents

Preface
1 Vector Spaces
  1.1 Definition and Examples
  1.2 Span, Linear Independence and Bases
  1.3 Components
  1.4 Linear Operators
  1.5 Dual Spaces
  1.6 Non-Degenerate Hermitian Forms
  1.7 Non-Degenerate Hermitian Forms and Dual Spaces
  Chapter 1 Problems
2 Tensors
  2.1 Definition and Examples
  2.2 Change of Basis
  2.3 The Tensor Product - Definition and Properties
  2.4 Tensor Products of V and V∗
  2.5 Applications of the Tensor Product in Classical Physics
  2.6 Applications of the Tensor Product in Quantum Physics
  2.7 Symmetric and Antisymmetric Tensors
  Chapter 2 Problems
3 Groups, Lie Groups and Lie Algebras
  3.1 Groups - Definition and Examples
  3.2 Homomorphism and Isomorphism
  Chapter 3 Problems
Dirac Dictionary
Bibliography

Preface

This text is written at the beginning graduate level, and aims to demystify tensors and provide a unified framework for understanding them in all the different contexts in which they arise in physics. The word tensor is ubiquitous in physics (stress tensor, moment of inertia tensor, field tensor, metric tensor, tensor product, etc. etc.) and yet tensors are rarely defined carefully (if at all), and the definition usually has to do with transformation properties, making it difficult to get a feel for these objects. Furthermore, physics texts at the beginning graduate level usually only deal with tensors in their component form, so students wonder what the difference is between a second rank tensor and a matrix, and why new, enigmatic terminology is introduced for something they've already seen. The irony of this situation is that a proper understanding of tensors doesn't require much more mathematics than what most students encounter as undergraduates, and the clarity gained pays dividends far outweighing the modest investment. This text introduces just enough linear algebra to set the stage for tensors, with plenty of examples to keep the discussion grounded. After laying the necessary linear algebraic foundations, we give the modern (component-free) definition of tensors, followed by applications. Exercises and problems are included, and the exercises in particular should be done as they arise, or at least carefully considered, as they often fill out the text and provide good practice in using the definitions.

It should be said that this text aims to be simultaneously intuitive and rigorous. Thus, although much of the language (especially in the examples) is informal, almost all the definitions given are precise and are the same as one would find in a pure math text. This may put off the less mathematically inclined reader; I hope, however, that such a reader will work through his or her discomfort and develop the necessary mathematical sophistication, as the results will be well worth it.
As for prerequisites, it is assumed the reader has been through the usual undergraduate physics curriculum, including a "mathematical methods for physicists" course (with at least a cursory treatment of vectors and matrices), as well as the standard upper-division courses in classical mechanics, quantum mechanics, and relativity. Any undergraduate versed in those topics, as well as any graduate student in physics, should be able to read this text. To undergraduates who are eager to learn about tensors but haven't yet completed the standard curriculum, I apologize; many of the examples and practically all of the motivation for the text come from those courses, and to assume no knowledge of those topics would preclude discussion of the many applications that motivated me to write this text. Hopefully, such students will return to this text once they have completed their upper-division coursework, and find it useful then. Besides the aforementioned prerequisites I've also indulged in the use of some very basic mathematical shorthand for brevity's sake; a guide is below. Enjoy!

Some Mathematical Shorthand

    R        the set of real numbers
    C        the set of complex numbers
    Z        the set of integers
    ∈        "is an element of", "an element of"; e.g. 2 ∈ R reads "2 is an element of the real numbers"
    ∉        "is not an element of"
    ∀        "for all"
    ⊂        "is a subset of", "a subset of"
    ≡        denotes a definition
    A × B    the set {(a, b)} of all ordered pairs where a ∈ A and b ∈ B, referred to as the Cartesian product of the sets A and B; extends in the obvious way to n-fold products A1 × ... × An
    Rn       R × ... × R (n times)
    Cn       C × ... × C (n times)

Chapter 1
Vector Spaces

Since tensors are basically a special class of functions defined on vector spaces, we must have a good foundation in linear algebra before discussing them. In particular, one needs a little bit more linear algebra than is covered in most sophomore or junior level linear algebra/ODE courses. This chapter starts with the familiar material about vectors, bases, linear operators, etc., but eventually moves on to slightly more sophisticated topics that are essential for understanding tensors in physics. As we lay this groundwork, hopefully the reader will also find our slightly more abstract viewpoint useful in clarifying the nature of many of the objects in physics he/she has already encountered.

1.1 Definition and Examples

We begin with the definition of an abstract vector space. We're taught as undergraduates to think of vectors as arrows with a head and a tail, or as ordered triples of real numbers, but physics, and especially quantum mechanics, requires a more abstract notion of vectors. Before reading the definition of an abstract vector space, keep in mind that the definition is supposed to distill all the essential features of vectors as we know them (like addition and scalar multiplication) while detaching the notion of a vector space from specific constructs, like ordered n-tuples of real or complex numbers (denoted Rn and Cn respectively). The mathematical utility of this is that much of what we know about vector spaces depends only on the essential properties of addition and scalar multiplication, not on other properties particular to Rn or Cn. If we work in the abstract framework and then come across other mathematical objects that don't look like Rn or Cn but that are abstract vector spaces, then most everything we know about Rn and Cn will apply to these spaces as well.
Physics also forces us to use the abstract definition, since many quantum mechanical vector spaces are infinite-dimensional and cannot be viewed as Cn or Rn for any n. An added dividend of the abstract approach is that we will learn to think about vector spaces independently of any basis, which will prove very useful.

That said, an abstract vector space is a set V (whose elements are called vectors), together with a set of scalars C (for us, C is always R or C) and operations of addition and scalar multiplication that satisfy the following axioms:

1. v + w = w + v for all v, w in V (Commutativity)
2. v + (w + x) = (v + w) + x for all v, w, x in V (Associativity)
3. There exists a unique vector 0 in V such that v + 0 = v ∀ v ∈ V
4. ∀ v ∈ V there is a unique vector −v such that v + (−v) = 0
5. c(v + w) = cv + cw ∀ v, w ∈ V, c ∈ C (Distributivity)
6. 1v = v for all v in V
7. (c1 + c2)v = c1v + c2v for all scalars c1, c2 and vectors v
8. (c1c2)v = c1(c2v) for all scalars c1, c2 and vectors v

Some parts of the definition may seem tedious or trivial, but they are just meant to ensure that the addition and scalar multiplication operations behave the way we expect them to. In determining whether a set is a vector space or not, one is usually most concerned with defining addition in such a way that the set is closed under addition and that axioms 3 and 4 are satisfied; most of the other axioms are so natural and obviously satisfied that, in practice, one rarely bothers to check them. That said, let's look at some examples from physics, most of which will recur throughout the text.

Example 1.1 Rn

This is the most basic example of a vector space, and the one on which the abstract definition is modeled. Addition and scalar multiplication are defined in the usual way: for v = (v1, v2, ..., vn), w = (w1, w2, ..., wn) in Rn, we have

    (v1, v2, ..., vn) + (w1, w2, ..., wn) = (v1 + w1, v2 + w2, ..., vn + wn)    (1.1)

and

    c(v1, v2, ..., vn) = (cv1, cv2, ..., cvn)    (1.2)

and the reader should check that the axioms are satisfied. These spaces, of course, are basic in physics; R3 is the usual 3-dimensional space we live in, R4 is spacetime in special relativity, and Rn for higher n occurs in classical physics as configuration spaces for multiparticle systems (e.g. R6 is the configuration space in the classic two-body problem, as you need six coordinates to specify the positions of two particles in three-dimensional space).

Example 1.2 Cn

This is another basic example: addition and scalar multiplication are defined as for Rn, and the axioms are again straightforward to verify.
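As a concrete illustration of the kind of check Examples 1.1 and 1.2 leave to the reader, here is a minimal worked verification of axiom 5 in R2. This computation is an editorial addition, not part of the original text; the scalar and vectors are chosen arbitrarily.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Distributivity (axiom 5): c(v + w) = cv + cw, checked in R^2
% for the arbitrarily chosen values c = 3, v = (1, 2), w = (4, -1).
\begin{align*}
  c(v + w) &= 3\bigl((1,2) + (4,-1)\bigr) = 3\,(5,1) = (15,3),\\
  cv + cw  &= 3\,(1,2) + 3\,(4,-1) = (3,6) + (12,-3) = (15,3).
\end{align*}
\end{document}
```

Both sides agree, as axiom 5 demands; each of the remaining axioms reduces to similar componentwise arithmetic in R (or in C for Cn).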