Geometry: Addition of Vectors, Dot Product, Orthogonal and Orthonormal Vectors

Geometry
CS148, Summer 2010
Siddhartha Chaudhuri

[Title image: Ödegaard and Wennergren, 2D projections of 4D "Julia sets"]

Addition of Vectors
● "Parallelogram Rule": draw u and v from a common origin; u + v is the diagonal of the parallelogram they span
● Subtraction is addition of the negation: u + (-v) ≡ u - v

Dot Product
● If θ is the angle between u and v, then u‧v = ║u║║v║cos θ
● ║u║cos θ is the length of the orthogonal projection of u onto v, so u‧v is this projected length times the length of v
● Note: u‧u = ║u║²

Orthogonal and Orthonormal Vectors
● Orthogonal vectors are at right angles to each other: u‧v = 0
● Orthonormal vectors are at right angles to each other and each has unit length
● A set {ui} of orthonormal vectors therefore has
  – ui‧uj = 1 if i = j
  – ui‧uj = 0 if i ≠ j

Cross Product (only 3D)
● If θ is the angle between u and v, then u × v = (║u║║v║sin θ) ŵ
● ŵ is a unit vector orthogonal to both u and v
● "Right-Hand Rule":
  – Curl the fingers of the right hand from u to v
  – The thumb points in the direction of ŵ
● Area of the parallelogram spanned by u and v = ║u × v║

Position Vector
● Identifies a point in space
● Interpreted as the tip of the vector "arrow" when the other end is placed at a fixed point called the origin of the space

Vectors: the "Physical" View
● Directions and positions are different!
● "Legal" operations:
  – Direction = Scalar * Direction
  – Direction = Direction + Direction
  – Position = Position + Direction
  – Direction = Position - Position
● "Illegal" operations:
  – Position = Scalar * Position
  – Position = Position + Position
  – Direction = Position + Direction
  – Position = Position - Position

Cartesian/Euclidean n-space ℝⁿ
● Vectors represented by real n-tuples of coordinates (u1, u2, ..., un)
  – 2D: (x, y); 3D: (x, y, z)
● Each coordinate is the extent along one of the orthogonal coordinate axes
  – Right-handed system: curl the right hand's fingers from the x axis to the y axis; the thumb points along the z axis
● Length/magnitude: ║u║ = (u1² + u2² + … + un²)^½
  – From Pythagoras' Theorem
● Addition of vectors: u + v ≡ (u1 + v1, u2 + v2, ..., un + vn)
● Multiplication of a vector by a scalar: su ≡ (su1, su2, ..., sun)
● Dot product: u‧v ≡ u1v1 + u2v2 + … + unvn
● Cross product (remember, 3D only!): u × v ≡ (u2v3 - u3v2, u3v1 - u1v3, u1v2 - u2v1)

Another Way to Remember the Cross Product in ℝ³
● Let u = (x1, y1, z1) and v = (x2, y2, z2)
● u × v ≡ (y1z2 - z1y2, z1x2 - x1z2, x1y2 - y1x2)
● If u and v are 2D instead, the last component x1y2 - y1x2 is the area of the parallelogram between them
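The ℝⁿ formulas above translate directly into code. Here is a minimal Python sketch (my own illustration; the course slides define no code) checking the two identities u‧v = ║u║║v║cos θ and ║u × v║ = area of the parallelogram:

```python
import math

def dot(u, v):
    """Dot product in R^n: u1*v1 + u2*v2 + ... + un*vn."""
    return sum(ui * vi for ui, vi in zip(u, v))

def length(u):
    """L2 norm, via the identity u.u = ||u||^2."""
    return math.sqrt(dot(u, u))

def cross(u, v):
    """Cross product (3D only!)."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
cos_theta = dot(u, v) / (length(u) * length(v))  # u.v = ||u|| ||v|| cos(theta)
print(math.degrees(math.acos(cos_theta)))        # ~45.0 degrees
print(length(cross(u, v)))                       # 1.0: area of the parallelogram
```

For u = (1, 0, 0) and v = (1, 1, 0) this prints an angle of 45 degrees and an area of 1, matching the geometric definitions.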
Vectors: the "Mathematical" View
● Vector space: a set of objects (vectors) closed under
  – Addition of two vectors
  – Multiplication of a vector by a scalar (from a field F)
● Necessary properties of vector spaces:
  – Commutative addition: x + y = y + x
  – Associative addition: (x + y) + z = x + (y + z)
  – Existence of an additive identity 0: x + 0 = 0 + x (can show: 0.x = 0; prove!)
  – Existence of an additive inverse -x: x + (-x) = 0 (can show: -x = -1.x; prove!)
  – Associative scalar multiplication: r(sx) = (rs)x
  – Distributive scalar sum: (r + s)x = rx + sx
  – Distributive scalar multiplication: r(x + y) = rx + ry
  – Scalar multiplication identity: 1.x = x
    ● Note: this says that the identity for multiplication of two scalars is also the identity for multiplication of a vector by a scalar

Basis and Dimension
● Linear combination of vectors {ui}: ∑ciui, where the {ci} are scalar coefficients
● {ui} is linearly independent if no ui can be expressed as a linear combination of the others
● Span of {ui}: the set of all linear combinations of the {ui}
● Basis of a vector space: a set of linearly independent vectors whose span is the entire space
● Dimension of a vector space: the cardinality of a basis
  – Note: all bases have the same cardinality

Basis
● Let a vector space have basis B = [û1, û2, ..., ûn]
● Any vector u can be written as ∑i=1...n ui ûi
  – Or, compactly, (u1, u2, ..., un): these are the coordinates of u in basis B
● Note:
  – We've generalized our original notion of coordinates: they're now relative to the selected basis (axes)
  – A point has different coordinates in different bases

Change of Basis (example in 3D)
● Let [u, v, n] be a basis B for a vector space
● Let the coordinates of u, v, n in another basis B0 be u = (u1, u2, u3), v = (v1, v2, v3), n = (n1, n2, n3)
● Let the coordinates of a in B be (a1, a2, a3), and in B0 be (a'1, a'2, a'3); then

  [a'1]   [u1 v1 n1] [a1]
  [a'2] = [u2 v2 n2] [a2]
  [a'3]   [u3 v3 n3] [a3]

● If B and B0 are orthonormal, the inverse of the matrix is its transpose (it is an orthogonal matrix), so

  [a1]   [u1 u2 u3] [a'1]
  [a2] = [v1 v2 v3] [a'2]
  [a3]   [n1 n2 n3] [a'3]

  (only if the bases are orthonormal)

Metric
● Assign a non-negative real number d(u, v) to every pair of vectors u, v
● Interpret it as the distance between u and v
● Necessary properties of a metric:
  – d(u, v) ≥ 0
  – d(u, v) = 0 if and only if u = v
  – d(u, v) = d(v, u)
  – d(u, v) ≤ d(u, w) + d(w, v) (Triangle Inequality)

ℝⁿ as Metric Space
● Magnitude: the L2 norm ║u║ = (u1² + u2² + … + un²)^½
● Metric: d(u, v) defined as ║u - v║
● Other common magnitude functions: the Lp norms
  ║u║ = (|u1|^p + |u2|^p + … + |un|^p)^(1/p)
  – L1 is the "Manhattan" norm
  – L∞ is the "max" norm

Inner Product
● Assign a scalar 〈u, v〉 ∈ F to each pair of vectors u, v
  – F is assumed to be the set of real or complex numbers
● Necessary properties of an inner product:
  – 〈u, v〉 equals the complex conjugate of 〈v, u〉 (on the slide, an overline denotes the conjugate)
  – 〈su, v〉 = s〈u, v〉
  – 〈u + v, w〉 = 〈u, w〉 + 〈v, w〉
● The dot product is a valid inner product for ℝⁿ
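The norm and metric definitions above are easy to verify numerically. A minimal Python sketch (my own; the function names are illustrative, not from the course):

```python
import math

def lp_norm(u, p):
    """Lp norm: (|u1|^p + ... + |un|^p)^(1/p)."""
    return sum(abs(ui) ** p for ui in u) ** (1.0 / p)

def max_norm(u):
    """L-infinity ("max") norm: the limit of lp_norm(u, p) as p grows."""
    return max(abs(ui) for ui in u)

def dist(u, v, p=2):
    """Metric induced by the Lp norm: d(u, v) = ||u - v||."""
    return lp_norm(tuple(ui - vi for ui, vi in zip(u, v)), p)

u = (3.0, -4.0)
print(lp_norm(u, 1))                 # 7.0: "Manhattan" norm
print(lp_norm(u, 2))                 # 5.0: ordinary Euclidean length
print(max_norm(u))                   # 4.0: "max" norm
print(dist((1.0, 1.0), (4.0, 5.0)))  # 5.0: L2 distance between two points
```

Note how the L1, L2 and max norms give 7, 5 and 4 for the same vector (3, -4); the induced metric d(u, v) = ║u - v║ then inherits the Triangle Inequality from the norm.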
Transformations of Vectors
(Thanks to Pat Hanrahan for this section)
● u' = T(u)
● Why? Modeling:
  – Create objects in convenient coordinates
  – Multiple instances of a prototype shape
  – Kinematics of linked structures
● And viewing:
  – Map between window and device coordinates
  – Virtual camera projections: parallel/perspective
● We'll stick to ℝ² for now

[Figure slides: Translation; Scaling (uniform and non-uniform); Rotation (counter-clockwise about the origin); Reflection; Shear]

Composing Transformations
● Example: R(45) and T(1, 1) composed both ways: "rotate, then translate" vs. "translate, then rotate"
● Order matters! T(1, 1) R(45) ≠ R(45) T(1, 1)
● Construct a transformation by composing a sequence of basic transforms
● Remember: transforms apply right-to-left in our notation!

World Space and Object Space
● A transformation maps from one space to the other: Object (Local) Space ↔ World (Global) Space
● Example (an object rotated, then placed in the world):
  – Object → World: rotate by θ (ccw), then translate by t
  – World → Object: translate by -t, then rotate by -θ
● Make sure you understand this!

Translation
  x' = x + tx
  y' = y + ty

Scaling (and Reflection)
  x' = sx x
  y' = sy y
  (Negative scaling coefficients give reflection)

CCW Rotation by θ About the Origin
  x' = x cos θ - y sin θ
  y' = x sin θ + y cos θ

Horizontal Shear
  x' = x + sy
  y' = y

Vertical Shear
  x' = x
  y' = sx + y

Types of 2D Transformations
● Linear transforms: T(u + v) = T(u) + T(v)
  – Scaling, rotation, shear, reflection
● Affine transforms: T(u) = L(u) + a, where L is linear and a is a fixed vector
  – Translation
● Other, e.g. perspective projection
● How do we represent all of these in a common format?

Homogeneous Coordinates (2D)
● Point (x, y) → (x, y, 1)
● Direction (x, y) → (x, y, 0)
● For any nonzero scalar c, (cx, cy, ch) ≡ (x, y, h)
● To convert back:
  – If h is 0: (x, y, 0) → the direction (x, y)
  – If h is non-zero: (x, y, h) → the point (x / h, y / h)
● Note:
  – This is not a 3D vector space, just a new representation for 2D
  – Legal/illegal operations for directions and positions are automatically distinguished!

The same transformations, written as 3×3 matrices on homogeneous coordinates:

Translation (x' = x + tx, y' = y + ty)
  [x']   [1 0 tx] [x]
  [y'] = [0 1 ty] [y]
  [1 ]   [0 0 1 ] [1]

Scaling and Reflection (x' = sx x, y' = sy y)
  [x']   [sx 0  0] [x]
  [y'] = [0  sy 0] [y]
  [1 ]   [0  0  1] [1]

CCW Rotation by θ About the Origin (x' = x cos θ - y sin θ, y' = x sin θ + y cos θ)
  [x']   [cos θ  -sin θ  0] [x]
  [y'] = [sin θ   cos θ  0] [y]
  [1 ]   [0       0      1] [1]

Horizontal Shear (x' = x + sy, y' = y)
  [x']   [1 s 0] [x]
  [y'] = [0 1 0] [y]
  [1 ]   [0 0 1] [1]

Vertical Shear (x' = x, y' = sx + y)
  [x']   [1 0 0] [x]
  [y'] = [s 1 0] [y]
  [1 ]   [0 0 1] [1]

What About 3D?
● Very similar: (x, y, z) → (x, y, z, h)
● Look up the formulæ!
● Rotation is a mess. A common method:
  – Map the axis of rotation to a coordinate axis (similar to a change of basis)
  – Rotate around that coordinate axis
  – Map back
● Other approaches are based on Euler angles and quaternions

Why Use Matrices?
● Compute the matrix once:
  x' = x cos θ - y sin θ
  y' = x sin θ + y cos θ
● Don't repeatedly evaluate sines and cosines

Hierarchical Modeling
● Graphics systems maintain a current transformation matrix (CTM)
● All geometry is transformed by the CTM
● The CTM defines the object space in which geometry is specified
● Transformation commands are concatenated onto the CTM (see the sketch below).
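To make the CTM idea concrete, here is a minimal Python sketch (my own illustration, not course code). It builds homogeneous 3×3 matrices for T and R, concatenates them once into a single matrix, and applies that matrix to both a point (h = 1) and a direction (h = 0); the direction ignores the translation automatically, as promised on the homogeneous-coordinates slide.

```python
import math

def mat_mul(A, B):
    """3x3 matrix product; composing transforms concatenates their matrices."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def apply(M, p):
    """Apply a homogeneous matrix to (x, y, h)."""
    return tuple(sum(M[i][k] * p[k] for k in range(3)) for i in range(3))

def T(tx, ty):
    """Translation matrix."""
    return ((1, 0, tx), (0, 1, ty), (0, 0, 1))

def R(deg):
    """CCW rotation about the origin."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return ((c, -s, 0), (s, c, 0), (0, 0, 1))

# Build the "CTM" once: right-to-left, T(1,1) R(45) rotates, then translates.
ctm = mat_mul(T(1, 1), R(45))

point = (1, 0, 1)  # h = 1: a position
direc = (1, 0, 0)  # h = 0: a direction
print(apply(ctm, point))  # (1.707..., 1.707..., 1): rotated, then moved
print(apply(ctm, direc))  # (0.707..., 0.707..., 0): translation ignored
```

Swapping the concatenation order to mat_mul(R(45), T(1, 1)) moves the point to (0.707..., 2.121..., 1) instead, which is the "order matters" slide restated in code.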