Introduction to Linear Algebra


Chapter 1: Introduction to Linear Algebra

1.1 Vector Operations

• A scalar $x$ is a single numeric value or variable, e.g. $x = 2.5$, $x = \pi$, $x = 10^5$.

• An $N$-dimensional column vector $\vec{v}$ with elements $v_i$ is written as

    $\vec{v} = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_N \end{pmatrix}$   (1.1)

  The transpose of $\vec{v}$, written $\vec{v}^\top$, is a row vector:

    $\vec{v}^\top = (v_1, v_2, \ldots, v_N)$   (1.2)

• Addition of vectors: the sum $\vec{a} + \vec{b}$ is a vector with elements $(\vec{a} + \vec{b})_i = a_i + b_i$.

• The dot product (inner product) of two vectors gives a scalar value:

    $\vec{a} \cdot \vec{b} = \vec{a}^\top \vec{b} = \vec{b}^\top \vec{a} = \sum_i a_i b_i$   (1.3)

  The norm (length) of a vector $\vec{v}$ is given by $|\vec{v}| = \sqrt{\vec{v} \cdot \vec{v}} = \sqrt{\sum_i v_i^2}$.

  The unit (normalized) vector $\hat{v}$ of a non-zero vector $\vec{v}$ is $\hat{v} = \vec{v}/|\vec{v}|$, so that $|\hat{v}| = 1$.

  The dot product of two vectors has an interesting geometric interpretation:

    $\vec{a} \cdot \vec{b} = |\vec{a}| \cdot |\vec{b}| \cdot \cos(\theta)$   (1.4)

  Here $\theta$ is the angle between the two vectors. Two vectors are orthogonal if $\theta = \pi/2$, i.e. if $\vec{a} \cdot \vec{b} = 0$. The projection of $\vec{a}$ onto $\vec{b}$ (fig. 1.1) is given by:

    $\vec{a} \cdot \hat{b} = |\vec{a}| \cdot \cos(\theta)$   (1.5)

  Figure 1.1: Projection of $\vec{a}$ onto $\vec{b}$.

• Multiplication by a scalar $k$: $k \cdot \vec{v} = (k v_1, k v_2, \ldots, k v_N)^\top$. The length of the vector $\vec{v}$ scales with the factor $|k|$:

    $|k \cdot \vec{v}| = \sqrt{\sum_i (k v_i)^2} = \sqrt{k^2 \sum_i v_i^2} = |k| \cdot |\vec{v}|$   (1.6)

Exercise 1.1.1. Calculate all the vector products and the lengths for the vectors $\vec{v}_1 = (1, 2, 3)$, $\vec{v}_2 = (2, 3, 1)$, $\vec{v}_3 = (-8, -5, 31)$.

Exercise 1.1.2. Try to explain in your own words why $\vec{v}_2 \cdot \vec{v}_3 = 0$ for any two vectors $\vec{v}_1$ and $\vec{v}_2$, if $\vec{v}_3 = \vec{v}_1 \, |\vec{v}_2|^2 - \vec{v}_2 \, (\vec{v}_1 \cdot \vec{v}_2)$.

Exercise 1.1.3. Prove that $\vec{a} \cdot \vec{b} = |\vec{a}| \cdot |\vec{b}| \cdot \cos(\theta)$, e.g. using the law of cosines.

1.1.1 The Linear Neuron

Imagine a neuron A receiving input from $N$ sensory neurons. Each synapse has a weight or efficacy $w_i$, and the activity of each pre-synaptic neuron is described by its firing rate $x_i$. Synaptic weights with $w_i > 0$ correspond to excitatory synapses, whereas weights with $w_i < 0$ represent inhibitory synapses. In the case of a linear neuron, the firing rate $x_A$ of neuron A depends linearly on its input, i.e. its firing rate is a weighted sum of its inputs:

    $x_A = w_1 x_1 + w_2 x_2 + \ldots + w_N x_N = \sum_i w_i x_i$   (1.7)

If we describe the neuronal inputs and synaptic weights by vectors $\vec{x}$ and $\vec{w}$, respectively, then we can write eq. 1.7 for the firing rate $x_A$ more compactly as a dot product:

    $x_A = \vec{w} \cdot \vec{x}$   (1.8)

The output of the linear neuron A is zero precisely when the input vector $\vec{x}$ is orthogonal to the weight vector $\vec{w}$. The set of input vectors that are orthogonal to the weight vector forms a so-called hyperplane in the input space. In other words, our linear neuron is a detector which is maximally sensitive to inputs parallel to a particular direction in input space and minimally sensitive to inputs lying on the $(N-1)$-dimensional hyperplane orthogonal to this direction.
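The vector operations above, and the linear-neuron firing rate of eq. 1.8, are easy to check numerically. The following is a minimal sketch in Python/NumPy (not part of the original notes; the weight and input values for the neuron are made up purely for illustration):

    import numpy as np

    # Vectors from Exercise 1.1.1
    v1 = np.array([1.0, 2.0, 3.0])
    v2 = np.array([2.0, 3.0, 1.0])
    v3 = np.array([-8.0, -5.0, 31.0])

    # Dot products (eq. 1.3) and norms
    print(v1 @ v2, v1 @ v3, v2 @ v3)   # v2 . v3 is 0, cf. Exercise 1.1.2
    print(np.linalg.norm(v1), np.linalg.norm(v2), np.linalg.norm(v3))

    # Angle between v1 and v2 via eq. 1.4, projection of v1 onto v2 via eq. 1.5
    cos_theta = (v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    theta = np.arccos(cos_theta)
    proj_length = v1 @ (v2 / np.linalg.norm(v2))   # |v1| * cos(theta)
    print(theta, proj_length)

    # Linear neuron (eq. 1.8): firing rate as dot product of weights and inputs.
    # Illustrative values only, not taken from the notes.
    w = np.array([0.5, -0.2, 1.0])   # synaptic weights (negative = inhibitory)
    x = np.array([2.0, 1.0, 0.3])    # pre-synaptic firing rates
    x_A = w @ x
    print(x_A)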
1.2 Linear Mappings of Vectors

Consider a function $M(\vec{v})$ that maps an $N$-dimensional vector $\vec{v}$ to a $P$-dimensional vector $M(\vec{v}) = (M_1(\vec{v}), M_2(\vec{v}), \ldots, M_P(\vec{v}))^\top$. This mapping is linear if and only if:

1. For all scalars $k$: $M(k \cdot \vec{v}) = k \cdot M(\vec{v})$
2. For all pairs of vectors $\vec{a}$ and $\vec{b}$: $M(\vec{a} + \vec{b}) = M(\vec{a}) + M(\vec{b})$

This means that each element of $M(\vec{v})$ is determined by a linear combination of the elements of $\vec{v}$. Hence, for each element $M_i(\vec{v})$ we can find scalars $M_{ij}$ such that:

    $M_i(\vec{v}) = M_{i1} v_1 + M_{i2} v_2 + \ldots + M_{iN} v_N = \sum_j M_{ij} v_j$   (1.9)

We arrange the scalars $M_{ij}$ into a $P \times N$ matrix $\mathbf{M}$ and define the product $\mathbf{M} \cdot \vec{v}$ of the matrix $\mathbf{M}$ with the column vector $\vec{v}$ by:

    $(\mathbf{M} \cdot \vec{v})_i = \sum_j M_{ij} v_j$   (1.10)

The product $\vec{v}^\top \cdot \mathbf{M}$ of the row vector $\vec{v}^\top$ with the matrix $\mathbf{M}$ is given by:

    $(\vec{v}^\top \cdot \mathbf{M})_j = \sum_i v_i M_{ij}$   (1.11)

This motivates the definition of matrices and matrix multiplication. Every linear function of a vector can be described by multiplying the vector with a corresponding matrix; we say that matrix multiplication of a vector corresponds to a linear transformation of the vector.

1.3 Matrix Operations

• A $P \times N$ matrix $\mathbf{M}$ has $P$ rows and $N$ columns and elements $M_{ij}$, where $i$ indicates the row index and $j$ the column index:

    $\mathbf{M} = \begin{pmatrix} M_{11} & M_{12} & \cdots & M_{1N} \\ M_{21} & M_{22} & \cdots & M_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ M_{P1} & M_{P2} & \cdots & M_{PN} \end{pmatrix}$   (1.12)

  The transpose of $\mathbf{M}$, $\mathbf{M}^\top$, is the matrix with elements $(M^\top)_{ij} = M_{ji}$, i.e. the rows and columns of $\mathbf{M}$ are interchanged:

    $\mathbf{M}^\top = \begin{pmatrix} M_{11} & M_{21} & \cdots & M_{P1} \\ M_{12} & M_{22} & \cdots & M_{P2} \\ \vdots & \vdots & \ddots & \vdots \\ M_{1N} & M_{2N} & \cdots & M_{PN} \end{pmatrix}$   (1.13)

• Multiplication by a scalar $k$: the matrix $k \cdot \mathbf{M} = \mathbf{M} \cdot k$ has the elements $(k \cdot \mathbf{M})_{ij} = k \cdot M_{ij}$.

• Addition of matrices: $\mathbf{A} + \mathbf{B}$ is a matrix with elements $(\mathbf{A} + \mathbf{B})_{ij} = A_{ij} + B_{ij}$.

• The matrix product of an $M \times N$ matrix $\mathbf{A}$ with an $N \times P$ matrix $\mathbf{B}$ is the $M \times P$ matrix whose entry $(i,j)$ is the dot product of the $i$-th row $\vec{A}_i$ of $\mathbf{A}$ with the $j$-th column $\vec{B}_j$ of $\mathbf{B}$:

    $(\mathbf{A} \cdot \mathbf{B})_{ij} = \vec{A}_i \cdot \vec{B}_j = \sum_k A_{ik} B_{kj}$   (1.14)

  For each row of $\mathbf{A}$ we calculate the dot product with each column of $\mathbf{B}$. Note that, in general, the matrix product is not commutative: $\mathbf{A}\mathbf{B} \neq \mathbf{B}\mathbf{A}$.

• An $N \times N$ matrix is a square matrix. A square matrix $\mathbf{M}$ is called symmetric if $\mathbf{M} = \mathbf{M}^\top$, i.e. $M_{ij} = M_{ji}$ for all $i$ and $j$.

• The identity matrix $\mathbf{1}$ is the matrix with $M_{ii} = 1$ on the diagonal and $M_{ij} = 0$ for $i \neq j$.

• The inverse of a square matrix $\mathbf{M}$ is a matrix $\mathbf{M}^{-1}$ satisfying:

    $\mathbf{M}^{-1} \cdot \mathbf{M} = \mathbf{M} \cdot \mathbf{M}^{-1} = \mathbf{1}$   (1.15)

  Note that not all matrices have an inverse, but if the inverse exists, it is unique. If the inverse $\mathbf{M}^{-1}$ exists, the matrix $\mathbf{M}$ is called invertible.

Exercise 1.3.1. Calculate the following products: $\mathbf{A}\vec{v}$, $\vec{v}^\top\mathbf{B}$, $\mathbf{A}\mathbf{B}$ and $\mathbf{B}\mathbf{A}$ for:

    $\vec{v} = (1, 1, 1)^\top, \quad \mathbf{A} = \begin{pmatrix} 1 & 5 & 6 \\ 3 & 2 & 5 \\ 4 & 1 & 7 \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} 4 & 1 & 3 \\ 2 & 1 & 1 \\ 3 & 1 & 2 \end{pmatrix}$

Exercise 1.3.2. Show that $(\mathbf{A}\mathbf{B})^\top = \mathbf{B}^\top\mathbf{A}^\top$.

Exercise 1.3.3. Show that $(\mathbf{A}^\top)^{-1} = (\mathbf{A}^{-1})^\top$.

Exercise 1.3.4. Suppose $\mathbf{A}$ and $\mathbf{B}$ are both invertible $N \times N$ matrices. Show that $(\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1}$.

1.4 Linear Equations

A central problem of linear algebra is to solve systems of linear equations (SLE) with several unknowns. Simple SLE can be solved by substitution and elimination. For example, consider the following SLE:

    $2x + 3y = 6$
    $4x + 9y = 15$

1. We solve the top equation for $x$ in terms of $y$:

    $x = 3 - \tfrac{3}{2} y$

2. Then we substitute this expression for $x$ into the bottom equation:

    $4\left(3 - \tfrac{3}{2} y\right) + 9y = 15$

3. Now we solve this equation for $y$ and get $y = 1$. This in turn we substitute into the reduced equation of the first step and get $x = 3 - \tfrac{3}{2} \cdot 1 = \tfrac{3}{2}$.

However, for more complicated SLE with more equations and more unknowns we need a more systematic approach.
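The products of Exercise 1.3.1 can be checked numerically. A short Python/NumPy sketch (an illustrative check, not a worked solution from the notes):

    import numpy as np

    # Matrices and vector from Exercise 1.3.1
    v = np.array([1.0, 1.0, 1.0])
    A = np.array([[1.0, 5.0, 6.0],
                  [3.0, 2.0, 5.0],
                  [4.0, 1.0, 7.0]])
    B = np.array([[4.0, 1.0, 3.0],
                  [2.0, 1.0, 1.0],
                  [3.0, 1.0, 2.0]])

    print(A @ v)   # matrix-vector product (eq. 1.10)
    print(v @ B)   # row vector times matrix (eq. 1.11)
    print(A @ B)   # matrix product (eq. 1.14)
    print(B @ A)   # in general A @ B != B @ A

    # Exercise 1.3.2 checked numerically: (AB)^T == B^T A^T
    print(np.allclose((A @ B).T, B.T @ A.T))

    # Eq. 1.15 for A, which is invertible here
    print(np.allclose(np.linalg.inv(A) @ A, np.eye(3)))

    # Note: B in this exercise happens to be singular (its rows are linearly
    # dependent), so det(B) = 0 and B has no inverse.
    print(np.linalg.det(B))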
A method that is particularly useful and efficient for numerical solutions of SLE is Gaussian elimination. We will illustrate the Gaussian elimination algorithm by solving the following SLE:

    $v_1 + v_2 + v_3 = 0$
    $4v_1 + 2v_2 + v_3 = 1$
    $9v_1 + 3v_2 + v_3 = 3$

1. Write the SLE in matrix form $\mathbf{M} \cdot \vec{v} = \vec{b}$ and generate the extended (augmented) coefficient matrix:

    $\left( \mathbf{M} \mid \vec{b} \right) = \left( \begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 4 & 2 & 1 & 1 \\ 9 & 3 & 1 & 3 \end{array} \right)$

2. The goal is to turn $\mathbf{M}$ into the identity matrix by
   • swapping rows,
   • multiplying rows by a scalar value,
   • adding/subtracting rows from each other.

    $\left( \begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 4 & 2 & 1 & 1 \\ 9 & 3 & 1 & 3 \end{array} \right)
    \xrightarrow{\substack{R_2 - 4 R_1 \\ R_3 - 9 R_1}}
    \left( \begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & -2 & -3 & 1 \\ 0 & -6 & -8 & 3 \end{array} \right)
    \xrightarrow{\substack{R_3 - 3 R_2 \\ -\tfrac{1}{2} R_2}}
    \left( \begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & 1 & \tfrac{3}{2} & -\tfrac{1}{2} \\ 0 & 0 & 1 & 0 \end{array} \right)
    \xrightarrow{\substack{R_1 - R_3 \\ R_2 - \tfrac{3}{2} R_3}}
    \left( \begin{array}{ccc|c} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & -\tfrac{1}{2} \\ 0 & 0 & 1 & 0 \end{array} \right)
    \xrightarrow{R_1 - R_2}
    \left( \begin{array}{ccc|c} 1 & 0 & 0 & \tfrac{1}{2} \\ 0 & 1 & 0 & -\tfrac{1}{2} \\ 0 & 0 & 1 & 0 \end{array} \right)$

3. Once the left block is the identity matrix, the solution can be read off from the last column: $v_1 = \tfrac{1}{2}$, $v_2 = -\tfrac{1}{2}$, $v_3 = 0$.
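The same elimination can be written as a short program. Below is a minimal Gauss-Jordan sketch in Python/NumPy, applied to the example above; partial pivoting (row swapping on the largest pivot) is added for numerical robustness and goes slightly beyond the hand calculation in the notes:

    import numpy as np

    def gauss_jordan_solve(M, b):
        """Solve M v = b by reducing the augmented matrix (M | b) to (I | v)."""
        A = np.hstack([M.astype(float), b.reshape(-1, 1).astype(float)])
        n = M.shape[0]
        for col in range(n):
            # Swap rows so the pivot is the largest entry in this column.
            pivot = col + np.argmax(np.abs(A[col:, col]))
            A[[col, pivot]] = A[[pivot, col]]
            # Scale the pivot row so the pivot becomes 1.
            A[col] = A[col] / A[col, col]
            # Subtract multiples of the pivot row to zero out the rest of the column.
            for row in range(n):
                if row != col:
                    A[row] = A[row] - A[row, col] * A[col]
        return A[:, -1]

    # The SLE from the example above
    M = np.array([[1, 1, 1],
                  [4, 2, 1],
                  [9, 3, 1]])
    b = np.array([0, 1, 3])

    v = gauss_jordan_solve(M, b)
    print(v)                          # expected: [ 0.5 -0.5  0. ]
    print(np.allclose(M @ v, b))      # True
    print(np.linalg.solve(M, b))      # library routine gives the same result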