Introduction to Linear Algebra Tyrone L. Vincent


Introduction to Linear Algebra
Tyrone L. Vincent
Engineering Division, Colorado School of Mines, Golden, CO
E-mail address: [email protected]
URL: http://egweb.mines.edu/~tvincent

Contents

Chapter 1. Review of Vectors and Matrices
  1. Useful Notation
  2. Vectors and Matrices
  3. Basic operations
  4. Useful Properties of the Basic Operations
Chapter 2. Vector Spaces
  1. Vector Space Definition
  2. Linear Independence and Basis
  3. Change of basis
  4. Norms, Dot Products, Orthonormal Basis
  5. QR Decomposition
Chapter 3. Projection Theorem
Chapter 4. Matrices and Linear Mappings
  1. Solutions to Systems of Linear Equations
Chapter 5. Square Matrices, Eigenvalues and Eigenvectors
  1. Matrix Exponential
  2. Other Matrix Functions
Appendix A. Appendix A

CHAPTER 1
Review of Vectors and Matrices

1. Useful Notation

1.1. Common Abbreviations. In this course, as in most branches of mathematics, we will often utilize sets of mathematical objects. For example, there is the set of natural numbers, which begins 1, 2, 3, ... This set is often denoted N, so that 2 is a member of N while a non-integer such as 1/2 is not. To specify that an object is a member of a set, we use the notation ∈ for "is a member of"; for example, 2 ∈ N. Some of the sets we will use are

  R        real numbers
  C        complex numbers
  R^n      n-dimensional vectors of real numbers
  R^(m×n)  m × n dimensional real matrices

For these common sets, particular notation will be used to identify members, namely lower case for a scalar or vector, and upper case for a matrix. The following table also includes some common operations:

  x            vector or scalar
  ⟨x, y⟩       inner product between vectors x and y
  A            matrix
  A^T          transpose of A
  A^(-1)       inverse of A
  det(A), |A|  determinant of A

To specify a set, we can also use bracket notation. For example, to specify E as the set of all positive even numbers, we can write either E = {2, 4, 6, 8, ...} when the pattern is clear, or use a ":" symbol, which means "such that":

  E = {x ∈ N : mod(x, 2) = 0}.

This can be read "the set of natural numbers x such that x is divisible evenly by 2". When talking about sets, we will often want to say that a property holds for every member of the set, or for at least one. In this case the symbols ∀, meaning "for all", and ∃, meaning "there exists", are useful. For example, suppose I is the set of numbers consisting of the IQs of the people in this class. Then

  ∀x ∈ I, x > 110

means that all students in this class have an IQ greater than 110, while

  ∃x ∈ I : x > 110

means that at least one student in the class has an IQ greater than 110.

We will also be concerned with functions. Given a set X and a set Y, a function f from X to Y maps an element of X to an element of Y and is denoted f : X → Y. The set X is called the domain, and f(x) is assumed to be defined for every x ∈ X. The range, or image, of f is the set of y for which f(x) = y for some x:

  Range(f) = {y ∈ Y : ∃x ∈ X such that y = f(x)}.

If Range(f) = Y, then f is called "onto". If there is only one x ∈ X such that y = f(x), then f is called "one to one".
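The set-builder and quantifier notation above has a direct computational analogue. The following Python sketch is added here purely as an illustration (it is not part of the original notes): it builds a finite version of the set E and evaluates the "for all" (∀) and "there exists" (∃) statements over a made-up list of IQ scores; the bound and the scores are arbitrary choices for the example.

```python
# Illustrative sketch (not from the original notes): set-builder notation,
# membership, and the "for all" / "there exists" quantifiers in Python.

# E = {x in N : mod(x, 2) = 0}, truncated to a finite bound for computation.
bound = 20                                   # hypothetical bound, chosen for illustration
E = {x for x in range(1, bound + 1) if x % 2 == 0}

print(2 in E)                                # True:  2 is a member of E
print(3 in E)                                # False: 3 is not a member of E

# Quantifiers over a hypothetical set I of IQ scores (made-up data).
I = [105, 118, 132, 97]
print(all(x > 110 for x in I))               # "for all x in I, x > 110"
print(any(x > 110 for x in I))               # "there exists x in I such that x > 110"
```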
2. Vectors and Matrices

You are probably already familiar with vectors and matrices from previous courses in mathematics or physics. We will find matrices and vectors very useful when representing dynamic systems mathematically; however, we will need to be able to manipulate and understand these objects at a fairly deep level.

Some texts use bold face for vectors and matrices, but ours does not, and I will not use that convention here or during class. I will, however, use lower case letters for vectors and upper case letters for matrices. A vector, or n-tuple, of real (or sometimes complex) numbers is represented as

$$
x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
$$

so that x is a vector and the x_i are scalars. We will use the notation x ∈ R^n to show that x is a length-n vector of real numbers (or x ∈ C^n if the elements of x are complex). Sometimes we will want to index vectors as well, which can be confusing: is x_i a vector named x_i, or the ith element of the vector x? To make the difference clear, we will reserve the notation [x]_i to indicate the ith element of x. As an example, consider the following illustration of addition and scalar multiplication for vectors:

$$
x_1 + x_2 = \begin{bmatrix} [x_1]_1 + [x_2]_1 \\ [x_1]_2 + [x_2]_2 \\ \vdots \\ [x_1]_n + [x_2]_n \end{bmatrix},
\qquad
\alpha x_1 = \begin{bmatrix} \alpha [x_1]_1 \\ \alpha [x_1]_2 \\ \vdots \\ \alpha [x_1]_n \end{bmatrix}.
$$

A matrix is an m × n array of scalars:

$$
A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}.
$$

We use the notation A ∈ R^(m×n) to indicate that A is an m × n matrix. Addition and scalar multiplication are defined the same way as for vectors.

3. Basic operations

You should already be familiar with most of the basic operations on vectors and matrices listed in this section.

3.1. Transpose. Given a matrix A ∈ R^(m×n), the transpose A^T ∈ R^(n×m) is found by flipping all terms along the diagonal. That is, if

$$
A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}
\quad \text{then} \quad
A^T = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}.
$$

Note that if the matrix is not square (m ≠ n), then the "shape" of the matrix changes. We can also apply the transpose to a vector x ∈ R^n by considering it to be an n by 1 matrix. In this case, x^T is the 1 by n matrix

$$
x^T = \begin{bmatrix} [x]_1 & [x]_2 & \cdots & [x]_n \end{bmatrix}.
$$

3.2. Inner (dot) product. In three-dimensional space, we are familiar with vectors as indicating direction. The inner product is an operation that allows us to tell whether two vectors are pointing in a similar direction. We will use the notation ⟨x, y⟩ for the inner product between x and y. In other courses, you may have seen this called the dot product, with notation x · y. The notation used here is more common in signal processing and control systems. The inner product of x, y ∈ R^n is defined to be the sum of products of the elements:

$$
\langle x, y \rangle = \sum_{i=1}^{n} [x]_i [y]_i = x^T y.
$$

Recall that if x and y are vectors, the angle θ between them can be found using the formula

$$
\cos \theta = \frac{\langle x, y \rangle}{\sqrt{\langle x, x \rangle}\,\sqrt{\langle y, y \rangle}}.
$$

Note that the inner product satisfies the following rules (inherited from the transpose):

$$
\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle,
\qquad
\langle \alpha y, z \rangle = \alpha \langle y, z \rangle.
$$

3.3. Matrix-vector multiplication. Suppose we have an m × n matrix A,

$$
A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},
$$

and a length-n vector x_1. Note that the number of columns of A is the same as the length of x_1. Multiplication of A and x_1 is defined as follows:

$$
x_2 = A x_1 \tag{3.1}
$$

where

$$
\begin{aligned}
[x_2]_1 &= a_{11}[x_1]_1 + a_{12}[x_1]_2 + \cdots + a_{1n}[x_1]_n && (3.2a) \\
[x_2]_2 &= a_{21}[x_1]_1 + a_{22}[x_1]_2 + \cdots + a_{2n}[x_1]_n && (3.2b) \\
&\;\;\vdots && (3.2c) \\
[x_2]_m &= a_{m1}[x_1]_1 + a_{m2}[x_1]_2 + \cdots + a_{mn}[x_1]_n && (3.2d)
\end{aligned}
$$

Note that the result x_2 is a length-m vector (the number of rows of A). The notation (3.1) is a compact representation of the system of linear algebraic equations (3.2). Note that A defines a mapping from R^n to R^m; thus, we can write A : R^n → R^m. This mapping is linear.
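As a concrete check on the component equations (3.2) and the inner-product formulas of Section 3.2, here is a short NumPy sketch added for illustration (it is not part of the original notes); the matrices, vectors, and variable names are arbitrary example choices.

```python
import numpy as np

# Illustrative sketch: matrix-vector multiplication x2 = A x1 (equation (3.1))
# computed two ways, with arbitrary example numbers.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # A is 2 x 3, so A : R^3 -> R^2
x1 = np.array([1.0, 0.0, -1.0])          # length-3 vector

x2 = A @ x1                              # compact form (3.1); a length-2 vector

# Explicit sums from (3.2): [x2]_i = a_i1 [x1]_1 + ... + a_in [x1]_n
x2_explicit = np.array([sum(A[i, j] * x1[j] for j in range(A.shape[1]))
                        for i in range(A.shape[0])])
print(np.allclose(x2, x2_explicit))      # True

# Inner product and the angle formula from Section 3.2.
x, y = np.array([1.0, 1.0]), np.array([1.0, 0.0])
cos_theta = np.inner(x, y) / (np.sqrt(np.inner(x, x)) * np.sqrt(np.inner(y, y)))
print(np.degrees(np.arccos(cos_theta)))  # approximately 45 degrees
```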
We can also consider a matrix to be a group of vectors. For example, if we group the vectors x_1, x_2, ..., x_p into a matrix

$$
M = \begin{bmatrix} x_1 & x_2 & \cdots & x_p \end{bmatrix}
$$

and define the vector

$$
a = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_p \end{bmatrix},
$$

then all linear combinations of x_1, x_2, ..., x_p are given by

$$
y = M a = \alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_p x_p.
$$

3.4. Matrix-matrix multiplication. If matrix A : R^n → R^m and matrix B : R^m → R^p, we can find the mapping C : R^n → R^p which is the composition of B and A:

$$
C = BA, \qquad [c]_{ij} = \sum_{k} [b]_{ik} [a]_{kj}.
$$

That is, the (i, j) element of C is the dot product of the ith row of B with the jth column of A. The dimension of C is p × n. This can also be thought of as B mapping one column of A at a time: that is, the first column of C, [c]_{·1}, is B[a]_{·1}, i.e. B times the first column of A. Clearly, two matrices can be multiplied only if they have compatible dimensions.
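The following NumPy sketch, again added purely as an illustration with arbitrarily chosen dimensions and data, verifies two statements from this section: that y = Ma reproduces the linear combination α₁x₁ + ⋯ + αₚxₚ, and that C = BA maps one column of A at a time.

```python
import numpy as np

# Illustrative sketch: linear combinations and matrix-matrix multiplication,
# with small arbitrary dimensions (n = 3, m = 4, p = 2 here).
rng = np.random.default_rng(0)

# Group vectors x1, x2 (length 3) into M; a holds the coefficients alpha_i.
x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
M = np.column_stack([x1, x2])                # M is 3 x 2
a = np.array([2.0, -1.0])                    # alpha_1 = 2, alpha_2 = -1

y = M @ a
print(np.allclose(y, 2.0 * x1 - 1.0 * x2))   # True: y = alpha_1 x1 + alpha_2 x2

# Composition: A : R^3 -> R^4 and B : R^4 -> R^2 give C = BA : R^3 -> R^2.
A = rng.standard_normal((4, 3))
B = rng.standard_normal((2, 4))
C = B @ A
print(C.shape)                               # (2, 3), i.e. p x n

# B maps A one column at a time: first column of C is B times first column of A.
print(np.allclose(C[:, 0], B @ A[:, 0]))     # True
```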
Recommended publications
  • Quantum Information
  • MATH 237 Differential Equations and Computer Methods
  • Geometric Polarimetry − Part I: Spinors and Wave States
  • The Exponential of a Matrix
  • A Fast Method for Computing the Inverse of Symmetric Block Arrowhead Matrices
  • The Exponential Function for Matrices
  • Approximating the Exponential from a Lie Algebra to a Lie Group
  • Structured Factorizations in Scalar Product Spaces
  • Linear Algebra - Part II Projection, Eigendecomposition, SVD
  • Singles out a Specific Basis
  • Unit 6: Matrix Decomposition
  • LU Decomposition - Wikipedia, the Free Encyclopedia