Chapter 6: Direct Methods for Solving Linear Systems

Per-Olof Persson ([email protected])
Department of Mathematics, University of California, Berkeley
Math 128A Numerical Analysis

Direct Methods for Linear Systems

Consider solving a linear system of the form
$$E_1:\; a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1,$$
$$E_2:\; a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2,$$
$$\vdots$$
$$E_n:\; a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n,$$
for $x_1, \ldots, x_n$. Direct methods give an answer in a fixed number of steps, subject only to round-off errors.

We use three row operations to simplify the linear system:
1. Multiply equation $E_i$ by $\lambda \neq 0$: $(\lambda E_i) \to (E_i)$
2. Multiply equation $E_j$ by $\lambda$ and add it to equation $E_i$: $(E_i + \lambda E_j) \to (E_i)$
3. Exchange equations $E_i$ and $E_j$: $(E_i) \leftrightarrow (E_j)$

Gaussian Elimination

Gaussian elimination with backward substitution: reduce the linear system to triangular form by introducing zeros with the row operations $(E_i + \lambda E_j) \to (E_i)$, then solve the triangular system by backward substitution.

Row exchanges: if a pivot element on the diagonal is zero, the reduction to triangular form fails. Find a nonzero element below the diagonal and exchange the two rows.

Definition. An $n \times m$ matrix is a rectangular array of elements with $n$ rows and $m$ columns, in which both the value and the position of an element are important.

Operation Counts

Count the number of arithmetic operations performed, using the formulas
$$\sum_{j=1}^{m} j = \frac{m(m+1)}{2}, \qquad \sum_{j=1}^{m} j^2 = \frac{m(m+1)(2m+1)}{6}.$$

Reduction to triangular form.
Multiplications/divisions:
$$\sum_{i=1}^{n-1} (n-i)(n-i+2) = \cdots = \frac{2n^3 + 3n^2 - 5n}{6}$$
Additions/subtractions:
$$\sum_{i=1}^{n-1} (n-i)(n-i+1) = \cdots = \frac{n^3 - n}{3}$$

Backward substitution.
Multiplications/divisions:
$$1 + \sum_{i=1}^{n-1} \bigl((n-i) + 1\bigr) = \frac{n^2 + n}{2}$$
Additions/subtractions:
$$\sum_{i=1}^{n-1} \bigl((n-i-1) + 1\bigr) = \frac{n^2 - n}{2}$$

Gaussian elimination, total operation count.
Multiplications/divisions:
$$\frac{n^3}{3} + n^2 - \frac{n}{3}$$
Additions/subtractions:
$$\frac{n^3}{3} + \frac{n^2}{2} - \frac{5n}{6}$$

Partial Pivoting

In Gaussian elimination, if a pivot element $a_{kk}^{(k)}$ is small compared to an element $a_{jk}^{(k)}$ below it, the multiplier
$$m_{jk} = \frac{a_{jk}^{(k)}}{a_{kk}^{(k)}}$$
will be large, resulting in round-off errors. Partial pivoting finds the smallest $p \ge k$ such that
$$|a_{pk}^{(k)}| = \max_{k \le i \le n} |a_{ik}^{(k)}|$$
and interchanges the rows $(E_k) \leftrightarrow (E_p)$.

Scaled Partial Pivoting

If there are large variations in magnitude of the elements within a row, scaled partial pivoting can be used. Define a scale factor $s_i$ for each row,
$$s_i = \max_{1 \le j \le n} |a_{ij}|.$$
At step $i$, find $p$ such that
$$\frac{|a_{pi}|}{s_p} = \max_{i \le k \le n} \frac{|a_{ki}|}{s_k}$$
and interchange the rows $(E_i) \leftrightarrow (E_p)$.
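The procedure described above translates directly into a short program. The following is a minimal sketch in Python/NumPy (not part of the original slides; the function name `gauss_solve` and the test system are illustrative), combining elimination via the row operation $(E_i - m_{ik}E_k) \to (E_i)$, partial pivoting, and backward substitution.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting
    and backward substitution (illustrative sketch, not optimized)."""
    A = A.astype(float)          # astype returns copies, so the inputs are untouched
    b = b.astype(float)
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: smallest p >= k with |a_pk| = max_{k<=i<=n} |a_ik|
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if A[p, k] == 0.0:
            raise ValueError("matrix is singular")
        if p != k:                                # (E_k) <-> (E_p)
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                 # multiplier m_ik
            A[i, k:] -= m * A[k, k:]              # (E_i - m_ik E_k) -> (E_i)
            b[i] -= m * b[k]
    # Backward substitution on the upper-triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_solve(A, b))   # approximately [ 2.  3. -1.]
```

The two nested elimination loops are where the roughly $n^3/3$ multiplications/divisions counted above are spent; the backward-substitution loop contributes only $O(n^2)$ operations.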
Linear Algebra

Definition. Two matrices $A$ and $B$ are equal if they have the same number of rows and columns, $n \times m$, and if $a_{ij} = b_{ij}$ for all $i$ and $j$.

Definition. If $A$ and $B$ are $n \times m$ matrices, the sum $A + B$ is the $n \times m$ matrix with entries $a_{ij} + b_{ij}$.

Definition. If $A$ is $n \times m$ and $\lambda$ is a real number, the scalar multiple $\lambda A$ is the $n \times m$ matrix with entries $\lambda a_{ij}$.

Properties

Theorem. Let $A$, $B$, $C$ be $n \times m$ matrices and $\lambda$, $\mu$ real numbers. Then
(a) $A + B = B + A$
(b) $(A + B) + C = A + (B + C)$
(c) $A + 0 = 0 + A = A$
(d) $A + (-A) = -A + A = 0$
(e) $\lambda(A + B) = \lambda A + \lambda B$
(f) $(\lambda + \mu)A = \lambda A + \mu A$
(g) $\lambda(\mu A) = (\lambda\mu)A$
(h) $1A = A$

Matrix Multiplication

Definition. Let $A$ be $n \times m$ and $B$ be $m \times p$. The matrix product $C = AB$ is the $n \times p$ matrix with entries
$$c_{ij} = \sum_{k=1}^{m} a_{ik}b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{im}b_{mj}.$$

Special Matrices

Definition. A square matrix has $m = n$. A diagonal matrix $D = [d_{ij}]$ is square with $d_{ij} = 0$ when $i \neq j$. The identity matrix of order $n$, $I_n = [\delta_{ij}]$, is diagonal with
$$\delta_{ij} = \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{if } i \neq j. \end{cases}$$

Definition. An upper-triangular $n \times n$ matrix $U = [u_{ij}]$ has $u_{ij} = 0$ for $i = j+1, \ldots, n$; a lower-triangular $n \times n$ matrix $L = [l_{ij}]$ has $l_{ij} = 0$ for $i = 1, \ldots, j-1$.

Properties

Theorem. Let $A$ be $n \times m$, $B$ be $m \times k$, $C$ be $k \times p$, $D$ be $m \times k$, and $\lambda$ a real number. Then
(a) $A(BC) = (AB)C$
(b) $A(B + D) = AB + AD$
(c) $I_m B = B$ and $BI_k = B$
(d) $\lambda(AB) = (\lambda A)B = A(\lambda B)$

Matrix Inversion

Definition. An $n \times n$ matrix $A$ is nonsingular (or invertible) if an $n \times n$ matrix $A^{-1}$ exists with $AA^{-1} = A^{-1}A = I$. The matrix $A^{-1}$ is called the inverse of $A$. A matrix without an inverse is called singular (or noninvertible).

Theorem. For any nonsingular $n \times n$ matrix $A$:
(a) $A^{-1}$ is unique
(b) $A^{-1}$ is nonsingular and $(A^{-1})^{-1} = A$
(c) if $B$ is a nonsingular $n \times n$ matrix, then $(AB)^{-1} = B^{-1}A^{-1}$

Matrix Transpose

Definition. The transpose of the $n \times m$ matrix $A = [a_{ij}]$ is the $m \times n$ matrix $A^t = [a_{ji}]$. A square matrix $A$ is called symmetric if $A = A^t$.

Theorem.
(a) $(A^t)^t = A$
(b) $(A + B)^t = A^t + B^t$
(c) $(AB)^t = B^t A^t$
(d) if $A^{-1}$ exists, then $(A^{-1})^t = (A^t)^{-1}$

Determinants

Definition.
(a) If $A = [a]$ is a $1 \times 1$ matrix, then $\det A = a$.
(b) If $A$ is $n \times n$, the minor $M_{ij}$ is the determinant of the $(n-1) \times (n-1)$ submatrix obtained by deleting row $i$ and column $j$ of $A$.
(c) The cofactor $A_{ij}$ associated with $M_{ij}$ is $A_{ij} = (-1)^{i+j} M_{ij}$.
(d) The determinant of an $n \times n$ matrix $A$, for $n > 1$, is
$$\det A = \sum_{j=1}^{n} a_{ij}A_{ij} = \sum_{j=1}^{n} (-1)^{i+j} a_{ij}M_{ij} \quad \text{(expansion along any row } i\text{)}$$
or
$$\det A = \sum_{i=1}^{n} a_{ij}A_{ij} = \sum_{i=1}^{n} (-1)^{i+j} a_{ij}M_{ij} \quad \text{(expansion along any column } j\text{).}$$

Properties

Theorem.
(a) If any row or column of $A$ has all zero entries, then $\det A = 0$.
(b) If $A$ has two rows or two columns equal, then $\det A = 0$.
(c) If $\tilde{A}$ comes from $(E_i) \leftrightarrow (E_j)$ on $A$, then $\det \tilde{A} = -\det A$.
(d) If $\tilde{A}$ comes from $(\lambda E_i) \to (E_i)$ on $A$, then $\det \tilde{A} = \lambda \det A$.
(e) If $\tilde{A}$ comes from $(E_i + \lambda E_j) \to (E_i)$ on $A$, with $i \neq j$, then $\det \tilde{A} = \det A$.
(f) If $B$ is also $n \times n$, then $\det AB = \det A \det B$.
(g) $\det A^t = \det A$.
(h) When $A^{-1}$ exists, $\det A^{-1} = (\det A)^{-1}$.
(i) If $A$ is upper triangular, lower triangular, or diagonal, then $\det A = \prod_{i=1}^{n} a_{ii}$.

Linear Systems and Determinants

Theorem. The following statements are equivalent for any $n \times n$ matrix $A$:
(a) The equation $A\mathbf{x} = \mathbf{0}$ has the unique solution $\mathbf{x} = \mathbf{0}$.
(b) The system $A\mathbf{x} = \mathbf{b}$ has a unique solution for any $\mathbf{b}$.
(c) The matrix $A$ is nonsingular; that is, $A^{-1}$ exists.
(d) $\det A \neq 0$.
(e) Gaussian elimination with row interchanges can be performed on the system $A\mathbf{x} = \mathbf{b}$ for any $\mathbf{b}$.

LU Factorization

The $k$th Gaussian transformation matrix is the identity matrix with the negated multipliers $-m_{k+1,k}, \ldots, -m_{n,k}$ placed below the diagonal in column $k$:
$$M^{(k)} = \begin{bmatrix}
1 & 0 & \cdots & \cdots & \cdots & \cdots & 0 \\
0 & \ddots & \ddots & & & & \vdots \\
\vdots & \ddots & 1 & \ddots & & & \vdots \\
\vdots & & -m_{k+1,k} & \ddots & & & \vdots \\
\vdots & & \vdots & & \ddots & & \vdots \\
\vdots & & \vdots & & & \ddots & 0 \\
0 & \cdots & -m_{n,k} & 0 & \cdots & 0 & 1
\end{bmatrix}.$$

Gaussian elimination can then be written as
$$A^{(n)} = M^{(n-1)} \cdots M^{(1)} A = \begin{bmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1n}^{(1)} \\
0 & a_{22}^{(2)} & \ddots & \vdots \\
\vdots & \ddots & \ddots & a_{n-1,n}^{(n-1)} \\
0 & \cdots & 0 & a_{nn}^{(n)}
\end{bmatrix}.$$

Reversing the elimination steps gives the inverses
$$L^{(k)} = [M^{(k)}]^{-1} = \begin{bmatrix}
1 & 0 & \cdots & \cdots & \cdots & \cdots & 0 \\
0 & \ddots & \ddots & & & & \vdots \\
\vdots & \ddots & 1 & \ddots & & & \vdots \\
\vdots & & m_{k+1,k} & \ddots & & & \vdots \\
\vdots & & \vdots & & \ddots & & \vdots \\
\vdots & & \vdots & & & \ddots & 0 \\
0 & \cdots & m_{n,k} & 0 & \cdots & 0 & 1
\end{bmatrix},$$
and we have
$$LU = L^{(1)} \cdots L^{(n-1)} M^{(n-1)} \cdots M^{(1)} A = [M^{(1)}]^{-1} \cdots [M^{(n-1)}]^{-1} M^{(n-1)} \cdots M^{(1)} A = A.$$

Theorem. If Gaussian elimination can be performed on the linear system $A\mathbf{x} = \mathbf{b}$ without row interchanges, then $A$ can be factored into the product of a lower-triangular matrix $L$ and an upper-triangular matrix $U$ as $A = LU$, where $m_{ji} = a_{ji}^{(i)} / a_{ii}^{(i)}$:
$$U = \begin{bmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1n}^{(1)} \\
0 & a_{22}^{(2)} & \ddots & \vdots \\
\vdots & \ddots & \ddots & a_{n-1,n}^{(n-1)} \\
0 & \cdots & 0 & a_{nn}^{(n)}
\end{bmatrix}, \qquad
L = \begin{bmatrix}
1 & 0 & \cdots & 0 \\
m_{21} & 1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
m_{n1} & \cdots & m_{n,n-1} & 1
\end{bmatrix}.$$
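As a concrete counterpart to the factorization theorem above, here is a minimal Python/NumPy sketch (not from the slides; the name `lu_factor` is illustrative). It stores each multiplier $m_{ji} = a_{ji}^{(i)}/a_{ii}^{(i)}$ in $L$ while the row operations reduce a working copy of $A$ to the upper-triangular $U$, and it assumes no row interchanges are needed (all pivots nonzero).

```python
import numpy as np

def lu_factor(A):
    """Factor A = LU without row interchanges (illustrative sketch;
    assumes every pivot a_ii^(i) is nonzero)."""
    n = A.shape[0]
    U = A.astype(float)          # becomes the upper-triangular factor
    L = np.eye(n)                # unit lower-triangular factor
    for i in range(n - 1):
        for j in range(i + 1, n):
            L[j, i] = U[j, i] / U[i, i]        # multiplier m_ji
            U[j, i:] -= L[j, i] * U[i, i:]     # eliminate entry (j, i)
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = lu_factor(A)
print(L)                      # [[1.  0. ] [1.5 1. ]]
print(U)                      # [[ 4.   3. ] [ 0.  -1.5]]
print(np.allclose(L @ U, A))  # True
```

Once $A = LU$ is available, each additional right-hand side $\mathbf{b}$ can be handled by one forward and one backward substitution rather than repeating the full elimination.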
Permutation Matrices

Suppose $k_1, \ldots, k_n$ is a permutation of $1, \ldots, n$. The permutation matrix $P = (p_{ij})$ is defined by
$$p_{ij} = \begin{cases} 1, & \text{if } j = k_i, \\ 0, & \text{otherwise.} \end{cases}$$
(i) $PA$ permutes the rows of $A$:
$$PA = \begin{bmatrix}
a_{k_1 1} & \cdots & a_{k_1 n} \\
\vdots & \ddots & \vdots \\
a_{k_n 1} & \cdots & a_{k_n n}
\end{bmatrix}$$
(ii) $P^{-1}$ exists and $P^{-1} = P^t$.

Gaussian elimination with row interchanges then becomes $PA = LU$, so that
$$A = P^{-1}LU = (P^t L)U.$$

Diagonally Dominant Matrices

Definition. The $n \times n$ matrix $A$ is said to be strictly diagonally dominant when
$$|a_{ii}| > \sum_{j \neq i} |a_{ij}| \quad \text{for each } i = 1, \ldots, n.$$

Theorem. A strictly diagonally dominant matrix $A$ is nonsingular, Gaussian elimination can be performed on $A\mathbf{x} = \mathbf{b}$ without row interchanges, and the computations will be stable.

Positive Definite Matrices

Definition. A matrix $A$ is positive definite if it is symmetric and if $\mathbf{x}^t A\mathbf{x} > 0$ for every $\mathbf{x} \neq \mathbf{0}$.

Theorem. If $A$ is an $n \times n$ positive definite matrix, then
(a) $A$ has an inverse
(b) $a_{ii} > 0$ for each $i$
(c) $\max_{1 \le k, j \le n} |a_{kj}| \le \max_{1 \le i \le n} |a_{ii}|$
(d) $(a_{ij})^2 < a_{ii}a_{jj}$ for each $i \neq j$

Principal Submatrices

Definition. A leading principal submatrix of a matrix $A$ is a matrix of the form
$$A_k = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1k} \\
a_{21} & a_{22} & \cdots & a_{2k} \\
\vdots & \vdots & & \vdots \\
a_{k1} & a_{k2} & \cdots & a_{kk}
\end{bmatrix}$$
for some $1 \le k \le n$.

Theorem. A symmetric matrix $A$ is positive definite if and only if each of its leading principal submatrices has a positive determinant.

SPD and Gaussian Elimination

Theorem. The symmetric matrix $A$ is positive definite if and only if Gaussian elimination without row interchanges can be performed on $A\mathbf{x} = \mathbf{b}$ with all pivot elements positive, and the computations are then stable.
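To make the last two matrix classes concrete, the following sketch (Python/NumPy, with hypothetical helper names, not part of the slides) checks strict diagonal dominance row by row and applies the leading-principal-minor criterion for positive definiteness stated above.

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum over j != i of |a_ij| for every row i."""
    diag = np.abs(np.diag(A))
    off_diag = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off_diag))

def is_positive_definite(A):
    """Symmetric A is positive definite iff every leading principal
    submatrix A_k has a positive determinant (criterion stated above)."""
    if not np.allclose(A, A.T):
        return False
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0.0 for k in range(1, n + 1))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(is_strictly_diagonally_dominant(A))  # True
print(is_positive_definite(A))             # True
```

For the example matrix, the leading principal minors are 4, 11, and 18, all positive, so both tests succeed, and by the theorems above Gaussian elimination on this matrix is stable without row interchanges.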