Linear Algebra, Gaussian Elimination, Norms, Numerical Stability


Thomas Finley, [email protected]

Linear Algebra

A subspace is a set S ⊆ R^n such that 0 ∈ S and for all x, y ∈ S and α, β ∈ R, αx + βy ∈ S.

The span of {v_1, ..., v_k} is the set of all vectors in R^n that are linear combinations of v_1, ..., v_k. A basis B of a subspace S, B = {v_1, ..., v_k} ⊂ S, has Span(B) = S with all v_i linearly independent. The dimension of S is |B| for a basis B of S. For subspaces S, T with S ⊆ T, dim(S) ≤ dim(T), and further if dim(S) = dim(T), then S = T.

A linear transformation T : R^n → R^m has T(αx + βy) = αT(x) + βT(y) for all x, y ∈ R^n and α, β ∈ R. There exists A ∈ R^(m×n) such that T(x) ≡ Ax for all x. For two linear transformations T : R^n → R^m and S : R^m → R^p, S ∘ T ≡ S(T(x)) is a linear transformation, and (T(x) ≡ Ax) ∧ (S(y) ≡ By) ⇒ (S ∘ T)(x) ≡ BAx.

A matrix's row space is the span of its rows, its column space or range is the span of its columns, and its rank is the dimension of either of these spaces. For A ∈ R^(m×n), rank(A) ≤ min(m, n). A has full row (or column) rank if rank(A) = m (or n). If rank(A) = n, then A^T A is symmetric positive definite (SPD).

A diagonal matrix D ∈ R^(n×n) has d_{j,k} = 0 for j ≠ k. The identity matrix I is the diagonal matrix with i_{j,j} = 1. The upper (or lower) bandwidth of A is max |i − j| among i, j with i ≥ j (or i ≤ j) such that a_{i,j} ≠ 0. A matrix with lower bandwidth 1 is upper Hessenberg.

For A, B ∈ R^(n×n), B is A's inverse if AB = BA = I. If such a B exists, A is invertible or nonsingular, and B = A^(−1) = [x_1, ..., x_n] where Ax_i = e_i. For A ∈ R^(n×n) the following are equivalent: A is nonsingular; rank(A) = n; Ax = b has a solution x for any b; Ax = 0 implies x = 0.

The nullspace of A ∈ R^(m×n) is {x ∈ R^n : Ax = 0}. Range(A) and Nullspace(A^T) are orthogonal complements, i.e., x ∈ Range(A) and y ∈ Nullspace(A^T) imply x^T y = 0, and every p ∈ R^m can be written p = x + y for unique such x and y.

For a permutation matrix P ∈ R^(n×n), PA permutes the rows of A, AP the columns of A. P^(−1) = P^T.

Norms

A vector norm ‖·‖ : R^n → R satisfies:
1. ‖x‖ ≥ 0, and ‖x‖ = 0 ⇔ x = 0.
2. ‖γx‖ = |γ| · ‖x‖ for all γ ∈ R and all x ∈ R^n.
3. ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x, y ∈ R^n.

Common norms include:
1. ‖x‖_1 = |x_1| + |x_2| + ··· + |x_n|
2. ‖x‖_2 = (x_1^2 + x_2^2 + ··· + x_n^2)^(1/2)
3. ‖x‖_∞ = lim_{p→∞} (|x_1|^p + ··· + |x_n|^p)^(1/p) = max_{i=1..n} |x_i|

An induced matrix norm is ‖A‖ = sup_{x≠0} ‖Ax‖ / ‖x‖. It satisfies the three properties of norms. For all x ∈ R^n and A ∈ R^(m×n), ‖Ax‖ ≤ ‖A‖ ‖x‖. Further, ‖AB‖ ≤ ‖A‖ ‖B‖, called submultiplicativity, and a^T b ≤ ‖a‖_2 ‖b‖_2, called the Cauchy-Schwarz inequality.
1. ‖A‖_∞ = max_{i=1,...,m} Σ_{j=1}^n |a_{i,j}| (max row sum).
2. ‖A‖_1 = max_{j=1,...,n} Σ_{i=1}^m |a_{i,j}| (max column sum).
3. ‖A‖_2 is hard: it takes O(n^3), not O(n^2), operations.
4. ‖A‖_F = (Σ_{i=1}^m Σ_{j=1}^n a_{i,j}^2)^(1/2). ‖·‖_F often replaces ‖·‖_2.

Basic Linear Algebra Subroutines

0. Scalar ops, like x + y. O(1) flops, O(1) data.
1. Vector ops, like y = ax + y. O(n) flops, O(n) data.
2. Matrix-vector ops, like the rank-one update A = A + xy^T. O(n^2) flops, O(n^2) data.
3. Matrix-matrix ops, like C = C + AB. O(n^2) data, O(n^3) flops.

Use the highest BLAS level possible: the operators are architecture-tuned, e.g., data is processed in cache-sized bites.

Orthogonal Matrices

For Q ∈ R^(n×n), these statements are equivalent:
1. Q^T Q = QQ^T = I (i.e., Q is orthogonal).
2. The ‖·‖_2 of each row and column of Q is 1, and the inner product of any row (or column) with another is 0.
3. For all x ∈ R^n, ‖Qx‖_2 = ‖x‖_2.

A matrix Q ∈ R^(m×n) with m > n has orthonormal columns if its columns are orthonormal; then Q^T Q = I. The product of orthogonal matrices is orthogonal. For orthogonal Q, ‖QA‖_2 = ‖A‖_2 and ‖AQ‖_2 = ‖A‖_2.

Determinant

The determinant det : R^(n×n) → R satisfies:
1. det(AB) = det(A) det(B).
2. det(A) = 0 iff A is singular.
3. det(L) = ℓ_{1,1} ℓ_{2,2} ··· ℓ_{n,n} for triangular L.
4. det(A) = det(A^T).

To compute det(A), factor A = PLU; then det(P) = (−1)^s where s is the number of row swaps, det(L) = 1, and det(U) is the product of U's diagonal. When computing det(U), watch out for overflow!
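To make that overflow warning concrete, here is a minimal sketch (assuming NumPy and SciPy, neither of which the original names) that evaluates det(A) from A = PLU in sign/log-magnitude form, so the diagonal product never overflows:

```python
import numpy as np
from scipy.linalg import lu

def sign_logdet(A):
    """Return (sign, log|det A|) via A = PLU; det(L) = 1, det(P) = (-1)^s."""
    P, L, U = lu(A)                          # SciPy returns A = P @ L @ U
    d = np.diag(U)
    sign = int(round(np.linalg.det(P))) * np.prod(np.sign(d))
    return sign, np.sum(np.log(np.abs(d)))   # sum of logs instead of a product

A = 100.0 * np.eye(200)      # det(A) = 100^200, which overflows a double
print(sign_logdet(A))        # (1.0, 921.034...) = (+1, 200 log 100)
```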
Linear Least Squares

Suppose we have points (u_1, v_1), ..., (u_5, v_5) through which we want to fit a quadratic curve au^2 + bu + c. We want to solve

[ u_1^2  u_1  1 ] [ a ]   [ v_1 ]
[      ...      ] [ b ] = [ ... ]
[ u_5^2  u_5  1 ] [ c ]   [ v_5 ]

This is overdetermined, so an exact solution is out. Instead, find the least squares solution x that minimizes ‖Ax − b‖_2.

For the method of normal equations, solve for x in A^T Ax = A^T b by using Cholesky factorization. This takes mn^2 + n^3/3 + O(mn) flops. It is conditionally but not backwards stable: forming A^T A squares the condition number, κ_2(A^T A) = κ_2(A)^2.

Alternatively, factor A = QR. Let c = [c_1; c_2] = Q^T b, where c_1 holds the first n entries. The least squares solution is x = R_1^(−1) c_1.

If rank(A) = r and r < n (rank deficient), factor A = UΣV^T, and let y = V^T x and c = U^T b. Then

min ‖Ax − b‖_2^2 = min Σ_{i=1}^r (σ_i y_i − c_i)^2 + Σ_{i=r+1}^m c_i^2,

so y_i = c_i / σ_i. For i = r + 1 : n, y_i is arbitrary.

Numerical Stability

Six sources of error in scientific computing: modeling errors, measurement or data errors, blunders, discretization or truncation errors, convergence tolerance, and rounding errors.

A floating point number has the form ±d_1.d_2 d_3 ··· d_t × β^e, with a sign, a mantissa of t digits d_1.d_2 ··· d_t, a base β, and an exponent e. For IEEE single precision, t = 24 and e ∈ {−126, ..., 127}; for double precision, t = 53 and e ∈ {−1022, ..., 1023}.

The relative error in x̂ approximating x is |x̂ − x| / |x|. Unit roundoff or machine epsilon is ε_mach = β^(1−t). Arithmetic operations have relative error bounded by ε_mach. E.g., consider z = x − y with inputs x, y. This program has three roundoff errors: ẑ = ((1 + δ_1)x − (1 + δ_2)y)(1 + δ_3), where δ_1, δ_2, δ_3 ∈ [−ε_mach, ε_mach], and

|z − ẑ| / |z| = |(δ_1 + δ_3)x − (δ_2 + δ_3)y + O(ε_mach^2)| / |x − y|.

The bad case is δ_1 = ε_mach, δ_2 = −ε_mach, δ_3 = 0, which gives

|z − ẑ| / |z| = ε_mach |x + y| / |x − y|.

The result is inaccurate if |x + y| ≫ |x − y|, which is called catastrophic cancellation.

Conditioning & Backwards Stability

A problem instance is ill conditioned if the solution is sensitive to perturbations of the data. For example, sin 1 is well conditioned, but sin 12392193 is ill conditioned.
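To see the κ_2(A^T A) = κ_2(A)^2 penalty alongside the two least-squares routes above, here is a small sketch (assuming NumPy; the five data points are invented for illustration):

```python
import numpy as np

u = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # invented sample points
v = 3*u**2 - 2*u + 1                         # exact quadratic, so x ~ [3, -2, 1]
A = np.vander(u, 3)                          # rows [u_i^2, u_i, 1]

# Method of normal equations: conditionally stable, condition number squared.
x_ne = np.linalg.solve(A.T @ A, A.T @ v)

# QR route: A = Q R_1 (reduced), then x = R_1^{-1} Q^T v.
Q, R1 = np.linalg.qr(A)
x_qr = np.linalg.solve(R1, Q.T @ v)

print(x_ne, x_qr)                                     # both ~ [3, -2, 1]
print(np.linalg.cond(A)**2, np.linalg.cond(A.T @ A))  # essentially equal
```

On this benign 5×3 system the two answers agree; the squared condition number bites when κ_2(A) is already large, which is exactly when the QR route keeps digits the normal equations lose.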
QR-factorization

For any A ∈ R^(m×n) with m ≥ n, we can factor A = QR, where Q ∈ R^(m×m) is orthogonal and R = [R_1; 0] ∈ R^(m×n) with R_1 ∈ R^(n×n) upper triangular. rank(A) = n iff R_1 is invertible. Q's first n (or last m − n) columns form an orthonormal basis for span(A) (or nullspace(A^T)).

A Householder reflection is H = I − 2vv^T / (v^T v). H is symmetric and orthogonal. Explicit Householder QR-factorization is:

1: for k = 1 : n do
2:   v = A(k : m, k) ± ‖A(k : m, k)‖_2 e_1
3:   A(k : m, k : n) = (I − 2vv^T / (v^T v)) A(k : m, k : n)
4: end for

We get H_n H_{n−1} ··· H_1 A = R, so Q = H_1 H_2 ··· H_n. This takes 2mn^2 − (2/3)n^3 + O(mn) flops. Givens rotations require 50% more flops; they are preferable for sparse A.

Gram-Schmidt produces a skinny/reduced QR-factorization A = Q_1 R_1, where Q_1 ∈ R^(m×n) has orthonormal columns. The Gram-Schmidt algorithm comes in a left looking and a right looking variant:

Left looking:
1: for k = 1 : n do
2:   q_k = a_k
3:   for j = 1 : k − 1 do
4:     R(j, k) = q_j^T q_k
5:     q_k = q_k − R(j, k) q_j
6:   end for
7:   R(k, k) = ‖q_k‖_2
8:   q_k = q_k / R(k, k)
9: end for

Right looking:
1: Q = A
2: for k = 1 : n do
3:   R(k, k) = ‖q_k‖_2
4:   q_k = q_k / R(k, k)
5:   for j = k + 1 : n do
6:     R(k, j) = q_k^T q_j
7:     q_j = q_j − R(k, j) q_k
8:   end for
9: end for

Singular Value Decomposition

For any A ∈ R^(m×n), we can express A = UΣV^T such that U ∈ R^(m×m) and V ∈ R^(n×n) are orthogonal, and Σ = diag(σ_1, ..., σ_p) ∈ R^(m×n) where p = min(m, n) and σ_1 ≥ σ_2 ≥ ··· ≥ σ_p ≥ 0. The σ_i are singular values.
1. Matrix 2-norm: ‖A‖_2 = σ_1.
2. The condition number κ_2(A) = ‖A‖_2 ‖A^(−1)‖_2 = σ_1 / σ_n, or the rectangular condition number κ_2(A) = σ_1 / σ_min(m,n). Note that κ_2(A^T A) = κ_2(A)^2.
3. For a rank k approximation to A, let Σ_k = diag(σ_1, ..., σ_k, 0^T). Then A_k = UΣ_k V^T, rank(A_k) ≤ k, and rank(A_k) = k iff σ_k > 0. Among matrices of rank k or lower, A_k minimizes ‖A − A_k‖_2 = σ_{k+1}.

Gaussian Elimination

GE produces a factorization A = LU; GE with partial pivoting (GEPP) produces PA = LU.
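As a hedged sketch of that last line (assuming SciPy, whose lu_factor and lu_solve wrap the LAPACK GEPP routines), the partial pivoting in PA = LU is what keeps elimination stable when a pivot is tiny:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Without pivoting the multiplier 1/1e-20 = 1e20 would swamp row 2, and
# back substitution would return x ~ [0, 1] instead of the true ~ [1, 1].
A = np.array([[1e-20, 1.0],
              [1.0,   1.0]])
b = np.array([1.0, 2.0])

lu_piv = lu_factor(A)    # GEPP: factors PA = LU, recording the row swaps
x = lu_solve(lu_piv, b)  # forward and back substitution with L and U
print(x)                 # ~ [1.0, 1.0], the accurate solution
```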