Lectures on Numerical Linear Algebra

Total pages: 16

File type: PDF, size: 1020 KB

Lectures on Numerical Linear Algebra
Yunkai Zhou
Department of Mathematics, Southern Methodist University, Dallas, Texas 75075
[email protected]
Spring, 2015

Acknowledgements
The lecture slides benefit from the following resources:
- Prof. Per-Olof Persson's course materials for MIT 18.335 (thanks to Per-Olof for the TeX files of the slides)
- Prof. Yousef Saad's book on iterative methods (thanks to Yousef for the TeX files of course materials)
- Several well-known textbooks on NLA, by J. Demmel, Trefethen and Bau, G. W. Stewart, C. D. Meyer, and Golub and Van Loan
- Several other books on matrix analysis and matrix computations
- Several books/papers on the theory and applications of linear algebra
- The Wikipedia website
- Open source software: Linux, LaTeX, beamer, pstricks
The instructor gratefully acknowledges NSF grants CMMI-0727194 and OCI-0749074.
Y. Zhou, Math-6316/CS-7366, SMU

Lecture Outline
- Some background on linear algebra: elementary matrices; vector norms, matrix norms; inner product, orthogonality, orthogonal matrices
- Matrix decompositions: overview
- Singular Value Decomposition (SVD)
- Schur decomposition; variational principle; pseudo-inverse (generalized inverse)
- QR decomposition: projection, reflection, rotation
- Linear least squares problems
- Linear equations: GE/LU, Cholesky
- Conditioning and stability
- Floating point arithmetic
- Eigenvalue problems: concepts and algorithms

Basic Linear Algebra
Spaces: R^n, C^n, R^{n×n}, C^{n×n}, R^{m×n}, C^{m×n} (by default, R^n = R^{n×1} and C^n = C^{n×1}).
(Real: R^n, R^{n×n}, R^{m×n}; complex: C^n, C^{n×n}, C^{m×n})
Vectors:
- v ∈ R^n (length-n column real vector)
- v ∈ C^n (length-n column complex vector)
- w ∈ R^{1×n} (length-n row real vector)
- w ∈ C^{1×n} (length-n row complex vector)
(We use the column vector as the default, so "a vector" means a column vector.)

Special vectors:
- The length-n basis vector e_i: all elements equal to 0 except the i-th element, which equals 1.
- The length-n vector of all ones: 1 = [1, 1, ..., 1]^T = sum_{i=1}^{n} e_i.

Matrices (element-wise): an m×n matrix A ∈ R^{m×n} (or A ∈ C^{m×n}) is written A = [a_{i,j}], where a_{i,j} ∈ R (or C), i = 1, 2, ..., m, j = 1, 2, ..., n.

Matrices (vector-wise): A = [a_1, a_2, ..., a_n], where the columns a_i ∈ R^m (or C^m), i = 1, 2, ..., n.

Transpose: A = [a_{i,j}]_{m×n}  <=>  A^T = [a_{j,i}]_{n×m}. For example,

    A = [ a11 a12              A^T = [ a11 a21 a31
          a21 a22      <=>             a12 a22 a32 ]
          a31 a32 ]
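The special vectors and the transpose/adjoint definitions above are easy to check numerically. A minimal sketch (NumPy is my assumption here; the slides themselves use MATLAB-style notation):

```python
import numpy as np

n = 4
E = np.eye(n)                 # column E[:, i] is the basis vector e_{i+1}
ones = E.sum(axis=1)          # 1 = e_1 + e_2 + ... + e_n
assert np.array_equal(ones, np.ones(n))

# Transpose vs. adjoint on a small complex 3x2 matrix, like the example above
A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j],
              [4 - 3j, 5 + 5j]])
assert A.T.shape == (2, 3)
assert np.array_equal(A.conj().T, A.T.conj())   # A^H: conjugate of the transpose
```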
Adjoint (conjugate transpose): A = [a_{i,j}]_{m×n}  <=>  A^H = A^* = [conj(a_{j,i})]_{n×m}. For the same 3×2 example, A^H is the 2×3 matrix whose (j,i) entry is the complex conjugate of a_{i,j}.

A is symmetric if A = A^T (usually this refers to "real" symmetric; it can also be "complex" symmetric). A is Hermitian if A = A^H (or A = A^*).

Vector-wise notation: a ∈ C^m  <=>  a^T ∈ C^{1×m}, and A = [a_1, a_2, ..., a_n] ∈ C^{m×n}  <=>  A^T = [a_1^T; a_2^T; ...; a_n^T] ∈ C^{n×m}.

Matrix-vector product: let b = (b_i) ∈ R^m, A = (a_{i,j}) ∈ R^{m×n}, x = (x_i) ∈ R^n, with b = Ax.
- Element-wise: b_i = sum_{j=1}^{n} a_{i,j} x_j, i = 1, 2, ..., m.
- Vector-wise: b = sum_{j=1}^{n} a_j x_j = x_1 a_1 + x_2 a_2 + ... + x_n a_n, so b is a linear combination of the columns of A.

Any A ∈ C^{m×n} is a linear mapping from C^n to C^m, meaning that A(x + y) = Ax + Ay for all x, y ∈ C^n, and A(αx) = αAx for all α ∈ C. Conversely, any linear mapping between finite dimensional spaces can be expressed as a matrix-vector product.

Any column of A can be picked out by choosing a specific x, e.g. a_j = A(:, j) = A e_j. Any row of A can be picked out by a vector-matrix product, e.g. A(i, :) = e_i^T A.

Matrix-matrix product: let A ∈ R^{m×n}, B ∈ R^{n×k}, C ∈ R^{m×k}, with C = AB. Vector-wise (compare columns in C = AB): [c_1, c_2, ..., c_k] = A[b_1, b_2, ..., b_k] implies c_j = A b_j = sum_{ℓ=1}^{n} a_ℓ b_{ℓ,j}. Each c_j is a linear combination of the columns of A.
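The column-combination view of b = Ax, and the e_i tricks for extracting rows and columns, can be verified directly; a small sketch (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)

# b = Ax equals the linear combination x_1 a_1 + ... + x_n a_n
b = A @ x
b_cols = sum(x[j] * A[:, j] for j in range(4))
assert np.allclose(b, b_cols)

# a_j = A e_j picks out column j; e_i^T A picks out row i
e3 = np.eye(4)[:, 2]
assert np.allclose(A @ e3, A[:, 2])
e2 = np.eye(3)[:, 1]
assert np.allclose(e2 @ A, A[1, :])
```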
Basic Linear Algebra Subroutines (BLAS)
- Standardized interface for simple vector and matrix operations
- The building block of LAPACK (such as the one used in Matlab)
- Optimized implementations for specific machines provided by manufacturers

History:
- BLAS1 (1970s): vector operations, e.g. β = x^T y, y = βx + y
- BLAS2 (mid 1980s): matrix-vector operations, e.g. y = Ax + y
- BLAS3 (late 1980s): matrix-matrix operations, e.g. C = AB + C

Careful cache-aware implementations give close to peak performance for BLAS3 operations. High level algorithms (Gaussian elimination, etc.) use BLAS but no other machine dependent code, for both performance and portability.

Memory Hierarchy and BLAS
Modern computers use a memory hierarchy, from fast/expensive to cheap/slow: registers, L1 cache, L2 cache (L3 cache, ...), local memory, remote memory, secondary memory. Fast algorithms perform many operations on each memory block to minimize memory access (cache reuse). Only BLAS3 has the potential for very high performance.

BLAS level            | Memory refs | Flops | Flops per memory ref
Level 1 (y = βx + y)  | 3n          | 2n    | 2/3
Level 2 (y = Ax + y)  | n^2         | 2n^2  | 2
Level 3 (C = AB + C)  | 4n^2        | 2n^3  | n/2

Flop: floating point operation. Here each +, −, ∗, /, √ counts as one flop, with no distinction between real and complex.

BLAS Implementations
Vendor provided:
- Intel Math Kernel Library (MKL)
- AMD Core Math Library (ACML)
- Sun Performance Library
- SGI Scientific Computing Software Library
Automatically Tuned Linear Algebra Software (ATLAS):
- Analyzes hardware to produce BLAS libraries for any platform
- Used in MATLAB; precompiled libraries freely available
- Sometimes outperforms vendor libraries
GOTO BLAS (mainly for Intel processors):
- Manually optimized assembly code; the fastest implementation for Intel processors
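The flops-per-memory-reference column of the table follows directly from the counts; a quick sketch reproducing it:

```python
# Flops per memory reference for the three BLAS levels, using the
# operation/traffic counts from the table above.
def blas_ratios(n):
    counts = {
        1: (3 * n, 2 * n),           # y = b*x + y: read x, y; write y
        2: (n * n, 2 * n * n),       # y = A@x + y: dominated by reading A
        3: (4 * n * n, 2 * n ** 3),  # C = A@B + C: read A, B, C; write C
    }
    return {lvl: flops / mem for lvl, (mem, flops) in counts.items()}

r = blas_ratios(1000)
assert abs(r[1] - 2 / 3) < 1e-15
assert r[2] == 2.0
assert r[3] == 500.0   # n/2: only BLAS3 reuses each memory block many times
```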
Examples of matrix-matrix product. Outer product (rank 1): for a = (a_i) ∈ C^m and b = (b_i) ∈ C^n,

    a b^H = [a conj(b_1), a conj(b_2), ..., a conj(b_n)] = (a_i conj(b_j)) ∈ C^{m×n}.

Outer product (rank ≤ k): for U = [u_j] ∈ C^{m×k} and V = [v_j] ∈ C^{n×k} (u_j ∈ C^m, v_j ∈ C^n),

    U V^H = [u_1, u_2, ..., u_k] [v_1^H; v_2^H; ...; v_k^H] = sum_{j=1}^{k} u_j v_j^H ∈ C^{m×n}.

The rank-k SVD is a representative rank-k outer product: A = U Σ V^H = sum_{j=1}^{k} σ_j u_j v_j^H.

More examples of matrix-matrix products, for A ∈ C^{m×n}:
- Right multiply by an upper triangular matrix, B = AR, with R = (r_{ij}) ∈ C^{n×n} upper triangular: b_j = sum_{ℓ=1}^{j} a_ℓ r_{ℓj}, so b_j is a linear combination of only the first j columns of A.
- Right multiply by a lower triangular matrix, B = AL, L ∈ C^{n×n}: b_j is a linear combination of only the last n − j + 1 columns of A.
- Left multiply by an upper triangular matrix, B = RA, R ∈ C^{m×m}: the i-th row of B is a linear combination of the last m − i + 1 rows of A.
- Left multiply by a lower triangular matrix, B = LA, L ∈ C^{m×m}: the i-th row of B is a linear combination of only the first i rows of A.

Basic Linear Algebra: Range, Nullspace
The range, or column space, of A = [a_1, a_2, ..., a_n] ∈ C^{m×n} is
range(A) = span{a_1, a_2, ..., a_n} = all linear combinations of the columns of A = {Ax | x ∈ C^n}.
The nullspace of A ∈ C^{m×n} (also written as the kernel space, ker(A)) is
null(A) = {x | Ax = 0}.
Relation between range(A^H) and null(A): null(A) = (range(A^H))^⊥. Equivalently, the rank-nullity theorem: rank(A) + dim(null(A)) = n.
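Both the rank-k outer-product form of the SVD and the rank-nullity relation can be illustrated in a few lines (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
# Build a 6x4 matrix of rank 2: A = B C with B 6x2, C 2x4
B = rng.standard_normal((6, 2))
C = rng.standard_normal((2, 4))
A = B @ C

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
assert r == 2

# Rank-k SVD as a sum of rank-1 outer products: A = sum_j s_j u_j v_j^H
Ak = sum(s[j] * np.outer(U[:, j], Vh[j, :]) for j in range(r))
assert np.allclose(Ak, A)

# The rows of Vh beyond index r span null(A); rank-nullity: rank + dim(null) = n
N = Vh[r:, :].T
assert r + N.shape[1] == 4
assert np.allclose(A @ N, 0)
```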
Recommended publications
  • Improving the Accuracy of Computed Eigenvalues and Eigenvectors*
SIAM J. NUMER. ANAL. © 1983 Society for Industrial and Applied Mathematics Vol. 20, No. 1, February 1983 0036-1429/83/2001-0002 $01.25/0 IMPROVING THE ACCURACY OF COMPUTED EIGENVALUES AND EIGENVECTORS* J. J. DONGARRA, C. B. MOLER AND J. H. WILKINSON Abstract. This paper describes and analyzes several variants of a computational method for improving the numerical accuracy of, and for obtaining numerical bounds on, matrix eigenvalues and eigenvectors. The method, which is essentially a numerically stable implementation of Newton's method, may be used to "fine tune" the results obtained from standard subroutines such as those in EISPACK [Lecture Notes in Computer Science 6, 51, Springer-Verlag, Berlin, 1976, 1977]. Extended precision arithmetic is required in the computation of certain residuals. Introduction. The calculation of an eigenvalue λ and the corresponding eigenvector x (hereafter referred to as an eigenpair) of a matrix A involves the solution of the nonlinear system of equations (A − λI)x = 0. Starting from approximations λ̃ and x̃, a sequence of iterates may be determined using Newton's method or variants of it. The conditions on λ̃ and x̃ guaranteeing convergence have been treated extensively in the literature. For a particularly lucid account the reader is referred to the book by Rall [3]. In a recent paper Wilkinson [7] describes an algorithm for determining error bounds for a computed eigenpair based on these mathematical concepts. Considerations of numerical stability were an essential feature of that paper and indeed were its main raison d'être. In general this algorithm provides an improved eigenpair and error bounds for it; unless the eigenpair is very ill conditioned, the improved eigenpair is usually correct to the precision of the computation used in the main body of the algorithm.
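As a rough illustration of the idea (not the paper's algorithm, which computes residuals in extended precision and derives error bounds), Newton's method can be applied to the system (A − λI)x = 0 together with the normalization x^T x = 1:

```python
import numpy as np

def refine_eigenpair(A, lam, x, iters=5):
    """Newton's method on F(x, lam) = [(A - lam*I) x ; (x^T x - 1)/2].

    A simplified sketch in working precision; the paper's method needs
    extended-precision residuals for genuinely improved accuracy.
    """
    n = A.shape[0]
    for _ in range(iters):
        R = A - lam * np.eye(n)
        # Bordered Jacobian: d/dx -> A - lam*I, d/dlam -> -x; last row from x^T x = 1
        J = np.block([[R, -x[:, None]],
                      [x[None, :], np.zeros((1, 1))]])
        F = np.concatenate([R @ x, [(x @ x - 1.0) / 2.0]])
        d = np.linalg.solve(J, -F)
        x, lam = x + d[:n], lam + d[n]
    return lam, x

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]])
w, V = np.linalg.eigh(A)
# Perturb an exact eigenpair, then refine it back
lam, x = refine_eigenpair(A, w[0] + 1e-2, V[:, 0] + 1e-2)
assert np.linalg.norm(A @ x - lam * x) < 1e-10
```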
  • Numerical Analysis Notes for Math 575A
Numerical Analysis Notes for Math 575A William G. Faris Program in Applied Mathematics University of Arizona Fall 1992 Contents 1 Nonlinear equations 5 1.1 Introduction . 5 1.2 Bisection . 5 1.3 Iteration . 9 1.3.1 First order convergence . 9 1.3.2 Second order convergence . 11 1.4 Some C notations . 12 1.4.1 Introduction . 12 1.4.2 Types . 13 1.4.3 Declarations . 14 1.4.4 Expressions . 14 1.4.5 Statements . 16 1.4.6 Function definitions . 17 2 Linear Systems 19 2.1 Shears . 19 2.2 Reflections . 24 2.3 Vectors and matrices in C . 27 2.3.1 Pointers in C . 27 2.3.2 Pointer Expressions . 28 3 Eigenvalues 31 3.1 Introduction . 31 3.2 Similarity . 31 3.3 Orthogonal similarity . 34 3.3.1 Symmetric matrices . 34 3.3.2 Singular values . 34 3.3.3 The Schur decomposition . 35 3.4 Vector and matrix norms . 37 3.4.1 Vector norms . 37 3.4.2 Associated matrix norms . 37 3.4.3 Singular value norms . 38 3.4.4 Eigenvalues and norms . 39 3.4.5 Condition number . 39 3.5 Stability . 40 3.5.1 Inverses . 40 3.5.2 Iteration . 40 3.5.3 Eigenvalue location . 41 3.6 Power method . 43 3.7 Inverse power method . 44 3.8 Power method for subspaces . 44 3.9 QR method . 46 3.10 Finding eigenvalues . 47 4 Nonlinear systems 49 4.1 Introduction . 49 4.2 Degree . 51 4.2.1 Brouwer fixed point theorem .
  • Numerically Stable Generation of Correlation Matrices and Their Factors ∗
BIT 0006-3835/00/4004-0640 $15.00 2000, Vol. 40, No. 4, pp. 640–651 © Swets & Zeitlinger NUMERICALLY STABLE GENERATION OF CORRELATION MATRICES AND THEIR FACTORS* PHILIP I. DAVIES and NICHOLAS J. HIGHAM Department of Mathematics, University of Manchester, Manchester, M13 9PL, England email: [email protected], [email protected] Abstract. Correlation matrices—symmetric positive semidefinite matrices with unit diagonal—are important in statistics and in numerical linear algebra. For simulation and testing it is desirable to be able to generate random correlation matrices with specified eigenvalues (which must be nonnegative and sum to the dimension of the matrix). A popular algorithm of Bendel and Mickey takes a matrix having the specified eigenvalues and uses a finite sequence of Givens rotations to introduce 1s on the diagonal. We give improved formulae for computing the rotations and prove that the resulting algorithm is numerically stable. We show by example that the formulae originally proposed, which are used in certain existing Fortran implementations, can lead to serious instability. We also show how to modify the algorithm to generate a rectangular matrix with columns of unit 2-norm. Such a matrix represents a correlation matrix in factored form, which can be preferable to representing the matrix itself, for example when the correlation matrix is nearly singular to working precision. Key words: Random correlation matrix, Bendel–Mickey algorithm, eigenvalues, singular value decomposition, test matrices, forward error bounds, relative error bounds, IMSL, NAG Library, Jacobi method. AMS subject classification: 65F15. 1 Introduction. An important class of symmetric positive semidefinite matrices is those with unit diagonal.
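The starting point the abstract describes (a symmetric matrix having the specified eigenvalues) can be sketched by conjugating a diagonal matrix with a random orthogonal factor; the Givens-rotation stage that then introduces the unit diagonal is omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
eigs = np.array([0.2, 0.8, 1.2, 1.8])   # nonnegative and summing to n, as required

# Random orthogonal Q via QR of a Gaussian matrix
Q, R = np.linalg.qr(rng.standard_normal((n, n)))
Q = Q * np.sign(np.diag(R))             # fix column signs (Haar-distributed Q)

A = Q @ np.diag(eigs) @ Q.T             # symmetric PSD with the chosen spectrum
assert np.allclose(A, A.T)
assert np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(eigs))
assert np.isclose(np.trace(A), n)       # trace equals the sum of the eigenvalues
```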
  • Model Problems in Numerical Stability Theory for Initial Value Problems*
SIAM REVIEW © 1994 Society for Industrial and Applied Mathematics Vol. 36, No. 2, pp. 226-257, June 1994 MODEL PROBLEMS IN NUMERICAL STABILITY THEORY FOR INITIAL VALUE PROBLEMS* A. M. STUART AND A. R. HUMPHRIES Abstract. In the past, numerical stability theory for initial value problems in ordinary differential equations has been dominated by the study of problems with simple dynamics; this has been motivated by the need to study error propagation mechanisms in stiff problems, a question modeled effectively by contractive linear or nonlinear problems. While this has resulted in a coherent and self-contained body of knowledge, it has never been entirely clear to what extent this theory is relevant for problems exhibiting more complicated dynamics. Recently there have been a number of studies of numerical stability for wider classes of problems admitting more complicated dynamics. This on-going work is unified and, in particular, striking similarities between this new developing stability theory and the classical linear and nonlinear stability theories are emphasized. The classical theories of A, B and algebraic stability for Runge-Kutta methods are briefly reviewed; the dynamics of solutions within the classes of equations to which these theories apply (linear decay and contractive problems) are studied. Four other categories of equations (gradient, dissipative, conservative and Hamiltonian systems) are considered. Relationships and differences between the possible dynamics in each category, which range from multiple competing equilibria to chaotic solutions, are highlighted. Runge-Kutta schemes that preserve the dynamical structure of the underlying problem are sought, and indications of a strong relationship between the developing stability theory for these new categories and the classical existing stability theory for the older problems are given.
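For the linear decay problems the abstract mentions, stability of an explicit Runge-Kutta method reduces to the modulus of its amplification factor on the test equation y' = λy. A small sketch for the classical fourth-order method (my choice of example, not taken from the survey):

```python
# Amplification factor R(z) of classical RK4 on y' = lam*y, with z = h*lam:
# one step maps y -> R(z) y, and the step is stable when |R(z)| <= 1.
def rk4_amplification(z):
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

assert rk4_amplification(0.0) == 1.0
assert abs(rk4_amplification(-1.0)) < 1   # step size inside the stability interval
assert abs(rk4_amplification(-3.0)) > 1   # too large a step: unstable
```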
  • Geometric Algebra and Covariant Methods in Physics and Cosmology
GEOMETRIC ALGEBRA AND COVARIANT METHODS IN PHYSICS AND COSMOLOGY Antony M Lewis Queens' College and Astrophysics Group, Cavendish Laboratory A dissertation submitted for the degree of Doctor of Philosophy in the University of Cambridge. September 2000 Updated 2005 with typo corrections Preface This dissertation is the result of work carried out in the Astrophysics Group of the Cavendish Laboratory, Cambridge, between October 1997 and September 2000. Except where explicit reference is made to the work of others, the work contained in this dissertation is my own, and is not the outcome of work done in collaboration. No part of this dissertation has been submitted for a degree, diploma or other qualification at this or any other university. The total length of this dissertation does not exceed sixty thousand words. Antony Lewis September, 2000 iii Acknowledgements It is a pleasure to thank all those people who have helped me out during the last three years. I owe a great debt to Anthony Challinor and Chris Doran who provided a great deal of help and advice on both general and technical aspects of my work. I thank my supervisor Anthony Lasenby who provided much inspiration, guidance and encouragement without which most of this work would never have happened. I thank Sarah Bridle for organizing the useful lunchtime CMB discussion group from which I learnt a great deal, and the other members of the Cavendish Astrophysics Group for interesting discussions, in particular Pia Mukherjee, Carl Dolby, Will Grainger and Mike Hobson. I gratefully acknowledge financial support from PPARC. v Contents Preface iii Acknowledgements v 1 Introduction 1 2 Geometric Algebra 5 2.1 Definitions and basic properties .
  • Stability Analysis for Systems of Differential Equations
Stability Analysis for Systems of Differential Equations David Eberly, Geometric Tools, Redmond WA 98052 https://www.geometrictools.com/ This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. Created: February 8, 2003 Last Modified: March 2, 2008 Contents 1 Introduction 2 2 Physical Stability 2 3 Numerical Stability 4 3.1 Stability for Single-Step Methods..................................4 3.2 Stability for Multistep Methods...................................5 3.3 Choosing a Stable Step Size.....................................7 4 The Simple Pendulum 7 4.1 Numerical Solution of the ODE...................................8 4.2 Physical Stability for the Pendulum................................ 13 4.3 Numerical Stability of the ODE Solvers.............................. 13 1 Introduction In setting up a physical simulation involving objects, a primary step is to establish the equations of motion for the objects. These equations are formulated as a system of second-order ordinary differential equations that may be converted to a system of first-order equations whose dependent variables are the positions and velocities of the objects. Such a system is of the generic form ẋ = f(t, x), t ≥ 0, x(0) = x0 (1) where x0 is a specified initial condition for the system. The components of x are the positions and velocities of the objects. The function f(t, x) includes the external forces and torques of the system. A computer implementation of the physical simulation amounts to selecting a numerical method to approximate the solution to the system of differential equations.
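The second-order-to-first-order conversion described above, applied to a simple pendulum like the one in the document's Section 4, can be sketched as follows (explicit Euler, with method and parameters being my own choices):

```python
import numpy as np

def f(t, x):
    # Pendulum theta'' = -(g/L) sin(theta), written as x' = f(t, x)
    # with x = (theta, omega); g/L = 9.81 is an assumed parameter choice.
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta)])

def euler(f, x0, t0, t1, steps):
    # Single-step explicit Euler: x_{k+1} = x_k + h f(t_k, x_k)
    h = (t1 - t0) / steps
    t, x = t0, np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + h * f(t, x)
        t += h
    return x

def energy(x):
    # Scaled energy E = omega^2/2 + (g/L)(1 - cos(theta)), conserved by the ODE
    theta, omega = x
    return 0.5 * omega**2 + 9.81 * (1.0 - np.cos(theta))

x0 = [0.1, 0.0]
x1 = euler(f, x0, 0.0, 1.0, 1000)
# Explicit Euler slowly injects energy; with h = 1e-3 the drift stays small
assert abs(energy(x1) - energy(x0)) / energy(x0) < 0.05
```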
  • Algorithmic Structure for Geometric Algebra Operators and Application to Quadric Surfaces Stephane Breuils
Algorithmic structure for geometric algebra operators and application to quadric surfaces. Stéphane Breuils. To cite this version: Stéphane Breuils. Algorithmic structure for geometric algebra operators and application to quadric surfaces. Operator Algebras [math.OA]. Université Paris-Est, 2018. English. NNT: 2018PESC1142. tel-02085820. HAL Id: tel-02085820, https://pastel.archives-ouvertes.fr/tel-02085820, submitted on 31 Mar 2019. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Doctoral thesis in computer science (French title: "Structure algorithmique pour les opérateurs d'Algèbres Géométriques et application aux surfaces quadriques"), École doctorale MSTIC, Université Paris-Est. Author: Stéphane Breuils; advisors: Dr. Vincent Nozick, Dr. Laurent Fuchs; reviewers: Prof. Dominique Michelucci, Prof. Pascal Schreck; examiners: Dr. Leo Dorst, Prof. Joan Lasenby, Prof. Raphaëlle Chaine. Defended on 17 December 2018. Thesis carried out at the Laboratoire d'Informatique Gaspard-Monge (LIGM, UMR 8049), équipe A3SI, at ESIEE Paris, Champs-sur-Marne, Marne-la-Vallée. Abstract Geometric Algebra is considered as a very intuitive tool to deal with geometric problems and it appears to be increasingly efficient and useful to deal with computer graphics problems.
  • The Making of a Geometric Algebra Package in Matlab Computer Science Department University of Waterloo Research Report CS-99-27
The Making of a Geometric Algebra Package in Matlab Computer Science Department University of Waterloo Research Report CS-99-27 Stephen Mann, Leo Dorst, and Tim Bouma [email protected], [email protected], [email protected] Abstract In this paper, we describe our development of GABLE, a Matlab implementation of the Geometric Algebra based on Cl(p,q) (where p + q = 3) and intended for tutorial purposes. Of particular note are the matrix representation of geometric objects, effective algorithms for this geometry (inversion, meet and join), and issues in efficiency and numerics. 1 Introduction Geometric algebra extends Clifford algebra with geometrically meaningful operators, and its purpose is to facilitate geometrical computations. Present textbooks and implementations do not always convey this geometrical flavor or the computational and representational convenience of geometric algebra, so we felt a need for a computer tutorial in which representation, computation and visualization are combined to convey both the intuition and the techniques of geometric algebra. Current software packages are either Clifford algebra only (such as CLICAL [9] and CLIFFORD [1]) or do not include graphics [6], so we decided to build our own. The result is GABLE (Geometric Algebra Learning Environment), a hands-on tutorial on geometric algebra that should be accessible to the second year student in college [3]. The GABLE tutorial explains the basics of Geometric Algebra in an accessible manner. It starts with the outer product (as a constructor of subspaces), then treats the inner product (for perpendicularity), and moves via the geometric product (for invertibility) to the more geometrical operators such as projection, rotors, meet and join, and ends with the homogeneous model of Euclidean space.
  • A Guided Tour to the Plane-Based Geometric Algebra PGA
A Guided Tour to the Plane-Based Geometric Algebra PGA Leo Dorst University of Amsterdam Version 1.15, July 6, 2020 Planes are the primitive elements for the constructions of objects and operators in Euclidean geometry. Triangulated meshes are built from them, and reflections in multiple planes are a mathematically pure way to construct Euclidean motions. A geometric algebra based on planes is therefore a natural choice to unify objects and operators for Euclidean geometry. The usual claims of 'completeness' of the GA approach lead us to hope that it might contain, in a single framework, all representations ever designed for Euclidean geometry - including normal vectors, directions as points at infinity, Plücker coordinates for lines, quaternions as 3D rotations around the origin, and dual quaternions for rigid body motions; and even spinors. This text provides a guided tour to this algebra of planes PGA. It indeed shows how all such computationally efficient methods are incorporated and related. We will see how the PGA elements naturally group into blocks of four coordinates in an implementation, and how this more complete understanding of the embedding suggests some handy choices to avoid extraneous computations. In the unified PGA framework, one never switches between efficient representations for subtasks, and this obviously saves any time spent on data conversions. Relative to other treatments of PGA, this text is rather light on the mathematics. Where you see careful derivations, they involve the aspects of orientation and magnitude. These features have been neglected by authors focussing on the mathematical beauty of the projective nature of the algebra.
  • Numerical Stability of Some Fast Algorithms for Structured Matrices
Numerical Stability of Some Fast Algorithms for Structured Matrices Dedicated to Gene Golub on the occasion of his 65th birthday Richard P. Brent Computer Sciences Laboratory Australian National University Canberra, ACT 0200, Australia [email protected] Abstract. We consider the numerical stability/instability of fast algorithms for solving systems of linear equations or linear least squares problems with a low displacement-rank structure. For example, the matrices involved may be Toeplitz or Hankel. In particular, we consider algorithms which incorporate pivoting without destroying the structure, such as the Gohberg-Kailath-Olshevsky (GKO) algorithm, and describe some recent results on the stability of these algorithms. We also compare these results with the corresponding stability results for algorithms based on the semi-normal equations and for the well known algorithms of Schur/Bareiss and Levinson. 1 Introduction It is well known that systems of n linear equations with a low displacement rank (e.g. Toeplitz or Hankel matrices) can be solved in O(n^2) arithmetic operations. For positive definite Toeplitz matrices the first O(n^2) algorithms were introduced by Kolmogorov [30], Wiener [43] and Levinson [31]. These algorithms are related to recursions of Szegő [40] for polynomials orthogonal on the unit circle. Another class of O(n^2) algorithms, e.g. the Bareiss algorithm [2], are related to Schur's algorithm for finding the continued fraction representation of a holomorphic function in the unit disk [34]. This class can be generalized to cover unsymmetric matrices and more general "low displacement rank" matrices [28]. In this paper we consider the numerical stability of some of these algorithms.
  • Numerical Stability & Numerical Smoothness of Ordinary Differential
NUMERICAL STABILITY & NUMERICAL SMOOTHNESS OF ORDINARY DIFFERENTIAL EQUATIONS Kaitlin Reddinger A Thesis Submitted to the Graduate College of Bowling Green State University in partial fulfillment of the requirements for the degree of MASTER OF ARTS August 2015 Committee: Tong Sun, Advisor So-Hsiang Chou Kimberly Rogers ii ABSTRACT Tong Sun, Advisor Although numerically stable algorithms can be traced back to the Babylonian period, it is believed that the study of numerical methods for ordinary differential equations was not rigorously developed until the 1700s. Since then the field has expanded - first with Leonhard Euler's method and then with the works of Augustin Cauchy, Carl Runge and Germund Dahlquist. Now applications involving numerical methods can be found in a myriad of subjects. With several centuries' worth of diligent study devoted to the crafting of well-conditioned problems, it is surprising that one issue in particular - numerical stability - continues to cloud the analysis and implementation of numerical approximation. According to professor Paul Glendinning from the University of Cambridge, "The stability of solutions of differential equations can be a very difficult property to pin down. Rigorous mathematical definitions are often too prescriptive and it is not always clear which properties of solutions or equations are most important in the context of any particular problem. In practice, different definitions are used (or defined) according to the problem being considered. The effect of this confusion is that there are more than 57 varieties of stability to choose from" [10]. Although this quote is primarily in reference to nonlinear problems, it can most certainly be applied to the topic of numerical stability in general.
  • Solving Nonlinear Governing Equations of Motion Using Matlab and Simulink in First Dynamics Course
2006-936: SOLVING NONLINEAR GOVERNING EQUATIONS OF MOTION USING MATLAB AND SIMULINK IN FIRST DYNAMICS COURSE Ali Mohammadzadeh, Grand Valley State University ALI R. MOHAMMADZADEH is currently assistant professor of Engineering at the School of Engineering at Grand Valley State University. He received his B.S. in Mechanical Engineering from Sharif University of Technology and his M.S. and Ph.D., both in Mechanical Engineering, from the University of Michigan at Ann Arbor. His research area of interest is fluid-structure interaction. Salim Haidar, Grand Valley State University SALIM M. HAIDAR is currently associate professor of Mathematics at Grand Valley State University. He received his B.S. in Mathematics with a minor in Physics from St. Vincent College, and his M.S. and Ph.D. in Applied Mathematics from Carnegie-Mellon University. His research studies are in applied nonlinear analysis: partial differential equations, optimization, numerical analysis and continuum mechanics. © American Society for Engineering Education, 2006 Solving Nonlinear Governing Equations of Motion Using MATLAB and SIMULINK in First Dynamics Course Overview Students in first dynamics courses deal with some dynamical problems in which the governing equations of motion are simultaneous, second order systems of non-linear ordinary differential equations. There are no known quantitative methods or closed-form solutions to these systems of non-linear differential equations. To teach students analytic methods and solution techniques for this category of dynamical problems, the authors devised a model example in which the motion of a particle on a rough cylindrical surface is considered. The authors then formulate qualitative and quantitative solution methods and employ both MATLAB and SIMULINK with these analytic methods and solution techniques to arrive at the stable, numerical solution to the proposed example.