Numerical Linear Algebra and Matrix Analysis

Higham, Nicholas J.
2015
MIMS EPrint: 2015.103

Manchester Institute for Mathematical Sciences
School of Mathematics
The University of Manchester

Reports available from: http://eprints.maths.manchester.ac.uk/
And by contacting: The MIMS Secretary, School of Mathematics, The University of Manchester, Manchester, M13 9PL, UK

ISSN 1749-9097

Numerical Linear Algebra and Matrix Analysis†

Nicholas J. Higham

† Author's final version, before copy editing and cross-referencing, of: N. J. Higham. Numerical linear algebra and matrix analysis. In N. J. Higham, M. R. Dennis, P. Glendinning, P. A. Martin, F. Santosa, and J. Tanner, editors, The Princeton Companion to Applied Mathematics, pages 263–281. Princeton University Press, Princeton, NJ, USA, 2015.

Matrices are ubiquitous in applied mathematics. Ordinary differential equations (ODEs) and partial differential equations (PDEs) are solved numerically by finite difference or finite element methods, which lead to systems of linear equations or matrix eigenvalue problems. Nonlinear equations and optimization problems are typically solved using linear or quadratic models, which again lead to linear systems.

Solving linear systems of equations is an ancient task, undertaken by the Chinese around 1 AD, but the study of matrices per se is relatively recent, originating with Arthur Cayley's 1858 "A Memoir on the Theory of Matrices". Early research on matrices was largely theoretical, with much attention focused on the development of canonical forms, but in the 20th century the practical value of matrices started to be appreciated. Heisenberg used matrix theory as a tool in the development of quantum mechanics in the 1920s. Early proponents of the systematic use of matrices in applied mathematics included Frazer, Duncan, and Collar, whose 1938 book Elementary Matrices and Some Applications to Dynamics and Differential Equations emphasized the important role of matrices in differential equations and mechanics. The continued growth of matrices in applications, together with the advent of mechanical and then digital computing devices, allowing ever larger problems to be solved, created the need for greater understanding of all aspects of matrices from theory to computation.

This article treats two closely related topics: matrix analysis, which is the theory of matrices with a focus on aspects relevant to other areas of mathematics, and numerical linear algebra (also called matrix computations), which is concerned with the construction and analysis of algorithms for solving matrix problems as well as related topics such as problem sensitivity and rounding error analysis.

Important themes that are discussed in this article include the matrix factorization paradigm, the use of unitary transformations for their numerical stability, exploitation of matrix structure (such as sparsity, symmetry, and definiteness), and the design of algorithms to exploit evolving computer architectures.

Throughout the article, uppercase letters are used for matrices and lowercase letters for vectors and scalars. Matrices and vectors are assumed to be complex, unless otherwise stated, and $A^* = (\bar{a}_{ji})$ denotes the conjugate transpose of $A = (a_{ij})$. An unsubscripted norm $\|\cdot\|$ denotes a general vector norm and the corresponding subordinate matrix norm. Particular norms used here are the 2-norm $\|\cdot\|_2$ and the Frobenius norm $\|\cdot\|_F$. The notation "$i = 1\colon n$" means that the integer variable $i$ takes on the values $1, 2, \ldots, n$.

1 Nonsingularity and Conditioning

Nonsingularity of a matrix is a key requirement in many problems, such as in the solution of $n$ linear equations in $n$ unknowns. For some classes of matrices, nonsingularity is guaranteed. A good example is the diagonally dominant matrices. The matrix $A \in \mathbb{C}^{n \times n}$ is strictly diagonally dominant by rows if
$$\sum_{j \ne i} |a_{ij}| < |a_{ii}|, \qquad i = 1\colon n,$$
and strictly diagonally dominant by columns if $A^*$ is strictly diagonally dominant by rows. Any matrix that is strictly diagonally dominant by rows or columns is nonsingular (a proof can be obtained by applying Gershgorin's theorem in section 5.1).

Since data is often subject to uncertainty we wish to gauge the sensitivity of problems to perturbations, which is done using condition numbers. An appropriate condition number for the matrix inverse is
$$\lim_{\varepsilon \to 0}\; \sup_{\|\Delta A\| \le \varepsilon \|A\|} \frac{\|(A + \Delta A)^{-1} - A^{-1}\|}{\varepsilon \|A^{-1}\|}.$$
This expression turns out to equal $\kappa(A) = \|A\|\,\|A^{-1}\|$, which is called the condition number of $A$ with respect to inversion. This condition number occurs in many contexts. For example, suppose $A$ is contaminated by errors and we perform a similarity transformation $X^{-1}(A + E)X = X^{-1}AX + F$. Then $\|F\| = \|X^{-1}EX\| \le \kappa(X)\|E\|$ and this bound is attainable for some $E$. Hence the errors can be multiplied by a factor as large as $\kappa(X)$. We therefore prefer to carry out similarity and other transformations with matrices that are well conditioned, that is, ones for which $\kappa(X)$ is close to its lower bound of 1. By contrast, a matrix for which $\kappa$ is large is called ill conditioned. For any unitary matrix $X$, $\kappa_2(X) = 1$, so in numerical linear algebra transformations by unitary or orthogonal matrices are preferred and usually lead to numerically stable algorithms.
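The following minimal sketch (not part of the original article) illustrates these two ideas numerically, assuming NumPy is available: it checks strict diagonal dominance by rows and evaluates $\kappa_2(A) = \|A\|_2\|A^{-1}\|_2$ by explicitly forming the inverse, which is acceptable for a small illustration even though, as discussed next, the explicit inverse is avoided in practice.

```python
# Illustrative sketch only (assumes NumPy); not taken from the article.
import numpy as np

def strictly_diagonally_dominant_by_rows(A):
    """Return True if |a_ii| > sum_{j != i} |a_ij| for every row i."""
    A = np.asarray(A)
    off_diag = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return bool(np.all(np.abs(np.diag(A)) > off_diag))

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])
print(strictly_diagonally_dominant_by_rows(A))   # True, so A is guaranteed nonsingular

# Condition number with respect to inversion, in the 2-norm.
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
print(kappa)                      # agrees with np.linalg.cond(A, 2)

# A real orthogonal matrix Q has kappa_2(Q) = 1.
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((4, 4)))
print(np.linalg.cond(Q, 2))       # approximately 1.0
```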
In practice we often need an estimate of the matrix condition number $\kappa(A)$ but do not wish to go to the expense of computing $A^{-1}$ in order to obtain it. Fortunately, there are algorithms that can cheaply produce a reliable estimate of $\kappa(A)$ once a factorization of $A$ has been computed.

Note that the determinant, $\det(A)$, is rarely computed in numerical linear algebra. Its magnitude gives no useful information about the conditioning of $A$, not least because of its extreme behavior under scaling: $\det(\alpha A) = \alpha^n \det(A)$.

2 Matrix Factorizations

The method of Gaussian elimination (GE) for solving a nonsingular linear system $Ax = b$ of $n$ equations in $n$ unknowns reduces the matrix $A$ to upper triangular form and then solves for $x$ by substitution. GE is typically described by writing down the equations
$$a_{ij}^{(k+1)} = a_{ij}^{(k)} - \frac{a_{ik}^{(k)} a_{kj}^{(k)}}{a_{kk}^{(k)}}$$
(and similarly for $b$) that describe how the starting matrix $A = A^{(1)} = (a_{ij}^{(1)})$ changes on each of the $n-1$ steps of the elimination in its progress towards upper triangular form $U$. Working at the element level in this way leads to a profusion of symbols, superscripts, and subscripts that tend to obscure the mathematical structure and hinder insights being drawn into the underlying process. One of the key developments in the last century was the recognition that it is much more profitable to work at the matrix level. Thus the basic equation above is written as $A^{(k+1)} = M_k A^{(k)}$, where $M_k$ agrees with the identity matrix except below the diagonal in the $k$th column, where its $(i,k)$ element is $m_{ik} = -a_{ik}^{(k)}/a_{kk}^{(k)}$, $i = k+1\colon n$. Recurring the matrix equation gives $U := A^{(n)} = M_{n-1} \cdots M_1 A$. Taking the $M_k$ matrices over to the left-hand side leads, after some calculations, to the equation $A = LU$, where $L$ is unit lower triangular, with $(i,k)$ element $m_{ik}$. The prefix "unit" means that $L$ has ones on the diagonal.

GE is therefore equivalent to factorizing the matrix $A$ as the product of a lower triangular matrix and an upper triangular matrix—something that is not at all obvious from the element-level equations. Once the factorization has been computed, solving $Ax = b$ reduces to the solution of two triangular systems. It is then clear how to solve efficiently several systems $Ax_i = b_i$, $i = 1\colon r$, with different right-hand sides but the same coefficient matrix $A$: compute the LU factors once and then re-use them to solve for each $x_i$ in turn.
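As a concrete illustration of re-using the factors, here is a minimal sketch (not from the article) that assumes NumPy and SciPy are available; SciPy's lu_factor and lu_solve routines compute an LU factorization once and then apply it to each right-hand side at much lower cost.

```python
# Illustrative sketch only (assumes NumPy and SciPy); not taken from the article.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n, r = 5, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, r))              # columns are b_1, ..., b_r

lu, piv = lu_factor(A)                       # one O(n^3) factorization (with partial pivoting)
X = np.column_stack([lu_solve((lu, piv), B[:, i]) for i in range(r)])
                                             # each pair of triangular solves costs only O(n^2)

print(np.allclose(A @ X, B))                 # True: residuals at rounding-error level
```

Note that the library routine incorporates partial (row) pivoting for numerical stability, a refinement of the basic elimination described above.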
This matrix factorization viewpoint dates from around the 1940s and has been extremely successful in matrix computations. In general, a factorization is a representation of a matrix as a product of "simpler" matrices. Factorization is a tool that can be used to solve a variety of problems, as we will see below.

Two particular benefits of factorizations are unity and modularity. GE, for example, can be organized in several different ways, corresponding to different orderings of the three nested loops that it comprises, as well as the use of different blockings of the matrix elements. Yet all of them compute the same LU factorization, carrying out the same mathematical operations in a different order. Without the unifying concept of a factorization, reasoning about these GE variants would be difficult.

Modularity refers to the way that a factorization breaks a problem down into separate tasks, which can be analyzed or programmed independently. To carry out a rounding error analysis of GE we can analyze the LU factorization and the solution of the triangular systems by substitution separately and then put the analyses together. The rounding error analysis of substitution can be re-used in the many other contexts in which triangular systems arise.

An important example of the use of LU factorization is in iterative refinement. Suppose we have used GE to obtain a computed solution $\hat{x}$ to $Ax = b$ in floating-point arithmetic. If we form $r = b - A\hat{x}$ and solve $Ae = r$, then in exact arithmetic $y = \hat{x} + e$ is the true solution. In computing $e$ we can reuse the LU factors of $A$, so obtaining $y$ from $\hat{x}$ is inexpensive. In practice, the computation of $r$, $e$, and $y$ is subject to rounding errors so the computed $\hat{y}$ is not equal to $x$. But under suitable assumptions $\hat{y}$ will be an improved approximation and we can iterate this refinement process. Iterative refinement is particularly effective if $r$ can be computed using extra precision (see the sketch following the list below).

Two other key factorizations are:

• Cholesky factorization: for Hermitian positive definite $A \in \mathbb{C}^{n \times n}$, a factorization $A = R^*R$, where $R$ is upper triangular with positive diagonal elements.
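Returning to iterative refinement, the following minimal sketch (not part of the article) assumes NumPy and SciPy, works in single precision, and accumulates the residual in double precision, illustrating the benefit of computing $r$ with extra precision; the LU factors are computed once and re-used for every correction.

```python
# Illustrative sketch only (assumes NumPy and SciPy); not taken from the article.
# Working precision: single; residuals accumulated in double precision.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((n, n)).astype(np.float32)
x_true = rng.standard_normal(n).astype(np.float32)
b = (A.astype(np.float64) @ x_true.astype(np.float64)).astype(np.float32)

lu, piv = lu_factor(A)              # factorize once, in working precision
x = lu_solve((lu, piv), b)          # initial computed solution x-hat

for _ in range(5):
    # Residual in double precision (the "extra precision" mentioned above).
    r = b.astype(np.float64) - A.astype(np.float64) @ x.astype(np.float64)
    e = lu_solve((lu, piv), r.astype(np.float32))   # correction, re-using the LU factors
    x = x + e
    print(np.linalg.norm(x - x_true, np.inf) / np.linalg.norm(x_true, np.inf))
```

Under the usual assumptions the printed relative error drops over the first iteration or two and then stagnates at roughly the level determined by the working precision and the conditioning of $A$.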