Householder Symposium XVII Book of Abstracts


Zeuthen, Germany, June 1-6, 2008

The Householder Symposium on Numerical Linear Algebra in Zeuthen, Germany, is the seventeenth in a series that was initiated by Alston S. Householder (1904-1993), Director of the Mathematics Division of Oak Ridge National Laboratory and Ford Professor at the University of Tennessee. These international meetings were devoted to matrix computations and linear algebra and were held in Gatlinburg, Tennessee. The last "Gatlinburg" conference was held in 1969 on the occasion of Householder's retirement. At that time, it was decided to continue the meetings but to vary the location. Since then, meetings have been held at three-year intervals in a variety of venues.

The conferences are run in tandem by the permanent Householder Committee and a Local Organizing Committee. This year the chair of the Householder Committee is Angelika Bunse-Gerstner (University of Bremen), and the Local Organizing Committee consists of Jörg Liesen, Volker Mehrmann and Reinhard Nabben (all TU Berlin).

The complete list of the Householder Symposia is as follows:

       Year  Place                    Local organizers
I      1961  Gatlinburg, USA          A. S. Householder
II     1963  Gatlinburg, USA          A. S. Householder, F. W. J. Olver
III    1964  Gatlinburg, USA          A. S. Householder
IV     1969  Gatlinburg, USA          A. S. Householder
V      1972  Los Alamos, USA          R. S. Varga
VI     1974  Hopfen am See, BRD       F. L. Bauer
VII    1977  Asilomar, USA            G. H. Golub
VIII   1981  Oxford, England          L. Fox, J. H. Wilkinson
IX     1984  Waterloo, Canada         J. A. George
X      1987  Fairfield Glade, USA     G. W. Stewart, R. C. Ward
XI     1990  Tylösand, Sweden         Å. Björck
XII    1993  Lake Arrowhead, USA      T. F. Chan, G. H. Golub
XIII   1996  Pontresina, Switzerland  W. Gander, M. H. Gutknecht
XIV    1999  Whistler, B.C., Canada   J. M. Varah
XV     2002  Peebles, Scotland        P. Knight, A. Ramage, A. Wathen
XVI    2005  Seven Springs, USA       J. Barlow, D. Szyld, H. Zha
XVII   2008  Zeuthen, Germany         J. Liesen, V. Mehrmann, R. Nabben

The meeting is also the occasion for the award of the Householder Prize for the best thesis in Numerical Linear Algebra. This prize was established at the Gatlinburg Symposium in 1969 to recognize the outstanding contributions of Alston S. Householder. This year the chair of the Householder Prize Committee is Charles Van Loan (Cornell University).

Local Organizing Committee

Jörg Liesen (TU Berlin)
Volker Mehrmann (TU Berlin)
Reinhard Nabben (TU Berlin)

Program Committee

Angelika Bunse-Gerstner (Chair, University of Bremen)
Tony Chan (UCLA)
Alan Edelman (MIT)
Nick Higham (University of Manchester)
Ilse Ipsen (North Carolina State University)
Roy Mathias (University of Birmingham)
Michael Overton (Courant Institute, New York University)
Zdeněk Strakoš (Czech Academy of Sciences)
Paul Van Dooren (Catholic University of Louvain)
Charles Van Loan (Cornell University)

Householder Prize Committee

James Demmel (University of California, Berkeley)
Howard Elman (University of Maryland)
Sabine Van Huffel (K.U. Leuven)
Volker Mehrmann (TU Berlin)
Charles Van Loan (Chair, Cornell University)

Sponsorship

We are grateful to the following organizations for their sponsorship of the meeting.

Walter de Gruyter GmbH & Co. KG
Deutsche Forschungsgemeinschaft
Gesellschaft für Angewandte Mathematik und Mechanik e.V.
International Linear Algebra Society
The MathWorks, Inc.
The Numerical Algorithms Group Ltd (NAG)
National Science Foundation (NSF)
Technische Universität Berlin
Wiley InterScience

List of Abstracts

Pierre-Antoine Absil: Differential-Geometric Foundations of Jacobi-Davidson Methods
Athanasios C. Antoulas: Model reduction by means of fast frequency sweep
Mario Arioli: Matrix square root and interpolation spaces
Owe Axelsson: An additive subspace correction method combined with an ADI-type splitting
Zhaojun Bai: Robust and Efficient Methods for Multi-Length Scale Matrix Computations and Applications in Quantum Mechanical Simulations
Ulrike Baur: Control-oriented Model Reduction for Parabolic Systems
Peter Benner: Solving Algebraic Riccati Equations for Stabilization of Incompressible Flows
Michele Benzi: Localization Phenomena in Matrix Functions: Theory, Algorithms, Applications
Timo Betcke: Improving computed eigenpairs of polynomial eigenvalue problems by scaling and iterative refinement
David Bindel: Error Bounds and Error Estimates for Nonlinear Eigenvalue Problems
Åke Björck: Band Householder/Lanczos Methods for Least Squares with Multiple Right Hand Sides
Matthias Bollhöfer: Algebraic multigrid for Helmholtz equations
Shreemayee Bora: Structured eigenvalue condition number and backward error of a class of polynomial eigenvalue problems
Karen Braman: Middle Deflations in the QR Algorithm
Angelika Bunse-Gerstner: Model reduction for large scale MIMO-systems via rational tangential Hermite interpolation
Xiao-Wen Chang: Solving Integer Linear Least Squares Problems
John M. Conroy: The Linear Algebra of a Multi-document Summarization System
Achiya Dax: From Eigenvalues to Singular Values: The Rectangular Quotients Equality and Related Issues
Eric de Sturler: Matrix Analysis Problems arising in Fast Implementations of the Quantum Monte Carlo Method
James Demmel: Communication avoiding algorithms for linear algebra
Inderjit S. Dhillon: The Log-Determinant Divergence and Its Applications
Sue Dollar: Sparse direct methods need not suffer under dense rows
Marco Donatelli: The Antireflective Algebra with Applications to Image Deblurring
Froilán M. Dopico: An Orthogonal and Symmetric High Relative Accuracy Algorithm for the Symmetric Eigenproblem
Petros Drineas: The Column Subset Selection Problem
Iain Duff: Practical Rank Determination for Square and Rectangular Sparse Matrices
Ioana Dumitriu: Fast Linear Algebra is Stable
Alan Edelman: Mathematical Software Going Forward
Lars Eldén: Solving Ill-Posed Cauchy Problems using Rational Krylov Methods
Howard C. Elman: Preconditioners for Eigenvalue Problems Arising in Linear Stability Analysis
Mark Embree: Damped Mechanical Systems: Spectra, Pseudospectra, Structured Perturbations
Yogi A. Erlangga: Multilevel Krylov-Multigrid Method for the 2D Indefinite Helmholtz Equation
Oliver G. Ernst: A Posteriori Error Estimators for Krylov Subspace Approximations of Matrix Functions
Heike Faßbender: A structured polynomial eigenproblem arising in the analysis of time delay systems and related polynomial eigenproblems
Melina A. Freitag: Inexact Preconditioned Shift-and-Invert Arnoldi's Method and Implicit Restarts for Eigencomputations
Andreas Frommer: Algebraic Multigrid Methods for Laplacians of Graphs
Efstratios Gallopoulos: Linear Algebra Tools for Link Analysis and Dimensionality Reduction
Martin J. Gander: An Optimal Complexity Algorithm for Non-Matching Grid Projections
Luc Giraud: Convergence in backward error of relaxed GMRES
Robert Granat: Parallel Eigenvalue Reordering in Real Schur Forms
Serge Gratton: Quasi-Newton formula, matrix nearness problems and preconditioning
Anne Greenbaum: Upper and Lower Bounds on Norms of Functions of Matrices
Chen Greif: A Conjugate Gradient Method for Skew-Symmetric Matrices
Luka Grubišić: Numerical linear algebra for spectral theory of block operator matrices
Serkan Gugercin: Structured perturbation theory for inexact Krylov projection methods in model reduction
Chun-Hua Guo: Convergence Rate of the Cyclic Reduction Algorithm for a Nonlinear Matrix Equation
Nicholas J. Higham: Computing the Fréchet Derivative of the Matrix Exponential, with an application to Condition Number Estimation
Michiel Hochstenbach: On the computation of real and purely imaginary eigenvalues of large sparse matrices
Thomas Huckle: Frobenius Norm Minimization and Probing for Preconditioning
Bruno Iannazzo: Rational Matrix Iterations in Numerical Linear Algebra
Ilse Ipsen: Perturbation Bounds for Determinants and Characteristic Polynomials
Charles Johnson: A question about Eigenvalues of Tridiagonal Matrices and Motivation for it
Bo Kågström: Product Eigenvalue Problems in Applications: Computing Periodic Deflating Subspaces Associated with a Specified Set of Eigenvalues
Misha E. Kilmer: A Multilevel, Modified Total Least Norm Approach to Signal and Image Deblurring
Andrew V. Knyazev: Rayleigh-Ritz majorization error bounds with applications to FEM and subspace iterations
Plamen Koev: Computing Eigenvalues of Random Matrices
Christos Kravvaritis: On the Growth Factor for Hadamard Matrices
Daniel Kressner: Computation of Structured Pseudospectra
James V. Lambers: Robust Computation of Off-Diagonal Elements of Functions of Matrices
Julien Langou: Reduce factorizations
Jörg Liesen: On nonsymmetric saddle point matrices that allow conjugate gradient iterations
Niloufer Mackey: Theory and Algorithms for Structured Matrix Problems
D. Steven Mackey: Linearizations of Singular Matrix Polynomials and the Recovery of Minimal Indices
Michael W. Mahoney: Statistical Leverage and Improved Matrix Algorithms
Carla D. Martin: A Tensor SVD
Nicola Mastronardi: On Things of Little Interest: How to Compute Low Paying Annuities
Roy Mathias: Generalising Results about the Hermitian Eigenvalue Problems to the Quadratic Eigenvalue Problem
Karl Meerbergen: Recycling Ritz vectors for the solution of parameterized linear systems
Christian Mehl: Hamiltonian