
Journal of Applied Mathematics
Special Issue: Preconditioning Techniques for Sparse Linear Systems
Guest Editors: Massimiliano Ferronato, Edmond Chow, and Kok-Kwang Phoon

Copyright © 2012 Hindawi Publishing Corporation. All rights reserved. This is a special issue published in "Journal of Applied Mathematics." All articles are open access articles distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Editorial Board

Said Abbasbandy, Iran; Pablo González-Vera, Spain; Michael Meylan, Australia; Mina B. Abd-El-Malek, Egypt; Laurent Gosse, Italy; Alain Miranville, France; Mohamed A. Abdou, Egypt; K. S. Govinder, South Africa; Jaime E. Muñoz Rivera, Brazil; Subhas Abel, India; Jorge Luis Gracia, Spain; Javier Murillo, Spain; Mostafa Adimy, France; Yuantong Gu, Australia; Roberto Natalini, Italy; Carlos J. S. Alves, Portugal; Zhihong Guan, China; Srinivasan Natesan, India; Mohamad Alwash, USA; Nicola Guglielmi, Italy; Khalida Inayat Noor, Pakistan; Igor Andrianov, Germany; Frederico G. Guimarães, Brazil; Donal O'Regan, Ireland; Francis T. K. Au, Hong Kong; Maoan Han, China; Martin Ostoja-Starzewski, USA; Olivier Bahn, Canada; Ferenc Hartung, Hungary; Turgut Öziş, Turkey; Roberto Barrio, Spain; Luis Javier Herrera, Spain; Claudio Padra, Argentina; Alfredo Bellen, Italy; Ying U. Hu, France; Reinaldo Martinez Palhares, Brazil; J. Biazar, Iran; Zhilong L. Huang, China; Juan Manuel Peña, Spain; Hester Bijl, The Netherlands; Kazufumi Ito, USA; Ricardo Perera, Spain; James Robert Buchanan, USA; Takeshi Iwamoto, Japan; Malgorzata Peszynska, USA; J. C. Butcher, New Zealand; George Jaiani, Georgia; James F. Peters, Canada; Xiao Chuan Cai, USA; Zhongxiao Jia, China; Mark A. Petersen, South Africa; Weiming Cao, USA; Jong H. Kim, Republic of Korea; Miodrag Petković, Serbia; Alexandre Carvalho, Brazil; Kazutake Komori, Japan; Vu Ngoc Phat, Vietnam; Song Cen, China; Vadim A. Krysko, Russia; Andrew Pickering, Spain; Qianshun Chang, China; Jin L. Kuang, Singapore; Hector Pomares, Spain; Ke Chen, UK; Miroslaw Lachowicz, Poland; Maurizio Porfiri, USA; Xinfu Chen, USA; Hak-Keung Lam, UK; Mario Primicerio, Italy; Cheng-Sheng Chien, Taiwan; Tak-Wah Lam, Hong Kong; Morteza Rafei, The Netherlands; Jeng-Tzong Chen, Taiwan; Peter G. L. Leach, Greece; B. V. Rathish Kumar, India; Md. Chowdhury, Malaysia; Yongkun Li, China; Jacek Rokicki, Poland; C. Conca, Chile; Wan-Tong Li, China; Carla Roque, Portugal; Alessandro Corsini, Italy; Chong Lin, China; Debasish Roy, India; Livija Cveticanin, Serbia; Leevan Ling, Hong Kong; Marcelo A. Savi, Brazil; Mohamed Darouach, France; Mingzhu Liu, China; Wolfgang Schmidt, Germany; Patrick De Leenheer, USA; Kang Liu, USA; Eckart Schnack, Germany; Eric De Sturler, USA; Yansheng Liu, China; Mehmet Sezer, Turkey; Kai Diethelm, Germany; Fawang Liu, Australia; Naseer Shahzad, Saudi Arabia; Vit Dolejsi, Czech Republic; Chein-Shan Liu, Taiwan; Fatemeh Shakeri, Iran; Meng Fan, China; Julián López-Gómez, Spain; Hui-Shen Shen, China; Ya Ping Fang, China; Shiping Lu, China; Jian Hua Shen, China; Antonio J. M. Ferreira, Portugal; Gert Lube, Germany; Fernando Simões, Portugal; Michel Fliess, France; Nazim I. Mahmudov, Turkey; A. A. Soliman, Egypt; M. A. Fontelos, Spain; O. D. Makinde, South Africa; Yuri N. Sotskov, Belarus; Luca Formaggia, Italy; F. Marcellán, Spain; Peter Spreij, The Netherlands; Huijun Gao, China; Guiomar Martín-Herrán, Spain; Niclas Strömberg, Sweden; B. Geurts, The Netherlands; Nicola Mastronardi, Italy; Ray Kai Leung Su, Hong Kong; Jitao Sun, China; E. S. Van Vleck, USA; Jinyun Yuan, Brazil; Wenyu Sun, China; Ezio Venturino, Italy; Alejandro Zarzo, Spain; Xianhua Tang, China; Jesus Vigo-Aguiar, Spain; Guisheng Zhai, Japan; Marco Henrique Terra, Brazil; Junjie Wei, China; Zhihua Zhang, China; Alexander Timokha, Norway; Li Weili, China; Jingxin Zhang, Australia; Jung-Fa Tsai, Taiwan; Martin Weiser, Germany; Shan Zhao, USA; Ch. Tsitouras, Greece; Dongmei Xiao, China; Xiaoqiang Zhao, Canada; Kuppalapalle Vajravelu, USA; Yuesheng Xu, USA; Renat Zhdanov, USA; Alvaro Valencia, Chile; Suh-Yuh Yang, Taiwan; J. Hoenderkamp, The Netherlands.

Contents

Preconditioning Techniques for Sparse Linear Systems, Massimiliano Ferronato, Edmond Chow, and Kok-Kwang Phoon. Volume 2012, Article ID 518165, 3 pages.
A Graph Approach to Observability in Physical Sparse Linear Systems, Santiago Vazquez-Rodriguez, Jesús A. Gomollón, Richard J. Duro, and Fernando López Peña. Volume 2012, Article ID 305415, 26 pages.
A Relaxed Splitting Preconditioner for the Incompressible Navier-Stokes Equations, Ning-Bo Tan, Ting-Zhu Huang, and Ze-Jun Hu. Volume 2012, Article ID 402490, 12 pages.
A Parallel Wavelet-Based Algebraic Multigrid Black-Box Solver and Preconditioner, Fabio Henrique Pereira and Sílvio Ikuyo Nabeta. Volume 2012, Article ID 894074, 15 pages.
Parallel Rayleigh Quotient Optimization with FSAI-Based Preconditioning, Luca Bergamaschi, Angeles Martínez, and Giorgio Pini. Volume 2012, Article ID 872901, 14 pages.
Comparison of Algebraic Multigrid Preconditioners for Solving Helmholtz Equations, Dandan Chen, Ting-Zhu Huang, and Liang Li. Volume 2012, Article ID 367909, 12 pages.
An Alternative HSS Preconditioner for the Unsteady Incompressible Navier-Stokes Equations in Rotation Form, Jia Liu. Volume 2012, Article ID 307939, 12 pages.
A Note on the Eigenvalue Analysis of the SIMPLE Preconditioning for Incompressible Flow, Shi-Liang Wu, Feng Chen, and Xiao-Qi Niu. Volume 2012, Article ID 564132, 7 pages.
Applications of Symmetric and Nonsymmetric MSSOR Preconditioners to Large-Scale Biot's Consolidation Problems with Nonassociated Plasticity, Xi Chen and Kok Kwang Phoon. Volume 2012, Article ID 352081, 15 pages.
A Direct Eigenanalysis of Multibody System in Equilibrium, Cheng Yang, Dazhi Cao, Zhihua Zhao, Zhengru Zhang, and Gexue Ren. Volume 2012, Article ID 638546, 12 pages.
Finite Element Preconditioning on Spectral Element Discretizations for Coupled Elliptic Equations, JongKyum Kwon, Soorok Ryu, Philsu Kim, and Sang Dong Kim. Volume 2012, Article ID 245051, 16 pages.
A Modified SSOR Preconditioning Strategy for Helmholtz Equations, Shi-Liang Wu and Cui-Xia Li. Volume 2012, Article ID 365124, 9 pages.

Editorial

Preconditioning Techniques for Sparse Linear Systems
Journal of Applied Mathematics, Volume 2012, Article ID 518165, 3 pages. doi:10.1155/2012/518165

Massimiliano Ferronato (1), Edmond Chow (2), and Kok-Kwang Phoon (3)
1 Department of Civil, Environmental and Architectural Engineering, University of Padova, 35131 Padova, Italy
2 School of Computational Science and Engineering, College of Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA
3 Department of Civil Engineering, National University of Singapore, Singapore 117576
Correspondence should be addressed to Massimiliano Ferronato, [email protected]
Received 8 June 2012; Accepted 8 June 2012

Copyright © 2012 Massimiliano Ferronato et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Implementing and solving large computational models has become a common task in both engineering and the applied sciences. The accurate and cost-effective numerical solution of the sequence of linearized algebraic systems of equations arising from these models often represents the main memory-demanding and time-consuming task.
Direct methods for sparse linear systems are the de facto solvers in several commercial codes because of their robustness and reliability. However, these methods scale poorly with matrix size, especially on three-dimensional problems. For such large systems, iterative methods based on Krylov subspaces can be a much more attractive option. A significant number of general-purpose Krylov subspace (or conjugate-gradient-like) solvers were developed from the 1970s through the 1990s, and interest in them is growing in many areas of engineering and scientific computing. Nonetheless, to be truly competitive with direct solvers, iterative methods typically need appropriate preconditioning to achieve convergence in a reasonable number of iterations and amount of time. The term "preconditioning" refers to "the art of transforming a problem that appears intractable into another whose solution can be approximated rapidly" (L. N. Trefethen and D. Bau, Numerical Linear Algebra, SIAM, Philadelphia, 1997), while the "preconditioner" is the operator responsible for such a transformation. It is widely recognized that preconditioning is the key factor in increasing the robustness and computational efficiency of iterative methods. Generally speaking, there are three basic requirements for a good preconditioner:

(i) the preconditioned matrix should have an eigenspectrum clustered away from 0;
(ii) the preconditioner should be as cheap to compute as possible;
(iii) its application to a vector should be cost-effective.

It goes without saying that these are conflicting requirements, and an appropriate trade-off must be found for any specific problem at hand. Unfortunately, theoretical results are few, and frequently somewhat "empirical" algorithms work surprisingly well despite the lack of a rigorous foundation. This is why finding a good preconditioner for
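The three requirements above can be seen in action with even the simplest choice, a diagonal (Jacobi) preconditioner: it is trivial to compute, trivial to apply, and, for a matrix with a strongly varying diagonal, it clusters the spectrum enough to cut the iteration count sharply. The sketch below is illustrative only; the test matrix, sizes, and tolerances are invented and not taken from any paper in this issue.

```python
import numpy as np

def pcg(A, b, M_inv_diag=None, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradients with an optional diagonal
    preconditioner; returns the iterate and the iteration count."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = r * M_inv_diag if M_inv_diag is not None else r
    p = z.copy()
    rz = r @ z
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = r * M_inv_diag if M_inv_diag is not None else r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# SPD test matrix: 1D Laplacian stencil with a strongly varying diagonal,
# so the unpreconditioned spectrum is spread over several decades.
n = 200
d = 2.0 + np.logspace(0, 4, n)          # diagonal entries from ~3 to ~10^4
A = np.diag(d) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
b = np.ones(n)

x_plain, it_plain = pcg(A, b)                     # no preconditioning
x_jac, it_jac = pcg(A, b, M_inv_diag=1.0 / d)     # Jacobi preconditioning
print(it_jac, "<", it_plain)                      # far fewer iterations
```

Diagonal scaling makes the preconditioned matrix close to the identity plus a small off-diagonal perturbation, so its eigenvalues cluster near 1, which is exactly requirement (i) at essentially zero cost for (ii) and (iii).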
Recommended publications
  • Fast Solution of Sparse Linear Systems with Adaptive Choice of Preconditioners Zakariae Jorti
    Fast solution of sparse linear systems with adaptive choice of preconditioners. Zakariae Jorti. Ph.D. thesis, General Mathematics [math.GM], Sorbonne Université, 2019. English. NNT: 2019SORUS149. HAL Id: tel-02425679v2, https://tel.archives-ouvertes.fr/tel-02425679v2, submitted on 15 Feb 2021. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Sorbonne Université, École doctorale de Sciences Mathématiques de Paris Centre, Laboratoire Jacques-Louis Lions. "Résolution rapide de systèmes linéaires creux avec choix adaptatif de préconditionneurs" (Fast solution of sparse linear systems with adaptive choice of preconditioners), doctoral thesis in applied mathematics by Zakariae Jorti, supervised by Laura Grigori and co-supervised by Ani Anciaux-Sedrakian and Soleiman Yousef. Publicly defended on 03/10/2019 before a jury composed of: Damien Tromeur-Dervout, Professor, Université de Lyon, reviewer; Olaf Schenk, Professor, Università della Svizzera italiana, reviewer; Frédéric Hecht, Professor, Sorbonne Université, jury president; Nahid Emad, Professor, Université Paris-Saclay, examiner; Xavier Vasseur, research engineer, ISAE-SUPAERO, examiner; Laura Grigori, research director, Inria Paris, thesis advisor; Ani Anciaux-Sedrakian, research engineer, IFPEN, thesis co-advisor.
  • Efficient “Black-Box” Multigrid Solvers for Convection-Dominated Problems
    EFFICIENT "BLACK-BOX" MULTIGRID SOLVERS FOR CONVECTION-DOMINATED PROBLEMS. A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Engineering and Physical Science, 2011. Glyn Owen Rees, School of Computer Science. Contents: Declaration 12; Copyright 13; Acknowledgement 14; 1 Introduction 15; 2 The Convection-Diffusion Problem 18; 2.1 The continuous problem 18; 2.2 The Galerkin approximation method 22; 2.2.1 The finite element method 26; 2.3 The Petrov-Galerkin (SUPG) approximation 29; 2.4 Properties of the discrete operator 34; 2.5 The Navier-Stokes problem 37; 3 Methods for Solving Linear Algebraic Systems 42; 3.1 Basic iterative methods 43; 3.1.1 The Jacobi method 45; 3.1.2 The Gauss-Seidel method 46; 3.1.3 Convergence of splitting iterations 48; 3.1.4 Incomplete LU (ILU) factorisation 49; 3.2 Krylov methods 56; 3.2.1 The GMRES method 59; 3.2.2 Preconditioned GMRES method 62; 3.3 Multigrid method 67; 3.3.1 Geometric multigrid method 67; 3.3.2 Algebraic multigrid method 75; 3.3.3 Parallel AMG 80; 3.3.4 Literature summary of multigrid 81; 3.3.5 Multigrid preconditioning of Krylov solvers 84; 3.4 A new tILU smoother 85; 4 Two-Dimensional Case Studies 89; 4.1 The diffusion problem 90; 4.2 Geometric multigrid preconditioning 95; 4.2.1 Constant uni-directional wind 95; 4.2.2 Double glazing problem - recirculating wind 106; 4.2.3 Combined uni-directional and recirculating wind.
  • Chapter 7 Iterative Methods for Large Sparse Linear Systems
    Chapter 7: Iterative methods for large sparse linear systems. In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay for direct methods based on matrix factorization is that the factors of a sparse matrix may not be sparse, so that for large sparse systems the memory cost makes direct methods too expensive, in memory and in execution time. Instead we introduce iterative methods, for which matrix sparsity is exploited to develop fast algorithms with a low memory footprint.

    7.1 Sparse matrix algebra. Large sparse matrices: We say that the matrix A ∈ R^{n×n} is large if n is large, and that A is sparse if most of its elements are zero. If a matrix is not sparse, we say that the matrix is dense. Whereas for a dense matrix the number of nonzero elements is O(n^2), for a sparse matrix it is only O(n), which has obvious implications for the memory footprint and efficiency of algorithms that exploit the sparsity of a matrix. A diagonal matrix is a sparse matrix A = (a_ij) for which a_ij = 0 for all i ≠ j, and a diagonal matrix can be generalized to a banded matrix, for which there exists a number p, the bandwidth, such that a_ij = 0 for all i < j − p or i > j + p. For example, a tridiagonal matrix A is a banded matrix with p = 1,

        [ x x 0 0 0 0 ]
        [ x x x 0 0 0 ]
    A = [ 0 x x x 0 0 ]     (7.1)
        [ 0 0 x x x 0 ]
        [ 0 0 0 x x x ]
        [ 0 0 0 0 x x ]

    where x represents a nonzero element. Compressed row storage: The compressed row storage (CRS) format is a data structure for efficient representation of a sparse matrix by three arrays, containing the nonzero values, the respective column indices, and the extents of the rows.
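The three CRS arrays can be made concrete on a small matrix with the tridiagonal pattern of (7.1); the 4x4 size, the numerical values, and the matrix-vector product below are illustrative choices, not taken from the chapter.

```python
# CRS storage of the 4x4 tridiagonal matrix
#   [ 2 -1  0  0 ]
#   [-1  2 -1  0 ]
#   [ 0 -1  2 -1 ]
#   [ 0  0 -1  2 ]
values  = [2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0]  # nonzeros, row by row
col_idx = [0, 1, 0, 1, 2, 1, 2, 3, 2, 3]                             # their column indices
row_ptr = [0, 2, 5, 8, 10]                                           # row i occupies values[row_ptr[i]:row_ptr[i+1]]

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x computed directly from the three CRS arrays."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0, 1.0]))  # [1.0, 0.0, 0.0, 1.0]
```

Only the 10 nonzeros are stored instead of all 16 entries, and the matrix-vector product touches each nonzero exactly once, which is the O(n) cost the text refers to.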
  • Chebyshev and Fourier Spectral Methods 2000
    Chebyshev and Fourier Spectral Methods, Second Edition. John P. Boyd, University of Michigan, Ann Arbor, Michigan 48109-2143. Email: [email protected]; http://www-personal.engin.umich.edu/jpboyd/. 2000, DOVER Publications, Inc., 31 East 2nd Street, Mineola, New York 11501. Dedication: To Marilyn, Ian, and Emma. "A computation is a temptation that should be resisted as long as possible." - J. P. Boyd, paraphrasing T. S. Eliot. Contents: Preface x; Acknowledgments xiv; Errata and Extended-Bibliography xvi; 1 Introduction 1; 1.1 Series expansions 1; 1.2 First Example 2; 1.3 Comparison with finite element methods 4; 1.4 Comparisons with Finite Differences 6; 1.5 Parallel Computers 9; 1.6 Choice of basis functions 9; 1.7 Boundary conditions 10; 1.8 Non-Interpolating and Pseudospectral 12; 1.9 Nonlinearity 13; 1.10 Time-dependent problems 15; 1.11 FAQ: Frequently Asked Questions 16; 1.12 The Chrysalis 17; 2 Chebyshev & Fourier Series 19; 2.1 Introduction 19; 2.2 Fourier series 20; 2.3 Orders of Convergence 25; 2.4 Convergence Order 27; 2.5 Assumption of Equal Errors.
  • Relaxed Modulus-Based Matrix Splitting Methods for the Linear Complementarity Problem †
    Symmetry article: Relaxed Modulus-Based Matrix Splitting Methods for the Linear Complementarity Problem. Shiliang Wu (1,*), Cuixia Li (1) and Praveen Agarwal (2,3). 1 School of Mathematics, Yunnan Normal University, Kunming 650500, China; [email protected]. 2 Department of Mathematics, Anand International College of Engineering, Jaipur 303012, India; [email protected]. 3 Nonlinear Dynamics Research Center (NDRC), Ajman University, Ajman, United Arab Emirates. * Correspondence: [email protected]. This research was supported by the National Natural Science Foundation of China (No. 11961082). Citation: Wu, S.; Li, C.; Agarwal, P. Relaxed Modulus-Based Matrix Splitting Methods for the Linear Complementarity Problem. Symmetry 2021, 13, 503. https://doi.org/10.3390/sym13030503. Academic Editor: Jan Awrejcewicz.

    Abstract: In this paper, we obtain a new equivalent fixed-point form of the linear complementarity problem by introducing a relaxed matrix, and establish a class of relaxed modulus-based matrix splitting iteration methods for solving the linear complementarity problem. Some sufficient conditions guaranteeing the convergence of relaxed modulus-based matrix splitting iteration methods are presented. Numerical examples are offered to show the efficacy of the proposed methods. Keywords: linear complementarity problem; matrix splitting; iteration method; convergence. MSC: 90C33; 65F10; 65F50; 65G40.

    1. Introduction. In this paper, we focus on the iterative solution of the linear complementarity problem, abbreviated as LCP(q, A), whose form is

        w = Az + q ≥ 0,  z ≥ 0,  z^T w = 0,     (1)

    where A ∈ R^{n×n} and q ∈ R^n are given, z ∈ R^n is unknown, and for two s × t matrices G = (g_ij) and H = (h_ij) the order G ≥ (>) H means g_ij ≥ (>) h_ij for any i and j.
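The paper's relaxed modulus-based methods rest on a specific fixed-point reformulation; as a much simpler baseline that makes definition (1) concrete, the sketch below uses projected Gauss-Seidel, a standard LCP solver that is not the paper's method, on an invented 2x2 example.

```python
def projected_gauss_seidel(A, q, sweeps=200):
    """Baseline LCP(q, A) solver: z_i <- max(0, z_i - (Az + q)_i / a_ii),
    sweeping over i. Converges for symmetric positive definite A."""
    n = len(q)
    z = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            r = q[i] + sum(A[i][j] * z[j] for j in range(n))  # r = (Az + q)_i
            z[i] = max(0.0, z[i] - r / A[i][i])
    return z

A = [[4.0, -1.0], [-1.0, 4.0]]   # symmetric positive definite (invented data)
q = [-1.0, -2.0]
z = projected_gauss_seidel(A, q)
w = [q[i] + sum(A[i][j] * z[j] for j in range(2)) for i in range(2)]
# On exit: z >= 0, w = Az + q >= 0, and z^T w = 0, as required by (1).
```

For this data the solution is interior (z = [0.4, 0.6], w = [0, 0]), so all three complementarity conditions of (1) can be checked directly.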
  • Numerical Solution of Saddle Point Problems
    Acta Numerica (2005), pp. 1-137. © Cambridge University Press, 2005. DOI: 10.1017/S0962492904000212. Printed in the United Kingdom. Numerical solution of saddle point problems. Michele Benzi, Department of Mathematics and Computer Science, Emory University, Atlanta, Georgia 30322, USA ([email protected]); Gene H. Golub, Scientific Computing and Computational Mathematics Program, Stanford University, Stanford, California 94305-9025, USA ([email protected]); Jörg Liesen, Institut für Mathematik, Technische Universität Berlin, D-10623 Berlin, Germany ([email protected]). We dedicate this paper to Gil Strang on the occasion of his 70th birthday. (Supported in part by National Science Foundation grant DMS-0207599, by the Department of Energy of the United States Government, and by the Emmy Noether Programm of the Deutsche Forschungsgemeinschaft.)

    Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for this type of system. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.

    Contents: 1 Introduction 2; 2 Applications leading to saddle point problems 5; 3 Properties of saddle point matrices 14; 4 Overview of solution algorithms 29; 5 Schur complement reduction 30; 6 Null space methods 32; 7 Coupled direct solvers 40; 8 Stationary iterations 43; 9 Krylov subspace methods 49; 10 Preconditioners 59; 11 Multilevel methods 96; 12 Available software 105; 13 Concluding remarks 107; References 109.
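Of the solution families listed in the survey's contents, Schur complement reduction is the easiest to show in a few lines: eliminate the (1,1) block, solve the Schur complement system for the second unknown, then back-substitute. The tiny blocks below are invented purely for illustration.

```python
import numpy as np

# Saddle point system [[A, B^T], [B, 0]] [u; p] = [f; g]
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD (1,1) block
B = np.array([[1.0, 2.0]])               # full row rank
f = np.array([1.0, 0.0])
g = np.array([0.5])

# Schur complement reduction:
#   (B A^{-1} B^T) p = B A^{-1} f - g,   then   A u = f - B^T p
S = B @ np.linalg.solve(A, B.T)          # Schur complement (1x1 here)
p = np.linalg.solve(S, B @ np.linalg.solve(A, f) - g)
u = np.linalg.solve(A, f - B.T @ p)

# Verify against the assembled saddle point matrix
K = np.block([[A, B.T], [B, np.zeros((1, 1))]])
print(np.allclose(K @ np.concatenate([u, p]), np.concatenate([f, g])))  # True
```

The reduction trades the indefinite coupled system for two definite solves, which is why it anchors so many of the preconditioners discussed later in the survey.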
  • Trigonometric Transform Splitting Methods for Real Symmetric Toeplitz Systems
    TRIGONOMETRIC TRANSFORM SPLITTING METHODS FOR REAL SYMMETRIC TOEPLITZ SYSTEMS. Zhongyun Liu, Nianci Wu, Xiaorong Qin, and Yulin Zhang.

    Abstract: In this paper we study efficient iterative methods for real symmetric Toeplitz systems based on the trigonometric transformation splitting (TTS) of the real symmetric Toeplitz matrix A. Theoretical analyses show that if the generating function f of the n × n Toeplitz matrix A is a real positive even function, then the TTS iterative methods converge to the unique solution of the linear system of equations for sufficiently large n. Moreover, we derive an upper bound on the contraction factor of the TTS iteration which depends solely on the spectra of the two TTS matrices involved. Different from the CSCS iterative method in [19], in which all operation counts concern complex operations when DFTs are employed, even if the Toeplitz matrix A is real and symmetric, our method involves only real arithmetic when DCTs and DSTs are used. The numerical experiments show that our method works better than the CSCS iterative method and much better than the positive-definite and skew-symmetric splitting (PSS) iterative method in [3] and the symmetric Gauss-Seidel (SGS) iterative method. Key words: Sine transform, Cosine transform, Matrix splitting, Iterative methods, Real Toeplitz matrices. AMS subject classifications: 15A23, 65F10, 65F15.

    1. Introduction. Consider the iterative solution of the following linear system of equations

        Ax = b     (1.1)

    by the two-step splitting iteration with alternation, where b ∈ R^n and A ∈ R^{n×n} is a symmetric positive definite Toeplitz matrix.
  • Numerical Linear Algebra
    Numerical Linear Algebra: Comparison of Several Numerical Methods used to solve Poisson's Equation. Christiana Mavroyiakoumou, University of Oxford. A case study report submitted for the degree of M.Sc. in Mathematical Modelling and Scientific Computing, Hilary 2017.

    1 Introduction. In this report, we apply the finite difference scheme to the Poisson equation with homogeneous Dirichlet boundary conditions. This yields a system of linear equations with a large sparse system matrix that is a classical test problem for comparing direct and iterative linear solvers. The solution of sparse linear systems by iterative methods has become one of the core applications and research areas of scientific computing. The size of systems that are solved routinely has increased tremendously over time, because the discretisation of partial differential equations can lead to systems that are arbitrarily large. We introduce the reader to the general theory of regular splitting methods and present some of the classical methods such as Jacobi, Gauss-Seidel, Relaxed Jacobi, SOR and SSOR. Iterative methods yield the solution U of a linear system after an infinite number of steps; at each step, iterative methods require the computation of the residual of the system. In this report, we use iterative solution methods in order to solve

        AU = f,  A ∈ R^{n×n},  f ∈ R^n.     (1.1)

    Linear systems can be solved by fixed point iteration. To do so, one must transform the system of linear equations into a fixed point form, and this is in general achieved by splitting the matrix A into two parts, A = M − N.
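The splitting A = M − N turns (1.1) into the fixed point iteration U^{k+1} = M^{-1}(N U^k + f); with M = D, the diagonal of A, this is the Jacobi method. A minimal sketch on a 1D analogue of the report's test problem follows; the grid size, right-hand side, and tolerance are illustrative assumptions.

```python
import numpy as np

n = 30                     # interior grid points (illustrative choice)
h = 1.0 / (n + 1)
# 1D Poisson problem -u'' = f on (0,1), u(0) = u(1) = 0, centred differences
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)

# Regular splitting A = M - N with M = D (Jacobi):
#   U^{k+1} = M^{-1} (N U^k + f)
M_diag = np.diag(A)
N = np.diag(M_diag) - A
U = np.zeros(n)
for k in range(20000):
    U_new = (N @ U + f) / M_diag
    if np.abs(U_new - U).max() < 1e-10:
        U = U_new
        break
    U = U_new

# For f = 1 the exact solution is u(x) = x(1 - x)/2, which the second-order
# scheme reproduces exactly at the grid points.
x = np.linspace(h, 1.0 - h, n)
err = np.abs(U - x * (1.0 - x) / 2.0).max()
```

The slow convergence (thousands of sweeps even for this small grid) is precisely what motivates the accelerated variants, SOR and SSOR, compared in the report.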
  • The Jacobi-Davidson Algorithm for Solving Large Sparse Symmetric Eigenvalue Problems with Application to the Design of Accelerator Cavities
    Research Collection, Doctoral Thesis. The Jacobi-Davidson algorithm for solving large sparse symmetric eigenvalue problems with application to the design of accelerator cavities. Author: Roman Geus. Publication date: 2002. Permanent link: https://doi.org/10.3929/ethz-a-004469464. Rights/license: In Copyright - Non-Commercial Use Permitted. This page was generated automatically upon download from the ETH Zurich Research Collection; for more information please consult the Terms of use. DISS. ETH NO. 14734. A dissertation submitted to the SWISS FEDERAL INSTITUTE OF TECHNOLOGY ZURICH for the degree of Doctor of Technical Sciences, presented by ROMAN GEUS, Dipl. Informatik-Ing. ETH, born 6 September 1970, citizen of Arbon, Switzerland; accepted on the recommendation of Prof. Dr. Walter Gander, examiner, Prof. Dr. Henk van der Vorst, co-examiner, and Dr. Peter Arbenz, co-examiner. 2002.

    Acknowledgements. This work would not have been possible without the help of many people. First of all I would like to thank Prof. Dr. Walter Gander for giving me the opportunity to do my Ph.D. thesis in his group. His guidance and encouragement were always a great help. My special thanks go to Dr. Peter Arbenz, my supervisor during all these years. With his support he contributed a great deal to this work. He was there to answer many of my questions and to help me deal with difficult problems. He gave me the opportunity and freedom to learn a great variety of methods and to realise my own ideas.
  • Relaxation Or Iterative Techniques for the Solution of Linear Equations
    ECE 552 Numerical Circuit Analysis, Chapter Five: Relaxation or Iterative Techniques for the Solution of Linear Equations. I. Hajj.

    Vector norms. Properties of vector norms: 1. ||x|| > 0 for all x ≠ 0. 2. ||x|| = 0 iff x = 0. 3. ||αx|| = |α| ||x|| for any scalar α. 4. ||x + y|| ≤ ||x|| + ||y|| for any two vectors x and y.

    Matrix norms. Given Ax = y, the "induced" norm of a matrix measures the maximum "stretching" the matrix does to any vector in the given vector norm.

    Relaxation or iterative methods of solving Ax = b. Aim: generate a sequence of vectors x^0, x^1, ..., x^t that will "hopefully" converge to the solution x* = A^{-1} b, without finding A^{-1} or the LU factors of A, just using A itself. The two basic schemes are Gauss-Jacobi and Gauss-Seidel. Point Gauss-Jacobi: repeat until ||x^{k+1} − x^k|| < ε (the updates can be done in parallel). Block Gauss-Jacobi: start with an initial guess x^(0), then solve the block equations, repeating until convergence. Point forward Gauss-Seidel: k = 0, initial guess x^0; repeat until ||x^{k+1} − x^k|| < ε. Point backward Gauss-Seidel is analogous with the sweep direction reversed. Block forward Gauss-Seidel: initial guess x^(0); solve for i = 1, 2, ..., p; repeat until ||x^{k+1} − x^k|| < ε. Block backward Gauss-Seidel: initial guess x^(0); solve for i = p−1, p−2, ..., 1; repeat until ||x^{k+1} − x^k|| < ε. A symmetrical Gauss-Seidel method performs a forward (point or block) G-S iteration followed by a backward G-S iteration. G-J and G-S also apply to bordered-block-diagonal matrices.

    Matrix splitting. More formally, given Ax = b, let A = L + D + U (matrix splitting), with L, D, and U the strictly lower triangular, diagonal, and strictly upper triangular parts of A.
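The point Gauss-Jacobi and forward Gauss-Seidel schemes above differ only in whether already-updated entries are used within the current sweep. A sketch with an invented diagonally dominant tridiagonal test system and the ||x^{k+1} − x^k|| < ε stopping test; on such matrices Gauss-Seidel typically needs roughly half as many sweeps.

```python
import numpy as np

def gauss_jacobi(A, b, eps=1e-8, maxit=5000):
    D = np.diag(A)
    x = np.zeros_like(b)
    for k in range(1, maxit + 1):
        x_new = (b - (A @ x - D * x)) / D      # every entry updated from x^k
        if np.abs(x_new - x).max() < eps:
            return x_new, k
        x = x_new
    return x, maxit

def gauss_seidel(A, b, eps=1e-8, maxit=5000):
    n = len(b)
    x = np.zeros_like(b)
    for k in range(1, maxit + 1):
        x_old = x.copy()
        for i in range(n):                     # forward sweep uses fresh values
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        if np.abs(x - x_old).max() < eps:
            return x, k
    return x, maxit

n = 10
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))            # invented test matrix
b = np.ones(n)
x_j, k_j = gauss_jacobi(A, b)
x_gs, k_gs = gauss_seidel(A, b)
print(k_gs, "<", k_j)                          # Gauss-Seidel needs fewer sweeps
```

Jacobi's independence of update order is what makes it embarrassingly parallel, exactly the "can be done in parallel" remark in the notes, while Gauss-Seidel trades that parallelism for faster convergence.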
  • On CSCS-Based Iteration Methods for Toeplitz System of Weakly Nonlinear Equations
    Journal of Computational and Applied Mathematics 235 (2011) 5095-5104. Contents lists available at ScienceDirect; journal homepage: www.elsevier.com/locate/cam. On CSCS-based iteration methods for Toeplitz system of weakly nonlinear equations. Mu-Zheng Zhu (a,b), Guo-Feng Zhang (a,*). a School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, PR China. b Department of Mathematics, Hexi University, Zhangye 734000, PR China.

    Abstract: For Toeplitz systems of weakly nonlinear equations, by using the separability and strong dominance between the linear and the nonlinear terms and using the circulant and skew-circulant splitting (CSCS) iteration technique, we establish two nonlinear composite iteration schemes, called Picard-CSCS and nonlinear CSCS-like iteration methods, respectively. The advantage of these methods is that they do not require accurate computation and storage of the Jacobian matrix, and only need to solve linear sub-systems with constant coefficient matrices. Therefore, computational workload and computer storage may be saved in actual implementations. Theoretical analysis shows that these new iteration methods are locally convergent under suitable conditions. Numerical results show that both Picard-CSCS and nonlinear CSCS-like iteration methods are feasible and effective for some cases.

    Article history: received 3 February 2010; received in revised form 31 December 2010. MSC: 65F10; 65F50; 65T50; 65Y05. Keywords: Weakly nonlinear equations; Nonlinear iteration scheme; CSCS iteration method. © 2011 Elsevier B.V. All rights reserved.
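Stripped of the CSCS inner solver (replaced here by a dense direct solve, purely for illustration), an outer Picard iteration for a weakly nonlinear system of the form A z = φ(z) + b can be sketched as follows; the small Toeplitz matrix, the nonlinearity φ, and the iteration count are invented assumptions, not the paper's data.

```python
import numpy as np

n = 8
# Symmetric positive definite Toeplitz matrix with first row [3, -1, 0, ...]
c = np.zeros(n)
c[0], c[1] = 3.0, -1.0
A = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])

phi = lambda z: 0.1 * np.sin(z)   # weak (Lipschitz-0.1) nonlinear term
b = np.ones(n)

# Picard iteration: solve the *linear* system A z_{k+1} = phi(z_k) + b at
# each step. The paper solves these constant-coefficient inner systems by
# CSCS; np.linalg.solve stands in here.
z = np.zeros(n)
for _ in range(50):
    z = np.linalg.solve(A, phi(z) + b)
```

Because every inner system has the same coefficient matrix A, no Jacobian is ever formed, which is the storage advantage the abstract emphasizes; the weak dominance of the nonlinear term (small Lipschitz constant relative to A) is what makes the outer iteration a contraction.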
  • Stationary Splitting Iterative Methods for the Matrix Equation Axb = C
    STATIONARY SPLITTING ITERATIVE METHODS FOR THE MATRIX EQUATION AXB = C. Zhongyun Liu, Zhen Li, Carla Ferreira, and Yulin Zhang.

    Abstract: Stationary splitting iterative methods for solving AXB = C are considered in this paper. The main tool used to derive our new method is the splitting of a given nonsingular matrix A = M − N induced by a matrix H such that (I − H)^{-1} exists. Convergence properties of the proposed method are discussed, and numerical experiments are presented to illustrate its computational efficiency and the effectiveness of some preconditioned variants. In particular, for certain surface-fitting applications our method is much more efficient than the progressive iterative approximation (PIA), a conventional iterative method often used in computer-aided geometric design (CAGD). Key words: Hermitian positive definite, H-matrices, stationary splitting iteration, induced splitting, curve fitting. AMS subject classifications: 15A23, 65D17, 65F10, 65F15.

    1. Introduction. Consider the matrix equation

        AXB = C,     (1.1)

    where A ∈ R^{n×n}, B ∈ R^{m×m} and X, C ∈ R^{n×m}. This matrix equation plays an important role in many practical applications such as surface fitting in computer-aided geometric design (CAGD), signal and image processing, photogrammetry, etc.; see for example [14, 19, 20, 30, 31] and the large literature therein. Given the existence of many important applications, the theory and algorithms for matrix equations have intrigued researchers for decades; see [7, 9, 15, 21, 28, 29, 30, 31, 34, 37]. A recent survey of the major numerical algorithms for the solution of the more general linear matrix equation AXE + DXB = C, for given square matrices A, B, D, E and a conformable matrix C, and related problems is presented in [33].
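The induced splitting via H is the paper's contribution; a plain stationary splitting A = M − N already conveys the shape of the iteration, M X_{k+1} B = N X_k B + C. The sketch below uses a simple Jacobi splitting (M = diag(A)) on invented, safely convergent data, and is not the paper's induced-splitting method.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 5
A = np.diag(4.0 + np.arange(n)) + 0.5 * np.ones((n, n))   # diagonally dominant
B = np.eye(m) + 0.1 * rng.standard_normal((m, m))          # nonsingular
X_true = rng.standard_normal((n, m))
C = A @ X_true @ B                                         # manufactured data

# Splitting A = M - N with M = diag(A):
#   M X_{k+1} B = N X_k B + C   =>   X_{k+1} = M^{-1} (N X_k B + C) B^{-1},
# whose error satisfies E_{k+1} = M^{-1} N E_k, so it converges whenever
# rho(M^{-1} N) < 1 (guaranteed here by diagonal dominance of A).
M_diag = np.diag(A)
N = np.diag(M_diag) - A
B_inv = np.linalg.inv(B)
X = np.zeros((n, m))
for _ in range(400):
    X = ((N @ X @ B + C) @ B_inv) / M_diag[:, None]
```

Note that only A is split: the iteration matrix acting on the error is M^{-1}N on the left and the identity on the right, so convergence is governed by the splitting of A alone.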