On Multigrid Methods for Solving Electromagnetic Scattering Problems

Total pages: 16

File type: PDF, size: 1020 KB

On Multigrid Methods for Solving Electromagnetic Scattering Problems

Dissertation for the attainment of the academic degree of Doktor der Ingenieurwissenschaften (Dr.-Ing.), Faculty of Engineering, Christian-Albrechts-Universität zu Kiel, submitted by Simona Gheorghe, 2005.
First reviewer: Prof. Dr.-Ing. L. Klinkenbusch. Second reviewer: Prof. Dr. U. van Rienen. Date of the oral examination: 20 January 2006.

Contents

1 Introductory remarks
  1.1 General introduction
  1.2 Maxwell's equations
  1.3 Boundary conditions
    1.3.1 Sommerfeld's radiation condition
  1.4 Scattering problem (Model Problem I)
  1.5 Discontinuity in a parallel-plate waveguide (Model Problem II)
  1.6 Absorbing-boundary conditions
    1.6.1 Global radiation conditions
    1.6.2 Local radiation conditions
  1.7 Summary
2 Coupling of FEM-BEM
  2.1 Introduction
  2.2 Finite element formulation
    2.2.1 Discretization
  2.3 Boundary-element formulation
  2.4 Coupling
3 Iterative solvers for sparse matrices
  3.1 Introduction
  3.2 Classical iterative methods
  3.3 Krylov subspace methods
    3.3.1 General projection methods
    3.3.2 Krylov subspace methods
  3.4 Preconditioning
    3.4.1 Matrix-based preconditioners
    3.4.2 Operator-based preconditioners
  3.5 Multigrid
    3.5.1 Full Multigrid
4 Numerical results
  4.1 Coupling between FEM and local/global boundary conditions
    4.1.1 Model problem I
    4.1.2 Model problem II
  4.2 Multigrid
    4.2.1 Theoretical considerations regarding the classical multigrid behavior in the case of an indefinite problem
    4.2.2 Model problem I
    4.2.3 Model problem II
    4.2.4 Operator-based preconditioners, in combination with multigrid
  4.3 Remarks on performance comparison
5 Summary and conclusions
Appendices
A Scattering from an infinite circular cylinder
  A.1 Perfect conductor
  A.2 Dielectric cylinder
B References from functional analysis
List of symbols
Acknowledgements
Bibliography

Chapter 1: Introductory remarks

1.1 General introduction

A large class of two-dimensional electromagnetic problems, among them the scattering of time-harmonic electromagnetic waves and their propagation in waveguides with discontinuities, is governed by the Helmholtz operator with open boundaries. In order to solve the underlying second-order elliptic partial differential equation (PDE) numerically, the Finite-Element Method (FEM) has been successfully used and described in the engineering literature over the years. As the solutions of the corresponding boundary value problem must satisfy a Sommerfeld-type boundary condition at infinity, thus ensuring their uniqueness, several combinations of the Finite Element Method with other methods that deliver a suitable boundary condition to be incorporated into the discretization have been proposed. An extensive survey of existing work on non-reflecting boundary conditions can be found in [17].
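For orientation, the exterior problem referred to throughout this introduction has the following standard form (a sketch in generic notation; the thesis's precise model problems are defined in Sections 1.4 and 1.5, which are not part of this excerpt): the scattered field u solves the Helmholtz equation outside the scatterer Ω and satisfies a Sommerfeld radiation condition that selects outgoing waves.

```latex
% Two-dimensional exterior Helmholtz problem; the sign in the radiation
% condition depends on the chosen time convention e^{\pm j\omega t}.
\Delta u(\mathbf{x}) + k^{2} u(\mathbf{x}) = 0 ,
\qquad \mathbf{x} \in \mathbb{R}^{2} \setminus \overline{\Omega} ,
\qquad
\lim_{r \to \infty} \sqrt{r} \left( \frac{\partial u}{\partial r} \mp j k\, u \right) = 0 .
```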
A classification in terms of the nature of the resulting boundary conditions (local and global) is also reflected in the structure of the matrices that arise from this discretization: local boundary conditions, like Bayliss-Turkel, preserve the sparse character of the system matrix, while global boundary conditions that arise from the combination with boundary elements, Dirichlet-to-Neumann mapping or eigenfunction expansions lead to full submatrices corresponding to the boundary nodes.

A short review of some of the most important methods, along with numerical issues related to them, is given in Chapters 1 and 2. Section 1.6 deals with the basic ideas behind the Bayliss-Turkel boundary conditions, described in [3], with the Dirichlet-to-Neumann mapping, whose development as a tool for solving boundary value problems on truncated domains is due to J. B. Keller and D. Givoli, cf. [18], [19], and with the eigenfunction expansion method. In Chapter 2 we give a more detailed review of the Finite Element Method, the main problems characteristic of its application to the Helmholtz equation, as well as its combination with the boundary element method, for which we mention some important results that serve to prove the existence and uniqueness of the solution. While the question of existence and uniqueness of solutions of Helmholtz problems is addressed in classical books such as [8] or [9], its finite element discretization and some problems related to it, such as the pollution effect for increasing wave numbers, have been studied in [28, 29, 30]. The problem of existence and uniqueness of the solution of Finite Element Method-Boundary Element Method (FEM-BEM) coupled problems for acoustic and electromagnetic scattering is treated in [27].

As described in Chapter 3, efficient solution algorithms for the Helmholtz problem with open boundaries have been developed, among them direct solvers and those based on iterative methods (classical as well as Krylov subspace), used as stand-alone solvers or as preconditioners. A review of the main numerical problems encountered when dealing with the Helmholtz problem is presented in [49], reflecting the amount of research that has been done on it. A study of Incomplete LU (ILU) and Generalized Minimum Residual (GMRES) as preconditioners for the solution of the Helmholtz problem by FEM is to be found in [32], while other preconditioners have been developed especially for the Helmholtz operator, among them the analytic ILU preconditioner [15], the SoV preconditioner based on the separation of variables [42], as well as a class of preconditioners based on the discretization of the Laplacian, developed in 1983 by Bayliss, Goldstein and Turkel [2], generalized in 2001 by Laird [37] and then in 2004, yielding the complex shifted Laplace preconditioner (CSL), whose efficiency in combination with different Krylov subspace methods has been evaluated in [13]. A further improvement, consisting in the approximation of this preconditioner by means of multigrid methods, has been developed by the same authors in [12]. Multigrid solvers have also been used extensively [21, 20, 39], as their efficiency for elliptic problems has been proved.
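To make the complex shifted-Laplace idea mentioned above concrete, here is a small, self-contained sketch that is not code from the thesis: a 2D finite-difference Helmholtz matrix is preconditioned by the same operator with a complex-shifted k² term, and the preconditioned system is solved with GMRES. The grid size, wavenumber and shift factor are illustrative choices, the sketch uses Dirichlet walls instead of an absorbing boundary, and in practice the preconditioner is only applied approximately (e.g. by one multigrid cycle) rather than factored exactly as done here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, k = 64, 20.0                                   # interior grid points per direction, wavenumber (illustrative)
h = 1.0 / (n + 1)
lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
I = sp.identity(n)
neg_lap = sp.kron(lap1d, I) + sp.kron(I, lap1d)   # 5-point discretization of -Laplacian on the unit square

# Helmholtz matrix A u = (-Laplacian - k^2) u, a crude model problem with Dirichlet walls
A = (neg_lap - k**2 * sp.identity(n * n)).astype(np.complex128).tocsc()

# Complex shifted-Laplace preconditioner: the same operator with a damped k^2 term.
# The shift (1 - 0.5j) is a common illustrative choice, not a value from the thesis.
M = (neg_lap - (1.0 - 0.5j) * k**2 * sp.identity(n * n)).tocsc()
M_lu = spla.splu(M)                               # exact factorization here; one multigrid cycle in practice
prec = spla.LinearOperator(A.shape, matvec=M_lu.solve, dtype=np.complex128)

b = np.ones(n * n, dtype=np.complex128)
x, info = spla.gmres(A, b, M=prec, restart=50, maxiter=200)
print("GMRES info:", info, "  residual norm:", np.linalg.norm(b - A @ x))
```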
Nevertheless, the application of multigrid solvers to the Helmholtz equation poses some problems that sometimes lead to slow convergence or even divergence when standard multigrid (that is, with linear smoothers) is used as a stand-alone solver, as shown in [10, 50] and in other extensive numerical studies, like the one in [33]. Modifications of the standard multigrid algorithm have been proposed: identifying the levels where these problems appear and using a Krylov-type smoother on them instead of the classical Jacobi or Gauss-Seidel smoothers, together with an "outer iteration" [11], or the use of a grid-dependent eigenvalue shift in combination with under-interpolation [50]. Another solution meant to overcome the difficulties of standard multigrid is the wave-ray multigrid method, which has been proposed by Brandt et al. [6]. Finally, a fast solver for exterior Helmholtz problems based on imbedding methods has been developed in [14].

The main contribution of this thesis, presented in Chapter 4, lies in numerical studies whose aim is to determine whether classical multigrid is well-suited for the solution of the considered problems. As our interest is to find optimal algorithms for the above-mentioned problems in the low-frequency domain, we have studied in this chapter the applicability of standard geometric multigrid to those problems, as well as the use of some remedies that have been proposed in the literature. Among them, the performance of multigrid-accelerated Krylov subspace methods (mostly BiCGSTAB) and of the operator-based "shifted-Laplace" preconditioner, in comparison to classical and full multigrid, has been studied in terms of computation time and convergence. Based on these numerical studies, we present the main conclusions regarding the feasibility of multigrid methods for the considered problems in Chapter 5.

1.2 Maxwell's equations

Whenever dealing with electromagnetic field problems, the fundamental equations are Maxwell's equations. We will take their differential form as the starting point:

  ∇ × E(x, t) = −∂B(x, t)/∂t          (Faraday's law)
  ∇ × H(x, t) = ∂D(x, t)/∂t + J(x, t)  (Maxwell-Ampère law)        (1.1)
  ∇ · D(x, t) = ρ(x, t)               (Gauss's law)
  ∇ · B(x, t) = 0

where E and H are the electric and magnetic field intensities, D and B are the electric and magnetic flux densities, and J and ρ are the electric current and electric charge densities. If the medium is non-dispersive, linear, isotropic and inhomogeneous, the constitutive relations are written as:

  D(x, t) = ε(x) E(x, t)      (1.2)
  B(x, t) = μ(x) H(x, t)      (1.3)
  J(x, t) = σ_e(x) E(x, t)    (1.4)

where ε, μ and σ_e denote, respectively, the permittivity, permeability and conductivity of the medium under consideration. In the following we shall restrict ourselves to time-harmonic electromagnetic fields varying with an angular frequency ω = 2πf rad/sec. In this case all the above-mentioned fields have the representation

  F(x, t) = Re[ F_0(x) e^{jωt} ],   ∀ t ∈ R_+,      (1.5)

where the complex vector F_0(x) is referred to as the field phasor. Assuming that there are no charges in the medium (ρ = 0), the divergence-free fields E_0 and H_0 will then be solutions of the "reduced" Maxwell's equations:

  ∇ × E_0(x) = −jωμ H_0(x)
  ∇ × H_0(x) = jωε_f E_0(x)      (1.6)

where σ_e + jωε = jωε_f. For the rest of the thesis, we'll refer only to phasor fields and for simplicity we
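For completeness, the step from (1.1) to the reduced system (1.6), which the excerpt above states without the intermediate calculation, is a direct substitution of the phasor representation (1.5) and the constitutive relations (1.2)-(1.4) into the two curl equations, assuming ρ = 0 as stated; a sketch:

```latex
% Insert  F(x,t) = Re[ F_0(x) e^{j\omega t} ]  into Faraday's and Ampere's laws:
\nabla \times \mathbf{E}_0 \, e^{j\omega t}
   = -\,\mu \, \frac{\partial}{\partial t}\!\left( \mathbf{H}_0 e^{j\omega t} \right)
   = -\, j\omega\mu \, \mathbf{H}_0 \, e^{j\omega t}
   \;\Longrightarrow\;
   \nabla \times \mathbf{E}_0 = -\, j\omega\mu \, \mathbf{H}_0 ,

\nabla \times \mathbf{H}_0
   = \sigma_e \mathbf{E}_0 + j\omega\varepsilon \, \mathbf{E}_0
   = (\sigma_e + j\omega\varepsilon)\, \mathbf{E}_0
   = j\omega\varepsilon_f \, \mathbf{E}_0 ,
\qquad
j\omega\varepsilon_f := \sigma_e + j\omega\varepsilon .
```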
Recommended publications
  • Coupling Finite and Boundary Element Methods for Static and Dynamic Elastic Problems with Non-Conforming Interfaces
    Comput. Methods Appl. Mech. Engrg. 198 (2008) 449-458. Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/cma

    Coupling finite and boundary element methods for static and dynamic elastic problems with non-conforming interfaces
    Thomas Rüberg (a), Martin Schanz (b)
    (a) Graz University of Technology, Institute for Structural Analysis, Graz, Austria
    (b) Graz University of Technology, Institute of Applied Mechanics, Technikerstr. 4, 8010 Graz, Austria

    Article history: Received 1 February 2008; received in revised form 4 August 2008; accepted 26 August 2008; available online 5 September 2008.
    Keywords: FEM-BEM coupling; linear elastodynamics; non-conforming interfaces; FETI/BETI

    Abstract: A coupling algorithm is presented, which allows for the flexible use of finite and boundary element methods as local discretization methods. On the subdomain level, Dirichlet-to-Neumann maps are realized by means of each discretization method. Such maps are common for the treatment of static problems and are here transferred to dynamic problems. This is realized based on the similarity of the structure of the systems of equations obtained after discretization in space and time. The global set of equations is then established by incorporating the interface conditions in a weighted sense by means of Lagrange multipliers. Therefore, the interface continuity condition is relaxed and the interface meshes can be non-conforming. The field of application is problems from elastostatics and elastodynamics. © 2008 Elsevier B.V. All rights reserved.

    1. Introduction. ...main, this approach is often carried out only once for every time step, which gives a staggering scheme.
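The weighted (Lagrange-multiplier) incorporation of the interface conditions described in the abstract leads, generically, to a saddle-point system of the following block form; this is a standard sketch with illustrative notation (S_i, B_i, λ are not taken from the paper):

```latex
% S_i : discrete Dirichlet-to-Neumann map of subdomain i (FEM or BEM),
% B_i : signed interface trace operator,  \lambda : interface Lagrange multiplier.
\begin{pmatrix}
  S_1 & 0   & B_1^{\mathsf T} \\
  0   & S_2 & B_2^{\mathsf T} \\
  B_1 & B_2 & 0
\end{pmatrix}
\begin{pmatrix} u_1 \\ u_2 \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f_1 \\ f_2 \\ 0 \end{pmatrix},
\qquad
B_1 u_1 + B_2 u_2 = 0 \quad \text{(interface continuity in a weak sense)} .
```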
  • Coupling of the Finite Volume Element Method and the Boundary Element Method – an a Priori Convergence Result
    COUPLING OF THE FINITE VOLUME ELEMENT METHOD AND THE BOUNDARY ELEMENT METHOD: AN A PRIORI CONVERGENCE RESULT
    CHRISTOPH ERATH

    Abstract. The coupling of the finite volume element method and the boundary element method is an interesting approach to simulate a coupled system of a diffusion convection reaction process in an interior domain and a diffusion process in the corresponding unbounded exterior domain. This discrete system maintains naturally local conservation, and a possible weighted upwind scheme guarantees the stability of the discrete system also for convection-dominated problems. We show existence and uniqueness of the continuous system with appropriate transmission conditions on the coupling boundary, provide a convergence and an a priori analysis in an energy (semi-)norm, and an existence and a uniqueness result for the discrete system. All results are also valid for the upwind version. Numerical experiments show that our coupling is an efficient method for the numerical treatment of transmission problems, which can also be convection dominated.

    Key words. finite volume element method, boundary element method, coupling, existence and uniqueness, convergence, a priori estimate

    1. Introduction. The transport of a concentration in a fluid or the heat propagation in an interior domain often follows a diffusion convection reaction process. The influence of such a system on the corresponding unbounded exterior domain can be described by a homogeneous diffusion process. Such a coupled system is usually solved by a numerical scheme and therefore a method which ensures local conservation and stability with respect to the convection term is preferable. In the literature one can find various discretization schemes to solve similar problems.
  • Uncertainties in the Solutions to Boundary Element Method: An Interval Approach
    UNCERTAINTIES IN THE SOLUTIONS TO BOUNDARY ELEMENT METHOD: AN INTERVAL APPROACH
    by BARTLOMIEJ FRANCISZEK ZALEWSKI

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Dissertation adviser: Dr. Robert L. Mullen. Department of Civil Engineering, Case Western Reserve University, August 2008.

    CASE WESTERN RESERVE UNIVERSITY, SCHOOL OF GRADUATE STUDIES. We hereby approve the thesis/dissertation of Bartlomiej Franciszek Zalewski, candidate for the Doctor of Philosophy degree. (Signed) Robert L. Mullen (chair of the committee), Arthur A. Huckelbridge, Xiangwu (David) Zeng, Daniela Calvetti, Shad M. Sargand. (Date) 4-30-2008. We also certify that written approval has been obtained for any proprietary material contained therein.

    With love I dedicate my Ph.D. dissertation to my parents.

    TABLE OF CONTENTS
    LIST OF TABLES
    LIST OF FIGURES
    ACKNOWLEDGEMENTS
    LIST OF SYMBOLS
    ABSTRACT
    CHAPTER I. INTRODUCTION
      1.1 Background
      1.2 Overview
      1.3 Historical Background
        1.3.1 Historical Background of Boundary Element Method
        1.3.2 Historical Background of Interval
  • Sparse Matrices and Iterative Methods
    Sparse Matrices and Iterative Methods
    K. Cooper, Department of Mathematics, Washington State University, 2018

    Iterative Methods. Consider the problem of solving Ax = b, where A is n × n. Why would we use an iterative method?
    • Avoid direct decomposition (LU, QR, Cholesky); replace it with iterated matrix multiplication.
    • LU is O(n^3) flops, while matrix-vector multiplication is O(n^2), so if we can get convergence in e.g. log(n) iterations, iteration might be faster.

    Jacobi, GS, SOR. Some old methods:
    • Jacobi is easily parallelized, but converges extremely slowly.
    • Gauss-Seidel/SOR converge faster, but cannot be effectively parallelized.
    • Only Jacobi really takes advantage of sparsity.

    Sparsity. When a matrix is sparse (many more zero entries than nonzero), then typically the number of nonzero entries is O(n), so matrix-vector multiplication becomes an O(n) operation. This makes iterative methods very attractive. It does not help direct solves as much because of the problem of fill-in, but we note that there are specialized solvers to minimize fill-in.

    Krylov Subspace Methods. A class of methods that converge in n iterations (in exact arithmetic). We hope that they arrive at a solution that is "close enough" in fewer iterations. Often these work much better than the classic methods. They are more readily parallelized, and take full advantage of sparsity.
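As a small, self-contained illustration of two points above (one Jacobi sweep is just a sparse matrix-vector product, and Jacobi converges very slowly), here is a sketch that is not part of the slides; the tridiagonal test matrix and the sweep count are arbitrary choices.

```python
import numpy as np
import scipy.sparse as sp

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # sparse test matrix, O(n) nonzeros
b = np.ones(n)

D_inv = 1.0 / A.diagonal()            # Jacobi only needs the diagonal ...
R = A - sp.diags(A.diagonal())        # ... and the off-diagonal remainder

x = np.zeros(n)
for sweep in range(200):              # each sweep costs one sparse matvec, i.e. O(n) work
    x = D_inv * (b - R @ x)           # Jacobi update: x_{k+1} = D^{-1} (b - R x_k)

print("residual after 200 Jacobi sweeps:", np.linalg.norm(b - A @ x))  # still large: Jacobi converges very slowly here
```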
  • Scalable Stochastic Kriging with Markovian Covariances
    Scalable Stochastic Kriging with Markovian Covariances
    Liang Ding and Xiaowei Zhang
    Department of Industrial Engineering and Decision Analytics, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong

    Abstract. Stochastic kriging is a popular technique for simulation metamodeling due to its flexibility and analytical tractability. Its computational bottleneck is the inversion of a covariance matrix, which takes O(n^3) time in general and becomes prohibitive for large n, where n is the number of design points. Moreover, the covariance matrix is often ill-conditioned for large n, and thus the inversion is prone to numerical instability, resulting in erroneous parameter estimation and prediction. These two numerical issues preclude the use of stochastic kriging at a large scale. This paper presents a novel approach to address them. We construct a class of covariance functions, called Markovian covariance functions (MCFs), which have two properties: (i) the associated covariance matrices can be inverted analytically, and (ii) the inverse matrices are sparse. With the use of MCFs, the inversion-related computational time is reduced to O(n^2) in general, and can be further reduced by orders of magnitude with additional assumptions on the simulation errors and design points. The analytical invertibility also enhances the numerical stability dramatically. The key in our approach is that we identify a general functional form of covariance functions that can induce sparsity in the corresponding inverse matrices. We also establish a connection between MCFs and linear ordinary differential equations. Such a connection provides a flexible, principled approach to constructing a wide class of MCFs. Extensive numerical experiments demonstrate that stochastic kriging with MCFs can handle large-scale problems in a manner that is both computationally efficient and numerically stable.
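The two properties claimed for MCFs (analytically available, sparse inverses) can be seen in the simplest classical Markovian example, the exponential (Ornstein-Uhlenbeck) covariance on ordered one-dimensional design points, whose precision matrix is tridiagonal. The sketch below only checks this special case numerically; the paper's MCF class and its O(n^2) results are more general, and the length-scale and grid here are arbitrary choices.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)                         # ordered 1-D design points (illustrative)
theta = 3.0                                            # illustrative decay parameter
K = np.exp(-theta * np.abs(x[:, None] - x[None, :]))   # exponential (OU) covariance matrix

P = np.linalg.inv(K)                                   # dense inverse, used here only to check sparsity
band = np.triu(np.tril(P, 1), -1)                      # keep the entries with |i - j| <= 1
leak = np.abs(P - band).max() / np.abs(P).max()
print("relative magnitude outside the tridiagonal band:", leak)   # round-off level: K^{-1} is tridiagonal
```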
  • Chapter 7 Iterative Methods for Large Sparse Linear Systems
    Chapter 7: Iterative methods for large sparse linear systems

    In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay for the direct methods based on matrix factorization is that the factors of a sparse matrix may not be sparse, so that for large sparse systems the memory cost makes direct methods too expensive, in memory and in execution time. Instead we introduce iterative methods, for which matrix sparsity is exploited to develop fast algorithms with a low memory footprint.

    7.1 Sparse matrix algebra

    Large sparse matrices. We say that the matrix A ∈ R^{n×n} is large if n is large, and that A is sparse if most of the elements are zero. If a matrix is not sparse, we say that the matrix is dense. Whereas for a dense matrix the number of nonzero elements is O(n^2), for a sparse matrix it is only O(n), which has obvious implications for the memory footprint and the efficiency of algorithms that exploit the sparsity of a matrix. A diagonal matrix is a sparse matrix A = (a_ij) for which a_ij = 0 for all i ≠ j, and a diagonal matrix can be generalized to a banded matrix, for which there exists a number p, the bandwidth, such that a_ij = 0 for all i < j − p or i > j + p. For example, a tridiagonal matrix A is a banded matrix with p = 1,

        A = [ x x 0 0 0 0
              x x x 0 0 0
              0 x x x 0 0
              0 0 x x x 0
              0 0 0 x x x
              0 0 0 0 x x ] ,      (7.1)

    where x represents a nonzero element.

    Compressed row storage. The compressed row storage (CRS) format is a data structure for efficient representation of a sparse matrix by three arrays, containing the nonzero values, the respective column indices, and the extents of the rows.
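A minimal sketch (not from the book chapter) of the CRS layout just described, applied to a small tridiagonal matrix with the structure of (7.1); the numerical values stand in for the x entries.

```python
import numpy as np

# Dense 4x4 tridiagonal example; the numbers stand in for the x entries of (7.1).
A = np.array([[ 2.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  2.0]])

values, col_idx, row_ptr = [], [], [0]    # CRS: nonzero values, their column indices, row extents
for row in A:
    for j, a in enumerate(row):
        if a != 0.0:
            values.append(a)
            col_idx.append(j)
    row_ptr.append(len(values))           # row i occupies values[row_ptr[i]:row_ptr[i+1]]

def crs_matvec(values, col_idx, row_ptr, x):
    """y = A x touching only the stored nonzeros (O(nnz) work)."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

x = np.arange(1.0, 5.0)
print(crs_matvec(values, col_idx, row_ptr, x), A @ x)   # the two products agree
```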
  • The Boundary Element Method in Acoustics
    The Boundary Element Method in Acoustics, by Stephen Kirkup, 1998/2007.

    The Boundary Element Method in Acoustics by S. M. Kirkup. First edition published in 1998 by Integrated Sound Software and published in 2007 in electronic format (corrections and minor amendments to the 1998 print). Information on this and related publications and software (including the second edition of this work, when complete) can be obtained from the web site http://www.boundary-element-method.com. The author's publications can generally be found through the web page http://www.kirkup.info/papers. The nine subroutines and the corresponding nine example test programs are available in Fortran 77 on CD ABEMFULL. The original core routines can be downloaded from the web site. The learning package of subroutines ABEM2D can be downloaded from the web site. See the web site for a full price list and alternative methods of obtaining the software. ISBN 0 953 4031 06.

    The work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is subject to the permission of the author. © Stephen Kirkup 1998-2007.

    Preface to the 2007 Print. Having run out of hard copies of the book a number of years ago, the author is pleased to publish an on-line version of the book in PDF format. A number of minor corrections have been made to the original, following feedback from a number of readers.
  • A Parallel Solver for Graph Laplacians
    A Parallel Solver for Graph Laplacians
    Tristan Konolige and Jed Brown, University of Colorado Boulder
    [email protected]

    ABSTRACT. Problems from graph drawing, spectral clustering, network flow and graph partitioning can all be expressed in terms of graph Laplacian matrices. There are a variety of practical approaches to solving these problems in serial. However, as problem sizes increase and single core speeds stagnate, parallelism is essential to solve such problems quickly. We present an unsmoothed aggregation multigrid method for solving graph Laplacians in a distributed memory setting. We introduce new parallel aggregation and low degree elimination algorithms targeted specifically at irregular degree graphs. These algorithms are expressed in terms of sparse matrix-vector products using generalized sum and product operations. This formulation is amenable to linear algebra using arbitrary distributions and allows us to operate on a 2D sparse matrix distribution, which is necessary for parallel scalability.

    ... low energy components while maintaining coarse grid sparsity. We also develop a parallel algorithm for finding and eliminating low degree vertices, a technique introduced for a sequential multigrid algorithm by Livne and Brandt [22], that is important for irregular graphs. While matrices can be distributed using a vertex partition (a 1D/row distribution) for PDE problems, this leads to unacceptable load imbalance for irregular graphs. We represent matrices using a 2D distribution (a partition of the edges) that maintains load balance for parallel scaling [11]. Many algorithms that are practical for 1D distributions are not feasible for more general distributions. Our new parallel algorithms are distribution agnostic.

    2 BACKGROUND
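As a toy illustration of the abstract's first sentence (this is not the paper's parallel solver), the sketch below assembles a graph Laplacian L = D - A from an edge list and solves a grounded system with conjugate gradients; the small graph and right-hand side are invented for the example.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (1, 3)]     # small made-up graph
n = 5
rows = [i for i, j in edges] + [j for i, j in edges]
cols = [j for i, j in edges] + [i for i, j in edges]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))  # adjacency matrix
L = (sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A).tocsr()        # graph Laplacian L = D - A

b = np.array([1.0, 0.0, 0.0, 0.0, -1.0])       # net inflow/outflow per vertex, sums to zero
L_red = L[1:, 1:]                               # "ground" vertex 0: the reduced system is SPD
x_red, info = spla.cg(L_red, b[1:])
x = np.concatenate([[0.0], x_red])              # potentials relative to vertex 0
print("CG info:", info, "  residual:", np.linalg.norm(b[1:] - L_red @ x_red))
```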
  • Solving Linear Systems: Iterative Methods and Sparse Systems
    Solving Linear Systems: Iterative Methods and Sparse Systems (COS 323)

    Last time:
    • Linear system: Ax = b
    • Singular and ill-conditioned systems
    • Gaussian elimination: a general purpose method
      – Naïve Gauss (no pivoting)
      – Gauss with partial and full pivoting
      – Asymptotic analysis: O(n^3)
    • Triangular systems and LU decomposition
    • Special matrices and algorithms:
      – Symmetric positive definite: Cholesky decomposition
      – Tridiagonal matrices
    • Singularity detection and condition numbers

    Today: methods for large and sparse systems
    • Rank-one updating with Sherman-Morrison
    • Iterative refinement
    • Fixed-point and stationary methods
      – Introduction
      – Iterative refinement as a stationary method
      – Gauss-Seidel and Jacobi methods
      – Successive over-relaxation (SOR)
    • Solving a system as an optimization problem
    • Representing sparse systems

    Problems with large systems:
    • Gaussian elimination and LU decomposition (factoring step) take O(n^3)
    • Expensive for big systems!
    • We can get by more easily with special matrices
      – Cholesky decomposition: for symmetric positive definite A; still O(n^3) but halves storage and operations
      – Band-diagonal: O(n) storage and operations
    • What if A is big? (And not diagonal?)

    Special example: cyclic tridiagonal
    • Interesting extension: cyclic tridiagonal
    • Could derive yet another special case algorithm, but there's a better way

    Updating the inverse
    • Suppose we have some fast way of finding A^{-1} for some matrix A
    • Now A changes in a special way: A* = A + uv^T for some n×1 vectors u and v
    • Goal: find a fast way of computing (A*)^{-1}
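The fast update the last slide asks for is the Sherman-Morrison identity (a standard result, quoted here for reference): provided A is invertible and 1 + v^T A^{-1} u ≠ 0,

```latex
(A + u v^{\mathsf T})^{-1}
  \;=\;
A^{-1} \;-\; \frac{A^{-1} u \, v^{\mathsf T} A^{-1}}{1 + v^{\mathsf T} A^{-1} u} .
```

So applying (A*)^{-1} to a vector needs only solves with the already-factored A and a few inner products, rather than a new O(n^3) factorization.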
  • High Performance Selected Inversion Methods for Sparse Matrices
    High performance selected inversion methods for sparse matrices: direct and stochastic approaches to selected inversion. Doctoral dissertation submitted to the Faculty of Informatics of the Università della Svizzera italiana in partial fulfillment of the requirements for the degree of Doctor of Philosophy, presented by Fabio Verbosio under the supervision of Olaf Schenk, February 2019.

    Dissertation Committee: Illia Horenko (Università della Svizzera italiana, Switzerland), Igor Pivkin (Università della Svizzera italiana, Switzerland), Matthias Bollhöfer (Technische Universität Braunschweig, Germany), Laura Grigori (INRIA Paris, France). Dissertation accepted on 25 February 2019. Research Advisor: Olaf Schenk. PhD Program Director: Walter Binder.

    I certify that except where due acknowledgement has been given, the work presented in this thesis is that of the author alone; the work has not been submitted previously, in whole or in part, to qualify for any other academic award; and the content of the thesis is the result of work which has been carried out since the official commencement date of the approved research program. Fabio Verbosio, Lugano, 25 February 2019.

    To my whole family. In its broadest sense.

    "Mathematical knowledge consists of propositions constructed by our intellect so as to always function as true, either because they are innate or because mathematics was invented before the other sciences. And the library was built by a human mind that thinks in a mathematical way, because without mathematics you cannot make labyrinths." (Umberto Eco, "Il nome della rosa")

    Abstract. The explicit evaluation of selected entries of the inverse of a given sparse matrix is an important process in various application fields and is gaining visibility in recent years.
  • A Parallel Gauss-Seidel Algorithm for Sparse Power Systems Matrices
    A Parallel Gauss-Seidel Algorithm for Sparse Power Systems Matrices
    D. P. Koester, S. Ranka, and G. C. Fox
    School of Computer and Information Science and The Northeast Parallel Architectures Center (NPAC), Syracuse University, Syracuse, NY 13244-4100
    [email protected], [email protected], [email protected]
    A condensed version of this paper was presented at Supercomputing '94. NPAC Technical Report SCCS 630, 4 April 1994.

    Abstract. We describe the implementation and performance of an efficient parallel Gauss-Seidel algorithm that has been developed for irregular, sparse matrices from electrical power systems applications. Although Gauss-Seidel algorithms are inherently sequential, by performing specialized orderings on sparse matrices it is possible to eliminate much of the data dependencies caused by precedence in the calculations. A two-part matrix ordering technique has been developed: first to partition the matrix into block-diagonal-bordered form using diakoptic techniques, and then to multi-color the data in the last diagonal block using graph coloring techniques. The ordered matrices often have extensive parallelism, while maintaining the strict precedence relationships in the Gauss-Seidel algorithm. We present timing results for a parallel Gauss-Seidel solver implemented on the Thinking Machines CM-5 distributed memory multiprocessor. The algorithm presented here requires active message remote procedure calls in order to minimize communications overhead and obtain good relative speedup. The paradigm used with active messages greatly simplified the implementation of this sparse matrix algorithm.

    1 Introduction. We have developed an efficient parallel Gauss-Seidel algorithm for irregular, sparse matrices from electrical power systems applications.
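The role of the coloring step can be seen in a toy serial sketch, far simpler than the paper's block-diagonal-bordered ordering for power-system matrices: with a red/black ordering of a tridiagonal system, unknowns of one color couple only to the other color, so all updates within a half-sweep are mutually independent and could run in parallel. The matrix, size and sweep count below are arbitrary choices.

```python
import numpy as np
import scipy.sparse as sp

n = 32
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # tridiagonal test system
b = np.ones(n)
x = np.zeros(n)
D = A.diagonal()

red, black = np.arange(0, n, 2), np.arange(1, n, 2)   # two-coloring: neighbors always get different colors
for sweep in range(2000):                              # plain Gauss-Seidel, swept color by color
    for color in (red, black):
        r = b - A @ x                                  # residual with the latest values
        x[color] += r[color] / D[color]                # updates within one color are mutually independent
print("residual after red/black Gauss-Seidel:", np.linalg.norm(b - A @ x))
```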
  • Decomposition Methods for Sparse Matrix Nearness Problems
    SIAM J. Matrix Anal. Appl., Vol. 36, No. 4, pp. 1691-1717. © 2015 Society for Industrial and Applied Mathematics.

    DECOMPOSITION METHODS FOR SPARSE MATRIX NEARNESS PROBLEMS
    YIFAN SUN AND LIEVEN VANDENBERGHE

    Abstract. We discuss three types of sparse matrix nearness problems: given a sparse symmetric matrix, find the matrix with the same sparsity pattern that is closest to it in Frobenius norm and (1) is positive semidefinite, (2) has a positive semidefinite completion, or (3) has a Euclidean distance matrix completion. Several proximal splitting and decomposition algorithms for these problems are presented and their performance is compared on a set of test problems. A key feature of the methods is that they involve a series of projections on small dense positive semidefinite or Euclidean distance matrix cones, corresponding to the cliques in a triangulation of the sparsity graph. The methods discussed include the dual block coordinate ascent algorithm (or Dykstra's method), the dual projected gradient and accelerated projected gradient algorithms, and a primal and a dual application of the Douglas-Rachford splitting algorithm.

    Key words. positive semidefinite completion, Euclidean distance matrices, chordal graphs, convex optimization, first order methods
    AMS subject classifications. 65F50, 90C22, 90C25
    DOI. 10.1137/15M1011020

    1. Introduction. We discuss matrix optimization problems of the form

        minimize    ‖X − C‖_F²        (1.1)
        subject to  X ∈ S,

    where the variable X is a symmetric p × p matrix with a given sparsity pattern E, and C is a given symmetric matrix with the same sparsity pattern as X. We focus on the following three choices for the constraint set S.
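A minimal sketch of the basic building block mentioned in the abstract: the Frobenius-norm projection of a small dense symmetric matrix onto the positive semidefinite cone by eigenvalue clipping. The decomposition methods of the paper apply such projections clique by clique, which this toy example does not reproduce; the 5x5 test matrix is arbitrary.

```python
import numpy as np

def project_psd(S):
    """Frobenius-norm projection of a symmetric matrix onto the PSD cone (eigenvalue clipping)."""
    w, V = np.linalg.eigh(S)
    return (V * np.maximum(w, 0.0)) @ V.T

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
S = (B + B.T) / 2.0                                   # a symmetric, generally indefinite test matrix
X = project_psd(S)
print("smallest eigenvalue after projection:", np.linalg.eigvalsh(X).min())   # >= 0 up to round-off
```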