On Multigrid Methods for Solving Electromagnetic Scattering Problems
Total pages: 16
File type: PDF; size: 1020 KB
Recommended publications
-
Coupling Finite and Boundary Element Methods for Static and Dynamic Elastic Problems with Non-Conforming Interfaces
Thomas Rüberg (Graz University of Technology, Institute for Structural Analysis) and Martin Schanz (Graz University of Technology, Institute of Applied Mechanics). Comput. Methods Appl. Mech. Engrg. 198 (2008) 449–458.
Abstract: A coupling algorithm is presented which allows for the flexible use of finite and boundary element methods as local discretization methods. On the subdomain level, Dirichlet-to-Neumann maps are realized by means of each discretization method. Such maps are common for the treatment of static problems and are transferred here to dynamic problems, based on the similarity of the structure of the systems of equations obtained after discretization in space and time. The global set of equations is then established by incorporating the interface conditions in a weighted sense by means of Lagrange multipliers. The interface continuity condition is therefore relaxed, and the interface meshes can be non-conforming. The fields of application are problems from elastostatics and elastodynamics.
Keywords: FEM–BEM coupling, linear elastodynamics, non-conforming interfaces, FETI/BETI.
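As a rough illustration of the Lagrange-multiplier idea (a generic sketch, not the paper's actual formulation; the block structure and symbols below are assumptions), weakly coupling two discretized subdomains yields a saddle-point system:

$$
\begin{bmatrix} K_1 & 0 & B_1^{T} \\ 0 & K_2 & B_2^{T} \\ B_1 & B_2 & 0 \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ \lambda \end{bmatrix}
=
\begin{bmatrix} f_1 \\ f_2 \\ 0 \end{bmatrix},
$$

where $K_i$ is the subdomain operator produced by either discretization (FEM or BEM), $B_i$ restricts subdomain unknowns to the interface, and the multiplier $\lambda$ imposes the continuity condition $B_1 u_1 + B_2 u_2 = 0$ only in a weighted sense, which is what allows the interface meshes to be non-conforming.
-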
Coupling of the Finite Volume Element Method and the Boundary Element Method – an a Priori Convergence Result
Christoph Erath.
Abstract: The coupling of the finite volume element method and the boundary element method is an interesting approach to simulate a coupled system of a diffusion–convection–reaction process in an interior domain and a diffusion process in the corresponding unbounded exterior domain. This discrete system naturally maintains local conservation, and a possible weighted upwind scheme guarantees the stability of the discrete system also for convection-dominated problems. We show existence and uniqueness of the continuous system with appropriate transmission conditions on the coupling boundary, provide a convergence and an a priori analysis in an energy (semi-)norm, and give existence and uniqueness results for the discrete system. All results are also valid for the upwind version. Numerical experiments show that our coupling is an efficient method for the numerical treatment of transmission problems, which can also be convection dominated.
Key words: finite volume element method, boundary element method, coupling, existence and uniqueness, convergence, a priori estimate.
1. Introduction. The transport of a concentration in a fluid or the heat propagation in an interior domain often follows a diffusion–convection–reaction process. The influence of such a system on the corresponding unbounded exterior domain can be described by a homogeneous diffusion process. Such a coupled system is usually solved by a numerical scheme, and therefore a method which ensures local conservation and stability with respect to the convection term is preferable. In the literature one can find various discretization schemes to solve similar problems.
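A model problem of the kind described here might read as follows (a generic sketch under assumed notation, not the paper's exact formulation):

$$
-\operatorname{div}(\mathbf{A}\nabla u - \mathbf{b}\,u) + c\,u = f \quad \text{in } \Omega,
\qquad
-\Delta u^{\mathrm{ext}} = 0 \quad \text{in } \Omega^{\mathrm{ext}} := \mathbb{R}^2 \setminus \overline{\Omega},
$$

coupled through transmission conditions on $\Gamma = \partial\Omega$, e.g. $u = u^{\mathrm{ext}} + u_0$ and $(\mathbf{A}\nabla u - \mathbf{b}\,u)\cdot\mathbf{n} = \partial u^{\mathrm{ext}}/\partial n + t_0$, together with a decay (radiation) condition on $u^{\mathrm{ext}}$ at infinity. The finite volume element method then discretizes the interior equation, preserving local conservation, while the boundary element method reduces the exterior Laplace problem to equations on $\Gamma$.
-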
Uncertainties in the Solutions to Boundary Element Method: An Interval Approach
Ph.D. dissertation by Bartlomiej Franciszek Zalewski, submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Civil Engineering, Case Western Reserve University, August 2008. Dissertation adviser: Dr. Robert L. Mullen. Committee: Robert L. Mullen (chair), Arthur A. Huckelbridge, Xiangwu (David) Zeng, Daniela Calvetti, Shad M. Sargand; approved 30 April 2008.
The front matter comprises lists of tables and figures, acknowledgements, a list of symbols, and an abstract. Chapter I (Introduction) covers 1.1 Background, 1.2 Overview, and 1.3 Historical Background, including 1.3.1 Historical Background of Boundary Element Method and 1.3.2 Historical Background of Interval …
-
Sparse Matrices and Iterative Methods
Lecture slides by K. Cooper, Department of Mathematics, Washington State University, 2018.
Iterative methods: consider the problem of solving Ax = b, where A is n × n. Why would we use an iterative method? To avoid direct decomposition (LU, QR, Cholesky) and replace it with iterated matrix multiplication: LU is O(n^3) flops, while matrix–vector multiplication is O(n^2), so if we can get convergence in, e.g., log(n) iterations, iteration might be faster.
Jacobi, Gauss–Seidel, SOR: some old methods. Jacobi is easily parallelized, but converges extremely slowly. Gauss–Seidel and SOR converge faster, but cannot be effectively parallelized. Only Jacobi really takes advantage of sparsity.
Sparsity: when a matrix is sparse (many more zero entries than nonzero), the number of nonzero entries is typically O(n), so matrix–vector multiplication becomes an O(n) operation. This makes iterative methods very attractive. It does not help direct solves as much because of the problem of fill-in, but there are specialized solvers to minimize fill-in.
Krylov subspace methods: a class of methods that converge in n iterations (in exact arithmetic). We hope that they arrive at a solution that is "close enough" in fewer iterations. Often these work much better than the classic methods; they are more readily parallelized, and take full advantage of sparsity.
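A minimal sketch of Jacobi iteration on a sparse system (an illustration, not code from the slides; the test matrix, tolerance, and iteration cap are assumptions):

```python
import numpy as np
import scipy.sparse as sp

def jacobi(A, b, tol=1e-8, max_iter=500):
    """Jacobi iteration x_{k+1} = D^{-1} (b - R x_k), where R = A - D."""
    D = A.diagonal()                  # diagonal of A as a dense vector
    R = A - sp.diags(D)               # off-diagonal part, stays sparse
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D       # one sparse mat-vec per sweep: O(nnz)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant tridiagonal test system, for which Jacobi converges.
n = 1000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = jacobi(A, b)
print(np.linalg.norm(A @ x - b, np.inf))   # residual on the order of tol
```

Each sweep costs one sparse matrix–vector product, i.e. O(n) work when the matrix has O(n) nonzeros, which is exactly why sparsity makes iteration attractive.
-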
Scalable Stochastic Kriging with Markovian Covariances
Liang Ding and Xiaowei Zhang, Department of Industrial Engineering and Decision Analytics, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong.
Abstract: Stochastic kriging is a popular technique for simulation metamodeling due to its flexibility and analytical tractability. Its computational bottleneck is the inversion of a covariance matrix, which takes O(n^3) time in general and becomes prohibitive for large n, where n is the number of design points. Moreover, the covariance matrix is often ill-conditioned for large n, and thus the inversion is prone to numerical instability, resulting in erroneous parameter estimation and prediction. These two numerical issues preclude the use of stochastic kriging at a large scale. This paper presents a novel approach to address them. We construct a class of covariance functions, called Markovian covariance functions (MCFs), which have two properties: (i) the associated covariance matrices can be inverted analytically, and (ii) the inverse matrices are sparse. With the use of MCFs, the inversion-related computational time is reduced to O(n^2) in general, and can be further reduced by orders of magnitude with additional assumptions on the simulation errors and design points. The analytical invertibility also enhances the numerical stability dramatically. The key to our approach is that we identify a general functional form of covariance functions that can induce sparsity in the corresponding inverse matrices. We also establish a connection between MCFs and linear ordinary differential equations; this connection provides a flexible, principled approach to constructing a wide class of MCFs. Extensive numerical experiments demonstrate that stochastic kriging with MCFs can handle large-scale problems in a manner that is both computationally efficient and numerically stable.
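The sparse-inverse property has a classical special case that is easy to check numerically: the exponential covariance of an Ornstein–Uhlenbeck (Markov) process has a tridiagonal precision matrix on ordered design points. The snippet below is an independent illustration of that property, not the paper's more general MCF construction:

```python
import numpy as np

# Exponential (Ornstein-Uhlenbeck) kernel k(s, t) = exp(-theta * |s - t|):
# a Markovian covariance whose inverse is tridiagonal on ordered points,
# the kind of sparsity pattern MCFs are designed to induce.
theta = 2.0
t = np.sort(np.random.default_rng(1).random(8))   # ordered design points
K = np.exp(-theta * np.abs(t[:, None] - t[None, :]))
P = np.linalg.inv(K)                              # precision matrix
print((np.abs(P) > 1e-8).astype(int))             # nonzeros: 3 diagonals only
```
-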
Chapter 7 Iterative Methods for Large Sparse Linear Systems
Chapter 7: Iterative methods for large sparse linear systems. In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay for direct methods based on matrix factorization is that the factors of a sparse matrix may not be sparse, so that for large sparse systems the memory cost makes direct methods too expensive, in memory and in execution time. Instead we introduce iterative methods, for which matrix sparsity is exploited to develop fast algorithms with a low memory footprint.
7.1 Sparse matrix algebra. Large sparse matrices: we say that the matrix A ∈ R^{n×n} is large if n is large, and that A is sparse if most of the elements are zero. If a matrix is not sparse, we say that the matrix is dense. Whereas for a dense matrix the number of nonzero elements is O(n^2), for a sparse matrix it is only O(n), which has obvious implications for the memory footprint and efficiency of algorithms that exploit the sparsity of a matrix. A diagonal matrix is a sparse matrix A = (a_{ij}) for which a_{ij} = 0 for all i ≠ j, and a diagonal matrix can be generalized to a banded matrix, for which there exists a number p, the bandwidth, such that a_{ij} = 0 for all i < j − p or i > j + p. For example, a tridiagonal matrix A is a banded matrix with p = 1:

$$
A = \begin{bmatrix}
x & x & 0 & 0 & 0 & 0 \\
x & x & x & 0 & 0 & 0 \\
0 & x & x & x & 0 & 0 \\
0 & 0 & x & x & x & 0 \\
0 & 0 & 0 & x & x & x \\
0 & 0 & 0 & 0 & x & x
\end{bmatrix}, \tag{7.1}
$$

where x represents a nonzero element.
Compressed row storage: the compressed row storage (CRS) format is a data structure for efficient representation of a sparse matrix by three arrays, containing the nonzero values, the respective column indices, and the extents of the rows.
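A small illustration of the three CRS arrays for the tridiagonal pattern of (7.1) and the matrix–vector product they support (a sketch with assumed array names and illustrative values 2 and −1):

```python
import numpy as np

# CRS arrays for a 6x6 tridiagonal matrix with 2 on the diagonal, -1 off it.
val     = np.array([2., -1.,  -1., 2., -1.,  -1., 2., -1.,
                    -1., 2., -1.,  -1., 2., -1.,  -1., 2.])
col_idx = np.array([0, 1,  0, 1, 2,  1, 2, 3,  2, 3, 4,  3, 4, 5,  4, 5])
row_ptr = np.array([0, 2, 5, 8, 11, 14, 16])  # row i: val[row_ptr[i]:row_ptr[i+1]]

def crs_matvec(val, col_idx, row_ptr, x):
    """y = A x for a CRS matrix: O(nnz) work, no zeros stored or touched."""
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += val[k] * x[col_idx[k]]
    return y

print(crs_matvec(val, col_idx, row_ptr, np.ones(6)))  # [1. 0. 0. 0. 0. 1.]
```
-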
The Boundary Element Method in Acoustics
By Stephen Kirkup, 1998/2007.
The Boundary Element Method in Acoustics by S. M. Kirkup, first edition published in 1998 by Integrated Sound Software, published in 2007 in electronic format (with corrections and minor amendments to the 1998 print). Information on this and related publications and software (including the second edition of this work, when complete) can be obtained from the web site http://www.boundary-element-method.com. The author's publications can generally be found through the web page http://www.kirkup.info/papers. The nine subroutines and the corresponding nine example test programs are available in Fortran 77 on the CD ABEMFULL; the original core routines and the learning package of subroutines ABEM2D can be downloaded from the web site, which also gives a full price list and alternative methods of obtaining the software. ISBN 0 953 4031 06. © Stephen Kirkup 1998–2007.
Preface to the 2007 print: having run out of hard copies of the book a number of years ago, the author is pleased to publish an online version of the book in PDF format. A number of minor corrections have been made to the original, following feedback from a number of readers.
-
A Parallel Solver for Graph Laplacians
Tristan Konolige and Jed Brown, University of Colorado Boulder.
Abstract: Problems from graph drawing, spectral clustering, network flow, and graph partitioning can all be expressed in terms of graph Laplacian matrices. There are a variety of practical approaches to solving these problems in serial. However, as problem sizes increase and single-core speeds stagnate, parallelism is essential to solve such problems quickly. We present an unsmoothed aggregation multigrid method for solving graph Laplacians in a distributed memory setting. We introduce new parallel aggregation and low-degree elimination algorithms targeted specifically at irregular-degree graphs. These algorithms are expressed in terms of sparse matrix–vector products using generalized sum and product operations. This formulation is amenable to linear algebra using arbitrary distributions and allows us to operate on a 2D sparse matrix distribution, which is necessary for parallel scalability.
From the introduction (fragment): "… low-energy components while maintaining coarse-grid sparsity." The authors also develop a parallel algorithm for finding and eliminating low-degree vertices, a technique introduced for a sequential multigrid algorithm by Livne and Brandt [22], that is important for irregular graphs. While matrices can be distributed using a vertex partition (a 1D/row distribution) for PDE problems, this leads to unacceptable load imbalance for irregular graphs; matrices are instead represented using a 2D distribution (a partition of the edges) that maintains load balance for parallel scaling [11]. Many algorithms that are practical for 1D distributions are not feasible for more general distributions, and the new parallel algorithms are distribution agnostic.
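For concreteness, the operator these solvers target can be assembled as L = D − A from a weighted edge list. This is an independent sketch using SciPy, not the paper's code:

```python
import numpy as np
import scipy.sparse as sp

def graph_laplacian(n, edges, weights=None):
    """Build the sparse graph Laplacian L = D - A from an undirected edge list."""
    rows, cols = zip(*edges)
    w = np.ones(len(edges)) if weights is None else np.asarray(weights, float)
    A = sp.coo_matrix((w, (rows, cols)), shape=(n, n))
    A = A + A.T                               # symmetrize: undirected graph
    d = np.asarray(A.sum(axis=1)).ravel()     # vertex degrees
    return sp.diags(d) - A

L = graph_laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])  # 4-cycle
print(L.toarray())
print(L @ np.ones(4))   # L is PSD with constant null space: L @ 1 == 0
```
-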
Solving Linear Systems: Iterative Methods and Sparse Systems
COS 323 lecture slides.
Last time: linear systems Ax = b; singular and ill-conditioned systems; Gaussian elimination as a general-purpose method (naïve Gauss without pivoting, Gauss with partial and full pivoting; asymptotic analysis: O(n^3)); triangular systems and LU decomposition; special matrices and algorithms (symmetric positive definite: Cholesky decomposition; tridiagonal matrices); singularity detection and condition numbers.
Today, methods for large and sparse systems: rank-one updating with Sherman–Morrison; iterative refinement; fixed-point and stationary methods (introduction; iterative refinement as a stationary method; Gauss–Seidel and Jacobi methods; successive over-relaxation, SOR); solving a system as an optimization problem; representing sparse systems.
Problems with large systems: Gaussian elimination and LU decomposition (the factoring step) take O(n^3), which is expensive for big systems. Special matrices help: Cholesky decomposition for symmetric positive definite A is still O(n^3) but halves storage and operations, and band-diagonal matrices need only O(n) storage and operations. But what if A is big and not diagonal?
Special example, cyclic tridiagonal: an interesting extension is the cyclic tridiagonal matrix. One could derive yet another special-case algorithm, but there is a better way.
Updating the inverse: suppose we have some fast way of finding A^{-1} for some matrix A, and A changes in a special way: A* = A + uv^T for some n×1 vectors u and v. The goal is a fast way of computing (A*)^{-1}.
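The excerpt stops before stating the identity; for reference, the standard Sherman–Morrison formula with a quick numerical check (the test matrix and vectors are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

Ainv = np.linalg.inv(A)
# Sherman-Morrison:
#   (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)
# Cost: O(n^2) given A^{-1}, versus O(n^3) to re-factor from scratch.
Astar_inv = Ainv - (Ainv @ u) @ (v.T @ Ainv) / (1.0 + float(v.T @ Ainv @ u))

print(np.allclose(Astar_inv, np.linalg.inv(A + u @ v.T)))  # True
```
-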
High Performance Selected Inversion Methods for Sparse Matrices
Doctoral dissertation on direct and stochastic approaches to selected inversion, submitted to the Faculty of Informatics of the Università della Svizzera italiana in partial fulfillment of the requirements for the degree of Doctor of Philosophy, presented by Fabio Verbosio under the supervision of Olaf Schenk, accepted on 25 February 2019. Dissertation committee: Illia Horenko (Università della Svizzera italiana), Igor Pivkin (Università della Svizzera italiana), Matthias Bollhöfer (Technische Universität Braunschweig), Laura Grigori (INRIA Paris). PhD program director: Walter Binder.
Dedicated to the author's whole family, in its broadest sense. The epigraph, from Umberto Eco's "Il nome della rosa", translates as: "Mathematical knowledge consists of propositions constructed by our intellect so as to always function as true, either because they are innate or because mathematics was invented before the other sciences. And the library was built by a human mind that thinks mathematically, because without mathematics you make no labyrinths."
Abstract: The explicit evaluation of selected entries of the inverse of a given sparse matrix is an important process in various application fields and is gaining visibility in recent years.
-
A Parallel Gauss-Seidel Algorithm for Sparse Power Systems Matrices
D. P. Koester, S. Ranka, and G. C. Fox, School of Computer and Information Science and the Northeast Parallel Architectures Center (NPAC), Syracuse University, Syracuse, NY 13244-4100. A condensed version of this paper was presented at Supercomputing '94. NPAC Technical Report SCCS-630, 4 April 1994.
Abstract: We describe the implementation and performance of an efficient parallel Gauss–Seidel algorithm that has been developed for irregular, sparse matrices from electrical power systems applications. Although Gauss–Seidel algorithms are inherently sequential, by performing specialized orderings on sparse matrices it is possible to eliminate much of the data dependencies caused by precedence in the calculations. A two-part matrix ordering technique has been developed: first the matrix is partitioned into block-diagonal-bordered form using diakoptic techniques, and then the data in the last diagonal block are multi-colored using graph-coloring techniques. The ordered matrices often have extensive parallelism, while maintaining the strict precedence relationships in the Gauss–Seidel algorithm. We present timing results for a parallel Gauss–Seidel solver implemented on the Thinking Machines CM-5 distributed-memory multiprocessor. The algorithm presented here requires active-message remote procedure calls in order to minimize communications overhead and obtain good relative speedup. The paradigm used with active messages greatly simplified the implementation of this sparse matrix algorithm.
1. Introduction. We have developed an efficient parallel Gauss–Seidel algorithm for irregular, sparse matrices from electrical power systems applications.
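The multi-coloring idea is easiest to see on a regular grid, a much simpler setting than the paper's irregular power-system matrices: with a red-black coloring of the 5-point Laplacian, unknowns of the same color do not couple, so each Gauss–Seidel sweep splits into two fully parallel phases. A sketch of that idea (not the paper's algorithm):

```python
import numpy as np

def red_black_gauss_seidel(u, f, h, sweeps=100):
    """Red-black Gauss-Seidel for the 2D Poisson problem -lap(u) = f on a
    uniform grid; interior points are updated, boundary values held fixed."""
    for _ in range(sweeps):
        for color in (0, 1):                   # each phase is fully parallel
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == color:   # same-color points don't couple
                        u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j]
                                          + u[i, j-1] + u[i, j+1]
                                          + h * h * f[i, j])
    return u

n = 33
h = 1.0 / (n - 1)
u = np.zeros((n, n))        # zero Dirichlet boundary conditions
f = np.ones((n, n))         # unit source term
u = red_black_gauss_seidel(u, f, h)
print(u[n // 2, n // 2])    # center value of the approximate solution
```
-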
Decomposition Methods for Sparse Matrix Nearness Problems
SIAM J. Matrix Anal. Appl., Vol. 36, No. 4, pp. 1691–1717, © 2015 Society for Industrial and Applied Mathematics. Yifan Sun and Lieven Vandenberghe.
Abstract: We discuss three types of sparse matrix nearness problems: given a sparse symmetric matrix, find the matrix with the same sparsity pattern that is closest to it in Frobenius norm and (1) is positive semidefinite, (2) has a positive semidefinite completion, or (3) has a Euclidean distance matrix completion. Several proximal splitting and decomposition algorithms for these problems are presented and their performance is compared on a set of test problems. A key feature of the methods is that they involve a series of projections on small dense positive semidefinite or Euclidean distance matrix cones, corresponding to the cliques in a triangulation of the sparsity graph. The methods discussed include the dual block coordinate ascent algorithm (or Dykstra's method), the dual projected gradient and accelerated projected gradient algorithms, and a primal and a dual application of the Douglas–Rachford splitting algorithm.
Key words: positive semidefinite completion, Euclidean distance matrices, chordal graphs, convex optimization, first-order methods. AMS subject classifications: 65F50, 90C22, 90C25. DOI: 10.1137/15M1011020.
1. Introduction. We discuss matrix optimization problems of the form

$$
\begin{array}{ll}
\text{minimize} & \|X - C\|_F^2 \\
\text{subject to} & X \in \mathcal{S},
\end{array} \tag{1.1}
$$

where the variable X is a symmetric p × p matrix with a given sparsity pattern E, and C is a given symmetric matrix with the same sparsity pattern as X. We focus on the following three choices for the constraint set S.
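The recurring primitive in these methods is Euclidean (Frobenius-norm) projection onto a positive semidefinite cone; below is a minimal sketch of that building block, applied here to a full matrix rather than to the small clique-sized blocks the paper uses:

```python
import numpy as np

def project_psd(S):
    """Frobenius-norm projection of a symmetric matrix onto the PSD cone:
    clip negative eigenvalues to zero in the spectral decomposition."""
    S = 0.5 * (S + S.T)                 # enforce exact symmetry
    w, V = np.linalg.eigh(S)
    return (V * np.maximum(w, 0.0)) @ V.T

C = np.array([[1.0, 2.0],
              [2.0, 1.0]])             # indefinite: eigenvalues 3 and -1
X = project_psd(C)
print(np.linalg.eigvalsh(X))           # all eigenvalues now >= 0
```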