Space-Efficient and Exact System Representations for the Nonlocal Protein Electrostatics Problem

Total Pages: 16

File Type: PDF, Size: 1020 KB

Space-Efficient and Exact System Representations for the Nonlocal Protein Electrostatics Problem

Dissertation submitted in fulfillment of the requirements for the degree "Doktor der Naturwissenschaften" at the Faculty of Physics, Mathematics, and Computer Science of Johannes Gutenberg University Mainz.

Thomas Kemmer, born in Wittlich. Mainz, October 14, 2020.
First reviewer: Prof. Dr. Andreas Hildebrandt. Second reviewer: Prof. Dr. Bertil Schmidt. Date of the defense: March 9, 2021. D77

Abstract

The study of proteins and protein interactions, which represent the main constituents of biological functions, plays an important role in modern biology. Application settings such as the development of pharmaceutical drugs and therapies rely on an accurate description of the electrostatics in biomolecular systems. This particularly includes nonlocal electrostatic contributions of the water-based solvents in the cell, which have a significant impact on the long-range visibility of immersed proteins. Existing mathematical models for the nonlocal protein electrostatics problem can be approached with standard numerical techniques, including boundary element methods (BEM). However, the typically large, dense, and asymmetric matrices involved in discretized BEM formulations have previously prevented the application of the method to realistically sized biomolecular systems. Here, we overcome this obstacle by developing implicit yet exact representations for such BEM matrices, capturing trillions of floating-point values in only a few megabytes of memory and without loss of information. We present generalized reference implementations for our implicit matrix types alongside specialized matrix operations for the Julia and Impala programming languages and demonstrate how they can be utilized with existing linear system solvers. On top of these implementations, we develop full-fledged CPU- and CUDA-based BEM solvers for the nonlocal protein electrostatics problem and make them available to the scientific community. In this context, we show that our solvers can perform dozens of matrix-vector products for the previously inaccessible BEM systems within a few seconds or minutes and thus allow, for the first time, the employed BEM formulations to be solved with exact system matrices in the same time frame.

German summary (translated)

Proteins and their interactions with other biomolecules in the cell form an important basis of biological functions. Depending on the problem at hand, studies of these interactions use different mathematical models to describe the energies involved. In drug development, for example, modeling the electrostatics of the system as precisely as possible is crucial, since electrostatics plays a decisive role in the process of finding potential binding partners. The structural complexity of the water-based solvents in the cell poses a particular challenge here, because dipole moments and the tendency to form hydrogen bonds strongly restrict the mobility of water molecules in the immediate vicinity of dissolved proteins. These effects can be described, for instance, by the theoretical framework of nonlocal electrostatics, and the resulting systems of equations can be solved with standard numerical techniques. Here, we consider an existing formulation of the protein electrostatics problem based on a boundary element method (BEM), whose contribution to a more accurate computation of electrostatic potentials has already been demonstrated for small biomolecular systems in the past, but whose practical applicability to larger systems has so far been limited by its immense memory requirements.

In this dissertation, we develop implicit representations for the typically dense and asymmetric system matrices of the linear systems involved in the numerical solving process, which are the reason for the limitations mentioned above. Our implicit representations reduce the originally quadratic memory requirements to linear ones without losing any information, so that matrices with several billion elements can easily be represented in memory with a few megabytes. We also present reference implementations of our implicit matrix types, along with operations on them, in the Julia and Impala programming languages, and we describe how they can be used in existing solvers for linear systems and how the representations can be transferred to other problems.

On this basis, we finally develop full-fledged boundary element solvers for the nonlocal protein electrostatics problem that are able, for the first time, to process input sizes of practical relevance and to provide solutions within a few seconds or minutes. To this end, we implement different variants of the general matrix-vector product for CPUs and GPUs (using NVIDIA's well-known CUDA platform) and compare their performance on different machines. We pay particular attention to our Julia implementations, which exploit language features to manipulate inherently serial, purely CPU-based linear system solvers, effectively leading to an (optionally CPU- or GPU-based) parallelization of these solvers. We further highlight the domain-specific implementation of our Impala solutions, which generates programs specialized and optimized for the chosen target platform from a largely platform-independent code base; this drastically reduces the maintenance effort of the implementation and makes it easy to add or exchange platforms. All software packages developed in the course of this work are open source and freely available, so that the results presented here can easily be reproduced and our boundary element solvers can be used, in particular, for scientific purposes.

Acknowledgments

First and foremost, I want to express my gratitude to my advisor, Professor Dr. Andreas Hildebrandt, who introduced me to the world of nonlocal protein electrostatics, guided me through the early stages of this project, and provided invaluable feedback ever since. I am also indebted to my second reviewer, Professor Dr. Bertil Schmidt, for several constructive discussions on the high-performance capabilities of the system solving process. Furthermore, I want to thank my dear friends who assisted me with proofreading, for every bit of nitpicking, all fruitful discussions, and for accompanying me on my never-ending quest for working through unspeakable amounts of coffee and sushi.
I am also deeply grateful to my significant other for her endless support and encouragement over the course of the writing process, for proofreading, and for company during roughly one or two all-nighters. I would like to thank my friends and colleagues at the Institute of Computer Science, the Scientific Computing and Bioinformatics group, and the Integrated Research Training Group for a familiar working atmosphere and numerous discussions on diverse topics. Last but not least, I am indebted to my former supervisor, who introduced me to a more structured approach to scientific writing during my Master studies. Although she was not involved in this manuscript or the projects leading to it, her invaluable feedback on my Master's thesis a long time ago permanently sharpened my awareness of stylistic consistency and thus helped me, once again, through the writing process.

Abbreviations

AoS: Array of structs
BEM: Boundary element method
DSL: Domain-specific language
FDM: Finite difference method
FEM: Finite element method
GMRES: Generalized minimal residual method
GPGPU: General-purpose computing on GPUs
IR: Intermediate representation
JIT: Just in time
PE: Partial evaluation
REPL: Read-eval-print loop
SIMD: Single instruction, multiple data
SIMT: Single instruction, multiple threads
SM: Streaming multiprocessor
SoA: Struct of arrays

Contents

1 Introduction (p. 1)
2 Nonlocal protein electrostatics (p. 5)
  2.1 Protein interactions (p. 5)
    2.1.1 Protein structure hierarchy (p. 6)
    2.1.2 The human proteome (p. 7)
    2.1.3 Protein docking (p. 8)
    2.1.4 Force field models (p. 9)
  2.2 Protein electrostatics (p. 10)
    2.2.1 Local cavity model (p. 10)
    2.2.2 Nonlocal cavity model (p. 13)
  2.3 Numerical solvers (p. 15)
3 BEM system analysis (p. 19)
  3.1 Background: Operators (p. 19)
    3.1.1 Trace operators (p. 20)
    3.1.2 Boundary integral operators (p. 21)
  3.2 Local cavity model (p. 22)
    3.2.1 Continuous formulation for the local cavity model (p. 23)
    3.2.2 Discrete formulation for the local cavity model (p. 24)
  3.3 Nonlocal cavity model (p. 27)
    3.3.1 Continuous formulation for the nonlocal cavity model (p. 28)
    3.3.2 Discrete formulation for the nonlocal cavity model (p. 28)
  3.4 Derived quantities (p. 30)
    3.4.1 Interior potentials (p. 31)
    3.4.2 Exterior potentials (p. 32)
    3.4.3 Reaction field energies (p. 34)
4 Platform selection (p. 37)
  4.1 The Julia language (p. 37)
    4.1.1 Scripting language characteristics (p. 38)
    4.1.2 High-performance capabilities (p. 40)
  4.2 AnyDSL (p. 42)
    4.2.1 Compilation flow (p. 42)
    4.2.2 Impala (p. 43)
  4.3 CUDA (p. 44)
    4.3.1 Programming model (p. 45)
    4.3.2 CUDA support in Julia (p. 47)
    4.3.3 CUDA support in Impala (p. 47)
5 Solving implicit linear systems (p. 49)
  5.1 Background: Iterative solvers (p. 49)
    5.1.1 Deterministic iterative solvers (p. 49)
    5.1.2 Probabilistic and randomized iterative solvers
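The abstract describes implicit yet exact matrix representations whose entries are recomputed on demand, so that only matrix-vector products are ever evaluated and existing linear system solvers can be reused unchanged. As a rough illustration of that idea, here is a minimal Julia sketch; the type name, the toy kernel, and the stored per-element data are illustrative assumptions, not the dissertation's actual API.

```julia
using LinearAlgebra

# Implicit interaction matrix: entries are recomputed on demand from O(n)
# per-element data instead of being stored as a dense n-by-n array.
# Type name and kernel are placeholders, not the dissertation's actual API.
struct ImplicitBEMMatrix{T} <: AbstractMatrix{T}
    centroids::Vector{NTuple{3,T}}   # e.g., centroids of surface triangles
end

Base.size(A::ImplicitBEMMatrix) = (length(A.centroids), length(A.centroids))

# Toy kernel standing in for the actual boundary integral operators.
function Base.getindex(A::ImplicitBEMMatrix{T}, i::Int, j::Int) where T
    i == j && return one(T)
    xi, xj = A.centroids[i], A.centroids[j]
    return one(T) / sqrt(sum((xi .- xj) .^ 2))
end

# Matrix-vector product that never materializes the dense matrix.
function LinearAlgebra.mul!(y::AbstractVector, A::ImplicitBEMMatrix, x::AbstractVector)
    n = size(A, 1)
    @inbounds for i in 1:n
        acc = zero(eltype(A))
        for j in 1:n
            acc += A[i, j] * x[j]
        end
        y[i] = acc
    end
    return y
end
```

Because matrix-free Krylov methods such as GMRES only require size, eltype, and mul!, an implicit matrix of this kind can be handed to existing iterative solvers directly; memory drops from quadratic to linear while each product recomputes the entries on the fly.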
Recommended publications
  • Randomized Fast Solvers for Linear and Nonlinear Problems in Data Science
UC Irvine Electronic Theses and Dissertations. Title: Randomized Fast Solvers for Linear and Nonlinear Problems in Data Science. Author: Huiwen Wu. Publication Date: 2019. Permalink: https://escholarship.org/uc/item/7c81w1kx. License: https://creativecommons.org/licenses/by/4.0/. Peer reviewed | Thesis/dissertation.

UNIVERSITY OF CALIFORNIA, IRVINE. Randomized Fast Solvers for Linear and Nonlinear Problems in Data Science. DISSERTATION submitted in partial satisfaction of the requirements for the degree of DOCTOR OF PHILOSOPHY in Math by Huiwen Wu. Dissertation Committee: Professor Long Chen (Chair), Professor Jack Xin, Chancellor's Professor Hongkai Zhao. © 2019 Huiwen Wu. Dedication: To my family.

Table of contents (excerpt):
1 Introduction
2 Linear and Nonlinear Problems in Data Science: 2.1 Least Squares Problem; 2.2 Linear Regression; 2.3 Logistic Regression (2.3.1 Introduction, 2.3.2 Likelihood Function for Logistic Regression, 2.3.3 More on Logistic Regression Objective Function); 2.4 Non-constraint Convex and Smooth Minimization Problem
3 Existing Methods for Least Squares Problems: 3.1 Direct Methods (3.1.1 Gauss Elimination, 3.1.2 Cholesky Decomposition, 3.1.3 QR Decomposition, 3.1.4 SVD Decomposition); 3.2 Iterative Methods (3.2.1 Residual Correction Method, 3.2.2 Conjugate Gradient Method, 3.2.3 Kaczmarz Method …)
  • A Novel Greedy Kaczmarz Method for Solving Consistent Linear Systems
A NOVEL GREEDY KACZMARZ METHOD FOR SOLVING CONSISTENT LINEAR SYSTEMS. HANYU LI AND YANJUN ZHANG. Abstract. With a quite different way to determine the working rows, we propose a novel greedy Kaczmarz method for solving consistent linear systems. Convergence analysis of the new method is provided. Numerical experiments show that, for the same accuracy, our method outperforms the greedy randomized Kaczmarz method and the relaxed greedy randomized Kaczmarz method introduced recently by Bai and Wu [Z.Z. Bai and W.T. Wu, On greedy randomized Kaczmarz method for solving large sparse linear systems, SIAM J. Sci. Comput., 40 (2018), pp. A592–A606; Z.Z. Bai and W.T. Wu, On relaxed greedy randomized Kaczmarz methods for solving large sparse linear systems, Appl. Math. Lett., 83 (2018), pp. 21–26] in terms of computing time. Key words: greedy Kaczmarz method, greedy randomized Kaczmarz method, greedy strategy, iterative method, consistent linear systems. AMS subject classifications: 65F10, 65F20, 65K05, 90C25, 15A06. 1. Introduction. We consider the following consistent linear systems (1.1) Ax = b, where A ∈ C^{m×n}, b ∈ C^m, and x is the n-dimensional unknown vector. As we know, the Kaczmarz method [10] is a popular so-called row-action method for solving the systems (1.1). In 2009, Strohmer and Vershynin [18] proved the linear convergence of the randomized Kaczmarz (RK) method. Following that, Needell [13] found that the RK method does not converge to the ordinary least squares solution when the system is inconsistent. To overcome this, Zouzias and Freris [20] extended the RK method to the randomized extended Kaczmarz (REK) method.
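As a concrete illustration of the row-action idea summarized above, the following minimal Julia sketch implements a greedy Kaczmarz iteration that picks the working row with the largest scaled residual (the classical maximal-residual rule); it is not necessarily the selection strategy proposed in the cited paper.

```julia
using LinearAlgebra

# Greedy (maximal-residual) Kaczmarz sketch for a consistent system Ax = b.
function greedy_kaczmarz(A::AbstractMatrix, b::AbstractVector; iters::Int = 1_000)
    m, n = size(A)
    x = zeros(n)
    rownorms2 = [norm(A[i, :])^2 for i in 1:m]
    for _ in 1:iters
        r = b - A * x                         # full residual (the expensive part)
        i = argmax(abs2.(r) ./ rownorms2)     # greedy choice of the working row
        x += (r[i] / rownorms2[i]) * A[i, :]  # project onto the i-th hyperplane
    end
    return x
end
```

Each iteration needs a full residual evaluation for the greedy choice, which is exactly the overhead that relaxed and partially randomized variants try to reduce.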
  • Parallelization and Implementation of Methods for Image Reconstruction Applications in GNSS Water Vapor Tomography
UNIVERSIDADE DA BEIRA INTERIOR, Faculdade de Engenharia. Parallelization and Implementation of Methods for Image Reconstruction: Applications in GNSS Water Vapor Tomography. Fábio André de Oliveira Bento. Submitted to the University of Beira Interior in candidature for the Degree of Master of Science in Computer Science and Engineering. Supervisor: Prof. Dr. Paul Andrew Crocker. Co-supervisor: Prof. Dr. Rui Manuel da Silva Fernandes. Covilhã, October 2013.

Acknowledgements. I would like to thank my supervisor Prof. Paul Crocker for all his support, enthusiasm, patience, time and effort. I am grateful for all his guidance in all of the research and writing of this dissertation. It was very important to me to work with someone with experience and knowledge and who believed in my work. I also would like to thank my co-supervisor Prof. Rui Fernandes for all his support in the research and writing of this dissertation. His experience and knowledge were really helpful, especially in the GNSS area and the dissertation organization. I am really grateful for all his support and availability. The third person that I would like to thank is André Sá. He was always available to help me with all my GNSS doubts, and his support was really important for the progress of this dissertation. I am really thankful for his time, enthusiasm and patience. I am also very grateful to Prof. Cédric Champollion for all his support and for providing the ESCOMPTE data, David Adams for his support and for providing the Chuva Project data, and Michael Bender for his support and for providing German and European GNSS data.
  • Randomised Algorithms for Solving Systems of Linear Equations
Randomised algorithms for solving systems of linear equations. Per-Gunnar Martinsson, Dept. of Mathematics & Oden Inst. for Computational Sciences and Engineering, University of Texas at Austin. Students, postdocs, collaborators: Tracy Babb, Ke Chen, Robert van de Geijn, Abinand Gopal, Nathan Halko, Nathan Heavner, James Levitt, Yijun Liu, Gregorio Quintana-Ortí, Joel Tropp, Bowei Wu, Sergey Voronin, Anna Yesypenko, Patrick Young. Slides: http://users.oden.utexas.edu/~pgm/main_talks.html

Scope: Let A be a given m × n matrix (real or complex), and let b be a given vector. The talk is about techniques for solving Ax = b. Environments considered (time permitting):
• A is square, nonsingular, and stored in RAM.
• A is square, nonsingular, and stored out of core. (Very large matrix.)
• A is rectangular, with m ≫ n. (Very overdetermined system.)
• A is a graph Laplacian matrix. (Very large, and sparse.)
• A is an n × n "kernel matrix", in the sense that given some set of points {x_i}_{i=1}^n ⊂ R^d, the ij entry of the matrix can be written as k(x_i, x_j) for some kernel function k. Scientific computing: high accuracy required. Data analysis: d is large (d = 4, or 10, or 100, or 1000, …).

Techniques: The recurring theme is randomisation.
• Randomized sampling. Typically used to build preconditioners.
• Randomized embeddings. Reduce effective dimension of intermediate steps.

Prelude: We introduce the ideas around randomized embeddings by reviewing randomized techniques for low-rank approximation. (Recap of Tuesday talk.) Randomised SVD. Objective: Given an m × n matrix A of approximate rank k, compute a factorisation A ≈ U D V*, where U (m × k) and V (n × k) are orthonormal and D (k × k) is diagonal.
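The "Randomised SVD" objective quoted above can be met with the usual range-finder construction. Below is a textbook-style Julia sketch; the oversampling parameter p is a common heuristic and not taken from the slides.

```julia
using LinearAlgebra

# Basic randomised SVD: sample the range of A, project, and take a small SVD,
# so that A ≈ U * Diagonal(s) * V'.
function rsvd(A::AbstractMatrix, k::Int; p::Int = 10)
    m, n = size(A)
    Ω = randn(n, k + p)           # Gaussian test matrix
    Q = Matrix(qr(A * Ω).Q)       # orthonormal basis for the sampled range of A
    B = Q' * A                    # small (k+p) x n matrix
    F = svd(B)
    U = Q * F.U
    return U[:, 1:k], F.S[1:k], F.V[:, 1:k]
end
```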
  • A Kaczmarz Method with Simple Random Sampling for Solving Large Linear Systems
A KACZMARZ METHOD WITH SIMPLE RANDOM SAMPLING FOR SOLVING LARGE LINEAR SYSTEMS. YUTONG JIANG, GANG WU, AND LONG JIANG. Abstract. The Kaczmarz method is a popular iterative scheme for solving large, consistent systems of overdetermined linear equations. This method has been widely used in many areas such as reconstruction of CT scanned images, computed tomography and signal processing. In the Kaczmarz method, one cycles through the rows of the linear system, and each iteration is formed by projecting the current point onto the hyperplane formed by the active row. However, the Kaczmarz method may converge very slowly in practice. The randomized Kaczmarz method (RK) greatly improves the convergence rate of the Kaczmarz method by using the rows of the coefficient matrix in random order rather than in their given order. An obvious disadvantage of the randomized Kaczmarz method is its probability criterion for selecting the active or working rows in the coefficient matrix. In [Z.Z. Bai, W. Wu, On greedy randomized Kaczmarz method for solving large sparse linear systems, SIAM Journal on Scientific Computing, 2018, 40: A592–A606], the authors proposed a new probability criterion that can capture larger entries of the residual vector of the linear system as far as possible at each iteration step, and presented a greedy randomized Kaczmarz method (GRK). However, the greedy Kaczmarz method may suffer from heavy computational cost when the size of the matrix is large, and the overhead will be prohibitively large for big data problems. The contribution of this work is as follows. First, from the probability significance point of view, we present a partially randomized Kaczmarz method, which can reduce the computational overhead needed in the greedy randomized Kaczmarz method.
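For orientation, the projection step described above, onto the hyperplane of the active row a_iᵀx = b_i, is usually written as

$$x_{k+1} = x_k + \frac{b_i - a_i^{\top} x_k}{\lVert a_i \rVert_2^2}\, a_i,$$

and the randomized Kaczmarz method of Strohmer and Vershynin selects row i with probability $\lVert a_i \rVert_2^2 / \lVert A \rVert_F^2$ instead of cycling through the rows in their given order.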
  • Solution of General Linear Systems of Equations Using Block Krylov Based Iterative Methods on Distributed Computing Environments
Solution of general linear systems of equations using block Krylov based iterative methods on distributed computing environments. Leroy Anthony Drummond Lewis, December. (Résolution de systèmes linéaires creux par des méthodes itératives par blocs dans des environnements distribués hétérogènes.)

Abstract (translated from French): We study the implementation of block iterative methods in distributed-memory multiprocessor environments for the solution of general linear systems. First, we examine the potential of the classical conjugate gradient method in a parallel environment. Second, we study another iterative method from the same family as the conjugate gradient method, designed to solve linear systems with multiple right-hand sides simultaneously and commonly known as the block conjugate gradient method. The algorithmic complexity of the block conjugate gradient method is higher than that of the classical conjugate gradient method, since it requires more computation per iteration and also needs more memory. Despite this, the block conjugate gradient method appears to be better suited to vector and parallel environments. We studied three variants of the block conjugate gradient method based on different parallel programming models, and their efficiency was compared on various distributed environments. For the solution of nonsymmetric linear systems, we consider the use of methods
  • Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory. Authors: Peter Richtárik and Martin Takáč. Publisher: arXiv; Society for Industrial & Applied Mathematics (SIAM). Citation: Richtárik, P., & Takáč, M. (2020). Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory. SIAM Journal on Matrix Analysis and Applications, 41(2), 487–524. doi:10.1137/18M1179249. Item license: CC BY. Link to item: http://hdl.handle.net/10754/626554.1

Abstract (preprint version of June 7, 2017): We develop a family of reformulations of an arbitrary consistent linear system into a stochastic problem. The reformulations are governed by two user-defined parameters: a positive definite matrix defining a norm, and an arbitrary discrete or continuous distribution over random matrices. Our reformulation has several equivalent interpretations, allowing for researchers from various communities to leverage their domain specific insights. In particular, our reformulation can be equivalently seen as a stochastic optimization problem, a stochastic linear system, a stochastic fixed point problem and a probabilistic intersection problem. We prove sufficient, and necessary and sufficient, conditions for the reformulation to be exact.
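A common way to write the basic method behind such stochastic reformulations, in the closely related sketch-and-project notation rather than necessarily the paper's own formulation, is

$$x_{k+1} = x_k - B^{-1} A^{\top} S_k \left( S_k^{\top} A B^{-1} A^{\top} S_k \right)^{\dagger} S_k^{\top} \left( A x_k - b \right),$$

where $B \succ 0$ is the user-defined norm-defining matrix and $S_k$ is drawn independently in each step from the user-defined distribution over random matrices.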
  • Accelerated Projection Methods for Computing Pseudoinverse Solutions of Systems of Linear Equations
BIT 19 (1979), 145–163. ACCELERATED PROJECTION METHODS FOR COMPUTING PSEUDOINVERSE SOLUTIONS OF SYSTEMS OF LINEAR EQUATIONS. ÅKE BJÖRCK and TOMMY ELFVING. Abstract. Iterative methods are developed for computing the Moore-Penrose pseudoinverse solution of a linear system Ax = b, where A is an m × n sparse matrix. The methods do not require the explicit formation of AᵀA or AAᵀ and therefore are advantageous to use when these matrices are much less sparse than A itself. The methods are based on solving the two related systems (i) x = Aᵀy, AAᵀy = b, and (ii) AᵀAx = Aᵀb. First it is shown how the SOR- and SSOR-methods for these two systems can be implemented efficiently. Further, the acceleration of the SSOR-method by Chebyshev semi-iteration and the conjugate gradient method is discussed. In particular it is shown that the SSOR-CG method for (i) and (ii) can be implemented in such a way that each step requires only two sweeps through successive rows and columns of A, respectively. In the general rank-deficient and inconsistent case it is shown how the pseudoinverse solution can be computed by a two-step procedure. Some possible applications are mentioned and numerical results are given for some problems from picture reconstruction. 1. Introduction. Let A be a given m × n sparse matrix, b a given m-vector, and x = A⁺b the Moore-Penrose pseudoinverse solution of the linear system of equations (1.1) Ax = b. We denote the range and nullspace of a matrix A by R(A) and N(A), respectively.
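To make route (i) above concrete, the following Julia sketch runs a plain conjugate gradient iteration on AAᵀy = b, applying A and Aᵀ separately so that AAᵀ is never formed, and then recovers x = Aᵀy. It only illustrates the matrix-free idea and is not the SOR/SSOR-CG scheme developed in the paper.

```julia
using LinearAlgebra

# Minimum-norm (pseudoinverse) solution of a consistent system via route (i):
# solve A*A'*y = b with CG, applying A and A' but never forming A*A', then x = A'*y.
function pinv_solution(A::AbstractMatrix, b::AbstractVector; iters::Int = 200, tol = 1e-10)
    m = size(A, 1)
    y = zeros(m)
    r = float.(b)               # residual of A*A'*y = b, starting from y = 0
    p = copy(r)
    rs = dot(r, r)
    for _ in 1:iters
        q = A * (A' * p)        # apply A*A' matrix-free
        α = rs / dot(p, q)
        y += α * p
        r -= α * q
        rs_new = dot(r, r)
        sqrt(rs_new) < tol && break
        p = r + (rs_new / rs) * p
        rs = rs_new
    end
    return A' * y               # equals A⁺b for consistent systems
end
```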
  • Randomized Iterative Methods with Alternating Projections
Randomized Iterative Methods with Alternating Projections. Hua Xiang (School of Mathematics and Statistics, Wuhan University, Wuhan 430072, P.R. China) and Lin Zhang (Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018, P.R. China). September 24, 2018. arXiv:1708.09845v1 [math.NA], 31 Aug 2017.

Abstract. We use a unified framework to summarize sixteen randomized iterative methods, including the Kaczmarz method, the coordinate descent method, etc. Some new iterative schemes are given as well. Some relationships with MG (multigrid) and DDM (domain decomposition methods) are also discussed. We analyze the convergence properties of the iterative schemes by using the matrix integrals associated with alternating projectors, and demonstrate the convergence behaviors of the randomized iterative methods by numerical examples. Keywords: linear systems, randomized iterative methods, Kaczmarz, coordinate descent, alternating projection.

1. Introduction. Solving a linear system is a basic task in scientific computing. Given a real matrix A ∈ R^{m×n} and a real vector b ∈ R^m, in this paper we consider the following consistent linear system Ax = b. (1) For a large-scale problem, iterative methods are more suitable than direct methods. Though the classical iterative methods are generally deterministic [1, 3], in this paper we consider randomized iterative methods, for example, the Kaczmarz method, the coordinate descent (CD) method, and their block variants. Motivated by work in [4], we give a very general framework for such randomized iterative schemes. We need two matrices Y ∈ R^{m×l} and Z ∈ R^{n×l}, where l ≤ min{m, n}, and usually l is much smaller than m and n. In most practical cases, only one of
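For comparison with the Kaczmarz sketch shown earlier, here is a minimal Julia version of randomized coordinate descent applied to the least-squares functional ‖Ax − b‖²; it is the column-action counterpart mentioned in the abstract, with uniform column sampling chosen purely for simplicity.

```julia
using LinearAlgebra

# Randomized coordinate descent on min ‖Ax − b‖² (column-action sketch).
function randomized_cd(A::AbstractMatrix, b::AbstractVector; iters::Int = 1_000)
    m, n = size(A)
    x = zeros(n)
    r = float.(b)                     # r = b - A*x, maintained incrementally
    colnorms2 = [norm(A[:, j])^2 for j in 1:n]
    for _ in 1:iters
        j = rand(1:n)                 # uniform column sampling for simplicity
        aj = @view A[:, j]
        δ = dot(aj, r) / colnorms2[j]
        x[j] += δ
        r -= δ * aj                   # keep the residual consistent with x
    end
    return x
end
```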
  • Bibliography on the Kaczmarz Method the Original Paper of Kaczmarz
January 23rd, 2021. Andrzej Cegielski, Institute of Mathematics, University of Zielona Góra, ul. Szafrana 4a, 65-516 Zielona Góra, Poland. e-mail: [email protected]

Bibliography on the Kaczmarz method

The original paper of Kaczmarz:
[Kac37] S. Kaczmarz, Angenäherte Auflösung von Systemen linearer Gleichungen (Przybliżone rozwiązywanie układów równań liniowych), Bull. Intern. Acad. Polonaise Sci. Lett., Cl. Sci. Math. Nat. A, 35 (1937) 355–357. English translation: S. Kaczmarz, Approximate solution of systems of linear equations, International Journal of Control, 57 (1993) 1269–1271.

Publications which cite [Kac37]:
[1] J. Abaffy, E. Spedicato, A generalization of Huang's method for solving systems of linear algebraic equations, Bollettino dell'Unione Matematica Italiana, 3B (1984) 517–529.
[2] J. Abaffy, E. Spedicato, ABS Projection Algorithms: Mathematical Techniques for Linear and Nonlinear Equations, Halstead Press, Chichester, UK, 1989.
[3] M.I.K. Abir, Iterative CT Reconstruction from Few Projections for the Nondestructive Post Irradiation Examination of Nuclear Fuel Assemblies, Ph.D. Thesis, Missouri University of Science and Technology, Rolla, Missouri, USA, 2015.
[4] A. Aboud, A Dualized Kaczmarz Algorithm in Hilbert and Banach Space, Ph.D. Thesis, Iowa State University, Ames, Iowa, USA, 2019.
[5] M.F. Adams, A low memory, highly concurrent multigrid algorithm, arXiv:1207.6720v3 (2012).
[6] M. Aftanas, Through Wall Imaging with UWB Radar System, Ph.D. Thesis, Technical University of Košice, Košice, Slovakia, 2009.
[7] A. Agaskar, Problems in Signal Processing and Inference on Graphs, Ph.D. Thesis, Harvard University, Cambridge, MA, USA, 2015.
[8] A. Agaskar, C. Wang, Y.M. Lu, Randomized Kaczmarz algorithms: Exact MSE analysis and optimal sampling probabilities, 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Atlanta, GA, 3-5 Dec.
  • A MATLAB Package of Algebraic Iterative Reconstruction Methods
Journal of Computational and Applied Mathematics 236 (2012) 2167–2178. AIR Tools — A MATLAB package of algebraic iterative reconstruction methods. Per Christian Hansen and Maria Saxild-Hansen, Department of Informatics and Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark.

Article history: Received 29 October 2010. MSC: 65F22; 65Y15. Keywords: ART methods; SIRT methods; Semi-convergence; Relaxation parameters; Stopping rules; Tomographic imaging.

Abstract. We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter and the stopping rule. The relaxation parameter can be fixed, or chosen adaptively in each iteration; in the former case we provide a new "training" algorithm that finds the optimal parameter for a given test problem. The stopping rules provided are the discrepancy principle, the monotone error rule, and the NCP criterion; for the first two methods "training" can be used to find the optimal discrepancy parameter. © 2011 Elsevier B.V.
  • AIR Tools a MATLAB Package of Algebraic Iterative Reconstruction Techniques
AIR Tools – A MATLAB Package of Algebraic Iterative Reconstruction Techniques. Version 1.2 for Matlab 8.0. Per Christian Hansen and Maria Saxild-Hansen, Department of Applied Mathematics and Computer Science, Building 324, Technical University of Denmark, DK-2800 Lyngby, Denmark. October 2014.

Abstract. This collection of MATLAB software contains implementations of several Algebraic Iterative Reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter and the stopping rule. The relaxation parameter can be fixed, or chosen adaptively in each iteration; in the former case we provide a new "training" algorithm that finds the optimal parameter for a given test problem. The stopping rules provided are the discrepancy principle, the monotone error rule, and the NCP criterion; for the first two methods "training" can be used to find the optimal discrepancy parameter. The corresponding manuscript is: P. C. Hansen and M. Saxild-Hansen, AIR Tools – A MATLAB Package of Algebraic Iterative Reconstruction Techniques, Journal of Computational and Applied Mathematics, 236 (2012), pp. 2167–2178; doi:10.1016/j.cam.2011.09.039. We have included the most common algebraic iterative reconstruction methods in the package, but we left out block versions of the methods, which are better suited for other programming languages than MATLAB.
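Both AIR Tools entries refer to the SIRT class of methods; these simultaneous iterations are commonly written in the general form

$$x^{(k+1)} = x^{(k)} + \lambda_k\, T A^{\top} M \left( b - A x^{(k)} \right),$$

where the (typically diagonal) matrices M and T distinguish the individual variants (Landweber, Cimmino, CAV, DROP, SART) and $\lambda_k$ is the relaxation parameter that the package's "training" strategies tune. This is the standard textbook form, stated here for orientation rather than quoted from the package documentation.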