Section 3 Iterative Methods in Matrix Algebra


Numerical Analysis II – Xiaojing Ye, Math & Stat, Georgia State University

Vector norm

Definition. A vector norm on $\mathbb{R}^n$, denoted by $\|\cdot\|$, is a mapping from $\mathbb{R}^n$ to $\mathbb{R}$ such that

- $\|x\| \ge 0$ for all $x \in \mathbb{R}^n$,
- $\|x\| = 0$ if and only if $x = 0$,
- $\|\alpha x\| = |\alpha|\,\|x\|$ for all $\alpha \in \mathbb{R}$ and $x \in \mathbb{R}^n$,
- $\|x + y\| \le \|x\| + \|y\|$ for all $x, y \in \mathbb{R}^n$.

Definition ($l_p$ norms). The $l_p$ (sometimes $L_p$ or $\ell_p$) norm of a vector is defined by
$$
\|x\|_p = \Big( \sum_{i=1}^n |x_i|^p \Big)^{1/p} \quad \text{for } 1 \le p < \infty,
\qquad
\|x\|_\infty = \max_{1 \le i \le n} |x_i|.
$$
In particular, the $l_2$ norm is also called the Euclidean norm. Note that when $0 \le p < 1$, $\|\cdot\|_p$ is not a norm, strictly speaking, but it has some uses in specific applications.

Note that each of these norms reduces to the absolute value in the case $n = 1$. The $l_2$ norm is called the Euclidean norm of the vector $x$ because it represents the usual notion of distance from the origin when $x$ is in $\mathbb{R}^1 \equiv \mathbb{R}$, $\mathbb{R}^2$, or $\mathbb{R}^3$. For example, the $l_2$ norm of the vector $x = (x_1, x_2, x_3)^t$ gives the length of the straight line joining the points $(0, 0, 0)$ and $(x_1, x_2, x_3)$. Figure 7.1 shows the boundary of those vectors in $\mathbb{R}^2$ and $\mathbb{R}^3$ that have $l_2$ norm less than 1; Figure 7.2 is a similar illustration for the $l_\infty$ norm.

[Figure 7.1: the vectors in $\mathbb{R}^2$, and in the first octant of $\mathbb{R}^3$, with $l_2$ norm less than 1 lie inside the unit circle and unit sphere, respectively.]

[Figure 7.2: the vectors in $\mathbb{R}^2$, and in the first octant of $\mathbb{R}^3$, with $l_\infty$ norm less than 1 lie inside the unit square and unit cube, respectively.]
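To make the definitions concrete, here is a minimal NumPy sketch (the helper name `lp_norm` is purely illustrative, not from the notes) that evaluates $\|x\|_p$ directly from the formula above and compares it against `numpy.linalg.norm`; the vector chosen is the one used in Example 3 below.

```python
import numpy as np

def lp_norm(x, p):
    """l_p norm evaluated straight from the definition (p >= 1 or p = inf)."""
    a = np.abs(np.asarray(x, dtype=float))
    if p == np.inf:
        return a.max()                     # ||x||_inf = max_i |x_i|
    return (a ** p).sum() ** (1.0 / p)     # (sum_i |x_i|^p)^(1/p)

x = [1.0, -1.0, 2.0]                       # the vector of Example 3 below

for p in (1, 2, np.inf):
    print(p, lp_norm(x, p), np.linalg.norm(x, p))
# 1    4.0                 4.0
# 2    2.449489742783178   2.449489742783178   (= sqrt(6))
# inf  2.0                 2.0
```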
Vector norms

Example 3. Compute the $l_2$ and $l_\infty$ norms of the vector $x = (1, -1, 2) \in \mathbb{R}^3$.

Solution:
$$
\|x\|_2 = \sqrt{|1|^2 + |-1|^2 + |2|^2} = \sqrt{6},
\qquad
\|x\|_\infty = \max_{1 \le i \le 3} |x_i| = \max\{|1|, |-1|, |2|\} = 2.
$$

Theorem (Cauchy–Schwarz inequality). For any vectors $x = (x_1, \dots, x_n)^\top \in \mathbb{R}^n$ and $y = (y_1, \dots, y_n)^\top \in \mathbb{R}^n$,
$$
|x^\top y| = \Big| \sum_{i=1}^n x_i y_i \Big|
\le \Big( \sum_{i=1}^n |x_i|^2 \Big)^{1/2} \Big( \sum_{i=1}^n |y_i|^2 \Big)^{1/2}
= \|x\|_2 \, \|y\|_2 .
$$

Proof. The inequality is obviously true for $x = 0$ or $y = 0$. If $x, y \ne 0$, then for any $\lambda \in \mathbb{R}$,
$$
0 \le \|x - \lambda y\|_2^2 = \|x\|_2^2 - 2\lambda\, x^\top y + \lambda^2 \|y\|_2^2 .
$$
Choosing $\lambda = \|x\|_2 / \|y\|_2$ gives $2\lambda\, x^\top y \le 2\|x\|_2^2$, i.e. $x^\top y \le \|x\|_2 \|y\|_2$; applying the same bound to $-x$ yields $|x^\top y| \le \|x\|_2 \|y\|_2$.

Distance between vectors

Definition (Distance between two vectors). The $l_p$ distance ($1 \le p \le \infty$) between two vectors $x, y \in \mathbb{R}^n$ is defined by $\|x - y\|_p$.

Definition (Convergence of a sequence of vectors). A sequence $\{x^{(k)}\}$ is said to converge to $x$ with respect to the $l_p$ norm if, for any given $\varepsilon > 0$, there exists an integer $N(\varepsilon)$ such that
$$
\|x^{(k)} - x\|_p < \varepsilon \quad \text{for all } k \ge N(\varepsilon).
$$

Convergence of a sequence of vectors

Theorem. A sequence of vectors $\{x^{(k)}\}$ converges to $x$ if and only if $x_i^{(k)} \to x_i$ for every $i = 1, 2, \dots, n$.

Theorem (norm equivalence; Theorem 7.7 in the textbook). For any vector $x \in \mathbb{R}^n$,
$$
\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\, \|x\|_\infty .
$$

Proof.
$$
\|x\|_\infty = \max_i |x_i| = \sqrt{\max_i |x_i|^2} \le \sqrt{|x_1|^2 + \cdots + |x_n|^2} = \|x\|_2 ,
$$
$$
\|x\|_2 = \sqrt{|x_1|^2 + \cdots + |x_n|^2} \le \sqrt{n \max_i |x_i|^2} = \sqrt{n}\, \max_i |x_i| = \sqrt{n}\, \|x\|_\infty .
$$

[Figure 7.3: comparison of the $l_2$ and $l_\infty$ norms in $\mathbb{R}^2$ — the regions $\|x\|_\infty \le 1$, $\|x\|_2 \le 1$, and $\|x\|_\infty \le \sqrt{2}/2$ are nested, illustrating the theorem for $n = 2$.]
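As a quick numerical spot-check (a throwaway NumPy sketch, not part of the notes) of the Cauchy–Schwarz inequality and of the equivalence $\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\,\|x\|_\infty$ on random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(1000):
    n = int(rng.integers(1, 20))
    x = rng.normal(size=n)
    y = rng.normal(size=n)

    # Cauchy-Schwarz: |x^T y| <= ||x||_2 ||y||_2
    assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12

    # Norm equivalence: ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf
    assert np.linalg.norm(x, np.inf) <= np.linalg.norm(x) + 1e-12
    assert np.linalg.norm(x) <= np.sqrt(n) * np.linalg.norm(x, np.inf) + 1e-12

print("all checks passed")
```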
Example 4. In Example 3 of the textbook's Section 7.1, the sequence $\{x^{(k)}\}$, defined by
$$
x^{(k)} = \Big( 1,\; 2 + \tfrac{1}{k},\; \tfrac{3}{k^2},\; e^{-k} \sin k \Big)^t ,
$$
was found to converge to $x = (1, 2, 0, 0)^t$ with respect to the $l_\infty$ norm. Show that this sequence also converges to $x$ with respect to the $l_2$ norm.

Solution. Given any $\varepsilon > 0$, there exists an integer $N(\varepsilon/2)$ with the property that
$$
\|x^{(k)} - x\|_\infty < \frac{\varepsilon}{2} \quad \text{whenever } k \ge N(\varepsilon/2).
$$
By the norm-equivalence theorem above, this implies that
$$
\|x^{(k)} - x\|_2 \le \sqrt{4}\, \|x^{(k)} - x\|_\infty < 2\,(\varepsilon/2) = \varepsilon
$$
whenever $k \ge N(\varepsilon/2)$. So $\{x^{(k)}\}$ also converges to $x$ with respect to the $l_2$ norm.

Matrix norm

Definition. A matrix norm on the set of $n \times n$ matrices is a real-valued function, denoted by $\|\cdot\|$, that satisfies the following for all $A, B \in \mathbb{R}^{n \times n}$ and $\alpha \in \mathbb{R}$:

- $\|A\| \ge 0$,
- $\|A\| = 0$ if and only if $A = 0$, the zero matrix,
- $\|\alpha A\| = |\alpha|\, \|A\|$,
- $\|A + B\| \le \|A\| + \|B\|$,
- $\|AB\| \le \|A\|\, \|B\|$.

Distance between matrices

Definition. Suppose $\|\cdot\|$ is a norm defined on $\mathbb{R}^{n \times n}$. The distance between two $n \times n$ matrices $A$ and $B$ with respect to this norm is $\|A - B\|$.
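As with vectors, a minimal NumPy sketch of these matrix-norm notions; it uses the Frobenius norm $\|A\|_F = (\sum_{i,j} a_{ij}^2)^{1/2}$ as a concrete matrix norm (not introduced in the notes above, so treat it as an assumed example), evaluates the distance $\|A - B\|$, and spot-checks the submultiplicativity requirement $\|AB\| \le \|A\|\,\|B\|$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))     # arbitrary matrices, just for illustration
B = rng.normal(size=(3, 3))

# Frobenius norm ||A||_F = sqrt(sum_ij a_ij^2), one concrete matrix norm
norm_A  = np.linalg.norm(A, 'fro')
norm_B  = np.linalg.norm(B, 'fro')
norm_AB = np.linalg.norm(A @ B, 'fro')

print("||A||_F     =", norm_A)
print("||A - B||_F =", np.linalg.norm(A - B, 'fro'))   # distance between A and B

# Submultiplicativity ||AB|| <= ||A|| ||B|| (holds for the Frobenius norm)
assert norm_AB <= norm_A * norm_B + 1e-12
```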