Appunti Di Perron-Frobenius


Nikita Deniskin

13 July 2020

1 Prerequisites and scattered facts

$e_j$ is the $j$-th vector of the canonical basis, and $\mathbf{1} = (1, 1, \ldots, 1)^\top$ is the all-ones vector.

Spectrum: $\sigma(A)$. Spectral radius: $\rho(A)$. Generalized eigenspace: $E_A(\lambda) = \ker\big((A - \lambda I)^n\big)$. Algebraic multiplicity of $\lambda_i$: $m_i$. Geometric multiplicity of $\lambda_i$: $d_i$. The eigenvalue $\lambda_i$ is simple if $m_i = 1$, semi-simple if $m_i = d_i$, and defective if $d_i < m_i$.

$\lim_k A^k$ denotes the entrywise (pointwise) limit of the matrices.

Gelfand's theorem: $\rho(A) = \lim_{k \to \infty} \|A^k\|^{1/k}$.

Facts about the Jordan form and the abstract expression via spectral projectors.

We have $\lim_k B^k = 0$ if and only if $\rho(B) < 1$. Moreover, the limit of $B^k$ does not diverge only if $\rho(B) \le 1$ and $1$ is a semi-simple eigenvalue (look at the Jordan form). TO BE COMPLETED

Definition 1.1 Let $A \in \mathbb{C}^{n \times n}$. Then:
• $A$ is zero-convergent if $\lim_{n \to \infty} A^n = 0$;
• $A$ is convergent if $\lim_{n \to \infty} A^n = B \in \mathbb{C}^{n \times n}$.

Achtung! Some authors call a zero-convergent matrix "convergent" and a convergent matrix "semi-convergent".

Theorem 1.2 Let $A \in \mathbb{C}^{n \times n}$. Then $A$ is zero-convergent $\iff \rho(A) < 1$.

Proof. Write $A = CJC^{-1}$, where $J$ is the Jordan normal form, and note that $A$ is zero-convergent $\iff$ $J$ is zero-convergent.

Theorem 1.3 Let $A \in \mathbb{C}^{n \times n}$. Then $A$ is convergent $\iff$ ALL three of the following conditions hold:
1. $\rho(A) \le 1$;
2. if $\rho(A) = 1$, then $\lambda \in \sigma(A)$, $|\lambda| = 1 \Rightarrow \lambda = 1$;
3. if $\rho(A) = 1$, then the eigenvalue $\lambda = 1$ is semi-simple.

Or, in an equivalent formulation, if we are in one of the following two mutually exclusive cases:
1. $\rho(A) < 1$;
2. $\rho(A) = 1$, the eigenvalue $\mu = 1$ is semi-simple, and $\lambda \in \sigma(A)$, $|\lambda| = 1 \Rightarrow \lambda = 1$.

Proof. Use the Jordan normal form.

Remark 1.4 $1$ is a semi-simple eigenvalue of $A$ if and only if $\operatorname{rank}(I - A) = \operatorname{rank}\big((I - A)^2\big) < n$. This is useful, since it is computationally easier to calculate these two ranks than all the eigenvalues of $A$.

2 Perron-Frobenius theorem

2.1 Positive and non-negative matrices

Definition 2.1 (Positive and non-negative vectors) Let $v \in \mathbb{R}^n$ be a vector.
• $v$ is positive if for all $i$, $1 \le i \le n$, we have $v_i > 0$.
We will indicate that $v$ is positive by writing $v > 0$.
• $v$ is non-negative if for all $i$, $1 \le i \le n$, we have $v_i \ge 0$. We will indicate that $v$ is non-negative by writing $v \ge 0$.

The set of all non-negative vectors of dimension $n$ is called the non-negative orthant of $\mathbb{R}^n$ and will be indicated with $\mathbb{R}^n_+$.

Definition 2.2 (Positive and non-negative matrices) Let $A \in \mathbb{R}^{n \times n}$ be a matrix.
• $A$ is positive if for all $i, j$, $1 \le i, j \le n$, we have $a_{ij} > 0$. We will indicate that $A$ is positive by writing $A > 0$.
• $A$ is non-negative if for all $i, j$, $1 \le i, j \le n$, we have $a_{ij} \ge 0$. We will indicate that $A$ is non-negative by writing $A \ge 0$.

The set of all non-negative matrices of dimension $n \times n$ is called the cone of non-negative matrices and will be indicated with $\mathbb{R}^{n \times n}_+$. We can define in a similar way what it means for $A \in \mathbb{R}^{m \times n}$ to be positive or non-negative. The space of all non-negative $m \times n$ matrices will be denoted by $\mathbb{R}^{m \times n}_+$.

Remark 2.3 $A$ is a non-negative matrix if and only if for all $x \in \mathbb{R}^n_+$, the product $Ax$ is still a non-negative vector.

Definitions 2.1 and 2.2 can be used to state that a matrix (or a vector) is pointwise greater than $0$. Similarly, we can introduce an ordering on $\mathbb{R}^{n \times n}$ to state that a matrix is pointwise greater than another.

Definition 2.4 (Partial ordering) Let $A, B \in \mathbb{R}^{n \times n}$. We say that
• $A > B$ if the matrix $A - B$ is positive, i.e. $A - B > 0$;
• $A \ge B$ if the matrix $A - B$ is non-negative, i.e. $A - B \ge 0$.

A useful observation is that $(\mathbb{R}^{n \times n}, \ge)$ is a partially ordered set: the relation $\ge$ is reflexive, transitive and antisymmetric.

Definition 2.5 (Absolute value) Let $A \in \mathbb{C}^{m \times n}$. We will denote by $|A|$ the matrix whose $(i,j)$-th element is $|a_{ij}|$. Similarly, we can define the absolute value of a vector, $|v|$. By this definition, we have $|A| \in \mathbb{R}^{m \times n}_+$ and $|v| \in \mathbb{R}^m_+$ for every $A \in \mathbb{C}^{m \times n}$ and $v \in \mathbb{C}^m$.

Proposition 2.6 Properties of the ordering $\ge$:
1. If $A_i \ge B_i$ are matrices for $i = 1, \ldots, m$, then $\sum_{i=1}^{m} A_i \ge \sum_{i=1}^{m} B_i$.
2. If $A \ge B$ are matrices and $c \in \mathbb{R}_+$ is a scalar, then $cA \ge cB$.
3.
If $A \ge B$ and $C \ge 0$ are matrices, then $AC \ge BC$ and $CA \ge CB$, whenever the products are defined.
4. If $A_k \ge B_k$ and the pointwise limits $\lim_k A_k = A$ and $\lim_k B_k = B$ exist, then $A \ge B$.
5. If $A \in \mathbb{C}^{m \times n}$, then $|A| \ge 0$, and $|A| = 0$ if and only if $A = 0$.
6. If $A \in \mathbb{C}^{m \times n}$ and $\gamma \in \mathbb{C}$, then $|\gamma A| = |\gamma| \, |A|$.
7. If $A, B \in \mathbb{C}^{m \times n}$, then $|A + B| \le |A| + |B|$.
8. If $A, B \in \mathbb{C}^{n \times n}$, then $|A \cdot B| \le |A| \cdot |B|$. This also holds for every pair of matrices such that $A \cdot B$ is defined, i.e. $A \in \mathbb{C}^{n \times k}$ and $B \in \mathbb{C}^{k \times m}$.
9. If $A \in \mathbb{C}^{n \times n}$ and $k \in \mathbb{N}$, then $|A^k| \le |A|^k$.

Remark 2.7 There are a couple of simple but very useful particular cases of properties 3 and 8:
• If $A \ge 0$ is a matrix and $v \ge 0$ is a vector, then $Av \ge 0$.
• If $A > 0$ is a matrix and $v \ge 0$ is a vector with $v \ne 0$, then $Av > 0$.
• Let $A \in \mathbb{C}^{n \times n}$ and $x \in \mathbb{C}^n$. Then $|Ax| \le |A| \cdot |x|$.

Proposition 2.8 Let $A \in \mathbb{R}^{n \times n}$. Then:
1. $A \ge 0 \iff$ for all $v \in \mathbb{R}^n$ with $v > 0$, we have $Av \ge 0$.
2. $A > 0 \iff$ for all $v \in \mathbb{R}^n$ with $v \ge 0$, $v \ne 0$, we have $Av > 0$.

Proof. The $\Rightarrow$ implications follow easily from the above properties.
1) $\Leftarrow$: If there were $a_{ij} < 0$, then choosing $v$ with $v_j = 1$ and the other components $v_k = \varepsilon$ for $k \ne j$, we would have $[Av]_i = a_{ij} v_j + \sum_{k \ne j} a_{ik} \varepsilon \le -|a_{ij}| + (n-1) \varepsilon \cdot \max_{k \ne j} |a_{ik}|$, which is negative for sufficiently small $\varepsilon$.
2) $\Leftarrow$: If there were $a_{ij} \le 0$, it is sufficient to choose $v = e_j$; then $[Av]_i = a_{ij} v_j + 0 = a_{ij} \le 0$.

2.2 Spectral radius bounds

Theorem 2.9 (Lappo-Danilevsky) Let $A \in \mathbb{C}^{n \times n}$ and $B \in \mathbb{R}^{n \times n}_+$ be such that $|A| \le B$. Then
$$\rho(A) \le \rho(|A|) \le \rho(B).$$

Proof. It is sufficient to prove that for all matrices $A_1, B_1$ satisfying the hypothesis of the theorem we have $\rho(A_1) \le \rho(B_1)$. Then taking $A_1 = A$ and $B_1 = |A|$ we obtain $\rho(A) \le \rho(|A|)$, and taking $A_1 = |A|$ and $B_1 = B$ we obtain $\rho(|A|) \le \rho(B)$.
Let us prove the claim by contradiction, so assume $\rho(B) < \rho(A)$.
Then there exists a scalar $\gamma \in \mathbb{R}_+$ such that
$$\gamma \rho(B) < 1 < \gamma \rho(A),$$
which is the same as
$$\rho(\gamma B) < 1 < \rho(\gamma A).$$
This means that $\lim_k (\gamma B)^k = 0$, while the sequence of matrices $\{(\gamma A)^k\}_{k \in \mathbb{N}}$ does not converge to any matrix (if it were to converge, then by Theorem 1.3 we would need $\rho(\gamma A) \le 1$). On the other hand, we have $0 \le |A| \le B$, so $0 \le |\gamma A| \le \gamma B$ and $0 \le |\gamma A|^k \le (\gamma B)^k$ for all $k \in \mathbb{N}$. Since $(\gamma B)^k$ converges to the zero matrix, it follows that $|\gamma A|^k$, and hence $(\gamma A)^k$, also has to converge to the zero matrix, which is a contradiction.

Alternative proof. For every $x \in \mathbb{C}^n$ we have
$$|Ax| \le |A| \cdot |x| \le B \cdot |x|.$$
Let $\| \cdot \|$ be the matrix norm induced by a vector norm $\| \cdot \|_v$ (for example, the 2-norm). Then
$$\|Ax\|_v \le \big\|\, |A| \cdot |x| \,\big\|_v \le \big\|\, B \cdot |x| \,\big\|_v.$$
By taking the sup over all $x \in \mathbb{C}^n$ of unit norm, $\|x\|_v = 1$, we obtain the matrix norms:
$$\|A\| \le \big\|\, |A| \,\big\| \le \|B\|.$$
Since $|A| \le B$, we have $|A^k| \le |A|^k \le B^k$ for every integer $k > 0$; applying the above method then gives
$$\|A^k\| \le \big\|\, |A|^k \,\big\| \le \|B^k\|, \qquad \|A^k\|^{1/k} \le \big\|\, |A|^k \,\big\|^{1/k} \le \|B^k\|^{1/k}.$$
We can take the limit for $k \to \infty$ and apply Gelfand's theorem to obtain
$$\rho(A) \le \rho(|A|) \le \rho(B).$$

Theorem 2.10 Let $A \in \mathbb{R}^{n \times n}_+$, let $R_i = \sum_{j=1}^{n} a_{ij}$ be the sum of the entries of the $i$-th row and $C_j = \sum_{i=1}^{n} a_{ij}$ the sum of the entries of the $j$-th column. Then:
$$\min_{1 \le i \le n} R_i \le \rho(A) \le \max_{1 \le i \le n} R_i,$$
$$\min_{1 \le j \le n} C_j \le \rho(A) \le \max_{1 \le j \le n} C_j.$$

Proof. Since $A$ is non-negative, the maximum row sum can be expressed as the infinity norm:
$$\max_{1 \le i \le n} R_i = \|A\|_\infty.$$
Furthermore, this is an induced norm, so $\rho(A) \le \|A\|_\infty$, and we have the upper bound inequality.
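The convergence criteria of Theorems 1.2 and 1.3 and the rank test of Remark 1.4 are easy to verify numerically. The following sketch uses NumPy (an assumption of this example, not part of the notes); the function names `is_zero_convergent` and `one_is_semisimple` are illustrative:

```python
import numpy as np

def is_zero_convergent(A, k=200, tol=1e-10):
    # Theorem 1.2: A^k -> 0 iff rho(A) < 1; here we just power the matrix.
    return np.max(np.abs(np.linalg.matrix_power(A, k))) < tol

def one_is_semisimple(A):
    # Remark 1.4: 1 is a semi-simple eigenvalue of A
    # iff rank(I - A) = rank((I - A)^2) < n.
    n = A.shape[0]
    M = np.eye(n) - A
    r1 = np.linalg.matrix_rank(M)
    r2 = np.linalg.matrix_rank(M @ M)
    return r1 == r2 and r1 < n

# rho(A) < 1, so A is zero-convergent.
A = np.array([[0.5, 0.3], [0.1, 0.4]])
assert np.max(np.abs(np.linalg.eigvals(A))) < 1
assert is_zero_convergent(A)

# A row-stochastic matrix: rho(P) = 1 with 1 semi-simple,
# so P^k converges (Theorem 1.3); the limit B satisfies B = P B.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
assert one_is_semisimple(P)
B = np.linalg.matrix_power(P, 500)
assert np.allclose(B, P @ B)

# A Jordan block with defective eigenvalue 1 is NOT convergent.
J = np.array([[1.0, 1.0], [0.0, 1.0]])
assert not one_is_semisimple(J)
```

The rank test avoids computing eigenvalues entirely, which is exactly the computational advantage Remark 1.4 points out.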
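Theorem 2.9 and Gelfand's formula can likewise be sanity-checked on random matrices. A minimal sketch, again assuming NumPy; the helper name `rho` and the random setup are illustrative, not from the notes:

```python
import numpy as np

def rho(M):
    # Spectral radius: largest modulus among the eigenvalues.
    return np.max(np.abs(np.linalg.eigvals(M)))

rng = np.random.default_rng(0)

# Lappo-Danilevsky: |A| <= B entrywise forces rho(A) <= rho(|A|) <= rho(B).
for _ in range(100):
    A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
    B = np.abs(A) + rng.uniform(0, 1, (5, 5))   # B >= |A| >= 0 entrywise
    assert rho(A) <= rho(np.abs(A)) + 1e-10
    assert rho(np.abs(A)) <= rho(B) + 1e-10

# Gelfand: ||A^k||^(1/k) approaches rho(A) for any induced norm (here the 2-norm).
A = rng.standard_normal((4, 4))
k = 300
g = np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1.0 / k)
assert abs(g - rho(A)) < 0.05 * max(rho(A), 1.0)
```

The Gelfand check mirrors the alternative proof above: the $k$-th root kills the constant relating $\|A^k\|$ to $\rho(A)^k$, so a moderate $k$ already gives a close approximation.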
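The row- and column-sum bounds of Theorem 2.10 can also be checked on random non-negative matrices. A minimal sketch under the same NumPy assumption:

```python
import numpy as np

def rho(M):
    # Spectral radius.
    return np.max(np.abs(np.linalg.eigvals(M)))

rng = np.random.default_rng(1)
for _ in range(100):
    A = rng.uniform(0, 1, (6, 6))   # a random non-negative matrix
    R = A.sum(axis=1)               # row sums R_i; max R_i = ||A||_inf
    C = A.sum(axis=0)               # column sums C_j
    r = rho(A)
    assert R.min() - 1e-10 <= r <= R.max() + 1e-10
    assert C.min() - 1e-10 <= r <= C.max() + 1e-10
```

Note that for a row-stochastic matrix all $R_i = 1$, so the theorem pins the spectral radius down exactly: $\rho(A) = 1$.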