Facts from Linear Algebra


Appendix A: Facts from Linear Algebra

Abstract. We introduce the notation for vectors and matrices (cf. Section A.1) and recall the solvability of linear systems (cf. Section A.2). Section A.3 introduces the spectrum σ(A), matrix polynomials P(A) and their spectra, the spectral radius ρ(A), and its properties. Block structures are introduced in Section A.4. Subjects of Section A.5 are orthogonal and orthonormal vectors, orthogonalisation, the QR method, and orthogonal projections. Section A.6 is devoted to the Schur normal form (§A.6.1) and the Jordan normal form (§A.6.2). Diagonalisability is discussed in §A.6.3. Finally, in §A.6.4, the singular value decomposition is explained.

A.1 Notation for Vectors and Matrices

We recall that the field K denotes either R or C. Given a finite index set I, the linear space of all vectors x = (x_i)_{i∈I} with x_i ∈ K is denoted by K^I. The corresponding square matrices form the space K^{I×I}. K^{I×J} with another index set J describes rectangular matrices mapping K^J into K^I. The linear subspace of a vector space V spanned by the vectors {x^α ∈ V : α ∈ I} is denoted and defined by

span{x^α : α ∈ I} := { Σ_{α∈I} a_α x^α : a_α ∈ K }.

Let A = (a_{αβ})_{α,β∈I} ∈ K^{I×I}. Then A^T = (a_{βα})_{α,β∈I} denotes the transposed matrix, while A^H = (ā_{βα})_{α,β∈I} is the adjoint (or Hermitian transposed) matrix. Note that A^H = A^T holds if K = R. Since (x_1, x_2, ...) indicates a row vector, (x_1, x_2, ...)^T is used for a column vector.

Exercise A.1. Prove the following rules for T and H (where λ ∈ K):

(A + B)^T = A^T + B^T,   (AB)^T = B^T A^T,   (λA)^T = λ A^T,
(A + B)^H = A^H + B^H,   (AB)^H = B^H A^H,   (λA)^H = λ̄ A^H,
(A^{-1})^T = (A^T)^{-1},   (A^{-1})^H = (A^H)^{-1} = A^{-H}.

The inverse of a transposed or adjoint matrix is shortly denoted by A^{-T} := (A^T)^{-1} and A^{-H} := (A^H)^{-1}.

Definition A.2. A matrix A ∈ K^{I×I} is called
symmetric if A = A^T,
Hermitian if A = A^H,
regular if A^{-1} exists,
unitary if A^H A = I (i.e., A regular and A^{-1} = A^H),
normal if A A^H = A^H A.

Remark A.3. (a) Hermitian or unitary matrices are also normal. (b) All matrix properties of Definition A.2 carry over from A to the adjoint A^H. (c) Products of regular (unitary) matrices are again regular (unitary).

A diagonal matrix D is completely described by its diagonal entries. We write

D = diag{d_α : α ∈ I}  for D with  D_{αβ} = d_α for α = β and D_{αβ} = 0 for α ≠ β.   (A.1)

If I is ordered, we may also write D = diag{d_1, d_2, ..., d_n}. For an arbitrary matrix A ∈ K^{I×I}, D = diag{A} denotes the diagonal part diag{a_{αα} : α ∈ I} of A.

In the case of an ordered index set, a matrix T is called tridiagonal if T_{ij} = 0 for all |i − j| > 1, i.e., if T has the band width 1 (cf. Definition 1.6). The entries α_i = T_{i,i−1} define the lower side diagonal, β_i = T_{ii} the (main) diagonal, and γ_i = T_{i,i+1} the upper side diagonal, while all other entries of T vanish. Such a matrix is abbreviated as

T = tridiag{(α_i, β_i, γ_i) : i ∈ I}   (A.2)

(here the values α_1 and γ_{#I} are meaningless). By tridiag{A} we denote the tridiagonal part of an arbitrary matrix A.

Assuming again an ordered index set, a matrix T is called a lower triangular matrix if T_{ij} = 0 for all i < j. Similarly, T is called upper triangular if T_{ij} = 0 for all i > j. T is a strictly lower or upper triangular matrix if, in addition, T_{ii} = 0 for all i ∈ I.
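For an ordered index set, the notations (A.1) and (A.2) translate directly into code. The following NumPy sketch is only an illustration with arbitrarily chosen sample values: it builds T = tridiag{(α_i, β_i, γ_i) : i ∈ I} from its three diagonals and extracts the parts diag{A} and tridiag{A} of an arbitrary matrix A.

```python
import numpy as np

# T = tridiag{(alpha_i, beta_i, gamma_i) : i in I} for an ordered index set, n = #I = 5.
# alpha_1 and gamma_n are meaningless, so only alpha_2..alpha_n and gamma_1..gamma_{n-1} appear.
n = 5
alpha = np.full(n - 1, -1.0)   # lower side diagonal  T[i, i-1]
beta  = np.full(n,      2.0)   # main diagonal        T[i, i]
gamma = np.full(n - 1, -1.0)   # upper side diagonal  T[i, i+1]
T = np.diag(beta) + np.diag(alpha, k=-1) + np.diag(gamma, k=1)

# diag{A} and tridiag{A}: diagonal and tridiagonal parts of an arbitrary matrix A
A = np.arange(n * n, dtype=float).reshape(n, n)
diag_A    = np.diag(np.diag(A))                 # keeps only the entries a_{alpha,alpha}
tridiag_A = np.triu(np.tril(A, k=1), k=-1)      # keeps only the entries with |i - j| <= 1
print(T)
print(tridiag_A)
```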
A.2 Systems of Linear Equations

Let A ∈ K^{I×I} and b ∈ K^I. The system of equations to be solved is

Ax = b,  i.e.,  Σ_{β∈I} a_{αβ} x_β = b_α  for all α ∈ I.

Since the right-hand side b may be perturbed (by rounding errors, etc.), the relevant question is: when is Ax = b solvable for all b ∈ K^I? The following theorem recalls that this property is equivalent to the regularity of A.

Theorem A.4. For A ∈ K^{I×I}, the following properties are equivalent:
(a) A is regular,
(b) rank(A) = #I,
(c) det(A) ≠ 0,
(d) Ax = 0 has only the trivial solution x = 0,
(e) Ax = b is solvable for all b ∈ K^I,
(f) Ax = b has at most one solution,
(g) Ax = b is uniquely solvable for all b ∈ K^I.

A.3 Eigenvalues and Eigenvectors

The spectrum of a matrix A ∈ K^{I×I} is defined by

σ(A) := {λ ∈ C : det(A − λI) = 0}.

Each λ ∈ σ(A) is called an eigenvalue of A. An eigenvalue has the algebraic multiplicity k if it is a k-fold root of the characteristic polynomial det(A − λI). Since det(A − λI) is a polynomial in λ of degree n = #I, there exist exactly n eigenvalues when they are counted according to their algebraic multiplicity. The geometric multiplicity of λ is the dimension of ker(A − λI). The properties of the determinant prove the next properties.

Remark A.5. σ(A^T) = σ(A) and σ(A^H) = σ(Ā) = \overline{σ(A)} := {λ̄ : λ ∈ σ(A)}.

A vector e ∈ C^I is called an eigenvector of the matrix A if e ≠ 0 and

Ae = λe.   (A.3)

By Theorem A.4c,d, we conclude from (A.3) that λ must be an eigenvalue. Vice versa, the same theorem proves the following lemma.

Lemma A.6. For each λ ∈ σ(A), there exists an eigenvector e satisfying the eigenvalue problem (A.3). Hence the geometric multiplicity is at least one.

Exercise A.7. Let A = (a_{ij})_{i,j∈I} be an upper or lower triangular matrix or a diagonal matrix. Prove that σ(A) = {a_{ii} : i ∈ I}.

Definition A.8. Two matrices A, B ∈ K^{I×I} are called similar if there is a regular matrix T such that

A = T^{-1} B T.   (A.4)

If T is unitary, the matrices A and B are called unitarily similar.

Theorem A.9. (a) The eigenvalues of similar matrices A and B coincide: σ(A) = σ(B). The algebraic multiplicities of the eigenvalues are also equal as well as the geometric multiplicities. (b) If T is the similarity transformation in (A.4) and e is an eigenvector of A, then Te is an eigenvector of B.

Proof. The algebraic multiplicities are equal since

det(A − λI) = det(T^{-1}(B − λI)T) = det(T^{-1}) det(B − λI) det(T) = (1/det(T)) det(B − λI) det(T) = det(B − λI).

Further, ker(A − λI) = ker(T^{-1}(B − λI)T) = ker((B − λI)T) proves identical dimensions of ker(A − λI) and ker(B − λI) and therefore of the geometric multiplicities. Part (b) uses B(Te) = T T^{-1} B T e = T A e = T(λe) = λ(Te).

Theorem A.10. The products AB and BA have the same spectra with a possible exception of a zero eigenvalue: σ(AB)\{0} = σ(BA)\{0}. This statement is also true for rectangular matrices A ∈ K^{I×J} and B ∈ K^{J×I}.

Proof. Let the eigenvector e ≠ 0 belong to the eigenvalue 0 ≠ λ ∈ σ(AB): ABe = λe. Since λe ≠ 0, the vector v := Be does not vanish. Multiplying by B yields BABe = λBe, i.e., BAv = λv with v ≠ 0. Hence λ ∈ σ(BA)\{0}, which proves σ(AB)\{0} ⊂ σ(BA)\{0}. The reverse inclusion is analogous.

Given a polynomial P(ξ) = Σ_ν a_ν ξ^ν in ξ ∈ C, we can extend the domain of definition of P by

P(A) := Σ_ν a_ν A^ν   for arbitrary A ∈ K^{I×I}

to the set of square matrices. Here, A^0 is defined as the identity I. The proof of the following lemma is postponed to the end of §A.6.1.

Lemma A.11. (a) The spectra of A and P(A) satisfy

σ(P(A)) = P(σ(A)) := {P(λ) : λ ∈ σ(A)}.

(b) The algebraic multiplicity of the eigenvalue P(λ) of P(A) is the sum of the multiplicities of all eigenvalues λ_1, λ_2, ..., λ_k of A with P(λ_j) = P(λ) (1 ≤ j ≤ k). (c) Each eigenvector of A associated with the eigenvalue λ is also an eigenvector of P(A) with the eigenvalue P(λ).
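Theorem A.10 and Lemma A.11a lend themselves to a quick numerical check. The following NumPy sketch compares the spectra of BC and CB for rectangular factors (the larger product only gains additional zero eigenvalues) and verifies σ(P(A)) = P(σ(A)) for the sample polynomial P(ξ) = ξ² − 3ξ + 2; the random matrices and the sorted-eigenvalue comparison are merely illustrative and assume well-separated eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)

# Theorem A.10: sigma(BC) \ {0} = sigma(CB) \ {0}, here with rectangular factors
B = rng.standard_normal((3, 5))
C = rng.standard_normal((5, 3))
print(np.sort_complex(np.linalg.eigvals(B @ C)))   # 3 eigenvalues
print(np.sort_complex(np.linalg.eigvals(C @ B)))   # same nonzero values plus two (numerical) zeros

# Lemma A.11a: sigma(P(A)) = P(sigma(A)) for the sample polynomial P(xi) = xi^2 - 3*xi + 2
A = rng.standard_normal((5, 5))
P_of_A = A @ A - 3 * A + 2 * np.eye(5)
ev = np.linalg.eigvals(A)
lhs = np.sort_complex(np.linalg.eigvals(P_of_A))   # sigma(P(A))
rhs = np.sort_complex(ev**2 - 3 * ev + 2)          # P(sigma(A))
print(np.allclose(lhs, rhs))                       # True (up to rounding and ordering)
```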
Exercise A.12. Prove the following: (a) If σ(A) contains no zeros of the polynomial P(ξ), then the matrix P(A) is regular. (b) The properties ‘diagonal’, ‘upper triangular matrix’, ‘lower triangular matrix’ carry over from A to P(A). This statement is also true for the properties ‘symmetric’ and ‘Hermitian’, provided that P has real coefficients. (c) Let A be regular. All properties mentioned in (b) carry over from A to A^{-1}.

Lemma A.13. Let A ∈ K^{I×I} be a strictly (upper or lower) triangular matrix. Then A^m = 0 holds for all m > #I.

Proof. One proves by induction that A^m (m ∈ N) has a vanishing main diagonal and m − 1 vanishing side diagonals: (A^m)_{ij} = 0 for |i − j| < m. For m > #I, the inequality |i − j| < m holds for all indices; hence A^m = 0.

Two matrices A and B are called commutative (or ‘A and B commute’) if AB = BA.
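Lemma A.13 and the band argument of its proof can be illustrated numerically. The following NumPy sketch (the matrix size n = 6 is an arbitrary example value) builds a random strictly upper triangular matrix, checks the induction step (A^m)_{ij} = 0 for |i − j| < m at every power, and confirms that A^n = 0.

```python
import numpy as np

n = 6
rng = np.random.default_rng(2)
# Strictly upper triangular: zero main diagonal and below (k=1 keeps only entries with j > i)
A = np.triu(rng.standard_normal((n, n)), k=1)

Am = np.eye(n)
for m in range(1, n + 1):
    Am = Am @ A                                   # Am = A^m
    # induction step of the proof: (A^m)_{ij} = 0 whenever j - i < m
    assert np.allclose(np.tril(Am, k=m - 1), 0)
print(np.allclose(Am, 0))                         # True: A^n = 0, the matrix is nilpotent
```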