Matrices: Theory & Applications - Additional Exercises

Total Pages: 16

File Type: pdf, Size: 1020 KB

Matrices: Theory & Applications
Additional Exercises

Denis Serre
École Normale Supérieure de Lyon

Contents
Topics 2
Themes of the exercises 3
Exercises 17
Index of scientists (mostly mathematicians) 185
Index of stamps 189

Notation
Unless stated otherwise, a letter k or K denoting a set of scalars actually denotes a (commutative) field.

Related web pages
See the solutions to the exercises in the book on http://www.umpa.ens-lyon.fr/~serre/exercises.pdf, and the errata about the book on http://www.umpa.ens-lyon.fr/~serre/errata.pdf. See also a few open problems on http://www.umpa.ens-lyon.fr/~serre/open.pdf.

Topics

Calculus of variations, differential calculus: Exercises 2, 3, 49, 55, 80, 109, 124, 184, 215, 249, 250, 292, 321, 322, 334

Complex analysis: Exercises 7, 74, 82, 89, 104, 180, 245, 354

Commutator: 81, 102, 115, 170, 203, 232, 236, 244, 256, 289, 306, 310, 315, 321

Combinatorics, complexity, P.I.D.s: Exercises 9, 25, 28, 29, 45, 62, 68, 85, 115, 120, 134, 135, 142, 154, 188, 194, 278, 314, 315, 316, 317, 320, 332, 333, 337, 358

Algebraic identities: Exercises 6, 11, 24, 50, 68, 80, 84, 93, 94, 95, 114, 115, 119, 120, 127, 129, 135, 144, 156, 159, 172, 173, 174, 203, 206, 242, 243, 244, 265, 288, 289, 335, 349

Inequalities: Exercises 12, 27, 35, 40, 49, 58, 63, 69, 77, 82, 87, 101, 106, 110, 111, 125, 128, 139, 143, 155, 162, 168, 170, 182, 183, 198, 209, 210, 218, 219, 221, 222, 253, 254, 264, 272, 279, 285, 334, 336, 341, 361, 363, 364

Bistochastic or non-negative matrices: Exercises 9, 22, 25, 28, 76, 96, 101, 117, 138, 139, 145, 147, 148, 150, 152, 154, 155, 164, 165, 192, 193, 212, 230, 231, 251, 322, 323, 332, 333, 341, 347, 350, 365

Determinants, minors and Pfaffian: Exercises 8, 10, 11, 24, 25, 50, 56, 84, 93, 94, 95, 112, 113, 114, 119, 120, 127, 129, 143, 144, 146, 151, 159, 163, 172, 181, 188, 190, 195, 206, 207, 208, 213, 214, 216, 221, 222, 228, 234, 246, 270, 271, 272, 273, 274, 278, 279, 285, 286, 290, 291, 292, 308, 335, 336, 349, 357

Hermitian or real symmetric matrices: Exercises 12, 13, 14, 15, 16, 27, 40, 41, 51, 52, 54, 58, 63, 70, 74, 75, 77, 86, 88, 90, 92, 102, 105, 106, 110, 113, 116, 118, 126, 128, 143, 149, 151, 157, 164, 168, 170, 180, 182, 184, 192, 198, 199, 200, 201, 209, 210, 211, 215, 216, 217, 218, 219, 223, 227, 229, 230, 231, 239, 241, 248, 258, 259, 263, 264, 271, 272, 273, 274, 279, 280, 285, 291, 292, 293, 294, 301, 307, 308, 312, 313, 319, 326, 328–330, 334, 336, 347, 351, 352, 353, 355, 360, 361, 364

Orthogonal or unitary matrices: Exercises 21, 64, 72, 73, 81, 82, 91, 101, 116, 158, 226, 236, 255, 257, 276, 277, 281, 297, 302, 314, 343, 344, 348, 357, 358, 366

Norms, convexity: Exercises 7, 17, 19, 20, 21, 40, 65, 69, 70, 71, 77, 81, 82, 87, 91, 96, 97, 100, 103, 104, 105, 106, 107, 108, 109, 110, 111, 117, 125, 128, 131, 136, 137, 138, 140, 141, 153, 155, 162, 170, 183, 199, 223, 227, 242, 243, 261, 297, 299, 300, 301, 302, 303, 313, 323, 361, 366

Eigenvalues, characteristic or minimal polynomial: Exercises 22, 31, 37, 41, 47, 61, 69, 71, 83, 88, 92, 98, 99, 101, 113, 116, 121, 123, 126, 130, 132, 133, 145, 146, 147, 156, 158, 168, 175, 177, 185, 186, 187, 194, 195, 205, 252, 256, 260, 267, 269, 276, 277, 294, 295, 304, 305, 339, 340, 344, 359, 362

Diagonalization, trigonalization, similarity, Jordanization: Exercises 4, 5, 85, 100, 187, 189, 191, 205, 212, 240, 282, 284, 287, 324, 325, 326–330, 338, 339, 340, 345, 359, 360

Singular values, polar decomposition, square root: Exercises 30, 46, 51, 52, 69, 82, 100, 103, 108, 109, 110, 111, 139, 147, 162, 164, 185, 198, 202, 226, 240, 254, 261, 262, 281, 282, 296, 304, 354, 363

Classical groups, exponential: Exercises 30, 32, 48, 53, 57, 66, 72, 73, 78, 79, 106, 123, 148, 153, 176, 179, 196, 201, 202, 241, 246, 268, 295, 316, 321

Numerical analysis: Exercises 33, 42, 43, 65, 67, 140, 194, 197, 344, 353, 355

Matrix equations: Exercises 16, 34, 38, 59, 60, 64, 166, 167, 237, 238, 319, 325, 331

Other real or complex matrices: Exercises 26, 35, 39, 46, 80, 103, 118, 131, 166, 171, 185, 220, 221, 232, 242, 243, 245, 247, 249, 250, 253, 268, 269, 284, 290, 298, 303, 304, 306, 309, 311, 324, 331, 351, 356, 360

Differential equations: Exercises 34, 44, 122, 123, 124, 145, 195, 196, 200, 236, 241, 263, 295, 365

General scalar fields: Exercises 1, 2, 4, 5, 6, 10, 18, 23, 36, 115, 133, 160, 161, 167, 169, 173, 174, 178, 179, 181, 186, 187, 204, 206, 207, 213, 214, 224, 225, 237, 238, 239, 244, 256, 265, 266, 283, 288, 289, 310, 316, 317, 318, 320, 339, 340, 345, 349, 359

Algorithms: 11, 24, 26, 42, 43, 67, 88, 95, 112, 344, 353, 355

Just linear algebra: 233, 235

More specific topics

Numerical range/radius: 21, 65, 100, 131, 223, 253, 269, 306, 309, 356

H ↦ det H, over SPD_n or HPD_n: 58, 75, 209, 218, 219, 271, 292, 308

Contractions: 82, 104, 220, 221, 272, 300

Hadamard inequality: 279, 285

Hadamard product: 250, 279, 285, 322, 335, 336, 342, 343

Functions of matrices: 51, 52, 63, 74, 80, 82, 107, 110, 111, 128, 179, 249, 250, 280, 283, 334, 354

Commutation: 38, 115, 120, 202, 213, 237, 238, 255, 256, 277, 281, 297, 345, 346

Weyl's inequalities and improvements: 12, 27, 69, 168, 275, 363

Hessenberg matrices: 26, 113, 305

Themes of the exercises

1. Similarity within various fields.
2. Rank-one perturbations.
3. Minors; rank-one convexity.
4, 5. Diagonalizability.
6. 2 × 2 matrices are universally similar to their matrices of cofactors.
7. Riesz–Thorin by bare hands.
8. Orthogonal polynomials.
9. Birkhoff's Theorem; wedding lemma.
10. Pfaffian; generalization of Corollary 7.6.1.
11. Expansion formula for the Pfaffian.
12. Multiplicative Weyl's inequalities.
13, 14. Semi-simplicity of the null eigenvalue for a product of Hermitian matrices; a contraction.
15, 16. Toeplitz matrices; the matrix equation X + A^*X^{-1}A = H.
17. Another proof of Proposition 3.4.2.
18. A family of symplectic matrices.
19, 20. Banach–Mazur distance.
21. Numerical range.
22. Jacobi matrices; sign changes in eigenvectors; a generalization of Perron–Frobenius for tridiagonal matrices.
23. Another second canonical form of square matrices.
24. A recursive algorithm for the computation of the determinant.
25. Self-avoiding graphs and totally positive matrices.
26. Schur parametrization of upper Hessenberg matrices.
27. Weyl's inequalities for weakly hyperbolic systems.
28. SL_2^+(Z) is a free monoid.
29. Sign-stable matrices.
30. For a classical group G, G ∩ U_n is a maximal compact subgroup of G.
31. Nearly symmetric matrices.
32. Eigenvalues of elements of O(1, m).
33. A kind of reciprocal of the analysis of the relaxation method.
34. The matrix equation A^*X + XA = 2γX − I_n.
35. A characterization of unitary matrices through an equality case.
36. Elementary divisors and lattices.
37. Companion matrices of polynomials having a common root.
38. Matrices A ∈ M_3(k) commuting with the exterior product.
39. Kernel and range of I_n − P^*P and 2I_n − P − P^* when P is a projector.
40. Multiplicative inequalities for unitarily invariant norms.
41. Largest eigenvalue of Hermitian band-matrices.
42, 43. Preconditioned Conjugate Gradient.
44. Controllability, Kalman's criterion.
45. Nash equilibrium. The game "scissors, stone, ...".
46. Polar decomposition of a companion matrix.
47. Invariant subspaces and characteristic polynomials.
48. Eigenvalues of symplectic matrices.
49. From Compensated-Compactness.
50. Relations (syzygies) between minors.
51. The square root map is analytic.
52. The square root map is (globally) Hölderian.
53. Lorentzian invariants of electromagnetism.
54. Spectrum of blockwise Hermitian matrices.
55. Rank-one connections, again.
56. The transpose of cofactors. Identities.
57. A positive definite quadratic form in Lorentzian geometry.
58. A convex function on HPD_n.
59. When elements of N + H_n have a real spectrum.
60. When elements of M + H_n are diagonalizable.
61. A sufficient condition for a square matrix to have bounded powers.
62. Euclidean distance matrices.
63. A Jensen's trace inequality.
64. A characterization of normal matrices.
65. Pseudo-spectrum.
66. Squares in GL_n(R) are exponentials.
67. The Le Verrier–Fadeev method for computing the characteristic polynomial.
68. An explicit formula for the resolvent.
69. Eigenvalues vs singular values (new formulation).
70. Horn & Schur's theorem.
71. A theorem of P. Lax.
72, 73. The exchange map.
74. Monotone matrix functions.
75. S ↦ log det S is concave, again.
76. An application of Perron–Frobenius.
77. A vector-valued form of the Hahn–Banach theorem, for symmetric operators.
78. Compact subgroups of GL_n(C).
79. The action of U(p, q) over the unit ball of M_{q×p}(C).
80. Balian's formula.
81. The "projection" onto normal matrices.
82. Von Neumann inequality.
83. Spectral analysis of the matrix of cofactors.
84. A determinantal identity.
85. Flags.
86. A condition for self-adjointness.
87. Parrott's Lemma.
88. The signs of the eigenvalues of a Hermitian matrix.
89. The necessary part in Pick's Theorem.
90. The borderline case for Pick matrices.
91. Normal matrices are far from triangular.
Recommended publications
  • 18.06 Linear Algebra, Problem Set 2 Solutions
    18.06 Problem Set 2 Solution. Total: 100 points.

    Section 2.5, Problem 24: Use Gauss-Jordan elimination on [U I] to find the upper triangular U^{-1}, where

        U = [1 a b]     and   U U^{-1} = U [x1 x2 x3] = I.
            [0 1 c]
            [0 0 1]

    Solution (4 points): Row reduce [U I] to get [I U^{-1}] as follows (here Ri stands for the i-th row):

        [1 a b | 1 0 0]    (R1 = R1 - aR2)    [1 0 b-ac | 1 -a  0]
        [0 1 c | 0 1 0]    (R2 = R2 - cR3) -> [0 1  0   | 0  1 -c]
        [0 0 1 | 0 0 1]                       [0 0  1   | 0  0  1]

        (R1 = R1 - (b-ac)R3) -> [1 0 0 | 1 -a ac-b]
                                [0 1 0 | 0  1  -c ]
                                [0 0 1 | 0  0   1 ]

    Section 2.5, Problem 40 (Recommended): A is a 4 by 4 matrix with 1's on the diagonal and -a, -b, -c on the diagonal above. Find A^{-1} for this bidiagonal matrix.

    Solution (12 points): Row reduce [A I] to get [I A^{-1}] as follows:

        [1 -a  0  0 | 1 0 0 0]    (R1 = R1 + aR2)    [1 0 -ab  0  | 1 a 0 0]
        [0  1 -b  0 | 0 1 0 0]    (R2 = R2 + bR3)    [0 1  0  -bc | 0 1 b 0]
        [0  0  1 -c | 0 0 1 0]    (R3 = R3 + cR4) -> [0 0  1   0  | 0 0 1 c]
        [0  0  0  1 | 0 0 0 1]                       [0 0  0   1  | 0 0 0 1]

        (R1 = R1 + abR3)    [1 0 0 0 | 1 a ab abc]
        (R2 = R2 + bcR4) -> [0 1 0 0 | 0 1 b  bc ]
                            [0 0 1 0 | 0 0 1  c  ]
                            [0 0 0 1 | 0 0 0  1  ]

    Alternatively, write A = I − N.
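As a sanity check on the closed-form inverse found in Problem 40, one can plug in sample values for a, b, c and verify the products numerically; the following is a small illustrative sketch in NumPy, not part of the original solution set, and the chosen values are arbitrary.

```python
import numpy as np

# Illustrative (arbitrary) values for the symbolic entries a, b, c.
a, b, c = 2.0, 3.0, 5.0

# The 4x4 bidiagonal matrix from Problem 40: ones on the diagonal,
# -a, -b, -c on the superdiagonal.
A = np.array([
    [1.0, -a, 0.0, 0.0],
    [0.0, 1.0, -b, 0.0],
    [0.0, 0.0, 1.0, -c],
    [0.0, 0.0, 0.0, 1.0],
])

# Closed-form inverse obtained by the Gauss-Jordan elimination above.
A_inv = np.array([
    [1.0, a, a * b, a * b * c],
    [0.0, 1.0, b, b * c],
    [0.0, 0.0, 1.0, c],
    [0.0, 0.0, 0.0, 1.0],
])

print(np.allclose(A @ A_inv, np.eye(4)))      # True: A * A_inv is the identity
print(np.allclose(A_inv, np.linalg.inv(A)))   # True: matches the numerical inverse
```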
  • Two Linear Transformations Each Tridiagonal with Respect to an Eigenbasis of the Other; Comments on the Parameter Array (math.RA, 19 Jun 2003)
    Two linear transformations each tridiagonal with respect to an eigenbasis of the other; comments on the parameter array. Paul Terwilliger.

    Abstract. Let K denote a field. Let d denote a nonnegative integer and consider a sequence p = (θ_i, θ*_i, i = 0...d; ϕ_j, φ_j, j = 1...d) consisting of scalars taken from K. We call p a parameter array whenever:
    (PA1) θ_i ≠ θ_j, θ*_i ≠ θ*_j if i ≠ j (0 ≤ i, j ≤ d);
    (PA2) ϕ_i ≠ 0, φ_i ≠ 0 (1 ≤ i ≤ d);
    (PA3) ϕ_i = φ_1 Σ_{h=0}^{i−1} (θ_h − θ_{d−h})/(θ_0 − θ_d) + (θ*_i − θ*_0)(θ_{i−1} − θ_d) (1 ≤ i ≤ d);
    (PA4) φ_i = ϕ_1 Σ_{h=0}^{i−1} (θ_h − θ_{d−h})/(θ_0 − θ_d) + (θ*_i − θ*_0)(θ_{d−i+1} − θ_0) (1 ≤ i ≤ d);
    (PA5) (θ_{i−2} − θ_{i+1})(θ_{i−1} − θ_i)^{−1}, (θ*_{i−2} − θ*_{i+1})(θ*_{i−1} − θ*_i)^{−1} are equal and independent of i for 2 ≤ i ≤ d − 1.
    In [13] we showed the parameter arrays are in bijection with the isomorphism classes of Leonard systems. Using this bijection we obtain the following two characterizations of parameter arrays. Assume p satisfies PA1, PA2. Let A, B, A*, B* denote the matrices in Mat_{d+1}(K) which have entries A_ii = θ_i, B_ii = θ_{d−i}, A*_ii = θ*_i, B*_ii = θ*_i (0 ≤ i ≤ d), A_{i,i−1} = 1, B_{i,i−1} = 1, A*_{i−1,i} = ϕ_i, B*_{i−1,i} = φ_i (1 ≤ i ≤ d), and all other entries 0. We show the following are equivalent: (i) p satisfies PA3–PA5; (ii) there exists an invertible G ∈ Mat_{d+1}(K) such that G^{−1}AG = B and G^{−1}A*G = B*; (iii) for 0 ≤ i ≤ d the polynomial
    Σ_{n=0}^{i} (λ − θ_0)(λ − θ_1)···(λ − θ_{n−1})(θ*_i − θ*_0)(θ*_i − θ*_1)···(θ*_i − θ*_{n−1}) / (ϕ_1 ϕ_2 ··· ϕ_n)
    is a scalar multiple of the polynomial
    Σ_{n=0}^{i} (λ − θ_d)(λ − θ_{d−1})···(λ − θ_{d−n+1})(θ*_i − θ*_0)(θ*_i − θ*_1)···(θ*_i − θ*_{n−1}) / (φ_1 φ_2 ··· φ_n).
  • Derived Categories. Winter 2008/09
    Derived categories. Winter 2008/09. Igor V. Dolgachev. May 5, 2009.

    Contents
    1 Derived categories 1
    1.1 Abelian categories 1
    1.2 Derived categories 9
    1.3 Derived functors 24
    1.4 Spectral sequences 38
    1.5 Exercises 44
    2 Derived McKay correspondence 47
    2.1 Derived category of coherent sheaves 47
    2.2 Fourier-Mukai Transform 59
    2.3 Equivariant derived categories 75
    2.4 The Bridgeland-King-Reid Theorem 86
    2.5 Exercises 100
    3 Reconstruction Theorems 105
    3.1 Bondal-Orlov Theorem 105
    3.2 Spherical objects 113
    3.3 Semi-orthogonal decomposition 121
    3.4 Tilting objects 128
    3.5 Exercises 131

    Lecture 1. Derived categories
    1.1 Abelian categories
    We assume that the reader is familiar with the concepts of categories and functors. We will assume that all categories are small, i.e. the class of objects Ob(C) in a category C is a set. A small category can be defined by two sets Mor(C) and Ob(C) together with two maps s, t : Mor(C) → Ob(C) defined by the source and the target of a morphism. There is a section e : Ob(C) → Mor(C) for both maps defined by the identity morphism. We identify Ob(C) with its image under e. The composition of morphisms is a map c : Mor(C) ×_{s,t} Mor(C) → Mor(C). There are obvious properties of the maps (s, t, e, c) expressing the axioms of associativity and the identity of a category. For any A, B ∈ Ob(C) we denote by Mor_C(A, B) the subset s^{−1}(A) ∩ t^{−1}(B) and we denote by id_A the element e(A) ∈ Mor_C(A, A).
  • Self-Interlacing Polynomials II: Matrices with Self-Interlacing Spectrum
    SELF-INTERLACING POLYNOMIALS II: MATRICES WITH SELF-INTERLACING SPECTRUM. MIKHAIL TYAGLOV.

    Abstract. An n × n matrix is said to have a self-interlacing spectrum if its eigenvalues λ_k, k = 1, ..., n, are distributed as follows:
    λ_1 > −λ_2 > λ_3 > ··· > (−1)^{n−1} λ_n > 0.
    A method for constructing sign definite matrices with self-interlacing spectra from totally nonnegative ones is presented. We apply this method to bidiagonal and tridiagonal matrices. In particular, we generalize a result by O. Holtz on the spectrum of real symmetric anti-bidiagonal matrices with positive nonzero entries.

    1. Introduction
    In [5] there were introduced the so-called self-interlacing polynomials. A polynomial p(z) is called self-interlacing if all its roots are real, simple and interlace the roots of the polynomial p(−z). It is easy to see that if λ_k, k = 1, ..., n, are the roots of a self-interlacing polynomial, then they are distributed as follows
    (1.1) λ_1 > −λ_2 > λ_3 > ··· > (−1)^{n−1} λ_n > 0,
    or
    (1.2) −λ_1 > λ_2 > −λ_3 > ··· > (−1)^n λ_n > 0.
    The polynomials whose roots are distributed as in (1.1) (resp. in (1.2)) are called self-interlacing of kind I (resp. of kind II). It is clear that a polynomial p(z) is self-interlacing of kind I if, and only if, the polynomial p(−z) is self-interlacing of kind II. Thus, it is enough to study self-interlacing of kind I, since all the results for self-interlacing of kind II will be obtained automatically.

    Definition 1.1. An n × n matrix is said to possess a self-interlacing spectrum if its eigenvalues λ_k, k = 1, ..., n, are real, simple, and distributed as in (1.1).
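The displayed condition (1.1) is easy to test for a concrete list of eigenvalues. The helper below is a small illustrative sketch, not taken from the paper; the function name and the sample spectra are made up for the example.

```python
import numpy as np

def is_self_interlacing_kind_I(eigvals, tol=0.0):
    """Check condition (1.1): lambda_1 > -lambda_2 > lambda_3 > ... > 0.

    eigvals must be given in the order lambda_1, ..., lambda_n.
    """
    signed = np.array([(-1) ** k * lam for k, lam in enumerate(eigvals)])
    # The signed sequence must be strictly decreasing and end above zero.
    return bool(np.all(np.diff(signed) < -tol) and signed[-1] > tol)

# Eigenvalues alternating in sign with strictly decreasing absolute values
# satisfy (1.1); a shuffled version does not.
print(is_self_interlacing_kind_I([5.0, -4.0, 3.0, -2.0, 1.0]))  # True
print(is_self_interlacing_kind_I([5.0, -4.0, -3.0, 2.0, 1.0]))  # False
```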
  • Convex Relaxations for Permutation Problems (arXiv:1306.4805v3 [math.OC], 6 Feb 2015)
    CONVEX RELAXATIONS FOR PERMUTATION PROBLEMS. FAJWEL FOGEL, RODOLPHE JENATTON, FRANCIS BACH, AND ALEXANDRE D'ASPREMONT.

    Abstract. Seriation seeks to reconstruct a linear order between variables using unsorted, pairwise similarity information. It has direct applications in archeology and shotgun gene sequencing for example. We write seriation as an optimization problem by proving the equivalence between the seriation and combinatorial 2-SUM problems on similarity matrices (2-SUM is a quadratic minimization problem over permutations). The seriation problem can be solved exactly by a spectral algorithm in the noiseless case and we derive several convex relaxations for 2-SUM to improve the robustness of seriation solutions in noisy settings. These convex relaxations also allow us to impose structural constraints on the solution, hence solve semi-supervised seriation problems. We derive new approximation bounds for some of these relaxations and present numerical experiments on archeological data, Markov chains and DNA assembly from shotgun gene sequencing data.

    1. INTRODUCTION
    We study optimization problems written over the set of permutations. While the relaxation techniques discussed in what follows are applicable to a much more general setting, most of the paper is centered on the seriation problem: we are given a similarity matrix between a set of n variables and assume that the variables can be ordered along a chain, where the similarity between variables decreases with their distance within this chain. The seriation problem seeks to reconstruct this linear ordering based on unsorted, possibly noisy, pairwise similarity information. This problem has its roots in archeology [Robinson, 1951] and also has direct applications in e.g.
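In the noiseless case mentioned in the abstract, the spectral approach orders variables by the entries of the Fiedler vector of the similarity matrix's Laplacian. The sketch below illustrates that idea on a toy chain; it is a simplified assumption-based illustration, not the paper's algorithm or code, and all names in it are hypothetical.

```python
import numpy as np

def spectral_seriation_order(S):
    """Return an ordering of variables from a symmetric similarity matrix S.

    Rough sketch of spectral seriation: sort by the entries of the Fiedler
    vector (eigenvector of the second-smallest Laplacian eigenvalue).
    """
    L = np.diag(S.sum(axis=1)) - S        # graph Laplacian of the similarity graph
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]               # eigenvector for the second-smallest eigenvalue
    return np.argsort(fiedler)

# Toy example: similarity decays with distance along a hidden chain,
# then rows/columns are shuffled; seriation should recover the chain order.
rng = np.random.default_rng(0)
n = 8
S_true = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
perm = rng.permutation(n)
S_shuffled = S_true[np.ix_(perm, perm)]
order = spectral_seriation_order(S_shuffled)
print(perm[order])  # a monotone sequence (up or down) recovers the hidden chain
```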
  • Signed Exceptional Sequences and the Cluster Morphism Category
    SIGNED EXCEPTIONAL SEQUENCES AND THE CLUSTER MORPHISM CATEGORY. KIYOSHI IGUSA AND GORDANA TODOROV.

    Abstract. We introduce signed exceptional sequences as factorizations of morphisms in the cluster morphism category. The objects of this category are wide subcategories of the module category of a hereditary algebra. A morphism [T]: A → B is an equivalence class of rigid objects T in the cluster category of A so that B is the right hom-ext perpendicular category of the underlying object |T| ∈ A. Factorizations of a morphism [T] are given by total orderings of the components of T. This is equivalent to a "signed exceptional sequence." For an algebra of finite representation type, the geometric realization of the cluster morphism category is the Eilenberg-MacLane space with fundamental group equal to the "picture group" introduced by the authors in [IOTW15b].

    Contents
    Introduction 2
    1. Definition of cluster morphism category 4
    1.1. Wide subcategories 4
    1.2. Composition of cluster morphisms 7
    1.3. Proof of Proposition 1.8 8
    2. Signed exceptional sequences 16
    2.1. Definition and basic properties 16
    2.2. First main theorem 17
    2.3. Permutation of signed exceptional sequences 19
    2.4. c-vectors 20
    3. Classifying space of the cluster morphism category 22
    3.1. Statement of the theorem 23
    3.2. HNN extensions and outline of proof 24
    3.3. Definitions and proofs 26
    3.4. Classifying space of a category and Lemmas 3.18, 3.19 29
    3.5. Key lemma 30
    3.6. G(S) is an HNN extension of G(S′) 32
    4.
  • Representations of Semisimple Lie Algebras in Prime Characteristic and the Noncommutative Springer Resolution
    Annals of Mathematics 178 (2013), 835–919. http://dx.doi.org/10.4007/annals.2013.178.3.2
    Representations of semisimple Lie algebras in prime characteristic and the noncommutative Springer resolution. By Roman Bezrukavnikov and Ivan Mirković. To Joseph Bernstein with admiration and gratitude.

    Abstract. We prove most of Lusztig's conjectures on the canonical basis in homology of a Springer fiber. The conjectures predict that this basis controls numerics of representations of the Lie algebra of a semisimple algebraic group over an algebraically closed field of positive characteristic. We check this for almost all characteristics. To this end we construct a noncommutative resolution of the nilpotent cone which is derived equivalent to the Springer resolution. On the one hand, this noncommutative resolution is closely related to the positive characteristic derived localization equivalences obtained earlier by the present authors and Rumynin. On the other hand, it is compatible with the t-structure arising from an equivalence with the derived category of perverse sheaves on the affine flag variety of the Langlands dual group. This equivalence established by Arkhipov and the first author fits the framework of local geometric Langlands duality. The latter compatibility allows one to apply Frobenius purity theorem to deduce the desired properties of the basis. We expect the noncommutative counterpart of the Springer resolution to be of independent interest from the perspectives of algebraic geometry and geometric Langlands duality.

    Contents
    0. Introduction 837
    0.1. Notations and conventions 841
    1. t-structures on cotangent bundles of flag varieties: statements and preliminaries 842
    R.B.
  • Doubly Stochastic Matrices Whose Powers Eventually Stop
    Linear Algebra and its Applications 330 (2001) 25–30. www.elsevier.com/locate/laa
    Doubly stochastic matrices whose powers eventually stop. Suk-Geun Hwang (a,*), Sung-Soo Pyo (b).
    a Department of Mathematics Education, Kyungpook National University, Taegu 702-701, South Korea
    b Combinatorial and Computational Mathematics Center, Pohang University of Science and Technology, Pohang, South Korea
    Received 22 June 2000; accepted 14 November 2000. Submitted by R.A. Brualdi.

    Abstract. In this note we characterize doubly stochastic matrices A whose powers A, A^2, A^3, ... eventually stop, i.e., A^p = A^{p+1} = ··· for some positive integer p. The characterization enables us to determine the set of all such matrices. © 2001 Elsevier Science Inc. All rights reserved.
    AMS classification: 15A51
    Keywords: Doubly stochastic matrix; J-potent

    1. Introduction
    Let R denote the real field. For positive integers m, n, let R^{m×n} denote the set of all m × n matrices with real entries. As usual let R^n denote the set R^{n×1}. We call the members of R^n the n-vectors. The n-vector of 1's is denoted by e, and the identity matrix of order n is denoted by I_n. For two matrices A, B of the same size, let A ≥ B denote that all the entries of A − B are nonnegative. A matrix A is called nonnegative if A ≥ O. A nonnegative square matrix is called a doubly stochastic matrix if all of its row sums and column sums equal 1.
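To make the "powers eventually stop" property concrete, one can test numerically whether A^p = A^(p+1) for some small p. The helper below is a hypothetical illustration, not code from the paper; J denotes the doubly stochastic matrix with every entry 1/n.

```python
import numpy as np

def power_stops_at(A, max_power=50, tol=1e-12):
    """Return the smallest p with A^p == A^(p+1) (within tol), or None."""
    Ap = A.copy()
    for p in range(1, max_power + 1):
        Ap_next = Ap @ A
        if np.allclose(Ap, Ap_next, atol=tol):
            return p
        Ap = Ap_next
    return None

n = 4

# J (every entry 1/n) is doubly stochastic and idempotent: its powers stop at p = 1.
J = np.full((n, n), 1.0 / n)
print(power_stops_at(J))   # 1

# A cyclic permutation matrix is doubly stochastic but its powers cycle forever.
P = np.eye(n)[[1, 2, 3, 0]]
print(power_stops_at(P))   # None
```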
  • Accurate Singular Values of Bidiagonal Matrices
    Accurate Singular Values of Bidiagonal Matrices. (Appeared in the SIAM J. Sci. Stat. Comput., v. 11, n. 5, pp. 873-912, 1990.)
    James Demmel, Courant Institute, 251 Mercer Str., New York, NY 10012. W. Kahan, Computer Science Division, University of California, Berkeley, CA 94720.

    Abstract. Computing the singular values of a bidiagonal matrix is the final phase of the standard algorithm for the singular value decomposition of a general matrix. We present a new algorithm which computes all the singular values of a bidiagonal matrix to high relative accuracy independent of their magnitudes. In contrast, the standard algorithm for bidiagonal matrices may compute small singular values with no relative accuracy at all. Numerical experiments show that the new algorithm is comparable in speed to the standard algorithm, and frequently faster.
    Keywords: singular value decomposition, bidiagonal matrix, QR iteration
    AMS(MOS) subject classifications: 65F20, 65G05, 65F35

    1. Introduction
    The standard algorithm for computing the singular value decomposition (SVD) of a general real matrix A has two phases [7]:
    1) Compute orthogonal matrices P_1 and Q_1 such that B = P_1^T A Q_1 is in bidiagonal form, i.e. has nonzero entries only on its diagonal and first superdiagonal.
    2) Compute orthogonal matrices P_2 and Q_2 such that Σ = P_2^T B Q_2 is diagonal and nonnegative. The diagonal entries σ_i of Σ are the singular values of A. We will take them to be sorted in decreasing order: σ_i ≥ σ_{i+1}. The columns of Q = Q_1 Q_2 are the right singular vectors, and the columns of P = P_1 P_2 are the left singular vectors.
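To see why relative accuracy of small singular values matters, consider a graded bidiagonal matrix whose entries span many orders of magnitude, the hard case discussed in the paper. The snippet below only constructs such a matrix and prints its singular values with NumPy's general SVD routine; it is an illustration under assumed sample data, not the Demmel-Kahan algorithm.

```python
import numpy as np

# A graded upper-bidiagonal matrix: diagonal and superdiagonal entries
# spanning roughly twelve orders of magnitude.
d = np.array([1e0, 1e-4, 1e-8, 1e-12])   # diagonal
e = np.array([1e-2, 1e-6, 1e-10])        # first superdiagonal
B = np.diag(d) + np.diag(e, k=1)

# Singular values, sorted in decreasing order; note the huge dynamic range
# that makes relative accuracy of the tiny ones delicate.
sigma = np.linalg.svd(B, compute_uv=False)
print(sigma)
```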
  • From Triangulated Categories to Cluster Algebras
    FROM TRIANGULATED CATEGORIES TO CLUSTER ALGEBRAS. PHILIPPE CALDERO AND BERNHARD KELLER.

    Abstract. The cluster category is a triangulated category introduced for its combinatorial similarities with cluster algebras. We prove that a cluster algebra A of finite type can be realized as a Hall algebra, called exceptional Hall algebra, of the cluster category. This realization provides a natural basis for A. We prove new results and formulate conjectures on 'good basis' properties, positivity, denominator theorems and toric degenerations.

    1. Introduction
    Cluster algebras were introduced by S. Fomin and A. Zelevinsky [12]. They are subrings of the field Q(u_1, ..., u_m) of rational functions in m indeterminates, and defined via a set of generators constructed inductively. These generators are called cluster variables and are grouped into subsets of fixed finite cardinality called clusters. The induction process begins with a pair (x, B), called a seed, where x is an initial cluster and B is a rectangular matrix with integer coefficients. The first aim of the theory was to provide an algebraic framework for the study of total positivity and of Lusztig/Kashiwara's canonical bases of quantum groups. The first result is the Laurent phenomenon which asserts that the cluster variables, and thus the cluster algebra they generate, are contained in the Laurent polynomial ring Z[u_1^{±1}, ..., u_m^{±1}]. Since its foundation, the theory of cluster algebras has witnessed intense activity, both in its own foundations and in its connections with other research areas. One important aim has been to prove, as in [1], [31], that many algebras encountered in the theory of reductive Lie groups have (at least conjecturally) a structure of cluster algebra with an explicit seed.
  • Design and Evaluation of Tridiagonal Solvers for Vector and Parallel Computers (Universitat Politècnica de Catalunya)
    Design and Evaluation of Tridiagonal Solvers for Vector and Parallel Computers. Author: Josep Lluis Larriba Pey. Advisor: Juan José Navarro Guerrero. Barcelona, January 1995. UPC, Universitat Politècnica de Catalunya, Departament d'Arquitectura de Computadors.

    Doctoral thesis presented by Josep Lluis Larriba Pey in order to obtain the degree of Doctor in Computer Science from the Universitat Politècnica de Catalunya.

    To my wife Marta. To my parents Elvira and José Luis.
    "A journey of a thousand miles must begin with a single step" Lao-Tse

    Acknowledgements
    I would like to thank my parents, José Luis and Elvira, for their unconditional help and love to me. I will always be indebted to you. I also want to thank my wife, Marta, for being the sparkle in my life, for her support and for understanding my (good) moods. I thank Juanjo, my advisor, for his friendship, patience and good advice. Also, I want to thank Àngel Jorba for his unconditional collaboration, work and support in some of the contributions of this work. I thank the members of "Comissió de Doctorat del DAG" for their comments and specially Miguel Valero for his suggestions on the topics of chapter 4. I thank Mateo Valero for his friendship and always good advice.
  • Computing the Moore-Penrose Inverse for Bidiagonal Matrices
    УДК 519.65. Yu. Hakopian. DOI: https://doi.org/10.18523/2617-70802201911-23
    COMPUTING THE MOORE-PENROSE INVERSE FOR BIDIAGONAL MATRICES

    The Moore-Penrose inverse is the most popular type of matrix generalized inverses which has many applications both in matrix theory and numerical linear algebra. It is well known that the Moore-Penrose inverse can be found via singular value decomposition. In this regard, there is the most effective algorithm which consists of two stages. In the first stage, through the use of the Householder reflections, an initial matrix is reduced to the upper bidiagonal form (the Golub-Kahan bidiagonalization algorithm). The second stage is known in scientific literature as the Golub-Reinsch algorithm. This is an iterative procedure which with the help of the Givens rotations generates a sequence of bidiagonal matrices converging to a diagonal form. This allows one to obtain an iterative approximation to the singular value decomposition of the bidiagonal matrix. The principal intention of the present paper is to develop a method which can be considered as an alternative to the Golub-Reinsch iterative algorithm. Realizing the approach proposed in the study, the following two main results have been achieved. First, we obtain explicit expressions for the entries of the Moore-Penrose inverse of bidiagonal matrices. Secondly, based on the closed form formulas, we get a finite recursive numerical algorithm of optimal computational complexity. Thus, we can compute the Moore-Penrose inverse of bidiagonal matrices without using the singular value decomposition.
    Keywords: Moore-Penrose inverse, bidiagonal matrix, inversion formula, finite recursive algorithm.
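The SVD route described in the abstract is straightforward to express numerically. The sketch below is not the paper's recursive algorithm; it computes the Moore-Penrose inverse of a small bidiagonal matrix via the SVD and compares it with NumPy's pinv, with an arbitrary tolerance and test matrix.

```python
import numpy as np

def pinv_via_svd(A, rtol=1e-12):
    """Moore-Penrose inverse via the singular value decomposition."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert only the singular values that are numerically nonzero.
    s_inv = np.where(s > rtol * s.max(), 1.0 / s, 0.0)
    return Vt.T @ np.diag(s_inv) @ U.T

# Upper bidiagonal test matrix (rank-deficient: its last row is zero).
B = np.array([
    [2.0, 1.0, 0.0, 0.0],
    [0.0, 3.0, 4.0, 0.0],
    [0.0, 0.0, 5.0, 6.0],
    [0.0, 0.0, 0.0, 0.0],
])

B_pinv = pinv_via_svd(B)
print(np.allclose(B_pinv, np.linalg.pinv(B)))   # True: agrees with NumPy's pinv
print(np.allclose(B @ B_pinv @ B, B))           # True: a Moore-Penrose condition
```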