Small-Energy Asymptotics of the Scattering Matrix for the Matrix Schrödinger Equation on the Line


JOURNAL OF MATHEMATICAL PHYSICS, VOLUME 42, NUMBER 10, OCTOBER 2001

Small-energy asymptotics of the scattering matrix for the matrix Schrödinger equation on the line

Tuncay Aktosun a)
Department of Mathematics and Statistics, Mississippi State University, Mississippi State, Mississippi 39762

Martin Klaus b)
Department of Mathematics, Virginia Tech, Blacksburg, Virginia 24061

Cornelis van der Mee c)
Department of Mathematics, University of Cagliari, Cagliari, Italy

a) Electronic mail: [email protected]
b) Electronic mail: [email protected]
c) Electronic mail: [email protected]

(Received 1 June 1999; accepted for publication 10 July 2001)

The one-dimensional matrix Schrödinger equation is considered when the matrix potential is self-adjoint with entries that are integrable and have finite first moments. The small-energy asymptotics of the scattering coefficients are derived, and the continuity of the scattering coefficients at zero energy is established. When the entries of the potential also have finite second moments, some more detailed asymptotic expansions are presented. © 2001 American Institute of Physics. [DOI: 10.1063/1.1398059]

I. INTRODUCTION

Consider the matrix Schrödinger equation

$$\psi''(k,x) + k^2\,\psi(k,x) = Q(x)\,\psi(k,x), \qquad x \in \mathbb{R}, \qquad (1.1)$$

where $x \in \mathbb{R}$ is the spatial coordinate, the prime denotes the derivative with respect to $x$, $k^2$ is the energy, $Q(x)$ is an $n \times n$ self-adjoint matrix potential, i.e., $Q(x)^\dagger = Q(x)$ with the dagger standing for the matrix conjugate transpose, and $\psi(k,x)$ is either an $n \times 1$ or an $n \times n$ matrix function. We use $\|\cdot\|$ to denote the Euclidean norm of a vector or the operator norm of a matrix. Let $L^1_m(\mathbb{R};\mathbb{C}^{n\times n})$ with $m \ge 0$ denote the Banach space of all measurable $n \times n$ matrix functions $f$ for which $(1+|x|)^m\,\|f(x)\|$ is integrable on $\mathbb{R}$. If $n = 1$, we denote this space by $L^1_m(\mathbb{R})$. In this paper we always assume that $Q$ is self-adjoint and belongs to $L^1_1(\mathbb{R};\mathbb{C}^{n\times n})$. Certain results will be obtained under the assumption that $Q \in L^1_2(\mathbb{R};\mathbb{C}^{n\times n})$, but we will clearly indicate when this stronger assumption is needed. We use $\mathbb{C}^+$ to denote the upper-half complex plane and write $\overline{\mathbb{C}^+}$ for $\mathbb{C}^+ \cup \mathbb{R}$.

Among the $n \times n$ solutions of (1.1) are the so-called Jost solution from the left, $f_l(k,x)$, and the Jost solution from the right, $f_r(k,x)$, satisfying the asymptotic boundary conditions

$$e^{-ikx} f_l(k,x) = I_n + o(1) \quad \text{and} \quad e^{-ikx} f_l'(k,x) = ik\,I_n + o(1), \qquad x \to +\infty, \qquad (1.2)$$

$$e^{ikx} f_r(k,x) = I_n + o(1) \quad \text{and} \quad e^{ikx} f_r'(k,x) = -ik\,I_n + o(1), \qquad x \to -\infty, \qquad (1.3)$$

where $I_n$ denotes the identity matrix of order $n$. The existence of the Jost solutions can be established as in the scalar ($n = 1$) case (Refs. 1, 2) by using the appropriate integral equations (Refs. 3, 4) [cf. (2.2), (2.3), and Theorem 2.1 in our paper]. For each $k \in \mathbb{R} \setminus \{0\}$ we have

$$f_l(k,x) = a_l(k)\,e^{ikx} + b_l(k)\,e^{-ikx} + o(1), \qquad x \to -\infty, \qquad (1.4)$$

$$f_r(k,x) = a_r(k)\,e^{-ikx} + b_r(k)\,e^{ikx} + o(1), \qquad x \to +\infty, \qquad (1.5)$$

where $a_l(k)$, $b_l(k)$, $a_r(k)$, and $b_r(k)$ are some $n \times n$ matrix functions of $k$. These matrix functions enter the scattering matrix $S(k)$ defined in (2.22), and our primary aim is the analysis of the small-$k$ behavior of $S(k)$.
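As a concrete companion to these definitions, the following minimal numerical sketch (our illustration, not the paper's; the function name jost_left, the step counts, and the sample potential are ours) computes $f_l(k,x)$ for a potential supported in $[-R,R]$: the boundary condition (1.2) holds exactly at $x = R$, so integrating (1.1) leftward and matching plane waves at $x = -R$ yields the matrices $a_l(k)$ and $b_l(k)$ of (1.4).

import numpy as np

def jost_left(Q, k, R=5.0, steps=4000):
    """Left Jost solution data for an n x n potential Q(x) vanishing for |x| > R.

    Integrates psi'' = (Q(x) - k^2) psi from x = R, where f_l(k,x) = e^{ikx} I_n
    exactly, down to x = -R, then matches f_l = a_l e^{ikx} + b_l e^{-ikx} there;
    cf. (1.2) and (1.4).  Returns (a_l(k), b_l(k)).
    """
    n = Q(0.0).shape[0]
    I = np.eye(n, dtype=complex)
    x = R
    psi = np.exp(1j * k * R) * I             # f_l(k, R)
    dpsi = 1j * k * np.exp(1j * k * R) * I   # f_l'(k, R)
    h = -2.0 * R / steps                     # negative step: integrate leftward

    def rhs(x, psi, dpsi):                   # first-order form of (1.1)
        return dpsi, (Q(x) - k ** 2 * I) @ psi

    for _ in range(steps):                   # classical RK4 on the matrix system
        k1p, k1d = rhs(x, psi, dpsi)
        k2p, k2d = rhs(x + h / 2, psi + h / 2 * k1p, dpsi + h / 2 * k1d)
        k3p, k3d = rhs(x + h / 2, psi + h / 2 * k2p, dpsi + h / 2 * k2d)
        k4p, k4d = rhs(x + h, psi + h * k3p, dpsi + h * k3d)
        psi = psi + h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        dpsi = dpsi + h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        x += h

    # At x = -R the potential vanishes, so f_l is an exact combination of the
    # plane waves e^{±ikx}; projecting onto them recovers a_l and b_l.
    e = np.exp(1j * k * x)
    a_l = 0.5 * (psi + dpsi / (1j * k)) / e
    b_l = 0.5 * (psi - dpsi / (1j * k)) * e
    return a_l, b_l

# A self-adjoint 2 x 2 square-well potential (our example):
Q = lambda x: np.array([[1.0, 0.3], [0.3, 0.5]]) if abs(x) < 1.0 else np.zeros((2, 2))
a_l, b_l = jost_left(Q, k=0.5)
# Under standard conventions (cf. Sec. II), a_l(k)^{-1} is the left transmission
# coefficient, so small-k behavior can also be probed numerically this way.

Under the usual conventions for the scattering matrix of Sec. II, $a_l(k)^{-1}$ gives the transmission coefficient from the left, so this sketch also permits a numerical look at the small-$k$ behavior studied below.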
The motivation for this paper comes from our interest in the inverse scattering problem for (1.1), namely the recovery of $Q$ from an appropriate set of data involving the scattering matrix. As is known from the scalar case, it is important to have detailed information about the behavior of $S(k)$ for small $k$. For example (Refs. 1, 2), this information is used to characterize the scattering data, so as to ensure that the potential $Q$ constructed from the data at hand belongs to a certain class of functions such as $L^1_1(\mathbb{R})$ or $L^1_2(\mathbb{R})$. The inverse scattering problem for (1.1) when $n > 1$ has been considered by several authors (Refs. 4–10), but we are not aware of any in-depth study of the small-$k$ behavior of $S(k)$. Not even the continuity of the scattering matrix at $k = 0$ seems to have been established when $Q \in L^1_1(\mathbb{R};\mathbb{C}^{n\times n})$; for example, in Ref. 6 (p. 294), the continuity at $k = 0$ of the transmission coefficients is assumed. In the scalar case it is well known (Refs. 1, 2, 11, 12) that the continuity of $S(k)$ at $k = 0$ is easy to establish if $Q \in L^1_2(\mathbb{R})$, but not if only $Q \in L^1_1(\mathbb{R})$. In the matrix case, the situation is somewhat different. The decay of $Q(x)$ as $x \to \pm\infty$ plays an important role, but there are further complications due to the particular structure of the solution space of (1.1) at $k = 0$. From the scalar case it is known (Refs. 1, 2, 11) that the behavior of the solutions of (1.1) at $k = 0$ makes it necessary to distinguish between two cases, the generic case and the exceptional case, and that the small-$k$ behavior of $S(k)$ is different in each case. If $n > 1$, the situation is more complicated because the exceptional case gives rise to a variety of possibilities depending on the Jordan structure of a certain matrix associated with the solution space of (1.1) at $k = 0$. In this paper we clarify the connection between the solutions of (1.1) at $k = 0$ and the behavior of $S(k)$ near $k = 0$. As a result, we are able to prove the continuity of the scattering matrix at $k = 0$ when $Q \in L^1_1(\mathbb{R};\mathbb{C}^{n\times n})$ and to obtain more detailed asymptotic expansions when $Q \in L^1_2(\mathbb{R};\mathbb{C}^{n\times n})$. The inverse problem is not considered here; we may report on it elsewhere.

This paper is organized as follows. In Sec. II we establish our notations and review some basic known results on the solutions of (1.1). Since this material is standard, we refer the reader to the literature for proofs and more details. In Sec. II we also give various characterizations of the generic and exceptional cases. In Sec. III we prove the continuity of the scattering matrix at $k = 0$ in the generic case, and we obtain some more detailed asymptotic results when $Q \in L^1_2(\mathbb{R};\mathbb{C}^{n\times n})$. The exceptional case is treated in Sec. IV; the main results are contained in Theorem 4.6 when $Q \in L^1_1(\mathbb{R};\mathbb{C}^{n\times n})$ and in Theorem 4.7 when $Q \in L^1_2(\mathbb{R};\mathbb{C}^{n\times n})$, where we prove the continuity and the differentiability of $S(k)$ at $k = 0$, respectively. In Sec. V we discuss some special cases that illustrate the results of Sec. IV. Finally, the Appendix contains the proof of Proposition 4.2, which is a key result needed to establish Theorems 4.6 and 4.7.

II. SCATTERING COEFFICIENTS AND A CASE DISTINCTION

In this section we review some basic results about those solutions of (1.1) that are relevant to scattering theory, and we define the scattering coefficients and some related quantities. We also elaborate on the distinction between the generic case and the exceptional case, which will play an important role in the subsequent sections.
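For orientation, the scalar ($n = 1$) form of this dichotomy can be stated in one line; the summary below is recalled from the scalar theory (Refs. 1, 2, 11) as a worked reference point, not as part of the matrix analysis that follows. Writing $W := f_l(0,x)\,f_r'(0,x) - f_l'(0,x)\,f_r(0,x)$ for the ($x$-independent) Wronskian of the zero-energy Jost solutions, and $T$, $R$ for the scalar transmission and reflection coefficients,

$$W \neq 0 \ \ (\text{generic}): \quad T(k) = O(k), \quad R(k) \to -1 \ \text{as } k \to 0; \qquad W = 0 \ \ (\text{exceptional}): \quad T(0) \neq 0.$$

The free case $Q \equiv 0$ is exceptional: $f_l(0,x) = f_r(0,x) = 1$, so $W = 0$, and indeed $T(k) \equiv 1$ for all $k$.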
We define the Faddeev functions $m_l(k,x)$ and $m_r(k,x)$ by

$$m_l(k,x) = e^{-ikx} f_l(k,x), \qquad m_r(k,x) = e^{ikx} f_r(k,x). \qquad (2.1)$$

From (1.2), (1.3), and (2.1) it follows that

$$m_l(k,x) = I_n + \frac{1}{2ik} \int_x^{\infty} dy\, \big[e^{2ik(y-x)} - 1\big]\, Q(y)\, m_l(k,y), \qquad (2.2)$$

$$m_r(k,x) = I_n + \frac{1}{2ik} \int_{-\infty}^{x} dy\, \big[e^{2ik(x-y)} - 1\big]\, Q(y)\, m_r(k,y). \qquad (2.3)$$

Some properties of the matrix functions $m_l(k,x)$ and $m_r(k,x)$ are summarized in the next theorem and its corollary. The proofs of these results can be obtained as in the scalar case, and we refer the reader to the literature (Refs. 2–4, 11); in particular, see Theorem 1.4.1 in Ref. 3 and Theorem 1 in Ref. 4. We denote differentiation with respect to $k$ by an overdot and use $C$ for suitable constants that do not depend on $x$ or $k$.

Theorem 2.1: If $Q \in L^1_1(\mathbb{R};\mathbb{C}^{n\times n})$, then, for each $x \in \mathbb{R}$, the functions $m_l(k,x)$, $m_r(k,x)$, $m_l'(k,x)$, and $m_r'(k,x)$ are analytic in $k \in \mathbb{C}^+$ and continuous in $k \in \overline{\mathbb{C}^+}$; moreover,

$$m_l(k,x) = I_n + o(1), \quad m_l'(k,x) = o(1/x), \qquad x \to +\infty, \qquad (2.4)$$

$$m_r(k,x) = I_n + o(1), \quad m_r'(k,x) = o(1/x), \qquad x \to -\infty,$$

$$\|m_l(k,x)\| \le C\,[1 + \max\{0,-x\}], \quad \|m_r(k,x)\| \le C\,[1 + \max\{0,x\}], \qquad k \in \overline{\mathbb{C}^+}. \qquad (2.5)$$
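To experiment with the Faddeev functions themselves, one can iterate the Volterra equation (2.2) directly. The sketch below is our own discretization (the name faddeev_ml, the grid, and the iteration count are ours): it applies fixed-point iteration, whose Neumann series converges for each $k \neq 0$, and truncates the integral at the right end of the grid on the assumption that $Q$ is negligible there.

import numpy as np

def faddeev_ml(Qgrid, xs, k, iters=25):
    """Approximate m_l(k, x) on the uniform grid xs by iterating (2.2).

    Qgrid: array of shape (N, n, n) with Q(xs[j]); Q is assumed negligible
    beyond xs[-1], so the integral over (x, infinity) is cut off at xs[-1].
    """
    N, n, _ = Qgrid.shape
    dx = xs[1] - xs[0]
    m = np.broadcast_to(np.eye(n, dtype=complex), (N, n, n)).copy()
    for _ in range(iters):
        Qm = Qgrid @ m                        # Q(y) m_l(k, y), batched matmul
        new = np.empty_like(m)
        for i in range(N):
            ys = xs[i:]
            ker = (np.exp(2j * k * (ys - xs[i])) - 1.0) / (2j * k)
            vals = ker[:, None, None] * Qm[i:]
            w = np.ones(len(ys))              # trapezoid weights on [x, xs[-1]]
            w[0] = w[-1] = 0.5
            new[i] = np.eye(n) + dx * np.tensordot(w, vals, axes=(0, 0))
        m = new
    return m

# Example: the same 2 x 2 square well as above, sampled on [-5, 5].
xs = np.linspace(-5.0, 5.0, 401)
Qgrid = np.array([np.array([[1.0, 0.3], [0.3, 0.5]]) if abs(x) < 1.0
                  else np.zeros((2, 2)) for x in xs])
ml = faddeev_ml(Qgrid, xs, k=0.5)

In such an experiment, the last grid value $m_l(k, x_N)$ should be close to $I_n$, in line with (2.4), and the growth of $\|m_l(k,x)\|$ toward the left end of the grid can be compared with the bound (2.5) of Theorem 2.1.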