Majorization Results for Zeros of Orthogonal Polynomials


Walter Van Assche*
KU Leuven

September 4, 2018

arXiv:1607.03251v1 [math.CA] 12 Jul 2016

*This research was supported by KU Leuven research grant OT/12/073 and FWO research project G.0864.16N.

Abstract. We show that the zeros of consecutive orthogonal polynomials $p_n$ and $p_{n-1}$ are linearly connected by a doubly stochastic matrix for which the entries are explicitly computed in terms of Christoffel numbers. We give similar results for the zeros of $p_n$ and the associated polynomial $p_{n-1}^{(1)}$, and for the zeros of the polynomial obtained by deleting the $k$th row and column ($1 \le k \le n$) in the corresponding Jacobi matrix.

1 Introduction

1.1 Majorization

Recently S.M. Malamud [3, 4] and R. Pereira [6] have given a nice extension of the Gauss-Lucas theorem that the zeros of the derivative $p'$ of a polynomial $p$ lie in the convex hull of the roots of $p$. They both used the theory of majorization of sequences to show that if $x = (x_1, x_2, \ldots, x_n)$ are the zeros of a polynomial $p$ of degree $n$, and $y = (y_1, \ldots, y_{n-1})$ are the zeros of its derivative $p'$, then there exists a doubly stochastic $(n-1) \times n$ matrix $S$ such that $y = Sx$ ([4, Thm. 4.7], [6, Thm. 5.4]). A rectangular $(n-1) \times n$ matrix $S = (s_{i,j})$ is said to be doubly stochastic if $s_{i,j} \ge 0$ and
$$\sum_{j=1}^{n} s_{i,j} = 1, \qquad \sum_{i=1}^{n-1} s_{i,j} = \frac{n-1}{n}.$$
In this paper we will use the notion of majorization to rephrase some known results for the zeros of orthogonal polynomials on the real line. It is an open problem to see what can be said about the zeros of orthogonal polynomials in the complex plane in terms of majorization.

Let us begin by defining the notion of majorization. An excellent source of information on majorization is the book by Marshall and Olkin (and Arnold, who was added for the second edition) [5]. See also Horn and Johnson [2, Def. 4.3.24].

Definition 1.1. Let $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$ and $y = (y_1, y_2, \ldots, y_n) \in \mathbb{R}^n$. Then the vector $y$ is said to majorize the vector $x$, or $x$ is majorized by $y$ (notation: $x \prec y$), if
$$\hat{x}_1 + \cdots + \hat{x}_j \le \hat{y}_1 + \cdots + \hat{y}_j, \qquad 1 \le j \le n-1,$$
and
$$\hat{x}_1 + \cdots + \hat{x}_n = \hat{y}_1 + \cdots + \hat{y}_n,$$
where $\hat{x} = (\hat{x}_1, \ldots, \hat{x}_n)$ and $\hat{y} = (\hat{y}_1, \ldots, \hat{y}_n)$ contain the components of $x$ and $y$ in decreasing order.

The following result (of Hardy, Littlewood and Pólya) gives an alternative way to define majorization in terms of doubly stochastic matrices. Recall that a square matrix $A = (a_{i,j})_{i,j=1}^{n}$ of order $n$ is doubly stochastic if $a_{i,j} \ge 0$ and
$$\sum_{i=1}^{n} a_{i,j} = 1, \qquad \sum_{j=1}^{n} a_{i,j} = 1,$$
i.e., the elements in every row and every column add up to 1.

Theorem 1.1. Let $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$. Then $x \prec y$ if and only if there exists a doubly stochastic matrix $A$ such that $x = Ay$.

See [5, Thm. 2.B.2], [2, Thm. 4.3.33]. Note that the doubly stochastic matrix $A$ need not be unique.

1.2 Orthogonal polynomials

Let $\mu$ be a positive measure on the real line for which all the moments exist. To simplify notation, we will assume that $\mu$ is a probability measure, so that $\mu(\mathbb{R}) = 1$. The orthogonal polynomials $p_n$ for this measure satisfy the orthogonality relations
$$\int p_n(x) p_m(x)\, d\mu(x) = \delta_{m,n},$$
and we denote the orthonormal polynomials by $p_n(x) = \gamma_n x^n + \cdots$, with the convention that $\gamma_n > 0$. Orthogonal polynomials on the real line always satisfy a three term recurrence relation
$$x p_n(x) = a_{n+1} p_{n+1}(x) + b_n p_n(x) + a_n p_{n-1}(x), \qquad (1.1)$$
with $a_n > 0$ for $n \ge 1$ and $b_n \in \mathbb{R}$ for $n \ge 0$. These recurrence coefficients are given by
$$a_n = \int x\, p_n(x) p_{n-1}(x)\, d\mu(x) = \frac{\gamma_{n-1}}{\gamma_n}, \qquad (1.2)$$
$$b_n = \int x\, p_n^2(x)\, d\mu(x). \qquad (1.3)$$
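The recurrence (1.1) makes the orthonormal polynomials easy to evaluate numerically. The following is a minimal sketch, not part of the paper: it assumes, purely for illustration, the Legendre-type coefficients $a_n = n/\sqrt{4n^2-1}$ and $b_n = 0$, which correspond to the probability measure $d\mu(x) = dx/2$ on $[-1,1]$; the function names are ours.

```python
import numpy as np

def recurrence_coeffs(n):
    """Assumed example: a_1..a_n and b_0..b_{n-1} for d(mu) = dx/2 on [-1, 1]."""
    a = np.array([k / np.sqrt(4.0 * k**2 - 1.0) for k in range(1, n + 1)])
    b = np.zeros(n)
    return a, b

def orthonormal_p(n, x, a, b):
    """Evaluate p_0(x), ..., p_n(x) via the recurrence (1.1):
    x p_k = a_{k+1} p_{k+1} + b_k p_k + a_k p_{k-1}, with p_0 = 1 for a probability measure."""
    p = [np.ones_like(x), (x - b[0]) / a[0]]
    for k in range(1, n):
        p.append(((x - b[k]) * p[k] - a[k - 1] * p[k - 1]) / a[k])
    return np.array(p)

a, b = recurrence_coeffs(5)
x = np.linspace(-1.0, 1.0, 5)
p = orthonormal_p(5, x, a, b)
# For this measure p_2(x) = sqrt(5) (3 x^2 - 1) / 2, which the recurrence reproduces:
print(np.allclose(p[2], np.sqrt(5.0) / 2.0 * (3.0 * x**2 - 1.0)))
```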
The zeros of $p_n$ are all real and simple and we will denote them by $x_{1,n} < x_{2,n} < \cdots < x_{n,n}$, hence in increasing order. It is well known that the zeros of $p_{n-1}$ and $p_n$ interlace, i.e.,
$$x_{j,n} < x_{j,n-1} < x_{j+1,n}, \qquad 1 \le j \le n-1.$$
The zeros of $p_n$ are also the eigenvalues of the tridiagonal matrix (Jacobi matrix)
$$J_n = \begin{pmatrix} b_0 & a_1 & 0 & 0 & \cdots & 0 \\ a_1 & b_1 & a_2 & 0 & \cdots & 0 \\ 0 & a_2 & b_2 & a_3 & & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & a_{n-2} & b_{n-2} & a_{n-1} \\ 0 & \cdots & 0 & 0 & a_{n-1} & b_{n-1} \end{pmatrix}, \qquad (1.4)$$
and the interlacing of the zeros of $p_n$ and $p_{n-1}$ hence corresponds to the interlacing of the eigenvalues of $J_n$ and $J_{n-1}$. Another consequence is that for the trace of $J_n$ one has
$$\operatorname{Tr} J_n = x_{1,n} + x_{2,n} + \cdots + x_{n,n} = b_0 + b_1 + \cdots + b_{n-1},$$
which gives a relation between the sum of the zeros of $p_n$ and a partial sum of the recurrence coefficients $(b_n)_{n \ge 0}$. In particular we find
$$x_{1,n-1} + x_{2,n-1} + \cdots + x_{n-1,n-1} + b_{n-1} = x_{1,n} + x_{2,n} + \cdots + x_{n,n}. \qquad (1.5)$$
The interlacing of the zeros of $p_{n-1}$ and $p_n$, together with (1.5), then implies that the vector $(x_{1,n-1}, \ldots, x_{n-1,n-1}, b_{n-1})$ is majorized by $(x_{1,n}, x_{2,n}, \ldots, x_{n,n})$. In Section 2 we will give an explicit expression for a doubly stochastic matrix $A$ for which
$$\begin{pmatrix} x_{1,n-1} \\ x_{2,n-1} \\ \vdots \\ x_{n-1,n-1} \\ b_{n-1} \end{pmatrix} = A \begin{pmatrix} x_{1,n} \\ x_{2,n} \\ \vdots \\ x_{n-1,n} \\ x_{n,n} \end{pmatrix}.$$
Another interesting interlacing property involves the zeros of the associated polynomial
$$p_{n-1}^{(1)}(z) = a_1 \int \frac{p_n(z) - p_n(x)}{z - x}\, d\mu(x),$$
see, e.g., [8]. This polynomial appears naturally as the numerator of the Padé approximant to the function
$$f(z) = a_1 \int \frac{d\mu(x)}{z - x} = a_1 \sum_{k=0}^{\infty} \frac{m_k}{z^{k+1}}, \qquad m_k = \int x^k\, d\mu(x),$$
near infinity, i.e.,
$$p_n(z) f(z) - p_{n-1}^{(1)}(z) = O(1/z^{n+1}), \qquad z \to \infty.$$
The zeros of $p_{n-1}^{(1)}$ are again real and simple and we denote them by $y_{1,n-1} < y_{2,n-1} < \cdots < y_{n-1,n-1}$. They interlace with the zeros of $p_n$ in the sense that
$$x_{j,n} < y_{j,n-1} < x_{j+1,n}, \qquad 1 \le j \le n-1.$$
In fact, the partial fraction decomposition of the Padé approximant is
$$\frac{p_{n-1}^{(1)}(z)}{p_n(z)} = a_1 \sum_{k=1}^{n} \frac{\lambda_{k,n}}{z - x_{k,n}}, \qquad (1.6)$$
where the $\lambda_{k,n}$ are the Christoffel numbers
$$\lambda_{k,n} = \frac{1}{\sum_{j=0}^{n-1} p_j^2(x_{k,n})} = \frac{1}{a_n\, p_n'(x_{k,n})\, p_{n-1}(x_{k,n})}. \qquad (1.7)$$
The zeros of $p_{n-1}^{(1)}$ are equal to the eigenvalues of the Jacobi matrix $J_n^{(1)}$ obtained from $J_n$ by deleting the first row and first column:
$$J_n^{(1)} = \begin{pmatrix} b_1 & a_2 & 0 & 0 & \cdots & 0 \\ a_2 & b_2 & a_3 & 0 & \cdots & 0 \\ 0 & a_3 & b_3 & a_4 & & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & a_{n-2} & b_{n-2} & a_{n-1} \\ 0 & \cdots & 0 & 0 & a_{n-1} & b_{n-1} \end{pmatrix}. \qquad (1.8)$$
For the trace we therefore have
$$\operatorname{Tr} J_n^{(1)} = y_{1,n-1} + y_{2,n-1} + \cdots + y_{n-1,n-1} = b_1 + b_2 + \cdots + b_{n-1},$$
and hence
$$y_{1,n-1} + y_{2,n-1} + \cdots + y_{n-1,n-1} + b_0 = x_{1,n} + x_{2,n} + \cdots + x_{n,n}. \qquad (1.9)$$
Combining the interlacing property and (1.9) we then see that $(y_{1,n-1}, \ldots, y_{n-1,n-1}, b_0)$ is majorized by $(x_{1,n}, x_{2,n}, \ldots, x_{n,n})$. In Section 3 we will give an explicit expression for a doubly stochastic matrix $B$ for which
$$\begin{pmatrix} y_{1,n-1} \\ y_{2,n-1} \\ \vdots \\ y_{n-1,n-1} \\ b_0 \end{pmatrix} = B \begin{pmatrix} x_{1,n} \\ x_{2,n} \\ \vdots \\ x_{n-1,n} \\ x_{n,n} \end{pmatrix}.$$
Finally, in Section 4 we will prove a similar result for the zeros of the characteristic polynomial of the matrix obtained by deleting the $k$th row and column of the Jacobi matrix $J_n$.

We will frequently use Gaussian quadrature based on the zeros of orthogonal polynomials. Suppose that $\operatorname{supp}(\mu) \subset [a, b]$, that $\{x_{j,n},\ 1 \le j \le n\}$ are the zeros of the orthogonal polynomial $p_n$, and that $\{\lambda_{j,n},\ 1 \le j \le n\}$ are the corresponding Christoffel numbers; then for every function $f \in C^{2n}([a, b])$
$$\int f(x)\, d\mu(x) = \sum_{j=1}^{n} \lambda_{j,n} f(x_{j,n}) + \frac{f^{(2n)}(\xi)}{(2n)!\, \gamma_n^2}, \qquad (1.10)$$
where $\gamma_n$ is the leading coefficient of $p_n$ and $\xi \in (a, b)$ (see, e.g., [1, §1.4.2], [7, §3.4]). The quadrature sum therefore gives the integral exactly whenever $f$ is a polynomial of degree $\le 2n - 1$. The Christoffel numbers $\lambda_{j,n}$ given in (1.7) are also given by
$$\lambda_{j,n} = \int_a^b \frac{p_n^2(x)}{(x - x_{j,n})^2\, [p_n'(x_{j,n})]^2}\, d\mu(x) > 0. \qquad (1.11)$$
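As a numerical illustration of (1.4), (1.7) and (1.10), the sketch below, ours and not taken from the paper, builds the Jacobi matrix for the same assumed Legendre-type coefficients, takes its eigenvalues as the zeros $x_{j,n}$, and recovers the Christoffel numbers $\lambda_{j,n}$ as the squared first components of the normalized eigenvectors (the standard Golub-Welsch observation, which for a probability measure agrees with (1.7)). It then checks the exactness of the quadrature rule on a low-degree polynomial and the trace identity (1.5).

```python
import numpy as np

def jacobi_matrix(n):
    """Jacobi matrix (1.4) for the assumed coefficients a_k = k/sqrt(4k^2-1), b_k = 0."""
    a = np.array([k / np.sqrt(4.0 * k**2 - 1.0) for k in range(1, n)])  # a_1..a_{n-1}
    b = np.zeros(n)                                                      # b_0..b_{n-1}
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

def gauss_nodes_weights(n):
    nodes, vecs = np.linalg.eigh(jacobi_matrix(n))   # eigenvalues = zeros of p_n, ascending
    weights = vecs[0, :] ** 2                        # lambda_{j,n}; they sum to mu(R) = 1
    return nodes, weights

n = 6
x_n, lam_n = gauss_nodes_weights(n)
x_nm1, _ = gauss_nodes_weights(n - 1)

# (1.10) is exact for polynomials of degree <= 2n - 1; here int_{-1}^{1} x^4 dx/2 = 1/5:
print(np.allclose(lam_n @ x_n**4, 1.0 / 5.0))

# Trace identity (1.5); in this symmetric example both sides vanish, but the check is generic:
b_nm1 = 0.0
print(np.isclose(x_nm1.sum() + b_nm1, x_n.sum()))
```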
2 Zeros of consecutive orthogonal polynomials

Theorem 2.1. Let $(x_{1,n-1}, \ldots, x_{n-1,n-1})$ be the zeros of $p_{n-1}$, let $(x_{1,n}, \ldots, x_{n,n})$ be the zeros of $p_n$, and let $(a_{n+1}, b_n)_{n \ge 0}$ be the recurrence coefficients in (1.1) and (1.4). Then
$$\begin{pmatrix} x_{1,n-1} \\ x_{2,n-1} \\ \vdots \\ x_{n-1,n-1} \\ b_{n-1} \end{pmatrix} = A \begin{pmatrix} x_{1,n} \\ x_{2,n} \\ \vdots \\ x_{n-1,n} \\ x_{n,n} \end{pmatrix}, \qquad (2.1)$$
where $A = (a_{i,j})_{i,j=1}^{n}$ is a doubly stochastic matrix with entries
$$a_{i,j} = \begin{cases} \dfrac{a_n^2\, \lambda_{j,n}\, p_{n-1}^2(x_{j,n})\, \lambda_{i,n-1}\, p_n^2(x_{i,n-1})}{(x_{j,n} - x_{i,n-1})^2}, & 1 \le i \le n-1, \\[2ex] \lambda_{j,n}\, p_{n-1}^2(x_{j,n}), & i = n, \end{cases} \qquad (2.2)$$
where the $\lambda_{j,n}$ are the Christoffel numbers for $p_n$ and the $\lambda_{i,n-1}$ those for $p_{n-1}$.
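A direct numerical check of Theorem 2.1 can be sketched as follows; this is our illustration only, using the same assumed Legendre-type coefficients and the entries (2.2) as reconstructed above. The code builds $A$ from the eigen-decompositions of $J_n$ and $J_{n-1}$ and verifies that $A$ is doubly stochastic and satisfies (2.1). Since $b_k = 0$ in this example, the recurrence used in eval_p is the correspondingly simplified form of (1.1).

```python
import numpy as np

def coeff_a(k):
    return k / np.sqrt(4.0 * k**2 - 1.0)          # assumed a_k; b_k = 0 throughout

def gauss(n):
    """Zeros of p_n and Christoffel numbers lambda_{j,n} from the Jacobi matrix (1.4)."""
    a = np.array([coeff_a(k) for k in range(1, n)])
    J = np.diag(a, 1) + np.diag(a, -1)
    nodes, vecs = np.linalg.eigh(J)
    return nodes, vecs[0, :] ** 2

def eval_p(m, x):
    """Orthonormal p_m(x) via the three-term recurrence (1.1) with b_k = 0."""
    p_prev, p = np.ones_like(x), x / coeff_a(1)
    for k in range(1, m):
        p_prev, p = p, (x * p - coeff_a(k) * p_prev) / coeff_a(k + 1)
    return p if m >= 1 else p_prev

n = 6
x_n, lam_n = gauss(n)          # zeros of p_n and lambda_{j,n}
x_m, lam_m = gauss(n - 1)      # zeros of p_{n-1} and lambda_{i,n-1}
a_n, b_nm1 = coeff_a(n), 0.0

A = np.empty((n, n))
for i in range(n - 1):         # rows 1 <= i <= n-1 of (2.2)
    A[i, :] = (a_n**2 * lam_n * eval_p(n - 1, x_n)**2
               * lam_m[i] * eval_p(n, np.array([x_m[i]]))**2
               / (x_n - x_m[i])**2)
A[n - 1, :] = lam_n * eval_p(n - 1, x_n)**2        # last row of (2.2)

print(np.allclose(A.sum(axis=0), 1.0), np.allclose(A.sum(axis=1), 1.0))   # doubly stochastic
print(np.allclose(A @ x_n, np.append(x_m, b_nm1)))                        # relation (2.1)
```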