Positive Definite Matrices.pdf
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Matrix Analysis (Lecture 1)
Matrix Analysis (Lecture 1). Yikun Zhang*. March 29, 2018. (*School of Mathematics, Sun Yat-sen University)
Abstract. Linear algebra and matrix analysis have long played a fundamental and indispensable role in fields of science. Many modern research projects in applied science rely heavily on the theory of matrices. Meanwhile, matrix analysis and its ramifications are active fields of research in their own right. In this course, we aim to study some fundamental topics in matrix theory, such as eigen-pairs and equivalence relations of matrices, scrutinize the proofs of essential results, and dabble in some up-to-date applications of matrix theory. Our lecture will maintain a cohesive organization with the main stream of Horn's book [1], and sporadically complement necessary background knowledge omitted by the textbook.
1 Introduction. We begin our lectures with Chapter 1 of Horn's book [1], which focuses on eigenvalues, eigenvectors, and similarity of matrices. In this lecture, we review the concepts of eigenvalues and eigenvectors with which we are familiar in linear algebra, and investigate their connections with the coefficients of the characteristic polynomial. Here we first outline the main concepts in Chapter 1 (Eigenvalues, Eigenvectors, and Similarity).
1.1 Change of basis and similarity (Pages 39-40, 43). Let V be an n-dimensional vector space over the field F, which can be R, C, or even Z(p) (the integers modulo a specified prime number p), and let the list B1 = {v1, v2, ..., vn} be a basis for V.
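As a quick, self-contained illustration of the themes this lecture previews (change of basis, similarity, and the characteristic polynomial), and not taken from the notes themselves: under a change of basis a matrix A becomes P^{-1}AP, and the characteristic polynomial, hence the eigenvalues, are unchanged. A minimal SymPy sketch with an arbitrary example matrix:

```python
import sympy as sp

x = sp.symbols('x')

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])          # arbitrary example matrix
P = sp.Matrix([[1, 2, 0],
               [0, 1, 1],
               [1, 0, 1]])          # an invertible change-of-basis matrix (det = 3)

B = P.inv() * A * P                 # the same linear map expressed in the new basis

# Similar matrices share their characteristic polynomial and eigenvalues.
assert sp.expand(A.charpoly(x).as_expr() - B.charpoly(x).as_expr()) == 0
print(A.charpoly(x).as_expr())      # x**3 - 7*x**2 + 16*x - 12, i.e. (x-2)**2 (x-3)
print(A.eigenvals(), B.eigenvals()) # {2: 2, 3: 1} for both
```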
- Frames in 2-Inner Product Spaces
Iranian Journal of Mathematical Sciences and Informatics, Vol. 8, No. 2 (2013), pp. 123-130.
Frames in 2-Inner Product Spaces. Ali Akbar Arefijamaal* (corresponding author) and Ghadir Sadeghi, Department of Mathematics and Computer Sciences, Hakim Sabzevari University, Sabzevar, Iran. E-mail: [email protected], [email protected]. Received 16 November 2011; Accepted 29 April 2012. (c) 2013 Academic Center for Education, Culture and Research TMU.
Abstract. In this paper, we introduce the notion of a frame in a 2-inner product space and give some characterizations. These frames can be considered as a usual frame in a Hilbert space, so they share many useful properties with frames. Keywords: 2-inner product space, 2-norm space, frame, frame operator. 2010 Mathematics Subject Classification: Primary 46C50; Secondary 42C15.
1. Introduction and preliminaries. The concept of frames in Hilbert spaces was introduced by Duffin and Schaeffer [12] in 1952 to study some deep problems in nonharmonic Fourier series. Various generalizations of frames have been proposed: frames of subspaces [2, 6], pseudo-frames [18], oblique frames [10], continuous frames [1, 4, 14], and so on. The concept of frames in Banach spaces has been introduced by Grochenig [16], Casazza, Han and Larson [5], and Christensen and Stoeva [11]. The concept of linear 2-normed spaces was investigated by S. Gahler in 1965 [15] and has been developed extensively in different subjects by many authors [3, 7, 8, 13, 14, 17]. A concept related to a 2-normed space is the 2-inner product space, which has been intensively studied by many mathematicians in the last three decades. A systematic presentation of the recent results related to the theory of 2-inner product spaces, as well as an extensive list of the related references, can be found in the book [7].
- A Singularly Valuable Decomposition: The SVD of a Matrix (Dan Kalman)
A Singularly Valuable Decomposition: The SVD of a Matrix. Dan Kalman. Dan Kalman is an assistant professor at American University in Washington, DC. From 1985 to 1993 he worked as an applied mathematician in the aerospace industry. It was during that period that he first learned about the SVD and its applications. He is very happy to be teaching again and is avidly following all the discussions and presentations about the teaching of linear algebra.
Every teacher of linear algebra should be familiar with the matrix singular value decomposition (or SVD). It has interesting and attractive algebraic properties, and conveys important geometrical and theoretical insights about linear transformations. The close connection between the SVD and the well-known theory of diagonalization for symmetric matrices makes the topic immediately accessible to linear algebra teachers and, indeed, a natural extension of what these teachers already know. At the same time, the SVD has fundamental importance in several different applications of linear algebra. Gilbert Strang was aware of these facts when he introduced the SVD in his now classical text [22, p. 142], observing that "it is not nearly as famous as it should be." Golub and Van Loan ascribe a central significance to the SVD in their definitive explication of numerical matrix methods [8, p. xiv], stating that "perhaps the most recurring theme in the book is the practical and theoretical value" of the SVD. Additional evidence of the SVD's significance is its central role in a number of recent papers in Mathematics Magazine and the American Mathematical Monthly; for example, [2, 3, 17, 23].
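The "close connection" between the SVD and symmetric diagonalization that Kalman highlights can be seen numerically: for a symmetric matrix the singular values are the absolute values of the eigenvalues, and in general the squared singular values of A are the eigenvalues of the symmetric matrix A^T A. A small NumPy sketch with an arbitrary test matrix (my choice, not from the article):

```python
import numpy as np

# A small symmetric matrix (arbitrary example, not from the article).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigvals = np.linalg.eigvalsh(A)                   # eigenvalues of the symmetric matrix
singvals = np.linalg.svd(A, compute_uv=False)     # singular values of the same matrix

# For a symmetric matrix, singular values = |eigenvalues| (after sorting).
assert np.allclose(np.sort(np.abs(eigvals)), np.sort(singvals))

# More generally, the squared singular values of any A are the eigenvalues
# of A^T A, which is the symmetric matrix lurking behind the SVD.
assert np.allclose(np.sort(singvals**2), np.sort(np.linalg.eigvalsh(A.T @ A)))

print("eigenvalues:    ", eigvals)
print("singular values:", singvals)
```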
- Stirling Matrix via Pascal Matrix
Linear Algebra and its Applications 329 (2001) 49-59. www.elsevier.com/locate/laa
Stirling matrix via Pascal matrix. Gi-Sang Cheon (a,*) and Jin-Soo Kim (b). (a) Department of Mathematics, Daejin University, Pocheon 487-711, South Korea. (b) Department of Mathematics, Sungkyunkwan University, Suwon 440-746, South Korea. Received 11 May 2000; accepted 17 November 2000. Submitted by R.A. Brualdi. *Corresponding author. E-mail addresses: [email protected] (G.-S. Cheon), [email protected] (J.-S. Kim).
Abstract. The Pascal-type matrices obtained from the Stirling numbers of the first kind s(n,k) and of the second kind S(n,k) are studied, respectively. It is shown that these matrices can be factorized by the Pascal matrices. Also the LDU-factorization of a Vandermonde matrix of the form Vn(x, x+1, ..., x+n-1) for any real number x is obtained. Furthermore, some well-known combinatorial identities are obtained from the matrix representation of the Stirling numbers, and these matrices are generalized in one or two variables. (c) 2001 Elsevier Science Inc. All rights reserved. AMS classification: 05A19; 05A10. Keywords: Pascal matrix; Stirling number; Stirling matrix.
1. Introduction. For integers n and k with n ≥ k ≥ 0, the Stirling numbers of the first kind s(n,k) and of the second kind S(n,k) can be defined as the coefficients in the following expansion of a variable x (see [3, pp. 271-279]):

    [x]_n = Σ_{k=0}^{n} (-1)^{n-k} s(n,k) x^k   and   x^n = Σ_{k=0}^{n} S(n,k) [x]_k,   (1.1)

where

    [x]_n = x(x-1)···(x-n+1) if n ≥ 1, and [x]_n = 1 if n = 0.   (1.2)

It is known that for any n, k ≥ 0, the s(n,k), S(n,k) and [n]_k satisfy the following Pascal-type recurrence relations:

    s(n,k) = s(n-1,k-1) + (n-1) s(n-1,k),
    S(n,k) = S(n-1,k-1) + k S(n-1,k),   (1.3)
    [n]_k = [n-1]_k + k [n-1]_{k-1},

where s(n,0) = s(0,k) = S(n,0) = S(0,k) = [0]_k = 0 and s(0,0) = S(0,0) = 1, and moreover S(n,k) satisfies the following formula, known as the 'vertical' recurrence relation:

    S(n,k) = Σ_{l=k-1}^{n-1} C(n-1,l) S(l,k-1),

where C(n-1,l) denotes the binomial coefficient.
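The recurrences in (1.3) translate directly into code. The following Python sketch (mine, not from the paper) builds s(n,k) and S(n,k) exactly from those recurrences and checks both expansions in (1.1) symbolically with SymPy:

```python
from functools import lru_cache
import sympy as sp

@lru_cache(maxsize=None)
def s1(n, k):
    """Stirling numbers of the first kind from (1.3): s(n,k) = s(n-1,k-1) + (n-1)s(n-1,k)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return s1(n - 1, k - 1) + (n - 1) * s1(n - 1, k)

@lru_cache(maxsize=None)
def S2(n, k):
    """Stirling numbers of the second kind from (1.3): S(n,k) = S(n-1,k-1) + k S(n-1,k)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return S2(n - 1, k - 1) + k * S2(n - 1, k)

x = sp.symbols('x')

def falling(n):
    """[x]_n = x(x-1)...(x-n+1) as in (1.2), with [x]_0 = 1."""
    expr = sp.Integer(1)
    for i in range(n):
        expr *= (x - i)
    return expr

# Check both expansions in (1.1) for small n.
for n in range(7):
    assert sp.expand(falling(n) - sum((-1) ** (n - k) * s1(n, k) * x ** k
                                      for k in range(n + 1))) == 0
    assert sp.expand(x ** n - sum(S2(n, k) * falling(k) for k in range(n + 1))) == 0

print("expansions (1.1) hold for n = 0, ..., 6")
```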
- A. Completely Positive Maps
A. Completely Positive Maps.
A.1. A linear map θ: A → B between C∗-algebras is called completely positive if θ ⊗ id: A ⊗ Mat_n(C) → B ⊗ Mat_n(C) is positive for any n ∈ N. Such a map is automatically bounded, since any positive linear functional on a C∗-algebra is bounded. An operator T = Σ_{i,j} b_ij ⊗ e_ij ∈ B ⊗ Mat_n(C), where the e_ij's are the usual matrix units in Mat_n(C), is positive if and only if Σ_{i,j} b_i∗ b_ij b_j ≥ 0 for any b_1, ..., b_n ∈ B. Since every positive element in A ⊗ Mat_n(C) is the sum of n elements of the form Σ_{i,j} a_i∗ a_j ⊗ e_ij, we conclude that θ: A → B is completely positive if and only if

    Σ_{i,j=1}^{n} b_i∗ θ(a_i∗ a_j) b_j ≥ 0

for any n ∈ N, a_1, ..., a_n ∈ A and b_1, ..., b_n ∈ B.
A.2. If θ: A → B is completely positive and B ⊂ B(H), then there exist a Hilbert space K, a representation π: A → B(K) and a bounded operator V: H → K such that θ(a) = V∗π(a)V and π(A)VH is dense in K. The construction is as follows. Consider the algebraic tensor product A ⊗ H, and define a sesquilinear form on it by (a ⊗ ξ, c ⊗ ζ) = (θ(c∗a)ξ, ζ). Complete positivity of θ means exactly that this form is positive definite. Thus, taking the quotient by the space N = {x ∈ A ⊗ H | (x, x) = 0}, we get a pre-Hilbert space. Denote its completion by K. Then define a representation π: A → B(K) by π(a)[c ⊗ ζ] = [ac ⊗ ζ], where [x] denotes the image of x in (A ⊗ H)/N.
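A standard concrete illustration of the definition in A.1 (not part of this excerpt) is the transpose map on Mat_2(C): it is positive but not completely positive, and the failure is already visible for n = 2, since θ ⊗ id applied to the positive operator Σ_{i,j} e_ij ⊗ e_ij produces a matrix with a negative eigenvalue. A NumPy sketch:

```python
import numpy as np

n = 2

def e(i, j):
    """Matrix unit e_ij in Mat_n(C)."""
    m = np.zeros((n, n), dtype=complex)
    m[i, j] = 1.0
    return m

def theta(a):
    """The transpose map on Mat_n(C); it sends positive matrices to positive matrices."""
    return a.T

# P = sum_{i,j} e_ij (x) e_ij is positive: it equals n times a rank-one projection.
P = sum(np.kron(e(i, j), e(i, j)) for i in range(n) for j in range(n))
print("eigenvalues of P:            ", np.round(np.linalg.eigvalsh(P), 6))

# (theta (x) id)(P): apply theta to the left tensor factor of each term.
Q = sum(np.kron(theta(e(i, j)), e(i, j)) for i in range(n) for j in range(n))
print("eigenvalues of (theta (x) id)P:", np.round(np.linalg.eigvalsh(Q), 6))

# The -1 eigenvalue shows theta (x) id is not positive for n = 2, so the
# (positive) transpose map is not completely positive.
```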
- Tensor Products of Convex Cones, Part I: Mapping Properties, Faces, and Semisimplicity
Tensor Products of Convex Cones, Part I: Mapping Properties, Faces, and Semisimplicity. Josse van Dobben de Bruyn. 24 September 2020.
Abstract. The tensor product of two ordered vector spaces can be ordered in more than one way, just as the tensor product of normed spaces can be normed in multiple ways. Two natural orderings have received considerable attention in the past, namely the ones given by the projective and injective (or biprojective) cones. This paper aims to show that these two cones behave similarly to their normed counterparts, and furthermore extends the study of these two cones from the algebraic tensor product to completed locally convex tensor products. The main results in this paper are the following: (i) drawing parallels with the normed theory, we show that the projective/injective cone has mapping properties analogous to those of the projective/injective norm; (ii) we establish direct formulas for the lineality space of the projective/injective cone, in particular providing necessary and sufficient conditions for the cone to be proper; (iii) we show how to construct faces of the projective/injective cone from faces of the base cones, in particular providing a complete characterization of the extremal rays of the projective cone; (iv) we prove that the projective/injective tensor product of two closed proper cones is contained in a closed proper cone (at least in the algebraic tensor product).
1 Introduction. 1.1 Outline. Tensor products of ordered (topological) vector spaces have been receiving attention for more than 50 years ([Mer64], [HF68], [Pop68], [PS69], [Pop69], [DS70], [vGK10], [Wor19]), but the focus has mostly been on Riesz spaces ([Sch72], [Fre72], [Fre74], [Wit74], [Sch74, §IV.7], [Bir76], [FT79], [Nie82], [GL88], [Nie88], [Bla16]) or on finite-dimensional spaces ([BL75], [Bar76], [Bar78a], [Bar78b], [Bar81], [BLP87], [ST90], [Tam92], [Mul97], [Hil08], [HN18], [ALPP19]).
- Quadratic Forms and Definite Matrices: 1.1. Definition of a Quadratic Form
QUADRATIC FORMS AND DEFINITE MATRICES (Date: September 14, 2004).
1. DEFINITION AND CLASSIFICATION OF QUADRATIC FORMS.
1.1. Definition of a quadratic form. Let A denote an n x n symmetric matrix with real entries and let x denote an n x 1 column vector. Then Q = x'Ax is said to be a quadratic form. Note that

    Q = x'Ax = [x1 ... xn] [a11 ... a1n; ... ; an1 ... ann] [x1; ... ; xn]
             = [x1, x2, ..., xn] [Σ_i a1i xi; ... ; Σ_i ani xi]
             = a11 x1^2 + a12 x1 x2 + ... + a1n x1 xn
             + a21 x2 x1 + a22 x2^2 + ... + a2n x2 xn
             + ...
             + an1 xn x1 + an2 xn x2 + ... + ann xn^2          (1)
             = Σ_{i,j} aij xi xj.

For example, consider the matrix

    A = [ 1  2 ]
        [ 2  1 ]

and the vector x. Q is given by

    Q = x'Ax = [x1  x2] A [x1; x2]
             = [x1 + 2x2,  2x1 + x2] [x1; x2]
             = x1^2 + 2x1x2 + 2x1x2 + x2^2
             = x1^2 + 4x1x2 + x2^2.

1.2. Classification of the quadratic form Q = x'Ax. A quadratic form is said to be:
a: negative definite: Q < 0 when x ≠ 0
b: negative semidefinite: Q ≤ 0 for all x and Q = 0 for some x ≠ 0
c: positive definite: Q > 0 when x ≠ 0
d: positive semidefinite: Q ≥ 0 for all x and Q = 0 for some x ≠ 0
e: indefinite: Q > 0 for some x and Q < 0 for some other x
Consider as an example the 3x3 diagonal matrix D below and a general 3-element vector x:

    D = [ 1  0  0 ]
        [ 0  2  0 ]
        [ 0  0  4 ]

The general quadratic form is given by

    Q = x'Dx = [x1  x2  x3] D [x1; x2; x3]
             = [x1,  2x2,  4x3] [x1; x2; x3]
             = x1^2 + 2x2^2 + 4x3^2.

Note that for any real vector x ≠ 0, Q will be positive, because the square of any number is positive, the coefficients of the squared terms are positive, and the sum of positive numbers is always positive.
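The classification in 1.2 can be checked through the eigenvalues of the symmetric matrix A: Q is positive definite exactly when all eigenvalues are positive, indefinite when eigenvalues of both signs occur, and so on. A small NumPy sketch (the helper name and tolerance are my own choices) applied to the two matrices from this excerpt:

```python
import numpy as np

def classify_quadratic_form(A, tol=1e-12):
    """Classify Q = x'Ax for a real symmetric A via its eigenvalues."""
    w = np.linalg.eigvalsh(A)
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    if np.all(w <= tol):
        return "negative semidefinite"
    return "indefinite"

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])          # example from 1.1: Q = x1^2 + 4 x1 x2 + x2^2
D = np.diag([1.0, 2.0, 4.0])        # example from 1.2: Q = x1^2 + 2 x2^2 + 4 x3^2

print(classify_quadratic_form(A))   # indefinite (eigenvalues 3 and -1)
print(classify_quadratic_form(D))   # positive definite

# Direct check of Q = x'Ax at a sample vector.
x = np.array([1.0, -1.0])
print(float(x @ A @ x))             # 1 - 4 + 1 = -2 < 0, so A is not positive definite
```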
- Pascal Matrices (Alan Edelman and Gilbert Strang, Department of Mathematics, Massachusetts Institute of Technology; [email protected] and [email protected])
Pascal Matrices. Alan Edelman and Gilbert Strang, Department of Mathematics, Massachusetts Institute of Technology. [email protected] and [email protected].
Every polynomial of degree n has n roots; every continuous function on [0, 1] attains its maximum; every real symmetric matrix has a complete set of orthonormal eigenvectors. "General theorems" are a big part of the mathematics we know. We can hardly resist the urge to generalize further! Remove hypotheses, make the theorem tighter and more difficult, include more functions, move into Hilbert space, ... It's in our nature. The other extreme in mathematics might be called the "particular case". One specific function or group or matrix becomes special. It obeys the general rules, like everyone else. At the same time it has some little twist that connects familiar objects in a neat way. This paper is about an extremely particular case. The familiar object is Pascal's triangle. The little twist begins by putting that triangle of binomial coefficients into a matrix. Three different matrices—symmetric, lower triangular, and upper triangular—can hold Pascal's triangle in a convenient way. Truncation produces n by n matrices Sn and Ln and Un—the pattern is visible for n = 4:

    S4 = [ 1  1  1  1 ]    L4 = [ 1          ]    U4 = [ 1  1  1  1 ]
         [ 1  2  3  4 ]         [ 1  1       ]         [    1  2  3 ]
         [ 1  3  6 10 ]         [ 1  2  1    ]         [       1  3 ]
         [ 1  4 10 20 ]         [ 1  3  3  1 ]         [          1 ]

We mention first a very specific fact: The determinant of every Sn is 1. (If we emphasized det Ln = 1 and det Un = 1, you would write to the Editor.
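A short NumPy sketch (mine, not from the paper) that builds S_n, L_n, U_n from binomial coefficients and confirms the "very specific fact" det S_4 = 1; it also checks the factorization S_4 = L_4 U_4, which is not stated in the excerpt but follows from the Vandermonde convolution C(i+j, i) = Σ_k C(i,k) C(j,k):

```python
import numpy as np
from math import comb

def pascal_matrices(n):
    """Lower-triangular L, upper-triangular U = L^T, and symmetric S
    holding Pascal's triangle, truncated to n x n."""
    L = np.array([[comb(i, j) for j in range(n)] for i in range(n)])
    U = L.T
    S = np.array([[comb(i + j, i) for j in range(n)] for i in range(n)])
    return S, L, U

S4, L4, U4 = pascal_matrices(4)
print(S4)

# The "very specific fact": det S_n = 1 (and likewise for L_n and U_n).
print(round(np.linalg.det(S4)))                              # 1
print(round(np.linalg.det(L4)), round(np.linalg.det(U4)))    # 1 1

# Not in the excerpt, but the identity behind it: S_n = L_n U_n.
assert np.array_equal(S4, L4 @ U4)
```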
- Absorbing Representations with Respect to Closed Operator Convex Cones
ABSORBING REPRESENTATIONS WITH RESPECT TO CLOSED OPERATOR CONVEX CONES. JAMES GABE AND EFREN RUIZ.
Abstract. We initiate the study of absorbing representations of C∗-algebras with respect to closed operator convex cones. We completely determine when such absorbing representations exist, which leads to the question of characterising when a representation is absorbing, as in the classical Weyl–von Neumann type theorem of Voiculescu. In the classical case, this was solved by Elliott and Kucerovsky who proved that a representation is nuclearly absorbing if and only if it induces a purely large extension. By considering a related problem for extensions of C∗-algebras, which we call the purely large problem, we ask when a purely largeness condition similar to the one defined by Elliott and Kucerovsky implies absorption with respect to some given closed operator convex cone. We solve this question for a special type of closed operator convex cone induced by actions of finite topological spaces on C∗-algebras. As an application of this result, we give K-theoretic classification for certain C∗-algebras containing a purely infinite, two-sided, closed ideal for which the quotient is an AF algebra. This generalises a similar result by the second author, S. Eilers and G. Restorff in which all extensions had to be full.
1. Introduction. The study of absorbing representations dates back, in retrospect, more than a century to Weyl [Wey09], who proved that any bounded self-adjoint operator on a separable Hilbert space is a compact perturbation of a diagonal operator. This implies that the only points in the spectrum of a self-adjoint operator which are affected by compact perturbation are the isolated points of finite multiplicity.
- arXiv:1904.01037v3 [math.GR]: An Effective Lie–Kolchin Theorem for Quasi-Unipotent Matrices
AN EFFECTIVE LIE–KOLCHIN THEOREM FOR QUASI-UNIPOTENT MATRICES. THOMAS KOBERDA, FENG LUO, AND HONGBIN SUN.
Abstract. We establish an effective version of the classical Lie–Kolchin Theorem. Namely, let A, B ∈ GL_m(C) be quasi-unipotent matrices such that the Jordan Canonical Form of B consists of a single block, and suppose that for all k ≥ 0 the matrix AB^k is also quasi-unipotent. Then A and B have a common eigenvector. In particular, ⟨A, B⟩ < GL_m(C) is a solvable subgroup. We give applications of this result to the representation theory of mapping class groups of orientable surfaces.
1. Introduction. Let V be a finite dimensional vector space over an algebraically closed field. In this paper, we study the structure of certain subgroups of GL(V) which contain "sufficiently many" elements of a relatively simple form. We are motivated by the representation theory of the mapping class group of a surface of hyperbolic type. If S is an orientable surface of genus 2 or more, the mapping class group Mod(S) is the group of homotopy classes of orientation preserving homeomorphisms of S. The group Mod(S) is generated by certain mapping classes known as Dehn twists, which are defined for essential simple closed curves of S. Here, an essential simple closed curve is a free homotopy class of embedded copies of S^1 in S which is homotopically essential, in that the homotopy class represents a nontrivial conjugacy class in π_1(S) which is not the homotopy class of a boundary component or a puncture of S.
- Seminar VII for the Course GROUP THEORY IN PHYSICS (Michael Flohr)
Seminar VII for the course GROUP THEORY IN PHYSICS. Michael Flohr. The classical Lie groups. 25 January 2005.
MATRIX LIE GROUPS. Most Lie groups one ever encounters in physics are realized as matrix Lie groups and thus as subgroups of GL(n, R) or GL(n, C). This is the group of invertible n × n matrices with coefficients in R or C, respectively. It is a Lie group, since, as an open subset of the vector space of n × n matrices, it forms a manifold. Matrix multiplication is certainly a differentiable map, as is taking the inverse via Cramer's rule. The only condition defining the open subset is that the determinant must not be zero, which implies that dim_K GL(n, K) = n^2 is the same as the dimension of the vector space M_n(K). However, GL(n, R) is not connected, because we cannot move continuously from a matrix with determinant less than zero to one with determinant larger than zero. It is worth mentioning that gl(n, K) is the vector space of all n × n matrices over the field K, equipped with the standard commutator as Lie bracket. We can describe most other Lie groups as subgroups of GL(n, K) for either K = R or K = C. There are two ways to do so. Firstly, one can give restricting equations on the coefficients of the matrices. Secondly, one can find subgroups of the automorphisms of V ≅ K^n which conserve a given structure on K^n. In the following, we give some examples of this: SL(n, K).
- Inertia of the Matrix [(p_i + p_j)^r]
isid/ms/2013/12, October 20, 2013. http://www.isid.ac.in/estatmath/eprints
Inertia of the matrix [(p_i + p_j)^r]. Rajendra Bhatia and Tanvi Jain. Indian Statistical Institute, Delhi Centre, 7, SJSS Marg, New Delhi 110 016, India.
Abstract. Let p_1, ..., p_n be positive real numbers. It is well known that for every r < 0 the matrix [(p_i + p_j)^r] is positive definite. Our main theorem gives a count of the number of positive and negative eigenvalues of this matrix when r > 0. Connections with some other matrices that arise in Loewner's theory of operator monotone functions and in the theory of spline interpolation are discussed.
1. Introduction. Let p_1, p_2, ..., p_n be distinct positive real numbers. The n × n matrix C = [1/(p_i + p_j)] is known as the Cauchy matrix. The special case p_i = i gives the Hilbert matrix H = [1/(i + j)]. Both matrices have been studied by several authors in diverse contexts and are much used as test matrices in numerical analysis. The Cauchy matrix is known to be positive definite. It possesses a stronger property: for each r > 0 the entrywise power C^{∘r} = [1/(p_i + p_j)^r] is positive definite. (See [4] for a proof.) The object of this paper is to study positivity properties of the related family of matrices

    P_r = [(p_i + p_j)^r],   r ≥ 0.   (1)

The inertia of a Hermitian matrix A is the triple

    In(A) = (π(A), ζ(A), ν(A)),

in which π(A), ζ(A) and ν(A) stand for the number of positive, zero, and negative eigenvalues of A, respectively.
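To see the object of study concretely: the inertia of P_r = [(p_i + p_j)^r] can be read off numerically by counting eigenvalue signs. In the NumPy sketch below (sample points and exponents are my own choices), r < 0 reproduces the positive definiteness mentioned in the abstract, while the r > 0 cases show both positive and negative eigenvalues, which is the sign pattern the paper's main theorem counts:

```python
import numpy as np

def inertia(A, tol=1e-10):
    """Return (pi, zeta, nu): the numbers of positive, zero, and negative eigenvalues."""
    w = np.linalg.eigvalsh(A)
    return (int(np.sum(w > tol)), int(np.sum(np.abs(w) <= tol)), int(np.sum(w < -tol)))

p = np.array([1.0, 2.0, 3.5, 5.0, 7.25])   # distinct positive numbers (arbitrary sample)
P = p[:, None] + p[None, :]                # the matrix [p_i + p_j]

for r in [-1.0, -0.5, 0.5, 1.0, 2.5]:
    print(f"r = {r:5.2f}   inertia of [(p_i + p_j)^r] =", inertia(P ** r))

# r < 0 gives (n, 0, 0), i.e. positive definite, as stated in the abstract;
# r > 0 gives eigenvalues of both signs, whose counts are the paper's subject.
```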