Research Article Inversion of General Cyclic Heptadiagonal Matrices


Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2013, Article ID 321032, 9 pages. http://dx.doi.org/10.1155/2013/321032

Research Article: Inversion of General Cyclic Heptadiagonal Matrices

A. A. Karawia, Mathematics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt. Correspondence should be addressed to A. A. Karawia; [email protected]

Received 23 December 2012; Revised 26 February 2013; Accepted 27 February 2013. Academic Editor: Joao B. R. Do Val.

Copyright © 2013 A. A. Karawia. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. We describe a reliable symbolic computational algorithm for inverting general cyclic heptadiagonal matrices by using parallel computing along with recursion. Its computational cost is 21n^2 - 48n - 88 operations. The algorithm is implementable in Computer Algebra Systems (CAS) such as MAPLE, MATLAB, and MATHEMATICA. Two examples are presented for the sake of illustration.

1. Introduction

The n x n general cyclic heptadiagonal matrices take the form

    A = [seven-banded matrix with cyclic corner entries],   (1)

where n >= 7. [Display (1) is garbled in the extracted text; its nonzero pattern is the main diagonal, the first three superdiagonals and the first three subdiagonals, plus the cyclic corner entries that place nonzeros in the upper-right block of the first three rows and the lower-left block of the last three rows.]

The inverses of cyclic heptadiagonal matrices are usually required in science and engineering applications; for special cases and more details, see [1-9]. The motivation of the current paper is to establish efficient algorithms for inverting cyclic heptadiagonal matrices of the form (1) and for solving linear systems of the form

    Ax = b,   (2)

where b = (b_1, b_2, ..., b_n)^T is given and x = (x_1, x_2, ..., x_n)^T is the unknown vector.

To the best of our knowledge, the inversion of a general cyclic heptadiagonal matrix of the form (1) has not been considered. Very recently, in [5], the inversion of a general cyclic pentadiagonal matrix using recursion was studied without imposing any restrictive conditions on the elements of the matrix. In this paper we compute the inverse of a general cyclic heptadiagonal matrix of the form (1), likewise without imposing any restrictive conditions on the elements of the matrix in (1). Our approach is mainly based on obtaining the elements of the last five columns of A^{-1} in suitable forms via the Doolittle LU factorization [10] along with parallel computation [7]. The elements of the remaining (n - 5) columns of A^{-1} may then be obtained using relevant recursive relations. The inversion algorithm of this paper is a natural generalization of the algorithm presented in [5]. A symbolic algorithm is developed in order to remove all cases where the numerical algorithm fails. Many algorithms for solving banded linear systems need pivoting, for example the Gaussian elimination algorithm [10-12]. Overall, pivoting adds operations to the computational cost of an algorithm, but these additional operations are sometimes necessary for the algorithm to work at all.

The paper is organized as follows. In Section 2, a new symbolic computational algorithm, one that will not break down, is constructed. In Section 3, two illustrative examples are given. Conclusions of the work are given in Section 4.

2. Main Results

In this section we focus on the construction of new symbolic computational algorithms for computing the determinant and the inverse of general cyclic heptadiagonal matrices. The solution of cyclic heptadiagonal linear systems of the form (2) will also be taken into account. Firstly, we compute the LU factorization of the matrix A:

    A = LU,   (3)

where L is unit lower triangular, with multipliers on its first three subdiagonals and dense fill-in in its last two rows, and U is upper triangular, with entries on its first three superdiagonals and dense fill-in in its last two columns. [Displays (4) and (5) of L and U, and the piecewise recurrences (6) defining their elements, are garbled in the extracted text and are not reproduced here.]

We also have

    det(A) = prod_{i=1}^{n} d_i,   (7)

where d_1, d_2, ..., d_n are the pivots, i.e., the diagonal elements of U.

Remark 1. It is not difficult to prove that the LU decomposition (3) exists only if d_i != 0 for i = 1(1)(n-1) (the pivoting elements). Moreover, the cyclic heptadiagonal matrix of the form (1) has an inverse if, in addition, d_n != 0. Pivoting can be omitted by introducing an auxiliary parameter x in Algorithm 1 given later, so no pivoting is included in our algorithm.

At this point it is convenient to formulate our first result. It is a symbolic algorithm for computing the determinant of a cyclic heptadiagonal matrix of the form (1), and it can be considered a natural generalization of the symbolic algorithm DETCPENTA in [5].

Algorithm 1. To compute det(A) for the cyclic heptadiagonal matrix A in (1), we may proceed as follows.

Step 1. Initialize the factorization quantities for the first three rows and columns; whenever a pivot d_i evaluates to zero, replace it by the symbolic name x (so the algorithm never breaks down).
Step 2. For i = 4 to n: compute and simplify the band elements of L and U; if d_i = 0, set d_i = x.
Step 3. For i = 4 to n - 5: compute and simplify the fill-in column elements of U.
Step 4. For i = 4 to n - 4: compute and simplify the fill-in row elements h_i of L and the fill-in column elements v_i of U.
Step 5. Compute and simplify the boundary quantities d_{n-4}, ..., d_n and the remaining fill-in entries, replacing any vanishing pivot by x.
Step 6. Compute det(A) = (prod_{i=1}^{n} d_i) evaluated at x = 0.

[The explicit update formulas inside Steps 1-5 follow the recurrences (6); they are garbled in the extracted text.]

The symbolic Algorithm 1 will be referred to as DETCHEPTA. The computational cost of this algorithm is 52n - 195 operations. The new algorithm DETCHEPTA is very useful to check the nonsingularity of the matrix we get when we consider, for example, the solution of cyclic heptadiagonal linear systems of the form (2).

[Display (9): an n x 5 zero-one matrix whose columns, labeled (n), (n-1), (n-2), (n-3), (n-4), are the unit vectors e_n, e_{n-1}, e_{n-2}, e_{n-3}, e_{n-4}; it selects the last five columns of A^{-1}.]
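The determinant identity (7), det(A) as the product of the pivots d_i of an LU factorization without pivoting, can be illustrated with a short sketch. This is a dense O(n^3) illustration of the identity, not the paper's O(n)-cost banded DETCHEPTA; exact rational arithmetic stands in for the symbolic safeguard, and a zero pivot simply raises an error where the paper would substitute the symbol x.

```python
from fractions import Fraction

def det_via_doolittle(A):
    """Determinant as the product of the pivots of an LU factorization
    without pivoting, illustrating det(A) = d_1 * d_2 * ... * d_n (eq. (7)).
    Dense sketch; DETCHEPTA exploits the heptadiagonal band structure."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]  # exact arithmetic
    det = Fraction(1)
    for k in range(n):
        if U[k][k] == 0:
            raise ZeroDivisionError("zero pivot: symbolic fallback needed")
        det *= U[k][k]                  # accumulate the pivot d_k
        for i in range(k + 1, n):       # eliminate column k below the pivot
            m = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return det
```

For integer input the result is an exact integer-valued `Fraction`, which makes the nonsingularity check (det != 0) reliable.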
Recommended publications
  • Solving Systems of Linear Equations by Gaussian Elimination
    Chapter 3: Solving Systems of Linear Equations by Gaussian Elimination (N. Nassif and D. Fayyad). 3.1 Mathematical Preliminaries. In this chapter we consider the problem of computing the solution of a system of n linear equations in n unknowns. The scalar form of that system (S) is: a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1; a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2; ...; a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n. Written in matrix form, (S) is equivalent to A x = b (3.1), where the coefficient square matrix A = (a_ij) ∈ R^{n,n}, and the column vectors x = (x_1, ..., x_n)^T and b = (b_1, ..., b_n)^T belong to R^{n,1} ≅ R^n. We assume that the basic linear algebra properties for systems of linear equations like (3.1) are satisfied. Specifically: Proposition 3.1. The following statements are equivalent: 1. System (3.1) has a unique solution. 2. det(A) ≠ 0. 3. A is invertible. In this chapter, our objective is to present the basic ideas of a linear system solver. It consists of two main procedures allowing to solve (3.1) efficiently. 1. The first, referred to as Gauss elimination (or reduction), reduces (3.1) into an equivalent system of linear equations whose matrix is upper triangular: specifically, one shows in section 4 that A x = b ⟺ U x = c, where c ∈ R^n and U ∈ R^{n,n} is upper triangular with entries u_11, u_12, ...
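The two procedures the chapter describes, reduction to an upper-triangular system U x = c followed by back substitution, can be sketched as follows. This is a minimal plain-Python illustration (not the book's code), with partial pivoting added for numerical stability:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination (reduction to an
    upper-triangular system) followed by back substitution."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        # partial pivoting: bring the largest entry of column k to row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):       # eliminate below the pivot
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):      # back substitution on U x = c
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```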
  • Implementation and Performance Analysis of a Parallel Oil Reservoir Simulator Tool Using a CG Method on a GPU-Based System
    2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation. Implementation and Performance Analysis of a Parallel Oil Reservoir Simulator Tool using a CG Method on a GPU-Based System. Leila Ismail (Computer and Software Engineering, HPGCL Research Lab, College of IT, UAE University, Al-Ain, UAE; Correspondence Author, Email: [email protected]), Jamal Abou-Kassem (Department of Chemical and Petroleum Engineering, College of Engineering, UAE University, Al-Ain, UAE), and Bibrak Qamar (HPGCL Research Lab, College of IT, UAE University, Al-Ain, UAE; Email: [email protected]). 978-1-4799-4923-6/14 $31.00 © 2014 IEEE. DOI 10.1109/UKSim.2014.113. Our GPU-based implementation is described in section IV; our experiments and analysis of results are presented in section V; finally, section VI concludes our work. II. OVERALL IMPLEMENTATION OF THE OIL RESERVOIR SIMULATOR. The above formula is the transmissibility in the x-direction, where μ is the viscosity, B is the formation volume factor, Δx is the dimension of a grid block in the x-direction, Ax is the area of a grid block (y-dimension * height), Kx is the permeability in the x-direction, and βc is the transmissibility conversion factor. Based on the values computed for the transmissibility and the production rates of the wells, the simulator then generates the system Ax = d, where A is the coefficient matrix, x is the solution vector for the value of pressure in each grid block of the reservoir, and d is a known vector which represents the right-hand side of the generated system of equations.
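The CG solver the paper parallelizes on the GPU can be sketched in its textbook serial form. This is a generic unpreconditioned conjugate gradient for symmetric positive definite systems, taking the matrix only as a mat-vec callback; the names and structure are illustrative, not the authors' implementation:

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Textbook unpreconditioned CG for S.P.D. A, given as a mat-vec
    routine; the mat-vec and the inner products are exactly the kernels
    a GPU implementation would offload."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                            # residual r = b - A x, with x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x
```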
  • Fast Singular Value Thresholding Without Singular Value Decomposition∗
    METHODS AND APPLICATIONS OF ANALYSIS. © 2013 International Press. Vol. 20, No. 4, pp. 335-352, December 2013. FAST SINGULAR VALUE THRESHOLDING WITHOUT SINGULAR VALUE DECOMPOSITION. JIAN-FENG CAI and STANLEY OSHER. Abstract. Singular value thresholding (SVT) is a basic subroutine in many popular numerical schemes for solving nuclear norm minimization that arises from low-rank matrix recovery problems such as matrix completion. The conventional approach for SVT is first to find the singular value decomposition (SVD) and then to shrink the singular values. However, such an approach is time-consuming under some circumstances, especially when the rank of the resulting matrix is not significantly low compared to its dimension. In this paper, we propose a fast algorithm for directly computing SVT for general dense matrices without using SVDs. Our algorithm is based on matrix Newton iteration for matrix functions, and the convergence is theoretically guaranteed. Numerical experiments show that our proposed algorithm is more efficient than the SVD-based approaches for general dense matrices. Key words. Low rank matrix, nuclear norm minimization, matrix Newton iteration. AMS subject classifications. 65F30, 65K99. 1. Introduction. Singular value thresholding (SVT) introduced in [7] is a key subroutine in many popular numerical schemes (e.g. [7, 12, 13, 52, 54, 66]) for solving nuclear norm minimization that arises from low-rank matrix recovery problems such as matrix completion [13-15, 60]. Let Y ∈ R^{m×n} be a given matrix, and Y = UΣV^T be its singular value decomposition (SVD), where U and V are orthonormal matrices and Σ = diag(σ_1, σ_2, ..., σ_s) is the diagonal matrix with diagonals being the singular values of Y.
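The conventional SVD-based SVT that the paper seeks to avoid is a three-line procedure: compute the SVD, soft-threshold the singular values, reassemble. A sketch assuming NumPy is available (the function name `svt` and the threshold parameter `tau` are illustrative):

```python
import numpy as np

def svt(Y, tau):
    """Conventional singular value thresholding: full SVD, then shrink
    each singular value by tau (soft-thresholding). This is the baseline
    the paper replaces with an SVD-free matrix Newton iteration."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-threshold the spectrum
    return U @ np.diag(s_shrunk) @ Vt
```

`svt(Y, tau)` is exactly the proximal operator of `tau * ||.||_*` (the nuclear norm), which is why it appears inside matrix completion solvers.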
  • Spectral Properties of Anti-Heptadiagonal Persymmetric Hankel Matrices
    Spectral properties of anti-heptadiagonal persymmetric Hankel matrices. João Lita da Silva, Department of Mathematics and GeoBioTec, Faculty of Sciences and Technology, NOVA University of Lisbon, Quinta da Torre, 2829-516 Caparica, Portugal. (arXiv:1907.00260v1 [math.RA], 29 Jun 2019.) Abstract. In this paper we express the eigenvalues of anti-heptadiagonal persymmetric Hankel matrices as the zeros of explicit polynomials, giving also a representation of its eigenvectors. We present also an expression depending on localizable parameters to compute its integer powers. In particular, an explicit formula not depending on any unknown parameter for the inverse of anti-heptadiagonal persymmetric Hankel matrices is provided. Key words: Anti-heptadiagonal matrix, Hankel matrix, eigenvalue, eigenvector, diagonalization. 2010 Mathematics Subject Classification: 15A18, 15B05. 1. Introduction. The importance of Hankel matrices in computational mathematics and engineering is well-known. As a matter of fact, these types of matrices have not only varied and numerous relations with polynomial computations (see, for instance, [2]) but also applications in engineering problems of system and control theory (see [4], [12] or [14] and the references therein). Recently, several authors have studied particular cases of these matrices in order to derive explicit expressions for their powers (see [7], [11], [16], [17], [19], [20] among others). The aim of this paper is to explore spectral properties of general anti-heptadiagonal persymmetric Hankel matrices, namely locating their eigenvalues and getting an explicit representation of their eigenvectors. Additionally, it is our purpose to announce formulae for the computation of their integer powers using, thereunto, a diagonalization of the referred matrices. Particularly, an expression free of any unknown parameter to calculate the inverse of any anti-heptadiagonal persymmetric Hankel matrix (assuming its nonsingularity) is made available.
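A minimal construction can make the matrix class under study concrete: a Hankel matrix is constant along anti-diagonals, anti-heptadiagonal means only the seven central anti-diagonals are nonzero, and persymmetry (symmetry about the anti-diagonal) then forces the generating sequence to be palindromic. This sketch and its parameter names are illustrative, not the paper's notation:

```python
def anti_heptadiagonal_hankel(n, band):
    """Build an n x n anti-heptadiagonal persymmetric Hankel matrix.
    band = [c3, c2, c1, c0]: c0 sits on the main anti-diagonal and
    c1, c2, c3 on the anti-diagonals at offsets 1, 2, 3 on either side
    (mirrored values, which is exactly what persymmetry requires)."""
    h = [0.0] * (2 * n - 1)             # generating sequence: A[i][j] = h[i+j]
    c3, c2, c1, c0 = band
    for off, val in [(0, c0), (1, c1), (-1, c1), (2, c2), (-2, c2), (3, c3), (-3, c3)]:
        k = (n - 1) + off               # index of that anti-diagonal in h
        if 0 <= k < 2 * n - 1:
            h[k] = val
    return [[h[i + j] for j in range(n)] for i in range(n)]
```

By construction every matrix returned is symmetric (as all Hankel matrices are) and persymmetric: A[i][j] equals A[n-1-j][n-1-i].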
  • A Simplex-Type Voronoi Algorithm Based on Short Vector Computations of Copositive Quadratic Forms
    Simons workshop "Lattices: Geometry, Algorithms and Hardness", Berkeley, February 21st 2020. A simplex-type Voronoi algorithm based on short vector computations of copositive quadratic forms. Achill Schürmann (Universität Rostock), based on work with Mathieu Dutour Sikirić and Frank Vallentin. Perfect Forms (for Q ∈ S^n_{>0} positive definite). DEF: min(Q) = min over x ∈ Z^n \ {0} of Q[x] is the arithmetical minimum; Q is perfect ⟺ Q is uniquely determined by min(Q) and Min Q = { x ∈ Z^n : Q[x] = min(Q) }. V(Q) = cone{ x x^t : x ∈ Min Q } is the Voronoi cone of Q (Voronoi cones are full dimensional if and only if Q is perfect!). THM: Voronoi cones give a polyhedral tessellation of S^n_{>0}, and there are only finitely many up to GL_n(Z)-equivalence. Voronoi's Reduction Theory. GL_n(Z) acts on S^n_{>0} by Q ↦ U^t Q U. (Georgy Voronoi, 1868-1908.) The task of a reduction theory is to provide a fundamental domain; Voronoi's algorithm gives a recipe for the construction of a complete list of such polyhedral cones up to GL_n(Z)-equivalence. Ryshkov Polyhedron. The set of all positive definite quadratic forms / matrices with arithmetical minimum at least 1 is called the Ryshkov polyhedron: R = { Q ∈ S^n_{>0} : Q[x] ≥ 1 for all x ∈ Z^n \ {0} }. R is a locally finite polyhedron; vertices of R are perfect forms; α ↦ (det(Q + αQ'))^{1/n} is strictly concave on S^n_{>0}. Voronoi's Algorithm: Start with a perfect form Q. 1.
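The arithmetical minimum min(Q) and the minimal-vector set Min Q from the definitions above can be brute-forced in tiny dimensions. Real Voronoi-algorithm implementations use proper short-vector enumeration; the finite search box here is an assumption of this sketch:

```python
from itertools import product

def arithmetical_minimum(Q, box=3):
    """Brute-force min(Q) = min Q[x] over nonzero integer vectors with
    entries in [-box, box], and the minimal vectors attaining it.
    Only valid when the true minimal vectors fit inside the box."""
    n = len(Q)
    best, best_vecs = None, []
    for x in product(range(-box, box + 1), repeat=n):
        if all(xi == 0 for xi in x):
            continue
        val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if best is None or val < best:
            best, best_vecs = val, [x]
        elif val == best:
            best_vecs.append(x)
    return best, best_vecs
```

For the root lattice form A2, Q = [[2, 1], [1, 2]], this recovers min(Q) = 2 attained by the six root vectors, matching the fact that A2 is perfect.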
  • Recursion Formulae for the Characteristic Polynomial of Symmetric Banded Matrices
    Recursion Formulae for the Characteristic Polynomial of Symmetric Banded Matrices. Werner Kratz and Markus Tentler. Preprint Series: 2007-15, Fakultät für Mathematik und Wirtschaftswissenschaften, Universität Ulm. Abstract. In this article we treat the algebraic eigenvalue problem for real, symmetric, and banded matrices of size N × N, say. For symmetric, tridiagonal matrices, there is a well-known two-term recursion to evaluate the characteristic polynomials of its principal submatrices. This recursion is of complexity O(N) and it requires additions and multiplications only. Moreover, it is used as the basis for a numerical algorithm to compute particular eigenvalues of the matrix via bisection. We derive similar recursion formulae with the same complexity O(N) for symmetric matrices with arbitrary bandwidth, containing divisions. The main results are division-free recursions for penta- and heptadiagonal symmetric matrices. These recursions yield, similarly as in the tridiagonal case, effective (with complexity O(N)), fast, and stable algorithms to compute their eigenvalues. Running head: Recursion formulae for banded matrices. Key words: Banded matrix, eigenvalue problem, Sturm-Liouville equation, pentadiagonal matrix, heptadiagonal matrix, bisection method. AMS subject classification: 15A18; 65F15, 15A15, 39A10, 15A24. W. Kratz and M. Tentler, Universität Ulm, Institut für Angewandte Analysis, D-89069 Ulm, Germany. e-mail: [email protected]; e-mail: [email protected]. 1. Introduction. In this article we consider the algebraic eigenvalue problem for real, symmetric, and banded matrices of size N × N, say.
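The well-known two-term recursion for tridiagonal matrices that the abstract takes as its starting point can be written down directly; this is the classical recursion, not the paper's new penta/heptadiagonal formulae:

```python
def charpoly_tridiagonal(a, b, lam):
    """Two-term recursion for the characteristic polynomials p_k of the
    leading principal submatrices of a symmetric tridiagonal matrix with
    diagonal a[0..N-1] and off-diagonal b[0..N-2], evaluated at lam:
        p_0 = 1,  p_1 = a_1 - lam,
        p_k = (a_k - lam) * p_{k-1} - b_{k-1}^2 * p_{k-2}.
    Returns [p_0, ..., p_N]; p_N is det(A - lam*I), and the sign changes
    along the sequence drive the bisection method the abstract mentions."""
    p = [1.0, a[0] - lam]
    for k in range(2, len(a) + 1):
        p.append((a[k - 1] - lam) * p[k - 1] - b[k - 2] ** 2 * p[k - 2])
    return p
```

Evaluating the whole sequence costs O(N) additions and multiplications per value of lam, exactly the complexity the abstract cites.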
  • The Complete Positivity of Symmetric Tridiagonal and Pentadiagonal Matrices (arXiv:2009.05100v2)
    THE COMPLETE POSITIVITY OF SYMMETRIC TRIDIAGONAL AND PENTADIAGONAL MATRICES. LEI CAO, DARIAN MCLAREN, AND SARAH PLOSKER. Abstract. We provide a decomposition that is sufficient in showing when a symmetric tridiagonal matrix A is completely positive. Our decomposition can be applied to a wide range of matrices. We give alternate proofs for a number of related results found in the literature in a simple, straightforward manner. We show that the cp-rank of any irreducible tridiagonal doubly stochastic matrix is equal to its rank. We then consider symmetric pentadiagonal matrices, proving some analogous results, and providing two different decompositions sufficient for complete positivity. We illustrate our constructions with a number of examples. (arXiv:2009.05100v2 [math.CO], 10 Mar 2021.) 1. Preliminaries. All matrices herein will be real-valued. Let A be an n × n symmetric tridiagonal matrix with diagonal entries a_1, ..., a_n and off-diagonal entries b_1, ..., b_{n-1}, i.e., A has a_i in position (i, i), b_i in positions (i, i+1) and (i+1, i), and zeros elsewhere. We are often interested in the case where A is also doubly stochastic, in which case we have a_i = 1 − b_{i−1} − b_i for i = 1, 2, ..., n, with the convention that b_0 = b_n = 0. It is easy to see that if a tridiagonal matrix is doubly stochastic, it must be symmetric, so the additional hypothesis of symmetry can be dropped in that case. We are interested in positivity conditions for symmetric tridiagonal and pentadiagonal matrices. A stronger condition than positive semidefiniteness, known as complete positivity, has applications in a variety of areas of study, including block designs, maximin efficiency-robust tests, modelling DNA evolution, and more [5, Chapter 2], as well as recent use in mathematical optimization and quantum information theory (see [14] and the references therein).
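The doubly stochastic tridiagonal setup a_i = 1 − b_{i−1} − b_i (with b_0 = b_n = 0) is easy to realize concretely; a sketch with an illustrative function name, showing that the off-diagonal entries alone determine the matrix:

```python
def tridiag_doubly_stochastic(b):
    """Build the symmetric tridiagonal doubly stochastic matrix determined
    by its off-diagonal entries b[0..n-2], using a_i = 1 - b_{i-1} - b_i
    with the convention b_0 = b_n = 0 (as in the paper's setup)."""
    n = len(b) + 1
    bb = [0.0] + list(b) + [0.0]        # pad so bb[i] = b_i with b_0 = b_n = 0
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0 - bb[i] - bb[i + 1]
        if i + 1 < n:
            A[i][i + 1] = A[i + 1][i] = bb[i + 1]
    return A
```

Every row and column sums to 1 by construction; choosing the b_i nonnegative with b_{i−1} + b_i ≤ 1 keeps all entries nonnegative.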
  • Combinatorial Optimization, Packing and Covering
    Combinatorial Optimization: Packing and Covering. Gérard Cornuéjols, Carnegie Mellon University, July 2000. Preface. The integer programming models known as set packing and set covering have a wide range of applications, such as pattern recognition, plant location and airline crew scheduling. Sometimes, due to the special structure of the constraint matrix, the natural linear programming relaxation yields an optimal solution that is integer, thus solving the problem. Sometimes, both the linear programming relaxation and its dual have integer optimal solutions. Under which conditions do such integrality properties hold? This question is of both theoretical and practical interest. Min-max theorems, polyhedral combinatorics and graph theory all come together in this rich area of discrete mathematics. In addition to min-max and polyhedral results, some of the deepest results in this area come in two flavors: "excluded minor" results and "decomposition" results. In these notes, we present several of these beautiful results. Three chapters cover min-max and polyhedral results. The next four cover excluded minor results. In the last three, we present decomposition results. We hope that these notes will encourage research on the many intriguing open questions that still remain. In particular, we state 18 conjectures. For each of these conjectures, we offer $5000 as an incentive for the first correct solution or refutation before December 2020. Contents: 1 Clutters 7; 1.1 MFMC Property and Idealness 9; 1.2 Blocker 13; 1.3 Examples 15; 1.3.1 st-Cuts and st-Paths 15; 1.3.2 Two-Commodity Flows 17; 1.3.3 r-Cuts and r-Arborescences ...
  • Inverse Eigenvalue Problems Involving Multiple Spectra
    Inverse eigenvalue problems involving multiple spectra. G.M.L. Gladwell, Department of Civil Engineering, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1. [email protected]. URL: http://www.civil.uwaterloo.ca/ggladwell. Abstract. If A ∈ M_n, its spectrum is denoted by σ(A). If A is oscillatory (O) then σ(A) is positive and discrete, the submatrix A[r+1, ..., n] is O, and its spectrum is denoted by σ_r(A). It is known that there is a unique symmetric tridiagonal O matrix with given, positive, strictly interlacing spectra σ_0, σ_1. It is shown that there is not necessarily a pentadiagonal O matrix with given, positive strictly interlacing spectra σ_0, σ_1, σ_2, but that there is a family of such matrices with positive strictly interlacing spectra σ_0, σ_1. The concept of inner total positivity (ITP) is introduced, and it is shown that an ITP matrix may be reduced to ITP band form, or filled in to become TP. These reductions and filling-in procedures are used to construct ITP matrices with given multiple common spectra. 1. Introduction. My interest in inverse eigenvalue problems (IEP) stems from the fact that they appear in inverse vibration problems, see [7]. In these problems the matrices that appear are usually symmetric; in this paper we shall often assume that the matrices are symmetric: A ∈ S_n. If A ∈ S_n, its eigenvalues are real; we denote its spectrum by σ(A) = {λ_1, λ_2, ..., λ_n}, where λ_1 ≤ λ_2 ≤ ... ≤ λ_n. The direct problem of finding σ(A) from A is well understood. At first sight it appears that inverse eigenvalue problems are trivial: every A ∈ S_n with spectrum σ(A) has the form Q^T Λ Q, where Q is orthogonal and Λ = diag(λ_1, λ_2, ..., λ_n).
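The spectra σ_0 and σ_r of the abstract can be computed for a concrete symmetric matrix to observe the interlacing; a sketch assuming NumPy is available, with illustrative naming:

```python
import numpy as np

def interlacing_spectra(A, r=1):
    """Return sigma_0 = spec(A) and sigma_r = spec(A[r+1..n]) for a
    symmetric matrix A, i.e. the spectrum of the trailing principal
    submatrix obtained by deleting the first r rows and columns.
    By Cauchy interlacing, sigma_r interlaces sigma_0 (strictly so for
    tridiagonal oscillatory matrices)."""
    s0 = np.sort(np.linalg.eigvalsh(A))
    sr = np.sort(np.linalg.eigvalsh(A[r:, r:]))
    return s0, sr
```

For the tridiagonal matrix with diagonal 2 and off-diagonal 1 of order 3, sigma_0 = {2 - sqrt(2), 2, 2 + sqrt(2)} and sigma_1 = {1, 3}, which strictly interlace.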
  • The Rule of Hessenberg Matrix for Computing the Determinant of Centrosymmetric Matrices
    CAUCHY - Jurnal Matematika Murni dan Aplikasi, Volume 6(3) (2020), Pages 140-148. p-ISSN: 2086-0382; e-ISSN: 2477-3344. The Rule of Hessenberg Matrix for Computing the Determinant of Centrosymmetric Matrices. Nur Khasanah, Agustin Absari Wahyu Kuntarini. Department of Mathematics, Faculty of Science and Technology, UIN Walisongo Semarang. Email: [email protected]. ABSTRACT. The application of centrosymmetric matrices in engineering plays its part, particularly regarding the determinant rule. This basic rule needs a computational process for determining the appropriate algorithm. Therefore, the algorithm for the determinant of a Hessenberg matrix is used for computing the determinant of a centrosymmetric matrix more efficiently. This paper shows the algorithms for lower Hessenberg and sparse Hessenberg matrices used to construct an efficient algorithm for the determinant of a centrosymmetric matrix, exploiting the special structure of a centrosymmetric matrix. Key Words: Hessenberg; Determinant; Centrosymmetric. INTRODUCTION. One of the widely used studies in the use of centrosymmetric matrices is how to get the determinant of centrosymmetric matrices. Besides having some applications [1], this special matrix also has some properties used for determinant purposes [2]. The special structure of centrosymmetric entries is evaluated in [3], resulting in an algorithm for the determinant of a centrosymmetric matrix. Due to the sparse structure of these entries, the evaluation of the determinant has simpler operations than with full matrix entries. One special sparse matrix that plays a role in numerical analysis and arises in computing the determinant of centrosymmetric matrices is the Hessenberg matrix. Hessenberg matrix decomposition plays an important role in computing the eigenvalues of a matrix.
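The lower Hessenberg determinant recursion such papers build on can be sketched via the standard O(n^2) expansion along the last row; this is the generic textbook recursion, offered as an illustration rather than the paper's exact algorithm:

```python
def det_lower_hessenberg(A):
    """Determinant of a lower Hessenberg matrix (A[i][j] = 0 for j > i+1)
    by the classical last-row expansion over leading principal minors d_k:
        d_m = A[m][m] * d_{m-1}
              + sum_r (-1)^(m-r) * A[m][r] * (prod_{j=r..m-1} A[j][j+1]) * d_{r-1}
    (1-based indices in the formula, 0-based in the code)."""
    n = len(A)
    d = [1, A[0][0]]                    # d[k] = det of leading k x k block
    for m in range(2, n + 1):           # m = current block size
        val = A[m - 1][m - 1] * d[m - 1]
        prod = 1
        for r in range(m - 1, 0, -1):   # r = 1-based column in the last row
            prod *= A[r - 1][r]         # superdiagonal product over j = r..m-1
            val += (-1) ** (m - r) * A[m - 1][r - 1] * prod * d[r - 1]
        d.append(val)
    return d[n]
```

Because only the single superdiagonal is nonzero above the diagonal, this avoids general cofactor expansion and runs in O(n^2) arithmetic operations.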
  • Quadratic Forms and Their Applications
    Quadratic Forms and Their Applications. Proceedings of the Conference on Quadratic Forms and Their Applications, July 5-9, 1999, University College Dublin. Eva Bayer-Fluckiger, David Lewis, Andrew Ranicki, Editors. Published as Contemporary Mathematics 272, A.M.S. (2000). Contents: Preface ix; Conference lectures x; Conference participants xii; Conference photo xiv; Galois cohomology of the classical groups, Eva Bayer-Fluckiger 1; Symplectic lattices, Anne-Marie Bergé 9; Universal quadratic forms and the fifteen theorem, J.H. Conway 23; On the Conway-Schneeberger fifteen theorem, Manjul Bhargava 27; On trace forms and the Burnside ring, Martin Epkenhans 39; Equivariant Brauer groups, A. Fröhlich and C.T.C. Wall 57; Isotropy of quadratic forms and field invariants, Detlev W. Hoffmann 73; Quadratic forms with absolutely maximal splitting, Oleg Izhboldin and Alexander Vishik 103; 2-regularity and reversibility of quadratic mappings, Alexey F. Izmailov 127; Quadratic forms in knot theory, C. Kearton 135; Biography of Ernst Witt (1911-1991), Ina Kersten 155; Generic splitting towers and generic splitting preparation of quadratic forms, Manfred Knebusch and Ulf Rehmann 173; Local densities of hermitian forms, Maurice Mischler 201; Notes towards a constructive proof of Hilbert's theorem on ternary quartics, Victoria Powers and Bruce Reznick 209; On the history of the algebraic theory of quadratic forms, Winfried Scharlau 229; Local fundamental classes derived from higher K-groups: III, Victor P. Snaith 261; Hilbert's theorem on positive ternary quartics, Richard G. Swan 287; Quadratic forms and normal surface singularities, C.T.C. Wall 293. Preface. These are the proceedings of the conference on "Quadratic Forms and Their Applications" which was held at University College Dublin from 5th to 9th July, 1999.
  • MAT TRIAD 2019 Book of Abstracts
    MAT TRIAD 2019: International Conference on Matrix Analysis and its Applications, Book of Abstracts, September 8-13, 2019, Liblice, Czech Republic. Edited by Jan Bok (Computer Science Institute of Charles University, Prague), David Hartman (Institute of Computer Science, Czech Academy of Sciences, Prague), Milan Hladík (Department of Applied Mathematics, Charles University, Prague), and Miroslav Rozložník (Institute of Mathematics, Czech Academy of Sciences, Prague). Published as IUUK-ITI Series 2019-676 by the Institute for Theoretical Computer Science, Faculty of Mathematics and Physics, Charles University, Malostranské nám. 25, 118 00 Prague 1, Czech Republic. Published by MATFYZPRESS, Publishing House of the Faculty of Mathematics and Physics, Charles University in Prague, Sokolovská 83, 186 75 Prague 8, Czech Republic. Cover art © J. Načeradský, J. Nešetřil. © Jan Bok, David Hartman, Milan Hladík, Miroslav Rozložník (eds.). © MATFYZPRESS, Charles University, Prague, Czech Republic, 2019. Preface. This volume contains the Book of Abstracts of the 8th International Conference on Matrix Analysis and its Applications, MAT TRIAD 2019. The MAT TRIAD conferences represent a platform for researchers in a variety of aspects of matrix analysis and its interdisciplinary applications to meet and share interests and ideas. The conference topics include matrix and operator theory and computation, spectral problems, applications of linear algebra in statistics, statistical models, matrices and graphs, as well as combinatorial matrix theory and others. The goal of this event is to encourage further growth of matrix analysis research, including its possible extension to other fields and domains.