A Numerical Algorithm for Computing the Inverse of a Toeplitz Pentadiagonal Matrix

Total Pages: 16

File Type: pdf, Size: 1020 KB

A Numerical Algorithm for Computing the Inverse of a Toeplitz Pentadiagonal Matrix

Journal of Applied Mathematics and Computational Mechanics 2018, 17(3), 83-95
www.amcm.pcz.pl, p-ISSN 2299-9965, e-ISSN 2353-0588
DOI: 10.17512/jamcm.2018.3.08

Boutaina Talibi 1, Ahmed Driss Aiat Hadj 2, Driss Sarsri 1
1 National School of Applied Sciences, Abdelmalek Essaadi University, Tangier, Morocco
2 Regional Center of the Trades of Education and Training (CRMEF)-Tangier, Avenue My Abdelaziz Souani, BP: 3117, Tangier, Morocco
[email protected], [email protected], [email protected]

Received: 2 August 2018; Accepted: 27 November 2018

Abstract. In the current paper, we present a computationally efficient algorithm for obtaining the inverse of a pentadiagonal Toeplitz matrix. Few conditions are required, and the algorithm is suited for implementation using computer algebra systems.

MSC 2010: 15B05, 65F40, 33C45

Keywords: pentadiagonal matrix, Toeplitz matrix, k-Hessenberg matrix, inverse

1. Introduction

Pentadiagonal Toeplitz matrices often occur when partial differential equations are solved numerically by the finite difference, finite element, and spectral methods, and they can also be applied to the mathematical representation of high-dimensional, nonlinear electromagnetic interference signals [1-7]. Pentadiagonal matrices are a certain class of special matrices; other common types of special matrices are Jordan, Frobenius, generalized Vandermonde, Hermite, centrosymmetric, and arrowhead matrices [8-12]. Typically, after the original partial differential equations are processed with these numerical methods, we need to solve pentadiagonal Toeplitz systems of linear equations to obtain the numerical solutions of the original partial differential equations.
Thus, the inversion of pentadiagonal Toeplitz matrices is necessary to solve the partial differential equations in these cases. J. Jia, T. Sogabe and M. El-Mikkawy recently proposed an explicit inverse of the tridiagonal Toeplitz matrix that is strictly row diagonally dominant [13]. However, this result cannot be generalized directly to the case of the pentadiagonal Toeplitz matrix, which motivates us to investigate whether an explicit inverse of the pentadiagonal Toeplitz matrix also exists.

In this paper, we present a new algorithm, with a totally different approach, for numerically inverting a pentadiagonal matrix. Few conditions are required, and the algorithm is suited for implementation using computer algebra systems.

2. Inverse of a Toeplitz pentadiagonal matrix: Algorithm 1

Definition: We consider the pentadiagonal Toeplitz matrix

$$P = \begin{pmatrix} a & b & c & 0 & \cdots & 0 \\ \alpha & a & b & c & \ddots & \vdots \\ \beta & \alpha & \ddots & \ddots & \ddots & 0 \\ 0 & \ddots & \ddots & \ddots & \ddots & c \\ \vdots & \ddots & \beta & \alpha & a & b \\ 0 & \cdots & 0 & \beta & \alpha & a \end{pmatrix},$$

where $P \in M_n(K)$, and $a$, $b$, $c$, $\alpha$ and $\beta$ are arbitrary numbers. Assume that $P$ is non-singular and denote

$$P^{-1} = (C_1, C_2, \dots, C_n),$$

where $(C_i)_{1 \le i \le n}$ are the columns of the inverse $P^{-1}$. From the relation $P^{-1}P = I_n$, where $I_n$ denotes the identity matrix of order $n$, we deduce the relations

$$C_{n-2} = \frac{1}{c}\left(E_n - bC_{n-1} - aC_n\right),$$
$$C_{n-3} = \frac{1}{c}\left(E_{n-1} - bC_{n-2} - aC_{n-1} - \alpha C_n\right), \tag{1}$$
$$C_{j-2} = \frac{1}{c}\left(E_j - bC_{j-1} - aC_j - \alpha C_{j+1} - \beta C_{j+2}\right) \quad \text{for } j = n-2, \dots, 3,$$

where $E_j = \left[(\delta_{ij})_{1 \le i \le n}\right]^t$ is the $j$-th vector of the canonical basis of $K^n$.
Consider the sequences of numbers $(A_i)_{0 \le i \le n+1}$ and $(B_i)_{0 \le i \le n+1}$ characterized by the five-term recurrence relation

$$A_0 = 0, \quad A_1 = 1, \quad aA_0 + bA_1 + cA_2 = 0, \quad \alpha A_0 + aA_1 + bA_2 + cA_3 = 0, \tag{2}$$

and

$$\beta A_{i-3} + \alpha A_{i-2} + aA_{i-1} + bA_i + cA_{i+1} = 0 \quad \text{for } i \ge 3.$$

Similarly,

$$B_0 = c^2, \quad B_1 = 0, \quad aB_0 + bB_1 + cB_2 = 0, \quad \alpha B_0 + aB_1 + bB_2 + cB_3 = 0, \tag{3}$$

and

$$\beta B_{i-3} + \alpha B_{i-2} + aB_{i-1} + bB_i + cB_{i+1} = 0 \quad \text{for } i \ge 3.$$

We can give a matrix form to these recurrences:

$$P\mathbf{A} = -cA_n E_{n-1} - (bA_n + cA_{n+1}) E_n, \tag{4}$$

$$P\mathbf{B} = -cB_n E_{n-1} - (bB_n + cB_{n+1}) E_n, \tag{5}$$

where $\mathbf{A} = [A_0, A_1, \dots, A_{n-1}]^t$ and $\mathbf{B} = [B_0, B_1, \dots, B_{n-1}]^t$.

Let us define, for each $0 \le i \le n+1$,

$$X_i = \det\begin{pmatrix} A_{n+1} & A_i \\ B_{n+1} & B_i \end{pmatrix} \quad \text{and} \quad Y_i = \det\begin{pmatrix} A_n & A_i \\ B_n & B_i \end{pmatrix}. \tag{6}$$

Immediately we have (note that $Y_{n+1} = -X_n$):

$$P\mathbf{X} = -cX_n E_{n-1} - bX_n E_n \quad \text{and} \quad P\mathbf{Y} = -cY_{n+1} E_n, \tag{7}$$

where $\mathbf{X} = [X_0, X_1, \dots, X_{n-1}]^t$ and $\mathbf{Y} = [Y_0, Y_1, \dots, Y_{n-1}]^t$.

Lemma: If $X_n = 0$, the matrix $P$ is singular.

Proof: If $X_n = 0$, then $\mathbf{X}$ is non-null and, by (7), $P\mathbf{X} = 0$. We conclude that $P$ is singular.

THEOREM 2.1: Suppose that $X_n \ne 0$. Then $P$ is invertible, and

$$C_n = -\frac{1}{cY_{n+1}}\,[Y_0, Y_1, \dots, Y_{n-1}]^t, \qquad C_{n-1} = -\frac{1}{cX_n}\,[X_0, X_1, \dots, X_{n-1}]^t - \frac{b}{c}\,C_n. \tag{8}$$

Proof: We have $\det(P) = c^{\,n-2} X_n \ne 0$ [14], so $P$ is invertible. From (7), $P\!\left(-\mathbf{Y}/(cY_{n+1})\right) = E_n$, hence $PC_n = E_n$; similarly $P\!\left(-\mathbf{X}/(cX_n)\right) = E_{n-1} + (b/c)E_n$, so that $PC_{n-1} = E_{n-1}$. The proof is completed.

Algorithm: New efficient computational algorithm for computing the inverse of the pentadiagonal matrix.

INPUT: The dimension $n$ and the numbers $\beta, \alpha, a, b, c$ (with $c \ne 0$ and $X_n \ne 0$).
OUTPUT: The inverse matrix $P^{-1} = (C_1, C_2, \dots, C_n)$.

Step 1:
$$C_n = -\frac{1}{cY_{n+1}}\,[Y_0, \dots, Y_{n-1}]^t, \qquad C_{n-1} = -\frac{1}{cX_n}\,[X_0, \dots, X_{n-1}]^t - \frac{b}{c}\,C_n.$$

Step 2:
$$C_{n-2} = \frac{1}{c}\left(E_n - bC_{n-1} - aC_n\right), \qquad C_{n-3} = \frac{1}{c}\left(E_{n-1} - bC_{n-2} - aC_{n-1} - \alpha C_n\right).$$

Step 3:
$$C_{j-2} = \frac{1}{c}\left(E_j - bC_{j-1} - aC_j - \alpha C_{j+1} - \beta C_{j+2}\right) \quad \text{for } j = n-2, \dots, 3.$$
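The two phases of Algorithm 1 — generate the scalar sequences, take the last two columns from the determinants (6), then back-substitute with the relations (1) — can be sketched in pure Python. This is a sketch under stated assumptions, not the authors' MATLAB implementation: the function name is ours, and the initialization B_0 = c^2, B_1 = 0 of the second sequence (together with the 1/c scaling of Step 1) is the normalization that reproduces the 7-by-7 example of Section 4.

```python
def pentadiagonal_toeplitz_inverse(n, beta, alpha, a, b, c):
    """Sketch of Algorithm 1 for the n x n pentadiagonal Toeplitz matrix with
    bands (beta, alpha, a, b, c); assumes c != 0, X_n != 0 and n >= 5."""
    def solution(s0, s1):
        # One solution of the five-term recurrence (2)/(3), indices 0..n+1.
        s = [s0, s1]
        s.append(-(a * s[0] + b * s[1]) / c)
        s.append(-(alpha * s[0] + a * s[1] + b * s[2]) / c)
        for i in range(3, n + 1):
            s.append(-(beta * s[i - 3] + alpha * s[i - 2]
                       + a * s[i - 1] + b * s[i]) / c)
        return s

    A = solution(0.0, 1.0)    # A_0 = 0, A_1 = 1
    B = solution(c * c, 0.0)  # assumed normalization: B_0 = c^2, B_1 = 0
    # The 2x2 determinants (6)
    X = [A[n + 1] * B[i] - B[n + 1] * A[i] for i in range(n + 2)]
    Y = [A[n] * B[i] - B[n] * A[i] for i in range(n + 2)]

    cols = [None] * (n + 1)   # 1-based storage for the columns C_1..C_n
    # Step 1: the last two columns
    cols[n] = [-Y[i] / (c * Y[n + 1]) for i in range(n)]
    cols[n - 1] = [-X[i] / (c * X[n]) - (b / c) * cols[n][i]
                   for i in range(n)]

    def E(j):                 # canonical basis vector E_j of K^n
        return [1.0 if i == j - 1 else 0.0 for i in range(n)]

    def back(e, terms):       # (e - sum of coef * column) / c, as in (1)
        out = list(e)
        for coef, col in terms:
            out = [o - coef * v for o, v in zip(out, col)]
        return [o / c for o in out]

    # Steps 2 and 3: back-substitution for the remaining columns
    cols[n - 2] = back(E(n), [(b, cols[n - 1]), (a, cols[n])])
    cols[n - 3] = back(E(n - 1), [(b, cols[n - 2]), (a, cols[n - 1]),
                                  (alpha, cols[n])])
    for j in range(n - 2, 2, -1):
        cols[j - 2] = back(E(j), [(b, cols[j - 1]), (a, cols[j]),
                                  (alpha, cols[j + 1]), (beta, cols[j + 2])])
    return cols[1:]           # columns C_1, ..., C_n of the inverse
```

Only the scalar recurrences and two dense columns are stored before the O(n) back-substitution per column, which is what makes the procedure cheap compared to a generic dense inversion.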
3. Inverse of a Toeplitz pentadiagonal matrix: Algorithm 2

Definition: Hessenberg (or lower Hessenberg) matrices are the matrices $H = [h_{ij}]$ satisfying the condition $h_{ij} = 0$ for $j - i > 1$. More generally, the matrix $H$ is $k$-Hessenberg if and only if $h_{ij} = 0$ for $j - i > k$.

THEOREM [15]: Let $H$ be a strict $k$-Hessenberg matrix with the block decomposition

$$H = \begin{pmatrix} B & A \\ D & C \end{pmatrix}, \qquad A \in M_{n-k}(F),\ B \in M_{(n-k)\times k}(F),\ C \in M_{k\times(n-k)}(F),\ D \in M_k(F). \tag{9}$$

Then $H$ is invertible if and only if $CA^{-1}B - D$ is invertible, and if $H$ is invertible we have

$$H^{-1} = \begin{pmatrix} 0 & 0 \\ A^{-1} & 0 \end{pmatrix} + \begin{pmatrix} I_k \\ -A^{-1}B \end{pmatrix}\left(CA^{-1}B - D\right)^{-1}\begin{pmatrix} CA^{-1} & -I_k \end{pmatrix}. \tag{10}$$

R. Slowik [16] worked in the particular case of the Toeplitz-Hessenberg matrix. Our work applies to the pentadiagonal matrix $P$ of the form

$$P = \begin{pmatrix} a & b & c & 0 & \cdots & 0 \\ \alpha & a & b & c & \ddots & \vdots \\ \beta & \alpha & \ddots & \ddots & \ddots & 0 \\ 0 & \ddots & \ddots & \ddots & \ddots & c \\ \vdots & \ddots & \beta & \alpha & a & b \\ 0 & \cdots & 0 & \beta & \alpha & a \end{pmatrix}, \tag{11}$$

which is a strict $2$-Hessenberg matrix. Then, with $k = 2$, we find

$$P^{-1} = \begin{pmatrix} 0 & 0 \\ A^{-1} & 0 \end{pmatrix} + \begin{pmatrix} I_2 \\ -A^{-1}B \end{pmatrix}\left(CA^{-1}B - D\right)^{-1}\begin{pmatrix} CA^{-1} & -I_2 \end{pmatrix}. \tag{12}$$

The blocks $B$, $C$, $D$ and $A$ are given by

$$B = \begin{pmatrix} a & b \\ \alpha & a \\ \beta & \alpha \\ 0 & \beta \\ 0 & 0 \\ \vdots & \vdots \\ 0 & 0 \end{pmatrix}, \tag{13}$$

$$C = \begin{pmatrix} 0 & \cdots & 0 & \beta & \alpha & a & b \\ 0 & \cdots & \cdots & 0 & \beta & \alpha & a \end{pmatrix}, \tag{14}$$

$$D = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \tag{15}$$

and

$$A = \begin{pmatrix} c & 0 & 0 & \cdots & \cdots & \cdots & \cdots & 0 \\ b & c & 0 & \cdots & \cdots & \cdots & \cdots & 0 \\ a & b & c & 0 & \cdots & \cdots & \cdots & 0 \\ \alpha & a & b & c & 0 & \cdots & \cdots & 0 \\ \beta & \alpha & a & b & c & 0 & \cdots & \vdots \\ 0 & \beta & \alpha & a & b & c & 0 & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \beta & \alpha & a & b & c \end{pmatrix}. \tag{16}$$

Then the inverse of the matrix $A \in M_{n-2}(F)$ is

$$A^{-1} = \frac{1}{c}\sum_{k=0}^{n-3}\ \sum_{r=1}^{n-2-k} l_k\, E_{r+k,\,r}, \tag{17}$$

where $l_0 = 1$ and $l_k = -\frac{1}{c}\left(b\,l_{k-1} + a\,l_{k-2} + \alpha\,l_{k-3} + \beta\,l_{k-4}\right)$ for $k \ge 1$ (with $l_j = 0$ for $j < 0$).

Proof: Since $A$ is a triangular matrix, $A^{-1}$ is as well. Writing $A = cL$ with $L$ unit lower triangular, we have $A^{-1} = \frac{1}{c}L^{-1}$, and we prove the form of $L^{-1}$ inductively. For the first step,

$$\begin{pmatrix} 1 & 0 & 0 \\ \frac{b}{c} & 1 & 0 \\ \frac{a}{c} & \frac{b}{c} & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ l_1 & 1 & 0 \\ l_2 & l_1 & 1 \end{pmatrix}, \tag{18}$$

with

$$l_1 = -\frac{b}{c}, \qquad l_2 = -\frac{1}{c}\left(b\,l_1 + a\,l_0\right) = \frac{b^2}{c^2} - \frac{a}{c}. \tag{19}$$

The first step of the induction holds. Consider now $n > 4$. We have

$$\begin{pmatrix} 1 & & & & \\ \frac{b}{c} & 1 & & & \\ \frac{a}{c} & \frac{b}{c} & 1 & & \\ \vdots & \ddots & \ddots & \ddots & \\ 0 & \cdots & \frac{a}{c} & \frac{b}{c} & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & & & & \\ l_1 & 1 & & & \\ l_2 & l_1 & 1 & & \\ \vdots & \ddots & \ddots & \ddots & \\ l_{n-2} & l_{n-3} & \cdots & l_1 & 1 \end{pmatrix}. \tag{20}$$

Then:

$$l_{n-2} = -\frac{1}{c}\left(b\,l_{n-3} + a\,l_{n-4} + \alpha\,l_{n-5} + \beta\,l_{n-6}\right).$$
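The block construction of Algorithm 2 can be sketched in pure Python as well: slice P = (B A; D C) with k = 2, build A^{-1} from the l-sequence of (17), and assemble the inverse with the block formula (12). The names are ours and the code is an illustrative sketch, not the paper's implementation; it assumes c != 0 and that CA^{-1}B - D is invertible.

```python
def matmul(X, Y):
    # Dense row-by-column product of two nested-list matrices.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def pentadiagonal_inverse_blocks(n, beta, alpha, a, b, c):
    """Sketch of Algorithm 2: P^{-1} from the 2-Hessenberg block formula,
    with the triangular block A inverted via the l-sequence of (17)."""
    band = {-2: beta, -1: alpha, 0: a, 1: b, 2: c}
    P = [[float(band.get(j - i, 0)) for j in range(n)] for i in range(n)]
    m = n - 2
    # l_0 = 1, l_k = -(b l_{k-1} + a l_{k-2} + alpha l_{k-3} + beta l_{k-4})/c
    l = [1.0]
    for k in range(1, m):
        s = b * l[k - 1]
        if k >= 2: s += a * l[k - 2]
        if k >= 3: s += alpha * l[k - 3]
        if k >= 4: s += beta * l[k - 4]
        l.append(-s / c)
    # A^{-1} has l_{i-j}/c on subdiagonal i-j, cf. (17)
    Ainv = [[l[i - j] / c if i >= j else 0.0 for j in range(m)]
            for i in range(m)]
    # Blocks of P = [B A; D C] with k = 2
    Bblk = [row[:2] for row in P[:m]]   # (n-2) x 2
    Cblk = [row[2:] for row in P[m:]]   # 2 x (n-2)
    Dblk = [row[:2] for row in P[m:]]   # 2 x 2 (zero for large enough n)
    CAinv = matmul(Cblk, Ainv)
    S = matmul(CAinv, Bblk)             # S = C A^{-1} B - D
    S = [[S[i][j] - Dblk[i][j] for j in range(2)] for i in range(2)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    AinvB = matmul(Ainv, Bblk)
    left = [[1.0, 0.0], [0.0, 1.0]] + \
           [[-v for v in row] for row in AinvB]          # [I_2; -A^{-1}B]
    right = [CAinv[i] + [-1.0 if i == j else 0.0 for j in range(2)]
             for i in range(2)]                          # [C A^{-1} | -I_2]
    corr = matmul(matmul(left, Sinv), right)
    inv = [[corr[i][j] for j in range(n)] for i in range(n)]
    for i in range(m):                  # add the [0 0; A^{-1} 0] term
        for j in range(m):
            inv[2 + i][j] += Ainv[i][j]
    return inv
```

Since A is banded triangular, A^{-1} costs only the l-sequence, and the dense work is confined to the 2 x 2 Schur-type complement CA^{-1}B - D.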
4. Example

In this section, we give a numerical example to illustrate the effectiveness of our algorithm. Our algorithm is tested with MATLAB R2014a. Consider the following 7-by-7 pentadiagonal matrix:

$$P = \begin{pmatrix} 2 & 3 & 4 & 0 & 0 & 0 & 0 \\ 5 & 2 & 3 & 4 & 0 & 0 & 0 \\ 6 & 5 & 2 & 3 & 4 & 0 & 0 \\ 0 & 6 & 5 & 2 & 3 & 4 & 0 \\ 0 & 0 & 6 & 5 & 2 & 3 & 4 \\ 0 & 0 & 0 & 6 & 5 & 2 & 3 \\ 0 & 0 & 0 & 0 & 6 & 5 & 2 \end{pmatrix} \tag{21}$$

The columns of the inverse $P^{-1}$ are:

$$C_1 = \begin{pmatrix} -1.0787 \\ -3.4400 \\ 3.3693 \\ 0.5414 \\ 3.8273 \\ -2.1929 \\ -5.9997 \end{pmatrix}, \quad C_2 = \begin{pmatrix} -0.1948 \\ -1.2309 \\ 1.0205 \\ 0.3435 \\ 1.0629 \\ -0.3983 \\ -2.1929 \end{pmatrix}, \quad C_3 = \begin{pmatrix} 0.6886 \\ 2.1724 \\ -1.9736 \\ -0.4667 \\ -2.1615 \\ 1.0629 \\ 3.8273 \end{pmatrix}, \quad C_4 = \begin{pmatrix} 0.0305 \\ 0.4866 \\ -0.3802 \\ 0.0037 \\ -0.4667 \\ 0.3435 \\ 0.5414 \end{pmatrix},$$

$$C_5 = \begin{pmatrix} 0.5616 \\ 1.7791 \\ -1.6151 \\ -0.3802 \\ -1.9736 \\ 1.0205 \\ 3.3693 \end{pmatrix}, \quad C_6 = \begin{pmatrix} -0.6926 \\ -1.9104 \\ 1.7791 \\ 0.4866 \\ 2.1724 \\ -1.2309 \\ -3.4400 \end{pmatrix}, \quad C_7 = \begin{pmatrix} -0.0843 \\ -0.6926 \\ 0.5616 \\ 0.0305 \\ 0.6886 \\ -0.1948 \\ -1.0787 \end{pmatrix}.$$

Let us employ the same example and give the blocks $A$, $B$, $C$, and $D$ of the matrix $P$.
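The printed columns can be cross-checked independently of both algorithms with a generic dense inversion; the Gauss-Jordan helper below is only a verification aid of our own, not part of the paper's method.

```python
def gauss_jordan_inverse(M):
    """Invert a dense matrix by Gauss-Jordan elimination with partial
    pivoting; used here only to cross-check the 7-by-7 example."""
    n = len(M)
    aug = [[float(v) for v in row] +
           [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]   # bring pivot up
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]      # normalize pivot row
        for r in range(n):                        # eliminate the column
            if r != col and aug[r][col] != 0.0:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

P = [[2, 3, 4, 0, 0, 0, 0],
     [5, 2, 3, 4, 0, 0, 0],
     [6, 5, 2, 3, 4, 0, 0],
     [0, 6, 5, 2, 3, 4, 0],
     [0, 0, 6, 5, 2, 3, 4],
     [0, 0, 0, 6, 5, 2, 3],
     [0, 0, 0, 0, 6, 5, 2]]
Pinv = gauss_jordan_inverse(P)
first_column = [Pinv[i][0] for i in range(7)]
```

The first column agrees, to the four decimals shown, with C_1, and the remaining columns check out the same way.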
Recommended publications
  • Solving Systems of Linear Equations by Gaussian Elimination
  • The Complete Positivity of Symmetric Tridiagonal and Pentadiagonal Matrices (arXiv:2009.05100)
  • Inverse Eigenvalue Problems Involving Multiple Spectra
  • The Rule of Hessenberg Matrix for Computing the Determinant of Centrosymmetric Matrices
  • MAT TRIAD 2019 Book of Abstracts
    MAT TRIAD 2019 International Conference on Matrix Analysis and its Applications Book of Abstracts September 8 13, 2019 Liblice, Czech Republic MAT TRIAD 2019 is organized and supported by MAT TRIAD 2019 Edited by Jan Bok, Computer Science Institute of Charles University, Prague David Hartman, Institute of Computer Science, Czech Academy of Sciences, Prague Milan Hladík, Department of Applied Mathematics, Charles University, Prague Miroslav Rozloºník, Institute of Mathematics, Czech Academy of Sciences, Prague Published as IUUK-ITI Series 2019-676ø by Institute for Theoretical Computer Science, Faculty of Mathematics and Physics, Charles University Malostranské nám. 25, 118 00 Prague 1, Czech Republic Published by MATFYZPRESS, Publishing House of the Faculty of Mathematics and Physics, Charles University in Prague Sokolovská 83, 186 75 Prague 8, Czech Republic Cover art c J. Na£eradský, J. Ne²et°il c Jan Bok, David Hartman, Milan Hladík, Miroslav Rozloºník (eds.) c MATFYZPRESS, Publishing House of the Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic, 2019 i Preface This volume contains the Book of abstracts of the 8th International Conference on Matrix Anal- ysis and its Applications, MAT TRIAD 2019. The MATTRIAD conferences represent a platform for researchers in a variety of aspects of matrix analysis and its interdisciplinary applications to meet and share interests and ideas. The conference topics include matrix and operator theory and computation, spectral problems, applications of linear algebra in statistics, statistical models, matrices and graphs as well as combinatorial matrix theory and others. The goal of this event is to encourage further growth of matrix analysis research including its possible extension to other elds and domains.
    [Show full text]
  • Some Bivariate Stochastic Models Arising from Group Representation Theory
  • CS321 Numerical Analysis
  • Quantum Phase Transitions Mediated by Clustered Non-Hermitian Degeneracies (arXiv:2102.12272)
  • Algorithmic Techniques for Finding Resistance Distances on Structured Graphs (arXiv:2108.07942)
  • Determinants of Multidiagonal Matrices
  • Fiedler-Comrade and Fiedler-Chebyshev Pencils 1
  • On the Inverting of a General Heptadiagonal Matrix