Determinants of the Block Arrowhead Matrices


Determinants of the Block Arrowhead Matrices

Edyta Hetmaniok, Michał Różański, Damian Słota, Marcin Szweda, Tomasz Trawiński and Roman Wituła

Abstract. The paper is devoted to considerations on determinants of the block arrowhead matrices. At first we discuss, in a wide technical aspect, the motivation for undertaking these investigations. Next we present the main theorem concerning the formulas describing the determinants of the block arrowhead matrices. We also discuss the application of these formulas by analyzing many specific examples. At the end of the paper we make an attempt to test the obtained formulas for the determinants of the block arrowhead matrices in the case of replacing the standard inverses of the matrices by the Drazin inverses.

Keywords: determinant, block matrix, arrowhead matrix, Drazin inverse.

2010 Mathematics Subject Classification: 15A15, 15A23, 15A09.

Affiliations: E. Hetmaniok, M. Różański, D. Słota, M. Szweda, R. Wituła – Institute of Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland, e-mail: {edyta.hetmaniok, michal.rozanski, damian.slota, marcin.szweda, roman.witula}@polsl.pl; T. Trawiński – Department of Mechatronics, Silesian University of Technology, Akademicka 10A, 44-100 Gliwice, Poland, e-mail: [email protected]

In: R. Wituła, B. Bajorska-Harapińska, E. Hetmaniok, D. Słota, T. Trawiński (eds.), Selected Problems on Experimental Mathematics. Wydawnictwo Politechniki Śląskiej, Gliwice 2017, pp. 73–88.

1. Technical introduction – motivation for discussion

Observing the mathematical models formulated for describing various dynamical systems used in present-day engineering, one can notice that block arrowhead matrices occur in many of them. One can find such matrices in the models of telecommunication systems [10], in robotics [18], in electrotechnics [17] and in automatic control [21]. In robotics, while modelling the dynamics of the kinematic chains of robots, or in electrotechnical problems, while modelling the electromechanical converters, there is often a need to formulate some equations in coordinate systems different from the ones in which the original model was formulated. Especially in the theory of electromechanical converters one applies very widely the possibility of transforming the mathematical models of the electromechanical converters from the natural coordinate systems into the biaxial coordinate systems. The main goal of these transformations is to eliminate the time dependence occurring in the coefficients of mutual inductances, to increase the number of zero elements occurring in the matrices of the appropriate mathematical model and, in consequence, to simplify this model and to shorten the time needed for its solution. The forms of the matrices transforming the variables, occurring in the electromechanical converters, from the natural coordinate systems into the biaxial coordinate systems can be deduced by way of physical reasoning. These matrices can also be found by using the methods of determining the eigenvalues. The formulated matrices are applied, among others, in the Park, Clark and Stanley transformations [12, 9]. Elements of the transformation matrices contain the eigenvalues of the matrices of the electromechanical converter inductance coefficients. These matrices can have, for example, the following forms [7]:

$$[K_s]=\sqrt{\frac{2}{3}}\begin{bmatrix} \cos\vartheta_{s1} & \cos\vartheta_{s2} & \cos\vartheta_{s3}\\ \sin\vartheta_{s1} & \sin\vartheta_{s2} & \sin\vartheta_{s3}\\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix}, \tag{1}$$

$$[K_r]=\sqrt{\frac{2}{Q_r}}\begin{bmatrix} 1 & \cos\alpha_r & \cos 2\alpha_r & \dots & \cos(Q_r-1)\alpha_r\\ 0 & -\sin\alpha_r & -\sin 2\alpha_r & \dots & -\sin(Q_r-1)\alpha_r\\ 1 & \cos 2\alpha_r & \cos 4\alpha_r & \dots & \cos 2(Q_r-1)\alpha_r\\ 0 & -\sin 2\alpha_r & -\sin 4\alpha_r & \dots & -\sin 2(Q_r-1)\alpha_r\\ \vdots & \vdots & \vdots & & \vdots\\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & \dots & \frac{1}{\sqrt{2}} \end{bmatrix}, \tag{2}$$

where
$$\vartheta_{s1}=\vartheta_{s10}+\int_0^t \Omega_x\,dt, \tag{3}$$
$$\vartheta_{s2}=\vartheta_{s20}+\int_0^t \Omega_x\,dt=\vartheta_{s10}+\frac{2\pi}{3}+\int_0^t \Omega_x\,dt, \tag{4}$$
$$\vartheta_{s3}=\vartheta_{s30}+\int_0^t \Omega_x\,dt=\vartheta_{s10}+\frac{4\pi}{3}+\int_0^t \Omega_x\,dt, \tag{5}$$

whereas $\vartheta_{s10}$, $\vartheta_{s20}$, $\vartheta_{s30}$ denote the initial angles between the axes of respective phases of the stator and axis X of the biaxial system XY for the moment of time $t = 0$; $p$ means the number of pole pairs, $\alpha_r$ denotes the rotor bar pitch and $\Omega_x$ describes the angular velocity of rotation of the biaxial system XY around the stator.

The matrix of the mutual and self inductance coefficients written in the natural coordinate system, that is in the phase system, possesses the block structure – it is the full matrix of the form
$$M=\begin{bmatrix} M_{ss} & M_{sr}\\ M_{sr}^T & M_{rr} \end{bmatrix}. \tag{6}$$
Particular forms of matrices of this type can be found, for example, in [7].

The method of transforming the matrix (6) of inductance coefficients into the new coordinate system XY by using the transformation matrices (1) and (2) is as follows:
$$M_{ss}^{XY}=[K_s][M_{ss}][K_s]^T, \tag{7}$$
$$M_{rr}^{XY}=[K_r][M_{rr}][K_r]^T, \tag{8}$$
$$M_{sr}^{XY}=[K_s][M_{sr}][K_r]^T. \tag{9}$$
By merging the obtained results we get the block arrowhead matrix [8], also in the form presented in paper [17].

Matrices (1) and (2) are often used for constructing the systems for regulating the electromechanical converters, such as the various versions of the vector control methods for the squirrel cage induction motors [13, 19]. Thus, there is a need for developing effective methods of calculating the determinants of the block arrowhead matrices. The second part of this paper will be devoted precisely to this issue (see [17]).

2. The block arrowhead matrices

We use the following notation in this paper: $\mathbf{0}$ denotes the zero matrix of the respective size (of dimensions $m \times n$), $\mathbf{1}$ denotes the identity matrix of the respective order (of order $n$). We now present the main result of this paper concerning the form of the determinants of the block arrowhead matrices.
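Before the formal statement, the transformation matrix (1) admits a quick numerical sanity check: with the phase angles spaced by $2\pi/3$ as in (3)–(5), $[K_s]$ is orthogonal, so the congruence (7) preserves both the symmetry and the determinant of $M_{ss}$. A short numpy sketch (the function name and the sample values are ours, not from the paper):

```python
import numpy as np

def K_s(theta_s1):
    """Transformation matrix (1), with phase angles spaced by 2*pi/3 as in (3)-(5)."""
    th = theta_s1 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    return np.sqrt(2.0 / 3.0) * np.vstack([
        np.cos(th),
        np.sin(th),
        np.full(3, 1.0 / np.sqrt(2.0)),
    ])

K = K_s(0.7)                                   # an arbitrary angle
print(np.allclose(K @ K.T, np.eye(3)))         # True: [K_s] is orthogonal

# Hence the congruence (7) is a similarity: symmetry and determinant survive.
Mss = np.array([[3.0, 1.0, 0.5],
                [1.0, 3.0, 1.0],
                [0.5, 1.0, 3.0]])              # a sample symmetric inductance block
Mxy = K @ Mss @ K.T
print(np.isclose(np.linalg.det(Mxy), np.linalg.det(Mss)))   # True
```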
In Section 3 we examine a few examples illustrating the application of the obtained relations.

Theorem 2.1.
1. Let us consider the block matrix $M_2=\begin{bmatrix} A_{a\times a} & B_{a\times b}\\ C_{b\times a} & D_{b\times b}\end{bmatrix}$. The following implications hold:

(a) If $\det D \neq 0$, then $\det M_2 = \det D \det(A - BD^{-1}C)$.

(b) If $\det A \neq 0$, then $\det M_2 = \det A \det(D - CA^{-1}B)$.

In Remark 2.2, after the proof of the above theorem, we discuss the omitted situation when $\det A = \det D = 0$ and $a \neq b$.

2. Let us consider the block matrix $M_3=\begin{bmatrix} A_{a\times a} & B_{a\times b} & E_{a\times c}\\ C_{b\times a} & D_{b\times b} & \mathbf{0}_{b\times c}\\ F_{c\times a} & \mathbf{0}_{c\times b} & G_{c\times c}\end{bmatrix}$. The following implications hold:

(a) If $\det A \neq 0$ and $\det(D - CA^{-1}B) \neq 0$, then
$$\det M_3 = \det A \det(D - CA^{-1}B) \det\big(G - FA^{-1}E - FA^{-1}B(D - CA^{-1}B)^{-1}CA^{-1}E\big). \tag{10}$$

We note that if also $\det D \neq 0$, then for the inverse of $D - CA^{-1}B$ the following Banachiewicz formula (see [1]), also called the Sherman-Morrison-Woodbury (SMW) formula (for some more historical information see [22, subchapter 0.8]), could be applied (see also S. Jose, K. C. Sivakumar, Moore-Penrose inverse of perturbed operators on Hilbert spaces, 119-131, in [2]):
$$(D - CA^{-1}B)^{-1} = D^{-1} + D^{-1}C(A - BD^{-1}C)^{-1}BD^{-1}.$$
The matrix $A - BD^{-1}C$ is of order $a \times a$ and the above formula is useful in situations when $a$ is much smaller than $b$, and in all other situations when certain structural properties of $D$ are much simpler than those of $D - CA^{-1}B$.

(b) If $\det D \neq 0$ and $\det G \neq 0$, then
$$\det M_3 = \det D \det G \det(A - BD^{-1}C - EG^{-1}F). \tag{11}$$

(c) If either $\operatorname{rank}\begin{bmatrix} B\\ D\end{bmatrix} + \operatorname{rank}\begin{bmatrix} E\\ G\end{bmatrix} \le b + c - 1$ or $\operatorname{rank}\begin{bmatrix} C & D\end{bmatrix} + \operatorname{rank}\begin{bmatrix} F & G\end{bmatrix} \le b + c - 1$, then $\det M_3 = 0$.

(d) If $\det G \neq 0$, $\det(A - EG^{-1}F) \neq 0$ and $D = \mathbf{0}$, then
$$\det M_3 = \det G \det(A - EG^{-1}F) \det\big(-C(A - EG^{-1}F)^{-1}B\big). \tag{12}$$

In the sequel, if blocks $B$ and $C$ of $M_3$ are square matrices (that is $a = b$) and $D = \mathbf{0}$, then
$$\det M_3 = (-1)^a \det G \det B \det C. \tag{13}$$

At last, if
$$\operatorname{rank}\begin{bmatrix} A - EG^{-1}F & B\\ C & \mathbf{0}\end{bmatrix} = \operatorname{rank}\begin{bmatrix} A - EG^{-1}F\\ C\end{bmatrix} + \operatorname{rank} B = \operatorname{rank} C + \operatorname{rank}\begin{bmatrix} A - EG^{-1}F & B\end{bmatrix} < a + b,$$
then $\det M_3 = 0$.

3. The determinant of the following arrowhead matrix
$$M_n=\begin{bmatrix} (A_{11})_{b_1\times b_1} & (A_{12})_{b_1\times b_2} & (A_{13})_{b_1\times b_3} & \dots & (A_{1n})_{b_1\times b_n}\\ (A_{21})_{b_2\times b_1} & (A_{22})_{b_2\times b_2} & \mathbf{0}_{b_2\times b_3} & \dots & \mathbf{0}_{b_2\times b_n}\\ (A_{31})_{b_3\times b_1} & \mathbf{0}_{b_3\times b_2} & (A_{33})_{b_3\times b_3} & \dots & \mathbf{0}_{b_3\times b_n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ (A_{n1})_{b_n\times b_1} & \mathbf{0}_{b_n\times b_2} & \mathbf{0}_{b_n\times b_3} & \dots & (A_{nn})_{b_n\times b_n}\end{bmatrix} \tag{14}$$
is equal to

(a)
$$\det M_n = \det\Big(A_{11} - \sum_{i=2}^{n} A_{1i}A_{ii}^{-1}A_{i1}\Big) \prod_{j=2}^{n} \det A_{jj}, \tag{15}$$
whenever $\det A_{ii} \neq 0$ for $i = 2, 3, \dots, n$,

(b)
$$\det M_n = (-1)^{b_i} \det A_{i1} \det A_{1i} \prod_{\substack{j=2\\ j\neq i}}^{n} \det A_{jj}, \tag{16}$$
whenever there exists $i \in \{2, 3, \dots, n\}$ such that $A_{ii} = \mathbf{0}$ and $b_i = b_1$.

4. The determinant of arrowhead matrix (14) is equal to
$$\det M_n = \det A_{11} \prod_{i=2}^{n} \det\bigg(A_{ii} - A_{i1}\Big(A_{11} - \sum_{j=2}^{i-1} A_{1j}A_{jj}^{-1}A_{j1}\Big)^{-1} A_{1i}\bigg), \tag{17}$$
whenever $\det A_{jj} \neq 0$ for each $j = 1, 2, \dots, n-1$ and $\det\Big(A_{11} - \sum_{j=2}^{i-1} A_{1j}A_{jj}^{-1}A_{j1}\Big) \neq 0$ for each $i = 3, 4, \dots, n-1$.
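The determinant formulas above are easy to check numerically on random blocks. The sketch below (our code, not the authors') verifies part 1(a) and formula (15) with numpy; random Gaussian blocks are almost surely nonsingular, so the hypotheses hold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Part 1(a): det M2 = det D * det(A - B D^{-1} C).
a, b = 3, 4
A = rng.normal(size=(a, a)); B = rng.normal(size=(a, b))
C = rng.normal(size=(b, a)); D = rng.normal(size=(b, b))
M2 = np.block([[A, B], [C, D]])
rhs2 = np.linalg.det(D) * np.linalg.det(A - B @ np.linalg.solve(D, C))
print(np.isclose(np.linalg.det(M2), rhs2))   # True

# Part 3(a), formula (15), on a random block arrowhead matrix of the shape (14).
sizes = [3, 2, 2, 4]                                        # b_1, ..., b_n
n = len(sizes)
A11 = rng.normal(size=(sizes[0], sizes[0]))
A1i = [rng.normal(size=(sizes[0], s)) for s in sizes[1:]]   # first block row
Ai1 = [rng.normal(size=(s, sizes[0])) for s in sizes[1:]]   # first block column
Aii = [rng.normal(size=(s, s)) for s in sizes[1:]]          # diagonal blocks

off = np.cumsum([0] + sizes)                                # block offsets
Mn = np.zeros((off[-1], off[-1]))
Mn[:off[1], :off[1]] = A11
for i in range(1, n):
    Mn[:off[1], off[i]:off[i + 1]] = A1i[i - 1]
    Mn[off[i]:off[i + 1], :off[1]] = Ai1[i - 1]
    Mn[off[i]:off[i + 1], off[i]:off[i + 1]] = Aii[i - 1]

schur = A11 - sum(A1i[k] @ np.linalg.solve(Aii[k], Ai1[k]) for k in range(n - 1))
rhs15 = np.linalg.det(schur) * np.prod([np.linalg.det(X) for X in Aii])
print(np.isclose(np.linalg.det(Mn), rhs15))  # True
```

Solving with `np.linalg.solve` instead of forming the inverses explicitly mirrors how the Schur complements would be evaluated in practice.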
Recommended publications
  • A Fast Method for Computing the Inverse of Symmetric Block Arrowhead Matrices
    Appl. Math. Inf. Sci. 9, No. 2L, 319-324 (2015). Applied Mathematics & Information Sciences, An International Journal. http://dx.doi.org/10.12785/amis/092L06 A Fast Method for Computing the Inverse of Symmetric Block Arrowhead Matrices. Waldemar Hołubowski¹, Dariusz Kurzyk¹,* and Tomasz Trawiński². ¹ Institute of Mathematics, Silesian University of Technology, Kaszubska 23, Gliwice 44-100, Poland. ² Mechatronics Division, Silesian University of Technology, Akademicka 10a, Gliwice 44-100, Poland. Received: 6 Jul. 2014, Revised: 7 Oct. 2014, Accepted: 8 Oct. 2014. Published online: 1 Apr. 2015. Abstract: We propose an effective method to find the inverse of symmetric block arrowhead matrices, which often appear in areas of applied science and engineering such as head-positioning systems of hard disk drives or kinematic chains of industrial robots. Block arrowhead matrices can be considered as a generalisation of the arrowhead matrices occurring in physical problems and engineering. The proposed method is based on LDLT decomposition and we show that the inversion of large block arrowhead matrices can be more effective when one uses our method. Numerical results are presented in the examples. Keywords: matrix inversion, block arrowhead matrices, LDLT decomposition, mechatronic systems. 1 Introduction. A square matrix which has entries equal to zero except for its main diagonal, one row and one column is called an arrowhead matrix. Wide area of applications causes that the formulation of the equations associated with this type of matrices is a popular subject of research related with mathematics, physics or engineering, such as computing spectral decomposition [1], solving inverse […] thermal and many others. Considered subsystems are highly differentiated, hence formulation of a uniform and simple mathematical model describing their static and dynamic states becomes problematic. The process of preparing a proper mathematical model is often based on Lagrangian formalism [9], which is a convenient way to describe the equations of mechanical, electromechanical and other components.
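The LDLT-based algorithm itself is in the paper above; as a minimal illustration of why arrowhead structure makes inversion cheap, here is the scalar (1×1 blocks) symmetric case handled with a single Schur complement in numpy. The function and variable names are ours, and the sketch assumes all diagonal entries and the Schur complement are nonzero:

```python
import numpy as np

def arrowhead_inverse(d, z, alpha):
    """Inverse of the symmetric arrowhead matrix [[diag(d), z], [z^T, alpha]],
    via the Schur complement s = alpha - z^T diag(d)^{-1} z; only O(n)
    arithmetic beyond writing out the O(n^2) entries of the result."""
    w = z / d                      # diag(d)^{-1} z
    s = alpha - z @ w              # Schur complement (assumed nonzero)
    n = d.size
    inv = np.empty((n + 1, n + 1))
    inv[:n, :n] = np.diag(1.0 / d) + np.outer(w, w) / s
    inv[:n, n] = -w / s
    inv[n, :n] = -w / s
    inv[n, n] = 1.0 / s
    return inv

d = np.array([2.0, 3.0, 5.0]); z = np.array([1.0, -1.0, 2.0]); alpha = 4.0
A = np.block([[np.diag(d), z[:, None]], [z[None, :], np.array([[alpha]])]])
print(np.allclose(arrowhead_inverse(d, z, alpha) @ A, np.eye(4)))   # True
```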
  • Accurate Eigenvalue Decomposition of Arrowhead Matrices and Applications
    Accurate eigenvalue decomposition of arrowhead matrices and applications. Nevena Jakovčević Stor, FESB, University of Split; joint work with Ivan Slapničar and Jesse Barlow. IWASEP9, June 4th, 2012. Introduction: We present a new algorithm (aheig) for computing the eigenvalue decomposition of a real symmetric arrowhead matrix. Under certain conditions, the aheig algorithm computes all eigenvalues and all components of the corresponding eigenvectors with high relative accuracy in O(n²) operations. The algorithm is based on a shift-and-invert technique and limited finite O(n) use of double precision arithmetic when necessary.
  • High Performance Selected Inversion Methods for Sparse Matrices
    High performance selected inversion methods for sparse matrices. Direct and stochastic approaches to selected inversion. Doctoral Dissertation submitted to the Faculty of Informatics of the Università della Svizzera italiana in partial fulfillment of the requirements for the degree of Doctor of Philosophy, presented by Fabio Verbosio under the supervision of Olaf Schenk, February 2019. Dissertation Committee: Illia Horenko (Università della Svizzera italiana, Switzerland), Igor Pivkin (Università della Svizzera italiana, Switzerland), Matthias Bollhöfer (Technische Universität Braunschweig, Germany), Laura Grigori (INRIA Paris, France). Dissertation accepted on 25 February 2019. Research Advisor: Olaf Schenk. PhD Program Director: Walter Binder. "Mathematical knowledge consists of propositions constructed by our intellect so as always to function as true, either because they are innate or because mathematics was invented before the other sciences. And the library was built by a human mind that thinks in a mathematical way, because without mathematics you cannot make labyrinths." (Umberto Eco, "Il nome della rosa"; translated from the Italian.) Abstract: The explicit evaluation of selected entries of the inverse of a given sparse matrix is an important process in various application fields and is gaining visibility in recent years.
  • Master of Science in Advanced Mathematics and Mathematical
    Master of Science in Advanced Mathematics and Mathematical Engineering. Title: Cholesky-Factorized Sparse Kernel in Support Vector Machines. Author: Alhasan Abdellatif. Advisors: Jordi Castro, Lluís A. Belanche. Departments: Department of Statistics and Operations Research and Department of Computer Science. Academic year: 2018-2019. Universitat Politècnica de Catalunya, Facultat de Matemàtiques i Estadística. Master's thesis, June 2019. I would like to express my gratitude to my supervisors Jordi Castro and Lluís A. Belanche for the continuous support, guidance and advice they gave me while working on the project and writing the thesis. The enthusiasm and dedication they show have pushed me forward and will always be an inspiration for me. I am forever grateful to my family, who have always been there for me. They have encouraged me to pursue my passion and seek my own destiny. This journey would not have been possible without their support and love. Abstract: Support Vector Machine (SVM) is one of the most powerful machine learning algorithms due to its convex optimization formulation and handling of non-linear classification. However, one of its main drawbacks is the long time it takes to train large data sets. This limitation often arises when applying non-linear kernels (e.g. the RBF kernel), which are usually required to obtain better separation for linearly inseparable data sets. In this thesis, we study an approach that aims to speed up the training time by combining both the better performance of RBF kernels and fast training by a linear solver, LIBLINEAR.
  • The QR-Algorithm for Arrowhead Matrices
    The QR-algorithm for arrowhead matrices. Nicola Mastronardi¹,², Marc Van Barel¹, Ellen Van Camp¹, Raf Vandebril¹. Keywords: arrowhead matrices, QR-algorithm, diagonal-plus-semiseparable matrices. Abstract: One of the most fundamental problems in numerical linear algebra is the computation of the eigenspectrum of matrices. Of special interest are the symmetric eigenproblems. The classical algorithm for computing the whole spectral decomposition first reduces the symmetric matrix into a symmetric tridiagonal matrix by means of orthogonal similarity transformations and secondly, the QR-algorithm is applied to this tridiagonal matrix. Another frequently used technique to find the eigendecomposition of this tridiagonal matrix is a divide and conquer strategy. Using this strategy, quite often the eigendecomposition of arrowhead matrices is needed. An arrowhead matrix consists of a diagonal matrix with one extra nonzero row and column. The most common way of computing the eigenvalues of an arrowhead matrix is solving a secular equation. In this talk, we will propose an alternative way for computing the eigendecomposition of arrowhead matrices; more precisely, we will apply the QR-algorithm to them. When exploiting the special structure of arrowhead matrices, which is actually the structure of diagonal-plus-semiseparable matrices because an arrowhead matrix belongs to the latter class, the QR-algorithm will be sped up. A comparison between this QR-algorithm and solving the secular equation will be presented. ¹ Department of Computer Science, Katholieke Universiteit Leuven, Celestijnenlaan 200A, 3001 Heverlee, Belgium, email: [marc.vanbarel, ellen.vancamp, raf.vandebril, nicola.mastronardi]@cs.kuleuven.ac.be. ² Istituto per le Applicazioni del Calcolo.
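For a concrete picture of the secular-equation route mentioned above: the eigenvalues of a symmetric arrowhead matrix [[diag(d), z], [zᵀ, α]] with distinct d_i and nonzero z_i are the roots of f(λ) = α − λ − Σ z_i²/(d_i − λ), with exactly one root strictly between consecutive poles d_i and one outside them on each end. A sketch with numpy/scipy (our example data, not from the talk):

```python
import numpy as np
from scipy.optimize import brentq

d = np.array([1.0, 2.0, 4.0])        # diagonal entries (poles of f), increasing
z = np.array([0.5, 0.3, 0.8])        # arrow entries, all nonzero
alpha = 3.0                          # arrow tip

def f(lam):
    """Secular function: its roots are the eigenvalues of the arrowhead matrix."""
    return alpha - lam - np.sum(z**2 / (d - lam))

# One root in each interval between consecutive poles, plus one below d[0]
# and one above d[-1]; a crude Gershgorin-style bound brackets the outer roots.
pad = np.abs(d).max() + np.sum(np.abs(z)) + abs(alpha)
ends = np.concatenate(([-pad], d, [pad]))
eps = 1e-9                           # stay off the poles, where f blows up
eigs = sorted(brentq(f, lo + eps, hi - eps) for lo, hi in zip(ends[:-1], ends[1:]))

A = np.block([[np.diag(d), z[:, None]], [z[None, :], np.array([[alpha]])]])
print(np.allclose(eigs, np.linalg.eigvalsh(A)))          # True
```

The bracketing works because f decreases from +∞ to −∞ across each interval between poles, so a sign-change root finder such as `brentq` is guaranteed to converge.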
  • Inversion of Selected Structures of Block Matrices of Chosen Mechatronic Systems
    BULLETIN OF THE POLISH ACADEMY OF SCIENCES, TECHNICAL SCIENCES, Vol. 64, No. 4, 2016. DOI: 10.1515/bpasts-2016-0093. Inversion of selected structures of block matrices of chosen mechatronic systems. Tomasz Trawiński¹,*, Adam Kochan¹, Paweł Kielan¹ and Dariusz Kurzyk². ¹ Department of Mechatronics, Silesian University of Technology, Akademicka 10A, 44-100 Gliwice, Poland. ² Institute of Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland. Abstract: This article describes how to calculate the
  • Accurate Eigenvalue Decomposition of Arrowhead Matrices and Applications
    Accurate eigenvalue decomposition of arrowhead matrices and applications. N. Jakovčević Stor¹,*, I. Slapničar¹, J. Barlow². ¹ Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, Rudjera Boškovića 32, 21000 Split, Croatia. ² Department of Computer Science and Engineering, The Pennsylvania State University, University Park, PA 16802-6822, USA. (arXiv:1302.7203 [math.NA]) Abstract: We present a new algorithm for solving an eigenvalue problem for a real symmetric arrowhead matrix. The algorithm computes all eigenvalues and all components of the corresponding eigenvectors with high relative accuracy in O(n²) operations. The algorithm is based on a shift-and-invert approach. Double precision is eventually needed to compute only one element of the inverse of the shifted matrix. Each eigenvalue and the corresponding eigenvector can be computed separately, which makes the algorithm adaptable for parallel computing. Our results extend to Hermitian arrowhead matrices, real symmetric diagonal-plus-rank-one matrices and the singular value decomposition of real triangular arrowhead matrices. Keywords: eigenvalue decomposition, arrowhead matrix, high relative accuracy, singular value decomposition. 2000 MSC: 65F15. 1. Introduction and Preliminaries: In this paper we consider the eigenvalue problem for a real symmetric matrix A which is zero except for its main diagonal and one row and column. Since eigenvalues are invariant under similarity transformations, we can symmetrically permute the rows and the columns of the given matrix. Therefore, we assume without loss of generality that the matrix A is an n × n real symmetric arrowhead matrix of the form
  • This File Was Downloaded From
    This is the author's version of a work that was submitted/accepted for publication in the following source: Kumar, S., Guivant, J., Upcroft, B., & Durrant-Whyte, H. F. (2007) Sequential nonlinear manifold learning. Intelligent Data Analysis, 11(2), pp. 203-222. This file was downloaded from: http://eprints.qut.edu.au/40421/ © Copyright 2007 IOS Press. Notice: Changes introduced as a result of publishing processes such as copy-editing and formatting may not be reflected in this document. For a definitive version of this work, please refer to the published source. Intelligent Data Analysis 11 (2007) 203-222, IOS Press. Sequential nonlinear manifold learning. S. Kumar, J. Guivant, B. Upcroft and H.F. Durrant-Whyte, Australian Centre for Field Robotics, University of Sydney, NSW 2006, Australia. Tel.: +612 93512186; Fax: +612 93517474; E-mail: [email protected]. Received 5 December 2005; Revised 2 March 2006; Accepted 7 May 2006. Abstract: The computation of compact and meaningful representations of high dimensional sensor data has recently been addressed through the development of Nonlinear Dimensional Reduction (NLDR) algorithms. The numerical implementation of spectral NLDR techniques typically leads to a symmetric eigenvalue problem that is solved by traditional batch eigensolution algorithms. The application of such algorithms in real-time systems necessitates the development of sequential algorithms that perform feature extraction online. This paper presents an efficient online NLDR scheme, Sequential-Isomap, based on incremental singular value decomposition (SVD) and the Isomap method.
  • Explicit Minimum Polynomial, Eigenvector, and Inverse Formula for Nonsymmetric Arrowhead Matrix
    International Journal of Pure and Applied Mathematics, Volume 108, No. 4, 2016, 967-984. ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version). url: http://www.ijpam.eu doi: 10.12732/ijpam.v108i4.21 EXPLICIT MINIMUM POLYNOMIAL, EIGENVECTOR, AND INVERSE FORMULA FOR NONSYMMETRIC ARROWHEAD MATRIX. Wiwat Wanicharpichat. ¹Department of Mathematics, Faculty of Science, and ²Research Center for Academic Excellence in Mathematics, Naresuan University, Phitsanulok 65000, THAILAND. Received: June 6, 2016; Published: August 16, 2016. © 2016 Academic Publications, Ltd. Abstract: In this paper, we treat the eigenvalue problem for a nonsymmetric arrowhead matrix, which is the general form of a symmetric arrowhead matrix. The purpose of this paper is to present explicit formulas for the determinant, inverse, minimum polynomial, and eigenvectors of some nonsymmetric arrowhead matrices. AMS Subject Classification: 15A09, 15A15, 15A18, 15A23, 65F15, 65F40. Key Words: arrowhead matrix, Krylov matrix, Schur complement, nonderogatory matrix. 1. Introduction and Preliminaries: Let R be the field of real numbers and C be the field of complex numbers, and C* = C \ {0}. For a positive integer n, let M_n be the set of all n × n matrices over C. The set of all vectors, or n × 1 matrices over C, is denoted by C^n. A nonzero vector v ∈ C^n is called an eigenvector of A ∈ M_n corresponding to a scalar λ ∈ C if Av = λv, and the scalar λ is an eigenvalue of the matrix A. The set of eigenvalues of A is called the spectrum of A and is denoted by σ(A).
  • A Quantum Interior-Point Method for Second-Order Cone Programming Iordanis Kerenidis, Anupam Prakash, Daniel Szilagyi
    A Quantum Interior-Point Method for Second-Order Cone Programming. Iordanis Kerenidis, Anupam Prakash, Daniel Szilagyi. To cite this version: Iordanis Kerenidis, Anupam Prakash, Daniel Szilagyi. A Quantum Interior-Point Method for Second-Order Cone Programming. [Research Report] IRIF. 2019. hal-02138307. HAL Id: hal-02138307, https://hal.archives-ouvertes.fr/hal-02138307. Submitted on 23 May 2019. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. A Quantum Interior-Point Method for Second-Order Cone Programming. Technical report. Iordanis Kerenidis¹, Anupam Prakash¹, and Dániel Szilágyi¹,². ¹CNRS, IRIF, Université Paris Diderot, Paris, France. ²École Normale Supérieure de Lyon, Lyon, France. 1. Introduction: In the last 40 years, convex optimization has seen a tremendous rise in popularity, both as a research area and as a tool used by practitioners across the industry. By and large, the community has realized that the proper division of the field is into convex (feasible, "easy") and nonconvex (sometimes "as hard as it gets"). The reason [6] for this is twofold: 1. Starting with the work of Khachiyan [17] and Karmarkar [13], (theoretically and practically) efficient algorithms for convex optimization have been developed.
  • Book of Abstracts Iwasep 9
    BOOK OF ABSTRACTS, IWASEP 9, 9th International Workshop on Accurate Solution of Eigenvalue Problems, Napa Valley, June 4-7, 2012. Organizers: Jesse Barlow (Chair), Zlatko Drmač, Volker Mehrmann, Ivan Slapničar, Krešimir Veselić. Local Organizing Committee: Jim Demmel, Esmond G. Ng, Oded Schwartz. 1. Deflations Preserving Relative Accuracy, William Kahan: Deflation turns a matrix eigenproblem into two of smaller dimensions by annihilating a block of off-diagonal elements. When can this be done without changing the eigenvalues of an Hermitian matrix beyond each one's last significant digit or two, no matter how widespread are the eigenvalues' magnitudes? We seek practicable answers to this question, particularly for tridiagonals. Answers for bidiagonals' singular values were found by Ren-Cang Li in 1994. 2. Accurate Computation of Eigenvalues by an Analytical Level Set Method, Pavel Grinfeld: A numerical technique, based on an analytical level set method, is presented. The proposed technique yields eigenvalues for a broad range of smooth shapes with arbitrarily high accuracy. The technique applies to most common linear operators, including the Laplace operator, which will be the focus of the presentation. The method is characterized by utmost simplicity, exponential convergence, ease of analysis and the ability to produce an exact solution to an approximate problem. 3. Orthogonal Functions and Inverse Eigenvalue Problems, Marc Van Barel: Orthogonal polynomials on the real line satisfy a three-term recurrence relation. This relation can be written in matrix notation by using a tridiagonal matrix. Similarly, orthogonal polynomials on the unit circle satisfy a Szegő recurrence relation that corresponds to an (almost) unitary Hessenberg matrix.