A Fast Method for Computing the Inverse of Symmetric Block Arrowhead Matrices
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Geometric Polarimetry − Part I: Spinors and Wave States
Geometric Polarimetry, Part I: Spinors and Wave States. David Bebbington, Laura Carrea, and Ernst Krogager, Member, IEEE (arXiv:0804.0745v1 [physics.optics], 4 Apr 2008). Abstract: A new approach to polarization algebra is introduced. It exploits the geometric properties of spinors in order to represent wave states consistently in arbitrary directions in three-dimensional space. In this first expository paper of an intended series, the basic derivation of the spinorial wave state is seen to be geometrically related to the electromagnetic field tensor in the spatio-temporal Fourier domain. Extracting the polarization state from the electromagnetic field requires the introduction of a new element, which enters linearly into the defining relation. We call this element the phase flag; it is this that keeps track of the polarization reference when the coordinate system is changed and provides a phase origin for both wave components. In this way we are able to identify the sphere of three-dimensional unit wave vectors with the Poincaré sphere. Index Terms: state of polarization, geometry, covariant and contravariant spinors and tensors, bivectors, phase flag, Poincaré sphere. I. Introduction. The development of applications in radar polarimetry has been vigorous over the past decade, and is anticipated to continue with ambitious new spaceborne remote sensing missions (for example TerraSAR-X [1] and TanDEM-X [2]). As technical capabilities increase, and new application areas open up, innovations in data analysis often result. Because polarization data are … (This work was supported by the Marie Curie Research Training Network AMPER, contract number HPRN-CT-2002-00205.)
Structured Factorizations in Scalar Product Spaces
Structured Factorizations in Scalar Product Spaces. D. Steven Mackey, Niloufer Mackey, and Françoise Tisseur. Abstract: Let A belong to an automorphism group, Lie algebra or Jordan algebra of a scalar product. When A is factored, to what extent do the factors inherit structure from A? We answer this question for the principal matrix square root, the matrix sign decomposition, and the polar decomposition. For general A, we give a simple derivation and characterization of a particular generalized polar decomposition, and we relate it to other such decompositions in the literature. Finally, we study eigendecompositions and structured singular value decompositions, considering in particular the structure in eigenvalues, eigenvectors and singular values that persists across a wide range of scalar products. A key feature of our analysis is the identification of two particular classes of scalar products, termed unitary and orthosymmetric, which serve to unify assumptions for the existence of structured factorizations. A variety of different characterizations of these scalar product classes is given. Key words: automorphism group, Lie group, Lie algebra, Jordan algebra, bilinear form, sesquilinear form, scalar product, indefinite inner product, orthosymmetric, adjoint, factorization, symplectic, Hamiltonian, pseudo-orthogonal, polar decomposition, matrix sign function, matrix square root, generalized polar decomposition, eigenvalues, eigenvectors, singular values, structure preservation. AMS subject classifications: 15A18, 15A21, 15A23, 15A57, …
Linear Algebra - Part II Projection, Eigendecomposition, SVD
(Adapted from Punit Shah's slides, 2019.) Brief review from Part 1: matrix multiplication is a linear transformation; a symmetric matrix satisfies A = A^T; an orthogonal matrix satisfies A^T A = A A^T = I and A^{-1} = A^T; the L2 norm is ||x||_2 = sqrt(sum_i x_i^2). Angle between vectors: the dot product of two vectors can be written in terms of their L2 norms and the angle θ between them, a^T b = ||a||_2 ||b||_2 cos(θ). Cosine similarity: the cosine between two vectors is a measure of their similarity, cos(θ) = (a · b) / (||a|| ||b||); two vectors a and b are orthogonal to each other if a · b = 0. Vector projection: given two vectors a and b, let b̂ = b / ||b|| be the unit vector in the direction of b; the scalar a_1 = ||a|| cos(θ) = a · b̂ = a · (b / ||b||), and a_1 b̂ is the orthogonal projection of a onto a straight line parallel to b. Diagonal matrix: a diagonal matrix has mostly zeros, with non-zero entries only on the diagonal (e.g. the identity matrix); a square diagonal matrix with diagonal elements given by the entries of a vector v is denoted diag(v). Multiplying a vector x by a diagonal matrix is efficient: diag(v) x = v ⊙ x, the entrywise product. Inverting a square diagonal matrix is also efficient: diag(v)^{-1} = diag([1/v_1, …, 1/v_n]^T). Determinant: the determinant of a square matrix is a mapping to a scalar. …
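The operations summarized in this excerpt are easy to check numerically. A minimal NumPy sketch (not part of the slides; the vectors are assumed example data) illustrating cosine similarity, vector projection, and the diagonal-matrix shortcuts:

```python
import numpy as np

a = np.array([3.0, 4.0, 0.0])
b = np.array([1.0, 0.0, 0.0])

# Cosine similarity: cos(theta) = (a . b) / (||a|| ||b||)
cos_theta = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Orthogonal projection of a onto the line spanned by b
b_hat = b / np.linalg.norm(b)
proj = (a @ b_hat) * b_hat            # equals [3., 0., 0.]

# Diagonal-matrix shortcuts: diag(v) x = v * x (entrywise product),
# and diag(v)^{-1} = diag(1 / v)
v = np.array([2.0, 4.0, 5.0])
x = np.array([1.0, 1.0, 1.0])
print(cos_theta)                                                 # 0.6
print(proj)
print(np.allclose(np.diag(v) @ x, v * x))                        # True
print(np.allclose(np.linalg.inv(np.diag(v)), np.diag(1.0 / v)))  # True
```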
Unit 6: Matrix Decomposition
Juan Luis Melero and Eduardo Eyras, October 2018. Contents: 1. Matrix decomposition (1.1 Definitions); 2. Singular Value Decomposition; 3. Exercises; 4. R practical (4.1 SVD). 1 Matrix decomposition. 1.1 Definitions. Matrix decomposition consists in transforming a matrix into a product of other matrices. Matrix diagonalization is a special case of decomposition and is also called the diagonal (eigen) decomposition of a matrix. Definition: there is a diagonal decomposition of a square matrix A if we can write A = U D U^{-1}, where D is a diagonal matrix, the diagonal elements of D are the eigenvalues of A, and the column vectors of U are the eigenvectors of A. For symmetric matrices there is a special decomposition. Definition: given a symmetric matrix A (i.e. A^T = A), there is a decomposition of the form A = U D U^T, where U is an orthonormal matrix, the column vectors of U are the unit-norm orthogonal eigenvectors of A, and D is a diagonal matrix formed by the eigenvalues of A. This special decomposition is known as the spectral decomposition. Definition: an orthonormal matrix is a square matrix whose column and row vectors are orthogonal unit vectors (orthonormal vectors). Orthonormal matrices have the property that their transpose is their inverse. Proposition: if a matrix Q is orthonormal, then Q^T = Q^{-1}. Proof sketch: writing the rows of Q as u_1, …, u_n, the product Q Q^T has entries ⟨u_i, u_j⟩, which equal 1 when i = j and 0 otherwise, so Q Q^T = I. …
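A small NumPy check (not part of the notes; the matrix is an assumed example) of the spectral decomposition A = U D U^T of a symmetric matrix, with orthonormal U:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # a symmetric matrix

eigvals, U = np.linalg.eigh(A)           # eigh is the solver for symmetric/Hermitian A
D = np.diag(eigvals)

print(np.allclose(U @ D @ U.T, A))       # spectral decomposition A = U D U^T
print(np.allclose(U.T @ U, np.eye(2)))   # U is orthonormal, so U^T = U^{-1}
```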
Accurate Eigenvalue Decomposition of Arrowhead Matrices and Applications
Nevena Jakovčević Stor, FESB, University of Split; joint work with Ivan Slapničar and Jesse Barlow. IWASEP9, June 4th, 2012. Introduction: We present a new algorithm (aheig) for computing the eigenvalue decomposition of a real symmetric arrowhead matrix. Under certain conditions, the aheig algorithm computes all eigenvalues and all components of the corresponding eigenvectors with high relative accuracy in O(n^2) operations. The algorithm is based on a shift-and-invert technique and limited, finite O(n) use of double precision arithmetic when necessary. …
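The slides do not reproduce the aheig algorithm itself; the sketch below (with assumed example data and a hypothetical helper name) only illustrates the structure of a real symmetric arrowhead matrix and checks its eigendecomposition with a standard dense O(n^3) solver for reference:

```python
import numpy as np

def arrowhead(d, z, alpha):
    """Real symmetric arrowhead matrix: diag(d) in the leading block,
    vector z in the last row/column, scalar alpha in the corner."""
    n = len(d) + 1
    A = np.zeros((n, n))
    A[:-1, :-1] = np.diag(d)
    A[:-1, -1] = z
    A[-1, :-1] = z
    A[-1, -1] = alpha
    return A

A = arrowhead(d=[1.0, 2.0, 3.0], z=[0.1, 0.2, 0.3], alpha=4.0)
eigvals, V = np.linalg.eigh(A)                      # dense reference solver, not aheig
print(np.allclose(V @ np.diag(eigvals) @ V.T, A))   # True
```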
High Performance Selected Inversion Methods for Sparse Matrices
Direct and stochastic approaches to selected inversion. Doctoral dissertation submitted to the Faculty of Informatics of the Università della Svizzera italiana in partial fulfillment of the requirements for the degree of Doctor of Philosophy, presented by Fabio Verbosio under the supervision of Olaf Schenk, February 2019. Dissertation committee: Illia Horenko (Università della Svizzera italiana, Switzerland), Igor Pivkin (Università della Svizzera italiana, Switzerland), Matthias Bollhöfer (Technische Universität Braunschweig, Germany), Laura Grigori (INRIA Paris, France). Dissertation accepted on 25 February 2019; research advisor Olaf Schenk, PhD program director Walter Binder. "I certify that except where due acknowledgement has been given, the work presented in this thesis is that of the author alone; the work has not been submitted previously, in whole or in part, to qualify for any other academic award; and the content of the thesis is the result of work which has been carried out since the official commencement date of the approved research program." Fabio Verbosio, Lugano, 25 February 2019. Dedication: "To my whole family. In its broadest sense." Epigraph (translated from the Italian): "Mathematical knowledge consists of propositions constructed by our intellect so as to always function as true, either because they are innate or because mathematics was invented before the other sciences. And the library was built by a human mind that thinks mathematically, because without mathematics you cannot make labyrinths." — Umberto Eco, "The Name of the Rose". Abstract: The explicit evaluation of selected entries of the inverse of a given sparse matrix is an important process in various application fields and is gaining visibility in recent years. …
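As a point of reference only (the dissertation develops far more efficient direct and stochastic approaches than this), selected entries of a sparse inverse can be obtained naively by factorizing the matrix once and solving against unit vectors. A SciPy sketch under assumed example data, with a hypothetical helper name:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# A small sparse symmetric positive definite matrix (assumed example data).
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))

lu = splu(A)                          # one sparse LU factorization of A

def selected_inverse_entry(i, j, n=3):
    """Naive selected inversion: (A^{-1})[i, j] via one solve with a unit vector."""
    e_j = np.zeros(n)
    e_j[j] = 1.0
    return lu.solve(e_j)[i]           # entry i of column j of A^{-1}

print(selected_inverse_entry(0, 0))   # a diagonal entry of A^{-1}
print(np.allclose(selected_inverse_entry(2, 1),
                  np.linalg.inv(A.toarray())[2, 1]))  # True
```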
LU Decomposition - Wikipedia, the Free Encyclopedia
http://en.wikipedia.org/wiki/LU_decomposition — From Wikipedia, the free encyclopedia. In linear algebra, LU decomposition (also called LU factorization) is a matrix decomposition which writes a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. This decomposition is used in numerical analysis to solve systems of linear equations or to calculate the determinant of a matrix. LU decomposition can be viewed as a matrix form of Gaussian elimination; it was introduced by the mathematician Alan Turing [1]. Contents: 1 Definitions; 2 Existence and uniqueness; 3 Positive definite matrices; 4 Explicit formulation; 5 Algorithms (5.1 Doolittle algorithm, 5.2 Crout and LUP algorithms, 5.3 Theoretical complexity); 6 Small example; 7 Sparse matrix decomposition; 8 Applications (8.1 Solving linear equations, 8.2 Inverse matrix, 8.3 Determinant); 9 See also; 10 References; 11 External links. Definitions: Let A be a square matrix. An LU decomposition is a decomposition of the form A = LU, where L and U are lower and upper triangular matrices (of the same size), respectively. This means that L has only zeros above the diagonal and U has only zeros below the diagonal. An LDU decomposition is a decomposition of the form A = LDU, where D is a diagonal matrix and L and U are unit triangular matrices, meaning that all the entries on the diagonals of L and U are one. (Figure: LDU decomposition of a Walsh matrix.) …
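A compact Doolittle-style factorization without pivoting, sketched in NumPy (not taken from the article; it assumes the leading principal minors of A are nonsingular, so no permutation matrix is needed, and the function name is illustrative):

```python
import numpy as np

def lu_doolittle(A):
    """Doolittle LU factorization without pivoting: A = L @ U,
    with L unit lower triangular and U upper triangular."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        # Row i of U
        for j in range(i, n):
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        # Column i of L, below the diagonal
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L, U = lu_doolittle(A)
print(np.allclose(L @ U, A))   # True
```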
Iterative Methods for Computing Eigenvalues and Eigenvectors (arXiv:1105.1185v1 [math.NA], 5 May 2011)
Iterative Methods for Computing Eigenvalues and Eigenvectors. Maysum Panju. Abstract: We examine some numerical iterative methods for computing the eigenvalues and eigenvectors of real matrices. The five methods examined here range from the simple power iteration method to the more complicated QR iteration method. The derivations, procedure, and advantages of each method are briefly discussed. 1. Introduction. Eigenvalues and eigenvectors play an important part in the applications of linear algebra. The naive method of finding the eigenvalues of a matrix involves finding the roots of the characteristic polynomial of the matrix. In industrial-sized matrices, however, this method is not feasible, and the eigenvalues must be obtained by other means. Fortunately, there exist several other techniques for finding eigenvalues and eigenvectors of a matrix, some of which fall under the realm of iterative methods. These methods work by repeatedly refining approximations to the eigenvectors or eigenvalues, and can be terminated whenever the approximations reach a suitable degree of accuracy. Iterative methods form the basis of much of modern-day eigenvalue computation. In this paper, we outline five such iterative methods and summarize their derivations, procedures, and advantages. The methods to be examined are the power iteration method, the shifted inverse iteration method, the Rayleigh quotient method, the simultaneous iteration method, and the QR method. This paper is meant to be a survey of existing algorithms for the eigenvalue computation problem. Section 2 of this paper provides a brief review of some of the linear algebra background required to understand the concepts that are discussed. In Section 3, the iterative methods are each presented, in order of complexity, and are studied in brief detail. …
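A minimal sketch of the simplest of the five methods, power iteration, written in NumPy (not from the paper; it assumes A has a single dominant eigenvalue and only illustrates the repeated-refinement idea described in the abstract):

```python
import numpy as np

def power_iteration(A, num_iters=1000, tol=1e-12):
    """Power iteration: approximate the dominant eigenpair of A."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(num_iters):
        w = A @ v
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ A @ v_new        # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, v_new
        lam, v = lam_new, v_new
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)
print(lam)   # about 3.618, the largest eigenvalue of A
```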
Cholesky-Factorized Sparse Kernel in Support Vector Machines (Master of Science in Advanced Mathematics and Mathematical Engineering)
Master of Science in Advanced Mathematics and Mathematical Engineering. Title: Cholesky-Factorized Sparse Kernel in Support Vector Machines. Author: Alhasan Abdellatif. Advisors: Jordi Castro, Lluís A. Belanche. Departments: Department of Statistics and Operations Research and Department of Computer Science. Academic year: 2018-2019. Universitat Politècnica de Catalunya, Facultat de Matemàtiques i Estadística; master's thesis supervised by Jordi Castro and Lluís A. Belanche, June 2019. Acknowledgements: I would like to express my gratitude to my supervisors Jordi Castro and Lluís A. Belanche for the continuous support, guidance and advice they gave me while working on the project and writing the thesis. The enthusiasm and dedication they show have pushed me forward and will always be an inspiration for me. I am forever grateful to my family, who have always been there for me. They have encouraged me to pursue my passion and seek my own destiny. This journey would not have been possible without their support and love. Abstract: Support Vector Machine (SVM) is one of the most powerful machine learning algorithms due to its convex optimization formulation and handling of non-linear classification. However, one of its main drawbacks is the long time it takes to train large data sets. This limitation often arises when applying non-linear kernels (e.g. the RBF kernel), which are usually required to obtain better separation for linearly inseparable data sets. In this thesis, we study an approach that aims to speed up the training time by combining both the better performance of RBF kernels and fast training by a linear solver, LIBLINEAR. …
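The thesis itself combines a sparsified kernel with LIBLINEAR; the sketch below (assumed random data, illustrative function name, not the thesis's sparse method) only shows the basic ingredient it builds on, an RBF kernel matrix and its Cholesky factor:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Dense RBF (Gaussian) kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
K = rbf_kernel(X, gamma=0.5)

# Cholesky factorization K = L L^T; a small ridge keeps K numerically
# positive definite. A linear model can then be trained on the rows of L,
# since L L^T reproduces the kernel inner products.
L = np.linalg.cholesky(K + 1e-8 * np.eye(K.shape[0]))
print(np.allclose(L @ L.T, K, atol=1e-6))   # True
```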
Householder Matrix H( W )= E ( W , W ,2) = I − 2 Wwh for Wh W =1
Outline: Elementary Matrix; Full-rank Decomposition; Triangular Matrix Decomposition; QR Decomposition; Schur Theorem and Normal Matrix. Elementary Matrix: E(u, v, σ) = I − σ u v^H, the identity matrix plus a rank-one matrix. Properties of the elementary matrix: its determinant is det E(u, v, σ) = 1 − σ v^H u; its inverse is E(u, v, σ)^{-1} = E(u, v, σ / (σ v^H u − 1)), which can be verified by direct (brute-force) multiplication; and for any two nonzero vectors a and b, there exists an elementary matrix such that E(u, v, σ) a = b. Construction for the last property: E(u, v, σ) a = a − σ u v^H a = b implies σ u v^H a = a − b, which is satisfied by taking u = a − b and choosing σ and v such that σ v^H a = 1. Householder Matrix: H(w) = E(w, w, 2) = I − 2 w w^H for w^H w = 1. Properties of the Householder matrix: its determinant is det(H(w)) = 1 − 2 w^H w = −1, and its inverse is H(w)^{-1} = H(w)^H = H(w). Full-Rank Decomposition. Theorem: for an m×n matrix A with rank(A) = r, there exist an m×r full-rank matrix B and an r×n full-rank matrix C such that A = BC.
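A quick NumPy check (not from the slides; the vector is an assumed example) of the stated Householder properties det H(w) = −1 and H(w)^{-1} = H(w):

```python
import numpy as np

def householder(w):
    """Householder matrix H(w) = I - 2 w w^H for a unit vector w."""
    w = np.asarray(w, dtype=float).reshape(-1, 1)
    w = w / np.linalg.norm(w)                   # enforce w^H w = 1
    return np.eye(w.shape[0]) - 2.0 * (w @ w.T)

w = np.array([1.0, 2.0, 2.0])
H = householder(w)
print(np.isclose(np.linalg.det(H), -1.0))       # det(H) = 1 - 2 w^H w = -1
print(np.allclose(H @ H, np.eye(3)))            # H is its own inverse
```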
Wedge Reversion Antisymmetry and 41 Types of Physical Quantities in Arbitrary Dimensions
Electronic reprint, ISSN 2053-2733, journals.iucr.org/a. Wedge reversion antisymmetry and 41 types of physical quantities in arbitrary dimensions. Venkatraman Gopalan, Acta Cryst. (2020), A76, 318–327. Research papers. Venkatraman Gopalan, Department of Materials Science and Engineering, Department of Physics, and Department of Engineering Science and Mechanics, Pennsylvania State University, University Park, PA 16802, USA. Correspondence e-mail: [email protected]. Received 18 October 2019; accepted 16 February 2020; edited by S. J. L. Billinge, Columbia University, USA. Abstract: It is shown that there are 41 types of multivectors representing physical quantities in non-relativistic physics in arbitrary dimensions within the formalism of Clifford algebra. The classification is based on the action of three symmetry operations on a general multivector: spatial inversion, 1̄, time reversal, 1′, and a third that is introduced here, namely wedge reversion, 1†. It is shown that the traits of 'axiality' and 'chirality' are not good bases for extending the classification of multivectors into arbitrary dimensions, and that introducing 1† would allow for such a classification. Keywords: multivectors; wedge reversion antisymmetry; Clifford algebra.
Matrix Decompositions
Paul Johnson, December 2012. 1 Background. In order to understand these notes, you should have some experience with matrix algebra. Perhaps not an entire semester or year of matrix algebra is necessary, but comfort with columns, rows, and their multiplication is assumed. I assume you know about "inner products" of vectors; matrix multiplication (which is actually conducted by calculating the inner products of the rows and columns of the matrices); that the transpose of a vector converts a row into a column (and vice versa); and that the transpose of a matrix is a "reflection" of that matrix on its main diagonal. Other than that, I try to define the terms as I go. Inverse and Identity. Recall that an inverse of X is a matrix X^{-1} such that X X^{-1} = I (1). I, the "identity matrix", is a matrix of 1's and 0's, with ones on the main diagonal and zeros everywhere else (2). If you could grab the first row of X and the first column of X^{-1}, the inner product of row and column would be 1, and the product of that first row with any other column of X^{-1} would be 0. Disregard some Elementary Linear Algebra. In statistics, we often need to calculate the inverse of X. We do that in order to "solve" a system or "calculate" some statistic. In elementary matrix algebra, we see a matrix A and vectors x and b, A x = b (3), solved by the introduction of an inverse matrix A^{-1}, a matrix such that A^{-1} A = I: A^{-1} A x = A^{-1} b, so I x = A^{-1} b, hence x = A^{-1} b. It is almost instinctual to think that, if we need to calculate x, we multiply A^{-1} b.
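A small NumPy illustration (not from the notes; the system is assumed example data) of the x = A^{-1} b computation, comparing the explicit inverse with a direct solve of A x = b:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Textbook route: form A^{-1} explicitly and multiply.
x_via_inverse = np.linalg.inv(A) @ b

# Common numerical route: solve A x = b directly (internally via a
# factorization) without ever forming A^{-1}.
x_via_solve = np.linalg.solve(A, b)

print(np.allclose(x_via_inverse, x_via_solve))   # True
print(x_via_solve)                               # [2. 3.]
```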