Stat 309: Mathematical Computations I Fall 2014 Lecture 8

Date: November 5, 2014, version 1.0. Comments, bug reports: [email protected]

• we look at LU factorization and some of its variants: condensed LU, LDU, LDL^T, and Cholesky factorizations

1. Existence of LU factorization

• the solution method for a linear system Ax = b depends on the structure of A: A may be a sparse or dense matrix, or it may have one of many well-known structures, such as being a banded matrix or a Hankel matrix
• for the general case of a dense, unstructured matrix A, the most common method is to obtain a decomposition A = LU, where L is lower triangular and U is upper triangular
• this decomposition is called the LU factorization or LU decomposition
• we deduce its existence via a constructive proof, namely, Gaussian elimination
• the motivation for this is something you learnt in middle school, i.e., solving Ax = b by eliminating variables:

$$\begin{aligned}
a_{11}x_1 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + \cdots + a_{2n}x_n &= b_2\\
&\;\,\vdots\\
a_{n1}x_1 + \cdots + a_{nn}x_n &= b_n
\end{aligned}$$

• we proceed by multiplying the first equation by -a_{21}/a_{11} and adding it to the second equation, and in general multiplying the first equation by -a_{i1}/a_{11} and adding it to equation i; this leaves you with the equivalent system

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1\\
0\cdot x_1 + a_{22}'x_2 + \cdots + a_{2n}'x_n &= b_2'\\
&\;\,\vdots\\
0\cdot x_1 + a_{n2}'x_2 + \cdots + a_{nn}'x_n &= b_n'
\end{aligned}$$

• continuing in this fashion, adding multiples of the second equation to each subsequent equation to make all elements below the diagonal equal to zero, you obtain an upper triangular system and may then solve for x_n, x_{n-1}, ..., x_1 by back substitution
• getting the LU factorization A = LU is very similar; the main difference is that you want not just the final upper triangular matrix (which is your U) but also to keep track of all the elimination steps (which give you your L)

2. Gaussian elimination revisited

• we are going to look at Gaussian elimination in a slightly different light from what you learnt in your undergraduate linear algebra class
• we think of Gaussian elimination, i.e., the process of transforming A into an upper triangular matrix U, as multiplying A by a sequence of matrices to obtain U
• but instead of elementary matrices, we consider again a rank-1 change to I, i.e., a matrix of the form I - uv^T where u, v ∈ R^n
• in Householder QR, we used Householder reflection matrices of the form H = I - 2uu^T
• in Gaussian elimination, we use so-called Gauss transformation or elimination matrices of the form

$$M = I - me_i^T$$

  where e_i = [0, ..., 1, ..., 0]^T is the ith standard basis vector
• the same trick that led us to the appropriate u in the Householder matrix can be applied to find the appropriate m too: suppose we want M_1 = I - m_1e_1^T to 'zero out' all the entries beneath the first in a vector a, i.e.,

$$M_1\begin{bmatrix}a_1\\a_2\\\vdots\\a_n\end{bmatrix} = \begin{bmatrix}\gamma\\0\\\vdots\\0\end{bmatrix},$$

  i.e.,

$$(I - m_1e_1^T)a = \gamma e_1, \qquad a - (e_1^Ta)m_1 = \gamma e_1, \qquad a_1m_1 = a - \gamma e_1,$$

  and if a_1 ≠ 0, then we may set

$$\gamma = a_1, \qquad m_1 = \begin{bmatrix}0\\a_2/a_1\\\vdots\\a_n/a_1\end{bmatrix}$$

• so we get

$$M_1 = I - m_1e_1^T = \begin{bmatrix}1 & & & 0\\ -a_2/a_1 & 1 & & \\ \vdots & 0 & \ddots & \\ -a_n/a_1 & & & 1\end{bmatrix}$$

  and, as required,

$$M_1a = \begin{bmatrix}1 & & & 0\\ -a_2/a_1 & 1 & & \\ \vdots & 0 & \ddots & \\ -a_n/a_1 & & & 1\end{bmatrix}\begin{bmatrix}a_1\\a_2\\\vdots\\a_n\end{bmatrix} = \begin{bmatrix}a_1\\0\\\vdots\\0\end{bmatrix} = a_1e_1$$

• applying this to zero out the entries beneath a_{11} in the first column of a matrix A, we get M_1A = A_2 where

$$A_2 = \begin{bmatrix}a_{11} & a_{12} & \cdots & a_{1n}\\ 0 & a_{22}^{(2)} & \cdots & a_{2n}^{(2)}\\ \vdots & \vdots & & \vdots\\ 0 & a_{n2}^{(2)} & \cdots & a_{nn}^{(2)}\end{bmatrix}$$

  where the superscript in parentheses denotes that the entries have changed
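As a quick sanity check on the construction of M_1, the following NumPy snippet (a sketch of ours, not part of the original notes; the function name and the test vector are made up for illustration) forms M_1 = I - m_1 e_1^T for a given vector a and verifies that M_1 a = a_1 e_1:

```python
import numpy as np

def gauss_transformation(a):
    """Return M1 = I - m1 e1^T such that M1 @ a = a[0] * e1 (requires a[0] != 0)."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    if a[0] == 0.0:
        raise ValueError("a1 = 0: cannot form the Gauss transformation without a row interchange")
    m1 = np.zeros(n)
    m1[1:] = a[1:] / a[0]            # m1 = [0, a2/a1, ..., an/a1]^T
    e1 = np.zeros(n)
    e1[0] = 1.0
    return np.eye(n) - np.outer(m1, e1)   # I - m1 e1^T

a = np.array([2.0, -4.0, 6.0])
M1 = gauss_transformation(a)
print(M1 @ a)                        # [2. 0. 0.] = a1 * e1
```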
• we will write

$$M_1 = \begin{bmatrix}1 & & & 0\\ -\ell_{21} & 1 & & \\ \vdots & 0 & \ddots & \\ -\ell_{n1} & & & 1\end{bmatrix}, \qquad \ell_{i1} = \frac{a_{i1}}{a_{11}} \quad \text{for } i = 2, \ldots, n$$

• if we do this recursively, defining M_2 by

$$M_2 = \begin{bmatrix}1 & & & & \\ 0 & 1 & & & \\ 0 & -\ell_{32} & 1 & & \\ \vdots & \vdots & & \ddots & \\ 0 & -\ell_{n2} & & & 1\end{bmatrix}, \qquad \ell_{i2} = \frac{a_{i2}^{(2)}}{a_{22}^{(2)}} \quad \text{for } i = 3, \ldots, n,$$

  then

$$M_2A_2 = A_3 = \begin{bmatrix}a_{11} & a_{12} & a_{13} & \cdots & a_{1n}\\ 0 & a_{22}^{(2)} & a_{23}^{(2)} & \cdots & a_{2n}^{(2)}\\ 0 & 0 & a_{33}^{(3)} & \cdots & a_{3n}^{(3)}\\ \vdots & \vdots & \vdots & & \vdots\\ 0 & 0 & a_{n3}^{(3)} & \cdots & a_{nn}^{(3)}\end{bmatrix}$$

• in general, we have

$$M_k = \begin{bmatrix}1 & & & & & \\ 0 & \ddots & & & & \\ \vdots & & 1 & & & \\ \vdots & & -\ell_{k+1,k} & 1 & & \\ \vdots & & \vdots & & \ddots & \\ 0 & & -\ell_{nk} & & & 1\end{bmatrix}, \qquad \ell_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}} \quad \text{for } i = k+1, \ldots, n,$$

  and

$$M_{n-1}M_{n-2}\cdots M_1A = A_n \equiv \begin{bmatrix}u_{11} & \cdots & \cdots & u_{1n}\\ 0 & u_{22} & \cdots & u_{2n}\\ \vdots & & \ddots & \vdots\\ 0 & \cdots & 0 & u_{nn}\end{bmatrix}$$

  or, equivalently,

$$A = M_1^{-1}M_2^{-1}\cdots M_{n-1}^{-1}U$$

• it turns out that M_j^{-1} is very easy to compute; we claim that

$$M_1^{-1} = \begin{bmatrix}1 & & & 0\\ \ell_{21} & 1 & & \\ \vdots & 0 & \ddots & \\ \ell_{n1} & & & 1\end{bmatrix} \tag{2.1}$$

• to see this, consider the product

$$M_1M_1^{-1} = \begin{bmatrix}1 & & & 0\\ -\ell_{21} & 1 & & \\ \vdots & 0 & \ddots & \\ -\ell_{n1} & & & 1\end{bmatrix}\begin{bmatrix}1 & & & 0\\ \ell_{21} & 1 & & \\ \vdots & 0 & \ddots & \\ \ell_{n1} & & & 1\end{bmatrix}$$

  which can easily be verified to be equal to the identity matrix
• in general, we have

$$M_k^{-1} = \begin{bmatrix}1 & & & & & \\ 0 & \ddots & & & & \\ \vdots & & 1 & & & \\ \vdots & & \ell_{k+1,k} & 1 & & \\ \vdots & & \vdots & & \ddots & \\ 0 & & \ell_{nk} & & & 1\end{bmatrix} \tag{2.2}$$

• now, consider the product

$$M_1^{-1}M_2^{-1} = \begin{bmatrix}1 & & & & \\ \ell_{21} & 1 & & & \\ \ell_{31} & 0 & 1 & & \\ \vdots & \vdots & & \ddots & \\ \ell_{n1} & 0 & & & 1\end{bmatrix}\begin{bmatrix}1 & & & & \\ 0 & 1 & & & \\ 0 & \ell_{32} & 1 & & \\ \vdots & \vdots & & \ddots & \\ 0 & \ell_{n2} & & & 1\end{bmatrix} = \begin{bmatrix}1 & & & & \\ \ell_{21} & 1 & & & \\ \ell_{31} & \ell_{32} & 1 & & \\ \vdots & \vdots & & \ddots & \\ \ell_{n1} & \ell_{n2} & & & 1\end{bmatrix}$$

• so inductively we get

$$M_1^{-1}M_2^{-1}\cdots M_{n-1}^{-1} = \begin{bmatrix}1 & & & & \\ \ell_{21} & \ddots & & & \\ \vdots & \ell_{32} & \ddots & & \\ \vdots & \vdots & \ddots & \ddots & \\ \ell_{n1} & \ell_{n2} & \cdots & \ell_{n,n-1} & 1\end{bmatrix}$$

• it follows that under proper circumstances, we can write A = LU where

$$L = \begin{bmatrix}1 & 0 & 0 & \cdots & 0\\ \ell_{21} & 1 & 0 & \cdots & 0\\ \ell_{31} & \ell_{32} & \ddots & & \vdots\\ \vdots & \vdots & \ddots & \ddots & 0\\ \ell_{n1} & \ell_{n2} & \cdots & \ell_{n,n-1} & 1\end{bmatrix}, \qquad U = \begin{bmatrix}u_{11} & u_{12} & \cdots & \cdots & u_{1n}\\ 0 & u_{22} & \cdots & \cdots & u_{2n}\\ 0 & 0 & \ddots & & \vdots\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & \cdots & 0 & u_{nn}\end{bmatrix}$$

• what exactly are proper circumstances?
• we must have a_{kk}^{(k)} ≠ 0, or we cannot proceed with the decomposition
• for example, if

$$A = \begin{bmatrix}0 & 1 & 11\\ 3 & 7 & 2\\ 2 & 9 & 3\end{bmatrix} \qquad\text{or}\qquad A = \begin{bmatrix}1 & 3 & 4\\ 2 & 6 & 4\\ 7 & 1 & 2\end{bmatrix}$$

  Gaussian elimination will fail; note that both matrices are nonsingular
• in the first case, it fails immediately; in the second case, it fails after the subdiagonal entries in the first column are zeroed, and we find that a_{22}^{(2)} = 0
• in general, we must have det A_{ii} ≠ 0 for i = 1, ..., n, where

$$A_{ii} = \begin{bmatrix}a_{11} & \cdots & a_{1i}\\ \vdots & & \vdots\\ a_{i1} & \cdots & a_{ii}\end{bmatrix},$$

  for the LU factorization to exist
• the existence of the LU factorization (without pivoting) can be guaranteed by several conditions; one example is column diagonal dominance: if a nonsingular A ∈ R^{n×n} satisfies

$$|a_{jj}| \ge \sum_{\substack{i=1\\ i\ne j}}^{n} |a_{ij}|, \qquad j = 1, \ldots, n,$$

  then one can guarantee that Gaussian elimination as described above produces A = LU with |ℓ_{ij}| ≤ 1 (the usual type of diagonal dominance, i.e., |a_{ii}| ≥ Σ_{j=1, j≠i}^n |a_{ij}| for i = 1, ..., n, is called row diagonal dominance)
• there are necessary and sufficient conditions guaranteeing the existence of the LU decomposition, but those are difficult to check in practice and we do not state them here
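Before moving on to pivoting, the unpivoted elimination described above can be summarized in a short NumPy sketch (ours, not from the notes; the names are illustrative): it stores the multipliers ℓ_{ik} in L, overwrites a working copy of A to produce U, and raises an error exactly when some pivot a_{kk}^{(k)} is zero, as happens for the two example matrices above.

```python
import numpy as np

def lu_nopivot(A):
    """Unpivoted LU: returns unit lower triangular L and upper triangular U with A = L @ U."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        if U[k, k] == 0.0:
            raise ValueError(f"zero pivot at step {k + 1}: LU without pivoting does not exist")
        L[k+1:, k] = U[k+1:, k] / U[k, k]                    # multipliers l_ik = a_ik^(k) / a_kk^(k)
        U[k+1:, k+1:] -= np.outer(L[k+1:, k], U[k, k+1:])    # apply M_k to the trailing block
        U[k+1:, k] = 0.0                                     # column k is zero below the diagonal
    return L, U

A = np.array([[ 2.0, 1.0, 1.0],
              [ 4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])
L, U = lu_nopivot(A)
print(np.allclose(L @ U, A))   # True
```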
• how can we obtain the LU factorization for a general nonsingular matrix?
• if A is nonsingular, then some element of the first column must be nonzero
• if a_{i1} ≠ 0, then we can interchange row i with row 1 and proceed
• this is equivalent to multiplying A by a permutation matrix Π_1 that interchanges row 1 and row i:

$$\Pi_1 = \begin{bmatrix}
0 & \cdots & \cdots & 0 & 1 & 0 & \cdots & 0\\
 & 1 & & & & & & \\
 & & \ddots & & & & & \\
 & & & 1 & & & & \\
1 & 0 & \cdots & \cdots & 0 & \cdots & \cdots & 0\\
 & & & & & 1 & & \\
 & & & & & & \ddots & \\
 & & & & & & & 1
\end{bmatrix}$$

• thus M_1Π_1A = A_2 (refer to earlier lecture notes for more information about permutation matrices)
• then, since A_2 is nonsingular, some element of column 2 of A_2 below the diagonal must be nonzero
• proceeding as before, we compute M_2Π_2A_2 = A_3, where Π_2 is another permutation matrix
• continuing, we obtain

$$A = (M_{n-1}\Pi_{n-1}\cdots M_1\Pi_1)^{-1}U$$

• it can easily be shown that ΠA = LU where Π is a permutation matrix; this is easy but a bit of a pain because the notation is cumbersome
• so we will be informal but you'll get the idea
• for example, if after two steps we have (recall that permutation matrices are orthogonal matrices)

$$\begin{aligned}
A &= (M_2\Pi_2M_1\Pi_1)^{-1}A_3\\
&= \Pi_1^TM_1^{-1}\Pi_2^TM_2^{-1}A_3\\
&= \Pi_1^T\Pi_2^T(\Pi_2M_1^{-1}\Pi_2^T)M_2^{-1}A_3\\
&= \Pi^TL_1L_2A_3
\end{aligned}$$

  then
  – Π = Π_2Π_1 is a permutation matrix
  – L_2 = M_2^{-1} is a unit lower triangular matrix
  – L_1 = Π_2M_1^{-1}Π_2^T will always be a unit lower triangular matrix because M_1^{-1} is of the form in (2.1),

$$M_1^{-1} = \begin{bmatrix}1 & \\ \ell & I\end{bmatrix},$$

    whereas Π_2 must be of the form

$$\Pi_2 = \begin{bmatrix}1 & \\ & \widehat{\Pi}_2\end{bmatrix}$$

    for some (n - 1) × (n - 1) permutation matrix \widehat{\Pi}_2, and so

$$\Pi_2M_1^{-1}\Pi_2^T = \begin{bmatrix}1 & 0\\ \widehat{\Pi}_2\ell & I\end{bmatrix};$$

    in other words Π_2M_1^{-1}Π_2^T also has the form in (2.1)
• if we do one more step we get

$$\begin{aligned}
A &= (M_3\Pi_3M_2\Pi_2M_1\Pi_1)^{-1}A_4\\
&= \Pi_1^TM_1^{-1}\Pi_2^TM_2^{-1}\Pi_3^TM_3^{-1}A_4\\
&= \Pi_1^T\Pi_2^T\Pi_3^T(\Pi_3\Pi_2M_1^{-1}\Pi_2^T\Pi_3^T)(\Pi_3M_2^{-1}\Pi_3^T)M_3^{-1}A_4\\
&= \Pi^TL_1L_2L_3A_4
\end{aligned}$$

  where
  – Π = Π_3Π_2Π_1 is a permutation matrix
  – L_3 = M_3^{-1} is a unit lower triangular matrix
  – L_2 = Π_3M_2^{-1}Π_3^T will always be a unit lower triangular matrix because M_2^{-1} is of the form in (2.2),

$$M_2^{-1} = \begin{bmatrix}1 & & \\ & 1 & \\ & \ell & I\end{bmatrix},$$

    whereas Π_3 must be of the form

$$\Pi_3 = \begin{bmatrix}1 & & \\ & 1 & \\ & & \widehat{\Pi}_3\end{bmatrix}$$

    for some (n - 2) × (n - 2) permutation matrix \widehat{\Pi}_3, and so

$$\Pi_3M_2^{-1}\Pi_3^T = \begin{bmatrix}1 & & \\ & 1 & 0\\ & \widehat{\Pi}_3\ell & I\end{bmatrix};$$

    in other words Π_3M_2^{-1}Π_3^T also has the form in (2.2)
  – L_1 = Π_3Π_2M_1^{-1}Π_2^TΠ_3^T will always be a unit lower triangular matrix for the same reason as above, because Π_3Π_2 must have the form

$$\Pi_3\Pi_2 = \begin{bmatrix}1 & \\ & \Pi_{32}\end{bmatrix}$$

    for some (n - 1) × (n - 1) permutation matrix

$$\Pi_{32} = \begin{bmatrix}1 & \\ & \widehat{\Pi}_3\end{bmatrix}\widehat{\Pi}_2$$

• more generally, if we keep doing this, then

$$A = \Pi^TL_1L_2\cdots L_{n-1}A_n$$

  where
  – Π = Π_{n-1}Π_{n-2}···Π_1 is a permutation matrix
  – L_{n-1} = M_{n-1}^{-1} is a unit lower triangular matrix
  – L_k = Π_{n-1}···Π_{k+1}M_k^{-1}Π_{k+1}^T···Π_{n-1}^T is a unit lower triangular matrix for all k = 1, ..., n - 2
  – A_n = U is an upper triangular matrix
  – L = L_1L_2···L_{n-1} is a unit lower triangular matrix
• this algorithm with the row permutations is called Gaussian elimination with partial pivoting, or GEPP for short; we will say more about it in the next section
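To preview that discussion, here is a minimal NumPy sketch of GEPP (again ours, not from the notes; it uses the standard choice of pivoting on the largest-magnitude entry in the current column, although the argument above only needs some nonzero pivot). It returns Π, L, U with ΠA = LU, and it succeeds on matrices such as the first example above, where unpivoted elimination fails immediately.

```python
import numpy as np

def lu_partial_pivot(A):
    """Gaussian elimination with partial pivoting: returns P, L, U with P @ A = L @ U."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))      # standard choice: largest-magnitude pivot
        if U[p, k] == 0.0:
            raise ValueError("matrix is singular")
        U[[k, p], k:] = U[[p, k], k:]            # interchange rows k and p (the Pi_k step)
        P[[k, p], :] = P[[p, k], :]
        L[[k, p], :k] = L[[p, k], :k]            # keep previously computed multipliers consistent
        L[k+1:, k] = U[k+1:, k] / U[k, k]        # multipliers (the M_k step)
        U[k+1:, k+1:] -= np.outer(L[k+1:, k], U[k, k+1:])
        U[k+1:, k] = 0.0
    return P, L, U

A = np.array([[0.0, 1.0, 11.0],
              [3.0, 7.0, 2.0],
              [2.0, 9.0, 3.0]])
P, L, U = lu_partial_pivot(A)
print(np.allclose(P @ A, L @ U))   # True
```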