Applied Mathematical Methods


Bhaskar Dasgupta
Department of Mechanical Engineering
Indian Institute of Technology Kanpur (INDIA)
[email protected]
(Pearson Education 2006, 2007)
May 13, 2008

Contents I
- Preliminary Background
- Matrices and Linear Transformations
- Operational Fundamentals of Linear Algebra
- Systems of Linear Equations
- Gauss Elimination Family of Methods
- Special Systems and Special Methods
- Numerical Aspects in Linear Systems

Contents II
- Eigenvalues and Eigenvectors
- Diagonalization and Similarity Transformations
- Jacobi and Givens Rotation Methods
- Householder Transformation and Tridiagonal Matrices
- QR Decomposition Method
- Eigenvalue Problem of General Matrices
- Singular Value Decomposition
- Vector Spaces: Fundamental Concepts*

Contents III
- Topics in Multivariate Calculus
- Vector Analysis: Curves and Surfaces
- Scalar and Vector Fields
- Polynomial Equations
- Solution of Nonlinear Equations and Systems
- Optimization: Introduction
- Multivariate Optimization
- Methods of Nonlinear Optimization*

Contents IV
- Constrained Optimization
- Linear and Quadratic Programming Problems*
- Interpolation and Approximation
- Basic Methods of Numerical Integration
- Advanced Topics in Numerical Integration*
- Numerical Solution of Ordinary Differential Equations
- ODE Solutions: Advanced Issues
- Existence and Uniqueness Theory

Contents V
- First Order Ordinary Differential Equations
- Second Order Linear Homogeneous ODEs
- Second Order Linear Non-Homogeneous ODEs
- Higher Order Linear ODEs
- Laplace Transforms
- ODE Systems
- Stability of Dynamic Systems
- Series Solutions and Special Functions

Contents VI
- Sturm-Liouville Theory
- Fourier Series and Integrals
- Fourier Transforms
- Minimax Approximation*
- Partial Differential Equations
- Analytic Functions
- Integrals in the Complex Plane
- Singularities of Complex Functions

Contents VII
- Variational Calculus*
- Epilogue
- Selected References

Preliminary Background
Outline: Theme of the Course; Course Contents; Sources for More Detailed Study; Logistic Strategy; Expected Background.

Theme of the Course
To develop a firm mathematical background necessary for graduate studies and research:
- a fast-paced recapitulation of UG mathematics
- extension with supplementary advanced ideas for a mature and forward orientation
- exposure and highlighting of interconnections
To pre-empt the needs of future challenges:
- trade-off between the sufficient and the reasonable
- target the mid-spectrum majority of students
Notable beneficiaries (at the two ends):
- would-be researchers in analytical/computational areas
- students who are till now somewhat afraid of mathematics

Course Contents
- Applied linear algebra
- Multivariate calculus and vector calculus
- Numerical methods
- Differential equations
- Complex analysis

Sources for More Detailed Study
If you have the time, need and interest, then you may consult
- individual books on individual topics;
- another "umbrella" volume, like Kreyszig, McQuarrie, O'Neil or Wylie and Barrett;
- a good book of numerical analysis or scientific computing, like Acton, Heath, Hildebrand, Krishnamurthy and Sen, Press et al., Stoer and Bulirsch;
- friends, in joint-study groups.

Logistic Strategy
- Study in the given sequence, to the extent possible.
- Do not read mathematics. Use lots of pen and paper. Read "mathematics books" and do mathematics.
- Exercises are a must.
- Use as many methods as you can think of, certainly including the one which is recommended.
- Consult the Appendix after you work out the solution. Follow the comments, interpretations and suggested extensions.
- Think. Get excited. Discuss. Bore everybody in your known circles.
- Not enough time to attempt all? Want a selection? See the tutorial plan below.
- Program implementation is needed in algorithmic exercises.
- Master a programming environment.
- Use mathematical/numerical libraries/software. Take a MATLAB tutorial session?

Tutorial Plan
Chapter  Selection       Tutorial  |  Chapter  Selection         Tutorial
   2     2,3             3         |    26     1,2,4,6           4
   3     2,4,5,6         4,5       |    27     1,2,3,4           3,4
   4     1,2,4,5,7       4,5       |    28     2,5,6             6
   5     1,4,5           4         |    29     1,2,5,6           6
   6     1,2,4,7         4         |    30     1,2,3,4,5         4
   7     1,2,3,4         2         |    31     1,2               1(d)
   8     1,2,3,4,6       4         |    32     1,3,5,7           7
   9     1,2,4           4         |    33     1,2,3,7,8         8
  10     2,3,4           4         |    34     1,3,5,6           5
  11     2,4,5           5         |    35     1,3,4             3
  12     1,3             3         |    36     1,2,4             4
  13     1,2             1         |    37     1                 1(c)
  14     2,4,5,6,7       4         |    38     1,2,3,4,5         5
  15     6,7             7         |    39     2,3,4,5           4
  16     2,3,4,8         8         |    40     1,2,4,5           4
  17     1,2,3,6         6         |    41     1,3,6,8           8
  18     1,2,3,6,7       3         |    42     1,3,6             6
  19     1,3,4,6         6         |    43     2,3,4             3
  20     1,2,3           2         |    44     1,2,4,7,9,10      7,10
  21     1,2,5,7,8       7         |    45     1,2,3,4,7,9       4,9
  22     1,2,3,4,5,6     3,4       |    46     1,2,5,7           7
  23     1,2,3           3         |    47     1,2,3,5,8,9,10    9,10
  24     1,2,3,4,5,6     1         |    48     1,2,4,5           5
  25     1,2,3,4,5       5         |

Expected Background
- moderate background of undergraduate mathematics
- firm understanding of school mathematics and undergraduate calculus
Ensure background skills: take the preliminary test and grade yourself sincerely.
Necessary exercises: Prerequisite Problem Sets*

Points to note
- Put in effort, keep pace.
- Stress concepts as well as problem-solving.
- Follow the methods diligently.

Matrices and Linear Transformations
Outline: Matrices; Geometry and Algebra; Linear Transformations; Matrix Terminology.

Matrices
Question: What is a "matrix"?
Answers:
- a rectangular array of numbers/elements?
- a mapping $f : M \times N \to F$, where $M = \{1, 2, 3, \ldots, m\}$, $N = \{1, 2, 3, \ldots, n\}$ and F is the set of real numbers or complex numbers?
Question: What does a matrix do?
Explore: with an $m \times n$ matrix A, the relations
$$\begin{aligned} y_1 &= a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ y_2 &= a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ &\ \ \vdots \\ y_m &= a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{aligned}$$
are summarized as $Ax = y$.

Geometry and Algebra
Let the vector $x = [x_1\ x_2\ x_3]^T$ denote a point $(x_1, x_2, x_3)$ in 3-dimensional space in the frame of reference OX1X2X3.
Consider these definitions:
- $y = f(x)$
- $y = f(\mathbf{x}) = f(x_1, x_2, \ldots, x_n)$
- $y_k = f_k(\mathbf{x}) = f_k(x_1, x_2, \ldots, x_n),\ k = 1, 2, \ldots, m$
- $\mathbf{y} = \mathbf{f}(\mathbf{x})$
- $y = Ax$
Example: with m = 2 and n = 3,
$$y_1 = a_{11}x_1 + a_{12}x_2 + a_{13}x_3, \qquad y_2 = a_{21}x_1 + a_{22}x_2 + a_{23}x_3.$$
Plot $y_1$ and $y_2$ in the OY1Y2 plane: $A : R^3 \to R^2$.
Further answer: a matrix is the definition of a linear vector function of a vector variable. Anything deeper?
Caution: matrices do not define vector functions whose components are of the form
$$y_k = a_{k0} + a_{k1}x_1 + a_{k2}x_2 + \cdots + a_{kn}x_n.$$
[Figure: linear transformation — schematic illustration of domain (OX1X2X3) and co-domain (OY1Y2).]
What is matrix A doing?

Linear Transformations
Operating on point x in $R^3$, matrix A transforms it to y in $R^2$. Point y is the image of point x under the mapping defined by matrix A. Note the domain $R^3$ and co-domain $R^2$ with reference to the figure, and verify that $A : R^3 \to R^2$ fulfils the requirements of a mapping, by definition. A matrix gives a definition of a linear transformation.
Operate A on a large number of points $x_i \in R^3$ and obtain the corresponding images $y_i \in R^2$. The linear transformation represented by A implies the totality of these correspondences. Suppose we decide to use a different frame of reference OX1'X2'X3' for $R^3$ (and possibly OY1'Y2' for $R^2$ at the same time). The coordinates change, i.e. $x_i$ changes to $x_i'$ (and possibly $y_i$ to $y_i'$). Now we need a different matrix, say $A'$, to get back the correspondence as $y' = A'x'$.
A matrix: just one description.
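To make the slides' schematic concrete, here is a small MATLAB sketch (an illustration added here, not from the book; the entries of A and the test point are arbitrary) of a 2 x 3 matrix acting as a mapping A : R^3 -> R^2, together with the change-of-frame observation y' = A'x':

    % A 2x3 matrix A as a mapping A : R^3 -> R^2 (arbitrary coefficients).
    A = [1 0 2;
         0 3 1];
    x = [1; -1; 2];          % a point in R^3
    y = A*x;                 % its image in R^2

    % Linearity check: A*(alpha*u + beta*v) == alpha*A*u + beta*A*v
    u = randn(3,1); v = randn(3,1); alpha = 2; beta = -0.5;
    err = norm(A*(alpha*u + beta*v) - (alpha*A*u + beta*A*v));
    fprintf('image y = (%g, %g), linearity residual = %g\n', y(1), y(2), err);

    % Change of frame in R^3: if the new axes are the columns of an
    % orthogonal R, then x' = R'*x, and the same correspondence needs
    % a different matrix A' = A*R.
    [R, ~] = qr(randn(3));   % a random rotation/reflection as the new frame
    Aprime = A*R;
    xprime = R'*x;
    fprintf('norm(A''*x'' - y) = %g\n', norm(Aprime*xprime - y));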
Recommended publications
  • Eigenvalues and Eigenvectors of Tridiagonal Matrices∗
Electronic Journal of Linear Algebra, ISSN 1081-3810, a publication of the International Linear Algebra Society, Volume 15, pp. 115-133, April 2006. http://math.technion.ac.il/iic/ela

EIGENVALUES AND EIGENVECTORS OF TRIDIAGONAL MATRICES
Said Kouachi

Abstract. This paper is a continuation of previous work by the present author, where explicit formulas for the eigenvalues associated with several tridiagonal matrices were given. In this paper the associated eigenvectors are calculated explicitly. As a consequence, a result obtained by Wen-Chyuan Yueh and independently by S. Kouachi, concerning the eigenvalues and in particular the corresponding eigenvectors of tridiagonal matrices, is generalized. Expressions for the eigenvectors are obtained that differ completely from those obtained by Yueh. The techniques used herein are based on the theory of recurrent sequences. The entries situated on each of the secondary diagonals are not necessarily equal, as was the case considered by Yueh.

Key words. Eigenvectors, Tridiagonal matrices.
AMS subject classifications. 15A18.

1. Introduction. The subject of this paper is the diagonalization of tridiagonal matrices. We generalize a result obtained in [5] concerning the eigenvalues and the corresponding eigenvectors of several tridiagonal matrices. We consider tridiagonal matrices of the form
$$A_n = \begin{pmatrix} -\alpha + b & c_1 & 0 & \cdots & 0 \\ a_1 & b & c_2 & \ddots & \vdots \\ 0 & a_2 & b & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & c_{n-1} \\ 0 & \cdots & 0 & a_{n-1} & -\beta + b \end{pmatrix}, \tag{1}$$
where $\{a_j\}_{j=1}^{n-1}$ and $\{c_j\}_{j=1}^{n-1}$ are two finite subsequences of the sequences $\{a_j\}_{j=1}^{\infty}$ and $\{c_j\}_{j=1}^{\infty}$ of the field of complex numbers $\mathbb{C}$, respectively, and $\alpha$, $\beta$ and $b$ are complex numbers. We suppose that
$$a_j c_j = \begin{cases} d_1^2, & \text{if } j \text{ is odd} \\ d_2^2, & \text{if } j \text{ is even} \end{cases} \qquad j = 1, 2, \ldots, \tag{2}$$
where $d_1$ and $d_2$ are complex numbers.
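Closed-form eigenpair formulas of this kind are easy to cross-check numerically. The MATLAB sketch below (an illustration added here; the entries are arbitrary but chosen so that a_j*c_j = 2 for all j, satisfying condition (2) with d1 = d2) builds a matrix of the form (1) and calls eig:

    n = 8; b = 2; alpha = 0.5; beta = -0.5;
    a = 0.5 + 0.1*(1:n-1);          % sub-diagonal entries a_j (arbitrary)
    c = 2 ./ a;                     % super-diagonal c_j so that a_j*c_j = 2
    An = diag(b*ones(n,1)) + diag(a, -1) + diag(c, 1);
    An(1,1) = -alpha + b;  An(n,n) = -beta + b;
    [V, D] = eig(An);               % numerical eigenpairs, to be compared
    disp(sort(real(diag(D))).')     % against any closed-form expressions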
  • Parametrizations of k-Nonnegative Matrices
Parametrizations of k-Nonnegative Matrices
Anna Brosowsky, Neeraja Kulkarni, Alex Mason, Joe Suk, Ewin Tang
October 2, 2017

Abstract. Totally nonnegative (positive) matrices are matrices whose minors are all nonnegative (positive). We generalize the notion of total nonnegativity as follows. A k-nonnegative (resp. k-positive) matrix has all minors of size k or less nonnegative (resp. positive). We give a generating set for the semigroup of k-nonnegative matrices, as well as relations for certain special cases, i.e. the k = n − 1 and k = n − 2 unitriangular cases. In the above two cases, we find that the set of k-nonnegative matrices can be partitioned into cells, analogous to the Bruhat cells of totally nonnegative matrices, based on their factorizations into generators. We will show that these cells, like the Bruhat cells, are homeomorphic to open balls, and we prove some results about the topological structure of the closure of these cells; in fact, in the latter case, the cells form a Bruhat-like CW complex. We also give a family of minimal k-positivity tests which form sub-cluster algebras of the total positivity test cluster algebra. We describe ways to jump between these tests, and give an alternate description of some tests as double wiring diagrams.

1 Introduction. A totally nonnegative (respectively totally positive) matrix is a matrix whose minors are all nonnegative (respectively positive). Total positivity and nonnegativity are well-studied phenomena and arise in areas such as planar networks, combinatorics, dynamics, statistics and probability. The study of total positivity and total nonnegativity admits many varied applications, some of which are explored in "Totally Nonnegative Matrices" by Fallat and Johnson [5].
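The definition can be tested directly, if expensively. The MATLAB sketch below (added here for illustration; is_k_nonnegative is a name chosen here, and this brute-force enumeration is not one of the paper's minimal tests) checks every minor of size at most k:

    function tf = is_k_nonnegative(A, k)
      % Brute-force definition check: all minors of size <= k nonnegative.
      % In floating point a small tolerance may be needed instead of "< 0".
      n = size(A, 1); tf = true;
      for s = 1:k
        rows = nchoosek(1:n, s); cols = nchoosek(1:n, s);
        for i = 1:size(rows, 1)
          for j = 1:size(cols, 1)
            if det(A(rows(i,:), cols(j,:))) < 0
              tf = false; return;
            end
          end
        end
      end
    end

The number of minors grows combinatorially with n and k, which is precisely why minimal families of positivity tests, as constructed in the paper, matter.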
  • 2 Homework Solutions 18.335 - Fall 2004
2 Homework Solutions, 18.335 - Fall 2004

2.1 Count the number of floating point operations required to compute the QR decomposition of an m-by-n matrix using (a) Householder reflectors, (b) Givens rotations.

(a) See Trefethen p. 74-75. Answer: $2mn^2 - \frac{2}{3}n^3$ flops.

(b) Following the same procedure as in part (a) we get the same "volume", namely $\frac{1}{2}mn^2 - \frac{1}{6}n^3$. The only difference here comes from the number of flops required for calculating the Givens matrix. This operation requires 6 flops (instead of 4 for the Householder reflectors) and hence in total we need $3mn^2 - n^3$ flops.

2.2 Trefethen 5.4. Let the SVD of A be $A = U\Sigma V^*$. Denote by $v_i$ the columns of V, by $u_i$ the columns of U, and by $\sigma_i$ the singular values of A. We want to find $x = (x_1, x_2)$ and $\lambda$ such that
$$\begin{bmatrix} 0 & A^* \\ A & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \lambda \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$$
This gives $A^* x_2 = \lambda x_1$ and $A x_1 = \lambda x_2$. Multiplying the 1st equation with A and substitution of the 2nd equation gives $AA^* x_2 = \lambda^2 x_2$. From this we may conclude that $x_2$ is a left singular vector of A. The same can be done to see that $x_1$ is a right singular vector of A. From this the 2m eigenvectors are found to be
$$x = \frac{1}{\sqrt{2}} \begin{bmatrix} v_i \\ \pm u_i \end{bmatrix}, \quad i = 1, \ldots, m,$$
corresponding to the eigenvalues $\lambda = \pm\sigma_i$. Therefore we get the eigenvalue decomposition
$$\begin{bmatrix} 0 & A^* \\ A & 0 \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} V & V \\ U & -U \end{bmatrix} \begin{bmatrix} \Sigma & 0 \\ 0 & -\Sigma \end{bmatrix} \left( \frac{1}{\sqrt{2}}\begin{bmatrix} V & V \\ U & -U \end{bmatrix} \right)^*.$$

2.3 If $A = R + uv^*$, where R is an upper triangular matrix and u and v are (column) vectors, describe an algorithm to compute the QR decomposition of A in $O(n^2)$ time.
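The counts in 2.1(a) refer to the standard Householder algorithm; here is a minimal MATLAB sketch of it (added for orientation, following the structure of Trefethen's Algorithm 10.1; house_qr is a name chosen here):

    function [W, R] = house_qr(A)
      % Householder QR: returns R and the reflector vectors (columns of W).
      % The rank-one update below dominates the cost, ~2mn^2 - (2/3)n^3 flops.
      [m, n] = size(A);
      W = zeros(m, n);
      for k = 1:n
        x = A(k:m, k);
        v = x;
        if x(1) >= 0           % sign chosen to avoid cancellation
          v(1) = v(1) + norm(x);
        else
          v(1) = v(1) - norm(x);
        end
        nv = norm(v);
        if nv > 0, v = v/nv; end
        A(k:m, k:n) = A(k:m, k:n) - 2*v*(v' * A(k:m, k:n));
        W(k:m, k) = v;
      end
      R = triu(A(1:n, 1:n));
    end

Q is never formed explicitly; it can be applied to a vector from the stored reflectors, which is exactly why the flop count above excludes an explicit m-by-m product.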
  • A Fast Algorithm for the Recursive Calculation of Dominant Singular Subspaces
Journal of Computational and Applied Mathematics 218 (2008) 238-246. www.elsevier.com/locate/cam

A fast algorithm for the recursive calculation of dominant singular subspaces
N. Mastronardi (Istituto per le Applicazioni del Calcolo, CNR, Bari, Italy), M. Van Barel, R. Vandebril (Department of Computer Science, Katholieke Universiteit Leuven, Belgium)
Received 26 September 2006

Abstract. In many engineering applications it is required to compute the dominant subspace of a matrix A of dimension m x n, with m >> n. Often the matrix A is produced incrementally, so all the columns are not available simultaneously. This problem arises, e.g., in image processing, where each column of the matrix A represents an image of a given sequence leading to a singular value decomposition-based compression [S. Chandrasekaran, B.S. Manjunath, Y.F. Wang, J. Winkeler, H. Zhang, An eigenspace update algorithm for image analysis, Graphical Models and Image Process. 59 (5) (1997) 321-332]. Furthermore, the so-called proper orthogonal decomposition approximation uses the left dominant subspace of a matrix A where a column consists of a time instance of the solution of an evolution equation, e.g., the flow field from a fluid dynamics simulation. Since these flow fields tend to be very large, only a small number can be stored efficiently during the simulation, and therefore an incremental approach is useful [P. Van Dooren, Gramian based model reduction of large-scale dynamical systems, in: Numerical Analysis 1999, Chapman & Hall, CRC Press, London, Boca Raton, FL, 2000, pp.
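A naive way to process columns incrementally (added here for orientation; this is the standard rank-one update idea, not the authors' fast algorithm, and dominant_update is a name chosen here) keeps only a rank-k factorization and folds each new column into a small (k+1)-by-(k+1) SVD:

    function [U, S] = dominant_update(U, S, a, k)
      % U (m-by-k), S (k-by-k): current truncated factors; a: new column.
      r = a - U*(U'*a);               % component of a outside span(U)
      rho = norm(r);
      q = r / max(rho, eps);          % guard against a lying in span(U)
      % Fold the new column into a small (k+1)-by-(k+1) problem:
      % [U, q] * [S, U'*a; 0, rho] reproduces [U*S, a] exactly.
      K = [S, U'*a; zeros(1, k), rho];
      [Uk, Sk, ~] = svd(K);
      U = [U, q] * Uk(:, 1:k);        % keep the k dominant directions
      S = Sk(1:k, 1:k);
    end

Each update costs O(mk) plus an O(k^3) small SVD, so the full m-by-n matrix never needs to be stored.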
  • New Fast and Accurate Jacobi SVD Algorithm: I
NEW FAST AND ACCURATE JACOBI SVD ALGORITHM: I
Zlatko Drmač and Krešimir Veselić

Abstract. This paper is the result of contrived efforts to break the barrier between numerical accuracy and run time efficiency in computing the fundamental decomposition of numerical linear algebra - the singular value decomposition (SVD) of a general dense matrix. It is an unfortunate fact that the numerically most accurate one-sided Jacobi SVD algorithm is several times slower than generally less accurate bidiagonalization based methods such as the QR or the divide and conquer algorithm. Despite its sound numerical qualities, the Jacobi SVD is not included in the state of the art matrix computation libraries and it is even considered obsolete by some leading researchers. Our quest for a highly accurate and efficient SVD algorithm has led us to a new, superior variant of the Jacobi algorithm. The new algorithm has inherited all good high accuracy properties, and it outperforms not only the best implementations of the one-sided Jacobi algorithm but also the QR algorithm. Moreover, it seems that the potential of the new approach is yet to be fully exploited.

Key words. Jacobi method, singular value decomposition, eigenvalues
AMS subject classifications. 15A09, 15A12, 15A18, 15A23, 65F15, 65F22, 65F35

1. Introduction. "In der Theorie der Säcularstörungen und der kleinen Oscillationen wird man auf ein System lineärer Gleichungen geführt, in welchem die Coefficienten der verschiedenen Unbekannten in Bezug auf die Diagonale symmetrisch sind, die ganz constanten Glieder fehlen und zu allen in der Diagonale befindlichen Coefficienten noch dieselbe Größe −x addirt ist." [In the theory of secular perturbations and of small oscillations one is led to a system of linear equations in which the coefficients of the different unknowns are symmetric with respect to the diagonal, the purely constant terms are absent, and the same quantity −x is added to every coefficient on the diagonal.]
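For orientation, here is a minimal MATLAB sketch of the classical one-sided (Hestenes) Jacobi SVD that the paper improves upon - the textbook iteration, not the authors' new variant; jacobi_svd and its convergence test are choices made here:

    function [U, s, V] = jacobi_svd(A, tol)
      % One-sided Jacobi: iteratively orthogonalize the columns of A.
      n = size(A, 2); V = eye(n);
      done = false;
      while ~done
        done = true;
        for p = 1:n-1
          for q = p+1:n
            app = A(:,p)'*A(:,p); aqq = A(:,q)'*A(:,q); apq = A(:,p)'*A(:,q);
            if abs(apq) > tol*sqrt(app*aqq)
              done = false;
              zeta = (aqq - app)/(2*apq);
              if zeta >= 0                       % smaller root of
                t = 1/(zeta + sqrt(1 + zeta^2)); % t^2 + 2*zeta*t - 1 = 0
              else
                t = -1/(-zeta + sqrt(1 + zeta^2));
              end
              c = 1/sqrt(1 + t^2); sn = c*t;
              G = [c sn; -sn c];                 % makes columns p,q orthogonal
              A(:, [p q]) = A(:, [p q])*G;
              V(:, [p q]) = V(:, [p q])*G;
            end
          end
        end
      end
      s = sqrt(sum(A.^2, 1)).';   % singular values = final column norms
      U = A ./ s.';               % normalized columns (implicit expansion)
    end

The high relative accuracy of this family of methods comes from working on A's columns directly, never forming A'*A explicitly.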
  • Numerical Linear Algebra (revised February 15, 2010), 4.1 The LU Decomposition
Numerical Linear Algebra (revised February 15, 2010)

4.1 The LU Decomposition: The Elementary Matrices and badgauss

In the previous chapters we talked a bit about solving systems of the form Lx = b and Ux = b, where L is lower triangular and U is upper triangular. In the exercises for Chapter 2 you were asked to write a program x = lusolver(L,U,b) which solves LUx = b using forsub and backsub. We now address the problem of representing a matrix A as a product of a lower triangular L and an upper triangular U. Recall our old friend badgauss:

    function B=badgauss(A)
    m=size(A,1);
    B=A;
    for i=1:m-1
        for j=i+1:m
            a=-B(j,i)/B(i,i);
            B(j,:)=a*B(i,:)+B(j,:);
        end
    end

The heart of badgauss is the elementary row operation of type 3, B(j,:)=a*B(i,:)+B(j,:), where a=-B(j,i)/B(i,i). Note also that the index j is greater than i, since the loop is for j=i+1:m. As we know from linear algebra, an elementary operation of type 3 can be viewed as matrix multiplication EA, where E is an elementary matrix of type 3: E looks just like the identity matrix, except that E(j,i) = a, where j, i and a are as in the MATLAB code above. In particular, E is a lower triangular matrix, and moreover the entries on its diagonal are 1's. We call such a matrix unit lower triangular.
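The text's point - each type-3 operation is multiplication by a unit lower triangular E - is what makes LU work: undoing the operations collects the multipliers into L. A minimal sketch of this extension of badgauss (added here; it assumes no pivoting is needed, i.e. no zero pivots are encountered; mylu is a name chosen here):

    function [L, U] = mylu(A)
      % LU without pivoting: record the badgauss multipliers in L.
      m = size(A, 1);
      L = eye(m); U = A;
      for i = 1:m-1
        for j = i+1:m
          a = -U(j,i)/U(i,i);        % same multiplier as in badgauss
          U(j,:) = a*U(i,:) + U(j,:);
          L(j,i) = -a;               % E^{-1} has +multiplier at (j,i)
        end
      end
    end

One can check that L*U reproduces A and that U equals badgauss(A).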
  • (Hessenberg) Eigenvalue-Eigenmatrix Relations∗
(HESSENBERG) EIGENVALUE-EIGENMATRIX RELATIONS
Jens-Peter M. Zemke

Abstract. Explicit relations between eigenvalues, eigenmatrix entries and matrix elements are derived. First, a general, theoretical result based on the Taylor expansion of the adjugate of $zI - A$ on the one hand and explicit knowledge of the Jordan decomposition on the other hand is proven. This result forms the basis for several, more practical and enlightening results tailored to non-derogatory, diagonalizable and normal matrices, respectively. Finally, inherent properties of (upper) Hessenberg, resp. tridiagonal matrix structure are utilized to construct computable relations between eigenvalues, eigenvector components, eigenvalues of principal submatrices and products of lower diagonal elements.

Key words. Algebraic eigenvalue problem, eigenvalue-eigenmatrix relations, Jordan normal form, adjugate, principal submatrices, Hessenberg matrices, eigenvector components
AMS subject classifications. 15A18 (primary), 15A24, 15A15, 15A57

1. Introduction. Eigenvalues and eigenvectors are defined using the relations
$$Av = v\lambda \quad \text{and} \quad V^{-1}AV = J. \tag{1.1}$$
We speak of a partial eigenvalue problem when for a given matrix $A \in \mathbb{C}^{n \times n}$ we seek a scalar $\lambda \in \mathbb{C}$ and a corresponding nonzero vector $v \in \mathbb{C}^n$. The scalar $\lambda$ is called the eigenvalue and the corresponding vector $v$ is called the eigenvector. We speak of the full or algebraic eigenvalue problem when for a given matrix $A \in \mathbb{C}^{n \times n}$ we seek its Jordan normal form $J \in \mathbb{C}^{n \times n}$ and a corresponding (not necessarily unique) eigenmatrix $V \in \mathbb{C}^{n \times n}$. Apart from these constitutional relations, for some classes of structured matrices several more intriguing relations between components of eigenvectors, matrix entries and eigenvalues are known. For example, consider the so-called Jacobi matrices.
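A quick MATLAB illustration of the two problems in (1.1) on a Hessenberg matrix (added here; hess reduces A to upper Hessenberg form by an orthogonal similarity, which preserves the eigenvalues):

    A = randn(6);
    H = hess(A);                  % upper Hessenberg, similar to A
    [V, D] = eig(H);              % full eigenvalue problem: H*V = V*D
    lambda = D(1,1); v = V(:,1);  % one eigenpair (partial problem)
    fprintf('partial: ||H*v - v*lambda|| = %g\n', norm(H*v - v*lambda));
    fprintf('full:    ||H*V - V*D||      = %g\n', norm(H*V - V*D));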
  • Massively Parallel Poisson and QR Factorization Solvers
Computers Math. Applic. Vol. 31, No. 4/5, pp. 19-26, 1996.

Massively Parallel Poisson and QR Factorization Solvers
M. Lucká, E. Viktorínová (Institute for Control Theory and Robotics, Slovak Academy of Sciences, Bratislava, Slovak Republic), M. Vajteršic (Institute of Informatics, Slovak Academy of Sciences, Bratislava, Slovak Republic)

Abstract. The paper brings a massively parallel Poisson solver for a rectangle domain and parallel algorithms for computation of the QR factorization of a dense matrix A by means of Householder reflections and Givens rotations. The computer model under consideration is a SIMD mesh-connected toroidal n x n processor array. The Dirichlet problem is replaced by its finite-difference analog on an M x N (M + 1, N are powers of two) grid. The algorithm is composed of parallel fast sine transform and cyclic odd-even reduction blocks and runs in a fully parallel fashion. Its computational complexity is $O(MN \log L / n^2)$, where $L = \max(M+1, N)$. A parallel proposal of QR factorization by the Householder method zeros all subdiagonal elements in each column and updates all elements of the given submatrix in parallel. For the second method, with Givens rotations, the parallel scheme of Sameh and Kuck was chosen, where the disjoint rotations can be computed simultaneously.
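For reference, a serial MATLAB sketch of QR factorization by Givens rotations (added here; the paper's contribution is the parallel scheduling of such rotations, which this sketch does not attempt; givens_qr is a name chosen here):

    function [Q, R] = givens_qr(A)
      % Zero the subdiagonal entries one at a time with plane rotations.
      [m, n] = size(A);
      Q = eye(m); R = A;
      for j = 1:n
        for i = m:-1:j+1
          % rotation in the (i-1, i) plane annihilating R(i,j)
          r = hypot(R(i-1,j), R(i,j));
          if r == 0, continue; end
          c = R(i-1,j)/r; s = R(i,j)/r;
          G = [c, s; -s, c];
          R([i-1 i], j:n) = G * R([i-1 i], j:n);
          Q(:, [i-1 i]) = Q(:, [i-1 i]) * G';   % accumulate Q = G1'*G2'*...
        end
      end
    end

Rotations acting on disjoint row pairs commute, which is exactly the property the Sameh-Kuck scheme exploits to apply many of them simultaneously.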
  • Explicit Inverse of a Tridiagonal (p, r)-Toeplitz Matrix
Explicit inverse of a tridiagonal (p, r)-Toeplitz matrix
A.M. Encinas, M.J. Jiménez (Departament de Matemàtiques, Universitat Politècnica de Catalunya)

Abstract. Tridiagonal matrices appear in many contexts in pure and applied mathematics, so the study of the inverse of these matrices becomes of specific interest. In recent years the invertibility of nonsingular tridiagonal matrices has been quite investigated in different fields, not only from the theoretical point of view (either in the framework of linear algebra or in the ambit of numerical analysis), but also due to applications, for instance in the study of sound propagation problems or certain quantum oscillators. However, explicit inverses are known only in a few cases, in particular when the tridiagonal matrix has constant diagonals or the coefficients of these diagonals are subject to some restrictions, like the tridiagonal p-Toeplitz matrices [7], whose three diagonals are formed by p-periodic sequences. The recent formulae for the inversion of tridiagonal p-Toeplitz matrices are based, more or less directly, on the solution of second order linear difference equations, although most of them use a cumbersome formulation that does not take into account the periodicity of the coefficients. This contribution presents the explicit inverse of a tridiagonal (p, r)-Toeplitz matrix, whose diagonal coefficients lie in a more general class of sequences than periodic ones, which we have called quasi-periodic sequences.

A tridiagonal matrix $A = (a_{ij})$ of order $n+2$ is called (p, r)-Toeplitz if there exists $m \in \mathbb{N}_0$ such that $n + 2 = mp$ and
$$a_{i+p,\,j+p} = r\,a_{ij}, \quad i, j = 0, \ldots, (m-1)p.$$
Equivalently, A is a (p, r)-Toeplitz matrix iff
$$a_{i+kp,\,j+kp} = r^k a_{ij}, \quad i, j = 0, \ldots, p, \quad k = 0, \ldots, m-1.$$
We have developed a technique that reduces any linear second order difference equation with periodic or quasi-periodic coefficients to a difference equation of the same kind but with constant coefficients [3].
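The defining property is easy to instantiate. The MATLAB sketch below (added here; the period-p diagonal values are arbitrary) builds such a matrix by replicating one period of each diagonal, scaling the k-th block of size p by r^k, and verifies the defining relation numerically:

    p = 3; m = 4; r = 2;                      % order n+2 = m*p = 12
    n2 = m*p;
    d0  = repmat([1.0 2.0 1.5], 1, m);        % main diagonal, one period replicated
    d1  = repmat([0.5 0.7 0.6], 1, m);        % super-diagonal
    dm1 = repmat([0.3 0.4 0.2], 1, m);        % sub-diagonal
    scale = kron(r.^(0:m-1), ones(1, p));     % factor r^k on the k-th p-block
    A = diag(d0.*scale) + diag(d1(1:n2-1).*scale(1:n2-1), 1) ...
                        + diag(dm1(1:n2-1).*scale(1:n2-1), -1);
    % check the defining property a_{i+p,j+p} = r * a_{ij}:
    max(max(abs(A(p+1:end, p+1:end) - r*A(1:end-p, 1:end-p))))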
  • Determinant Formulas and Cofactors
Determinant formulas and cofactors

Now that we know the properties of the determinant, it's time to learn some (rather messy) formulas for computing it.

Formula for the determinant
We know that the determinant has the following three properties:
1. det I = 1
2. Exchanging rows reverses the sign of the determinant.
3. The determinant is linear in each row separately.
Last class we listed seven consequences of these properties. We can use these ten properties to find a formula for the determinant of a 2 by 2 matrix:
$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = \begin{vmatrix} a & 0 \\ c & d \end{vmatrix} + \begin{vmatrix} 0 & b \\ c & d \end{vmatrix} = \begin{vmatrix} a & 0 \\ c & 0 \end{vmatrix} + \begin{vmatrix} a & 0 \\ 0 & d \end{vmatrix} + \begin{vmatrix} 0 & b \\ c & 0 \end{vmatrix} + \begin{vmatrix} 0 & b \\ 0 & d \end{vmatrix} = 0 + ad + (-cb) + 0 = ad - bc.$$
By applying property 3 to separate the individual entries of each row we could get a formula for any other square matrix. However, for a 3 by 3 matrix we'll have to add the determinants of twenty seven different matrices! Many of those determinants are zero. The non-zero pieces are:
$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \begin{vmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{vmatrix} + \begin{vmatrix} a_{11} & 0 & 0 \\ 0 & 0 & a_{23} \\ 0 & a_{32} & 0 \end{vmatrix} + \begin{vmatrix} 0 & a_{12} & 0 \\ a_{21} & 0 & 0 \\ 0 & 0 & a_{33} \end{vmatrix} + \begin{vmatrix} 0 & a_{12} & 0 \\ 0 & 0 & a_{23} \\ a_{31} & 0 & 0 \end{vmatrix} + \begin{vmatrix} 0 & 0 & a_{13} \\ a_{21} & 0 & 0 \\ 0 & a_{32} & 0 \end{vmatrix} + \begin{vmatrix} 0 & 0 & a_{13} \\ 0 & a_{22} & 0 \\ a_{31} & 0 & 0 \end{vmatrix}$$
$$= a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31}.$$
Each of the non-zero pieces has one entry from each row in each column, as in a permutation matrix.
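The "one entry from each row in each column" structure is exactly what cofactor expansion organizes. A recursive MATLAB sketch (added here; cofactor_det is a name chosen here, and the O(n!) cost makes it practical only for tiny matrices):

    function d = cofactor_det(A)
      % Determinant by cofactor expansion along the first row.
      n = size(A, 1);
      if n == 1, d = A(1,1); return; end
      d = 0;
      for j = 1:n
        M = A(2:n, [1:j-1, j+1:n]);          % minor: delete row 1, column j
        d = d + (-1)^(1+j) * A(1,j) * cofactor_det(M);
      end
    end

For a 2 by 2 input this returns exactly ad - cb; comparing against det(A) for small random matrices is a quick sanity check.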
  • The Sparse Matrix Transform for Covariance Estimation and Analysis of High Dimensional Signals
The Sparse Matrix Transform for Covariance Estimation and Analysis of High Dimensional Signals
Guangzhi Cao*, Member, IEEE, Leonardo R. Bachega, and Charles A. Bouman, Fellow, IEEE

Abstract. Covariance estimation for high dimensional signals is a classically difficult problem in statistical signal analysis and machine learning. In this paper, we propose a maximum likelihood (ML) approach to covariance estimation, which employs a novel non-linear sparsity constraint. More specifically, the covariance is constrained to have an eigen decomposition which can be represented as a sparse matrix transform (SMT). The SMT is formed by a product of pairwise coordinate rotations known as Givens rotations. Using this framework, the covariance can be efficiently estimated using greedy optimization of the log-likelihood function, and the number of Givens rotations can be efficiently computed using a cross-validation procedure. The resulting estimator is generally positive definite and well-conditioned, even when the sample size is limited. Experiments on a combination of simulated data, standard hyperspectral data, and face image sets show that the SMT-based covariance estimates are consistently more accurate than both traditional shrinkage estimates and recently proposed graphical lasso estimates for a variety of different classes and sample sizes. An important property of the new covariance estimate is that it naturally yields a fast implementation of the estimated eigen-transformation using the SMT representation. In fact, the SMT can be viewed as a generalization of the classical fast Fourier transform (FFT) in that it uses "butterflies" to represent an orthonormal transform. However, unlike the FFT, the SMT can be used for fast eigen-signal analysis of general non-stationary signals.
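To see why the SMT representation yields fast transforms, note that applying K Givens rotations costs O(K) operations, much like a chain of FFT butterflies, rather than a dense matrix-vector product. A MATLAB sketch of applying such a product of rotations (added here as an illustration of the representation only, not of the ML estimation procedure; apply_smt, idx and theta are names chosen here):

    function y = apply_smt(y, idx, theta)
      % Apply a product of K Givens rotations to the vector y.
      % idx: K-by-2 coordinate pairs; theta: K rotation angles.
      for k = 1:size(idx, 1)
        i = idx(k,1); j = idx(k,2);
        c = cos(theta(k)); s = sin(theta(k));
        yi = c*y(i) - s*y(j);          % plane rotation in coordinates (i,j)
        y(j) = s*y(i) + c*y(j);
        y(i) = yi;
      end
    end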
  • Chapter 3 LEAST SQUARES PROBLEMS
Chapter 3: LEAST SQUARES PROBLEMS

[Figure: a surveying sketch — points A through F on a landscape by the sea.]

One application is geodesy & surveying. Let z = elevation, and suppose we have the measurements:
$$z_A \approx 1, \quad z_B \approx 2, \quad z_C \approx 3, \quad z_B - z_A \approx 1, \quad z_C - z_B \approx 2, \quad z_C - z_A \approx 1.$$
This is overdetermined and inconsistent:
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & 1 & 0 \\ 0 & -1 & 1 \\ -1 & 0 & 1 \end{bmatrix} \begin{bmatrix} z_A \\ z_B \\ z_C \end{bmatrix} \approx \begin{bmatrix} 1 \\ 2 \\ 3 \\ 1 \\ 2 \\ 1 \end{bmatrix}.$$
Another application is fitting a curve to given data. [Figure: a line y = ax + b fitted to scattered data points.]
$$\begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} \approx \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}.$$
More generally $A_{m \times n}\, x_n \approx b_m$, where A is known exactly, $m \ge n$, and b is subject to independent random errors of equal variance (which can be achieved by scaling the equations). Gauss (1821) proved that the "best" solution x minimizes $\|b - Ax\|_2$, i.e., the sum of the squares of the residual components. Hence, we might write $Ax \approx_2 b$. Even if the 2-norm is inappropriate, it is the easiest to work with.

3.1 The Normal Equations
3.2 QR Factorization
3.3 Householder Reflections
3.4 Givens Rotations
3.5 Gram-Schmidt Orthogonalization
3.6 Singular Value Decomposition

3.1 The Normal Equations
Recall the inner product $x^T y$ for $x, y \in \mathbb{R}^m$. (What is the geometric interpretation of the inner product in $\mathbb{R}^3$?) A sequence of vectors $x_1, x_2, \ldots, x_n$ in $\mathbb{R}^m$ is orthogonal if $x_i^T x_j = 0$ if and only if $i \neq j$, and orthonormal if $x_i^T x_j = \delta_{ij}$. Two subspaces S, T are orthogonal if $x \in S,\ y \in T \Rightarrow x^T y = 0$. If $X = [x_1, x_2, \ldots, x_n]$, then orthonormal means $X^T X = I$.
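The surveying example can be solved directly in MATLAB. The sketch below (added here) compares the normal-equations solution of Section 3.1 with the backslash solution, which uses a QR factorization for overdetermined systems:

    A = [ 1  0  0;
          0  1  0;
          0  0  1;
         -1  1  0;
          0 -1  1;
         -1  0  1];
    b = [1; 2; 3; 1; 2; 1];
    z_normal = (A'*A) \ (A'*b);   % normal equations: A'A z = A'b
    z_qr     = A \ b;             % backslash: QR-based least squares
    residual = norm(b - A*z_qr);  % minimized 2-norm of the residual
    disp([z_normal, z_qr]); disp(residual);

Both give the same minimizer here; the QR route is preferred numerically because forming A'*A squares the condition number.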