Questions for Linear Algebra

Total pages: 16. File type: PDF, size: 1020 KB.

Questions for Linear Algebra (www.YoYoBrain.com - Accelerators for Memory and Learning). Category: Default (148 questions)

Linear Algebra: Wronskian of a set of functions
  Let {y1, y2, ..., yn} be a set of functions which have n-1 derivatives on an interval I. The Wronskian is the determinant
  | y1        y2        ...  yn        |
  | y1'       y2'       ...  yn'       |
  | ...                                |
  | y1^(n-1)  y2^(n-1)  ...  yn^(n-1)  |

Linear Algebra: Wronskian test for linear independence
  Let {y1, y2, ..., yn} be a set of solutions of an nth-order linear homogeneous differential equation. The set is linearly independent if and only if the Wronskian is not identically equal to zero.

Linear Algebra: define the dimension of a vector space
  If W is a basis of a vector space, then the number of elements in W is the dimension.

Linear Algebra: if A is an m x n matrix, define the row space and column space of the matrix
  Row space: subspace of R^n spanned by the row vectors of A. Column space: subspace of R^m spanned by the column vectors of A.

Linear Algebra: if A is an m x n matrix, then the row space and the column space of A _____
  have the same dimension

Linear Algebra: define the rank of a matrix
  The dimension of the row (or column) space of a matrix A is called the rank of A: rank(A).

Linear Algebra: how to determine the row space of a matrix
  Put the matrix in row echelon form; the nonzero rows form a basis of the row space of A.

Linear Algebra: if A is an m x n matrix of rank r, then the dimension of the solution space of Ax = 0 is _____
  n - r

Linear Algebra: define the coordinate representation relative to a basis
  Let B = {v1, v2, ..., vn} be a basis for a vector space V and x a vector in V such that x = c1 v1 + c2 v2 + ... + cn vn. Then the scalars c1, c2, ..., cn are called the coordinates of x relative to B.

Linear Algebra: a set of vectors S = {v1, v2, ..., vn} in a vector space V is called a basis for V if _____
  the following conditions are true: 1) S spans V; 2) S is linearly independent

Linear Algebra: the length of a vector v = (v1, v2, ..., vn) in R^n
  || v || = square root( v1^2 + v2^2 + ... + vn^2 )

Linear Algebra: unit vector in the direction of v
  v / || v ||, which has length 1 and the direction of v

Linear Algebra: normalizing the vector v
  The process of finding the unit vector in the direction of v: v / || v ||

Linear Algebra: a linear equation in n variables x1, x2, ..., xn has the form
  a1 x1 + a2 x2 + ... + an xn = b, where the coefficients a1, a2, ..., an are real numbers.

Linear Algebra: a system of linear equations is called ___ if it has no solutions
  inconsistent

Linear Algebra: augmented matrix
  The matrix derived from the coefficients and constant terms of a system of linear equations.

Linear Algebra: 2 properties of matrix multiplication
  1) Associative: A * (B * C) = (A * B) * C
  2) Distributive: A * (B + C) = (A * B) + (A * C) and (A + B) * C = (A * C) + (B * C)

Linear Algebra: properties of the identity matrix
  If A is a matrix of order m x n, then A * I_n = A and I_m * A = A.

Linear Algebra: transpose( transpose(A) )
  A

Linear Algebra: transpose( A + B )
  transpose(A) + transpose(B)

Linear Algebra: transpose( c * A )
  c * transpose(A)

Linear Algebra: transpose( A * B )
  transpose(B) * transpose(A)

Linear Algebra: skew-symmetric matrix
  A matrix for which transpose(A) = -A.

Linear Algebra: trace of an n x n matrix
  Tr(A) = sum of the main diagonal entries = a11 + a22 + ... + ann

Linear Algebra: inverse of a matrix
  An n x n matrix A is invertible if there exists an n x n matrix B such that A * B = B * A = I (identity matrix).

Linear Algebra: if a matrix does not have an inverse it is ______
  singular

Linear Algebra: if matrix A is invertible, ( A^-1 )^-1 =
  A

Linear Algebra: if matrix A is invertible, ( A^k )^-1 =
  A^-1 * A^-1 * ... * A^-1 (k times)

Linear Algebra: if matrix A is invertible, ( c * A )^-1 =
  (1/c) * A^-1

Linear Algebra: if matrix A is invertible, ( transpose(A) )^-1 =
  transpose( A^-1 )

Linear Algebra: inverse of a product of matrices: ( A * B )^-1 =
  B^-1 * A^-1

Linear Algebra: if C is an invertible matrix, then 1) if A*C = B*C then _____ 2) if C*A = C*B then _____
  1) A = B  2) A = B

Linear Algebra: if A is an invertible matrix, then the solution of A * X = B is
  X = A^-1 * B

Linear Algebra: what is an elementary matrix
  An n x n matrix that can be obtained from I_n by a single elementary row operation.

Linear Algebra: a square matrix A is invertible if and only if _____
  it can be written as the product of elementary matrices

Linear Algebra: define idempotent matrix
  A square matrix A with A^2 = A.

Linear Algebra: if A is a square matrix, then the minor Mij of the element aij is ___ and the cofactor Cij is ____
  Minor: the determinant of the matrix obtained by deleting the ith row and jth column of A. Cofactor: Cij = (-1)^(i+j) * Mij.

Linear Algebra: triangular matrix
  A matrix with all zero entries above or below its main diagonal.

Linear Algebra: if A is a triangular matrix of order n, then its determinant is ___
  the product of the entries on the main diagonal: |A| = a11 * a22 * ... * ann

Linear Algebra: if A and B are square matrices of order n, then | A * B | =
  |A| * |B|

Linear Algebra: if A is an n x n matrix and c a scalar, then | c * A | =
  c^n * |A|

Linear Algebra: a square matrix A is invertible if and only if | A | _____
  is not equal to 0

Linear Algebra: if A is invertible, then | A^-1 | =
  1 / |A|

Linear Algebra: an invertible square matrix A is called orthogonal if ____
  A^-1 = transpose(A)

Linear Algebra: adjoint of a matrix
  If A is a square matrix, form the matrix of cofactors
  | C11 C12 ... C1n |
  | C21 C22 ... C2n |
  | ...             |
  | Cn1 Cn2 ... Cnn |
  The transpose of this matrix is the adjoint of A, adj(A).

Linear Algebra: relationship between the inverse of an n x n matrix and its adjoint
  A^-1 = ( 1 / |A| ) * adj(A)

Linear Algebra: area of a triangle with vertices (x1, y1), (x2, y2), (x3, y3)
  the absolute value of (1/2) times the determinant
  | x1 y1 1 |
  | x2 y2 1 |
  | x3 y3 1 |

Linear Algebra: what information does the existence of a nonzero determinant provide about a matrix that is a linear transformation of a vector space
  It means the transformation has an inverse operation.

Linear Algebra: geometric interpretation of the determinant of a square matrix with real entries when used as a linear transformation of a vector space
  The absolute value of the determinant is the scale factor by which area/volume is multiplied under the linear transformation; the sign indicates whether the transformation preserves orientation.

Linear Algebra: nilpotent matrix
  A square matrix A for which there exists a positive integer k such that A^k = 0.

Linear Algebra: define subspace of a vector space
  A subset W of a vector space V is a subspace of V if W is itself a vector space under the operations of vector addition and scalar multiplication.

Linear Algebra: if V and W are both subspaces of a vector space U, then the intersection of V and W ______
  is also a subspace of U

Linear Algebra: a vector v in a vector space V is called a linear combination of vectors u1, u2, ..., uk in V if _____
  v can be written in the form v = c1 u1 + c2 u2 + ... + ck uk, where c1, c2, ..., ck are scalars

Linear Algebra: define the spanning set of a vector space
  Let S = {v1, v2, ..., vk} be a subset of a vector space V. The set S is called a spanning set of V if every vector of V can be written as a linear combination of vectors in S.

Linear Algebra: a set of vectors S = {v1, v2, ..., vk} in a vector space V is called linearly independent if _____
  the vector equation c1 v1 + c2 v2 + ... + ck vk = 0 has only the trivial solution c1 = 0, c2 = 0, ..., ck = 0

Linear Algebra: distance between 2 vectors u and v in R^n
  || u - v ||

Linear Algebra: the dot product of u = (u1, u2, ..., un) and v = (v1, v2, ..., vn)
  the scalar quantity u . v = u1*v1 + u2*v2 + ... + un*vn

Linear Algebra: Cauchy-Schwarz Inequality
  If u and v are vectors in R^n, then | u . v | <= ||u|| * ||v||

Linear Algebra: the angle theta between 2 nonzero vectors in R^n is given by
  cos(theta) = ( u . v ) / ( ||u|| * ||v|| )

Linear Algebra: 2 vectors u and v in R^n are orthogonal if ____
  u . v = 0

Linear Algebra: if u and v are vectors in R^n, then u and v are orthogonal if and only if ||u + v||^2 = _____
  ||u||^2 + ||v||^2

Linear Algebra: another name for the dot product in R^n
  Euclidean inner product

Linear Algebra: notation for the dot product in R^n versus a general inner product
  dot product: u . v; general inner product: <u, v>

Linear Algebra: a vector space V with an inner product is called _____
  an inner product space

Linear Algebra: 4 axioms that define an inner product on a vector space V
  An inner product associates a real number <u, v> with each pair of vectors u and v such that:
  1) <u, v> = <v, u>
  2) <u, v + w> = <u, v> + <u, w>
  3) c * <u, v> = <c*u, v>
  4) <v, v> >= 0, and <v, v> = 0 if and only if v = 0

Linear Algebra: u is a vector in an inner product space V; the norm of u is ______
  ||u|| = square root( <u, u> )

Linear Algebra: if u and v are vectors in an inner product space V, the distance between u and v is ____
  || u - v ||

Linear Algebra: let u and v be nonzero vectors in an inner product space V; the angle between u and v is _____
  cos(theta) = <u, v> / ( ||u|| * ||v|| )

Linear Algebra: if u and v are vectors in an inner product space, u and v are orthogonal if
  <u, v> = 0

Linear Algebra: if u and v are vectors in an inner product space V, then u and v are orthogonal if and only if ||u + v||^2 = _____
  ||u||^2 + ||v||^2

Linear Algebra: u and v are vectors in an inner product space V; the orthogonal projection of u onto v is _____
  ( <u, v> / <v, v> ) * v

Linear Algebra: orthogonal set of vectors S in an inner product space V
  every pair of vectors in S is orthogonal

Linear Algebra: orthonormal set of vectors S in an inner product space V
  every pair of vectors is orthogonal and each vector is a unit vector

Linear Algebra: coordinates for a vector w relative to an orthonormal basis {v1, v2, ..., vn}
  w = <w, v1> v1 + <w, v2> v2 + ... + <w, vn> vn
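Several of the cards above (rank, row space, and the dimension n - r of the solution space of Ax = 0) can be checked numerically. A minimal sketch with NumPy; the 3x4 matrix and its values are illustrative assumptions, not taken from the deck:

```python
import numpy as np

# Hypothetical 3x4 matrix: the third row is the sum of the first two,
# so the row space (and column space) has dimension 2.
A = np.array([[1., 2., 0., 1.],
              [2., 4., 1., 3.],
              [3., 6., 1., 4.]])

r = np.linalg.matrix_rank(A)   # rank = dim(row space) = dim(column space)
m, n = A.shape
null_dim = n - r               # dimension of the solution space of Ax = 0

print(r, null_dim)             # 2 2
```

Row-reducing A by hand gives the same answer: row2 - 2*row1 = (0, 0, 1, 1) is the only other nonzero row, so the two nonzero rows of the echelon form are a basis of the row space.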
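The determinant identities in the deck (|AB| = |A||B|, |cA| = c^n |A|, |A^-1| = 1/|A|, and A^-1 = adj(A)/|A|) can be sanity-checked numerically. A sketch assuming NumPy; the two explicit matrices are arbitrary invertible examples:

```python
import numpy as np

# Two explicit invertible 3x3 matrices (illustrative values).
A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 1.]])   # det(A) = 7
B = np.array([[1., 2., 3.],
              [0., 1., 4.],
              [5., 6., 0.]])   # det(B) = 1
c, n = 2.0, 3
det = np.linalg.det

assert np.isclose(det(A @ B), det(A) * det(B))        # |AB| = |A||B|
assert np.isclose(det(c * A), c**n * det(A))          # |cA| = c^n |A|
assert np.isclose(det(np.linalg.inv(A)), 1 / det(A))  # |A^-1| = 1/|A|

# adjoint (adjugate): transpose of the matrix of cofactors Cij = (-1)^(i+j) Mij
def minor(M, i, j):
    return np.delete(np.delete(M, i, axis=0), j, axis=1)

C = np.array([[(-1) ** (i + j) * det(minor(A, i, j))
               for j in range(n)] for i in range(n)])
adj = C.T
assert np.allclose(np.linalg.inv(A), adj / det(A))    # A^-1 = adj(A) / |A|
```

The cofactor construction here follows the minor/cofactor and adjoint cards directly; in practice one would use `np.linalg.inv` rather than the adjugate, which is numerically and computationally worse.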
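The inner-product cards (norm, normalization, Cauchy-Schwarz, orthogonality, and the projection formula (<u,v>/<v,v>) v) can be illustrated with the Euclidean inner product, where <u,v> is the dot product. A sketch with NumPy; the two vectors are arbitrary examples:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([1.0, 0.0])

norm_u = np.sqrt(u @ u)        # ||u|| = sqrt(<u, u>) = 5
unit_u = u / norm_u            # normalizing u: unit vector in the direction of u
proj = (u @ v) / (v @ v) * v   # orthogonal projection of u onto v
residual = u - proj            # component of u orthogonal to v

assert np.isclose(np.linalg.norm(unit_u), 1.0)              # unit length
assert np.isclose(residual @ v, 0.0)                        # residual is orthogonal to v
assert abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v)  # Cauchy-Schwarz
print(proj, residual)          # [3. 0.] [0. 4.]
```

Because proj and residual are orthogonal, ||u||^2 = ||proj||^2 + ||residual||^2 (here 25 = 9 + 16), matching the Pythagorean card.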
Recommended publications
  • Graph Equivalence Classes for Spectral Projector-Based Graph Fourier Transforms (Joya A. Deri and José M. F. Moura)
  • Euclidean Distance Matrix Completion Problems (Haw-ren Fang and Dianne P. O'Leary)
  • Dimensionality Reduction
  • Linear Algebra
  • Linear Algebra with Exercises B (Ivan Ip, Kyoto University)
  • Applied Linear Algebra and Differential Equations (Jeffrey R. Chasnov)
  • Projection Matrices (Ed Angel)
  • Lecture 5: Matrix Multiplication, Cont.; and Random Projections (Michael Mahoney)
  • Topics in Random Matrix Theory (Terence Tao)
  • Björck-Pereyra-type Methods and Total Positivity (José-Javier Martínez, arXiv:1811.08406v1 [math.NA])
  • Dimensionality Reduction via Euclidean Distance Embeddings (Marin Šarić, Carl Henrik Ek and Danica Kragić)
  • Differential Equations and Linear Algebra