Matrix Analysis, February 17-19, 2014 (ME 309)


Matrix Analysis
Larry Caretto
Mechanical Engineering 309: Numerical Analysis of Engineering Systems
February 17-19, 2014

Outline: Introduction to Matrix Analysis
• Why matrices; basic definitions/terms
• Matrix multiplication
• Determinants
• Inverse of a matrix
• Simultaneous linear equations: matrix form and Gaussian elimination solution
  – Matrix rank and its role in determining the type of solution (unique, infinite, none)
• Eigenvalues and eigenvectors

Why Matrices
• Simplified notation for general statements about mathematical or engineering systems
• Such systems have multiple components that are interrelated at individual points
  – Elements in a machine structure
  – Springs, masses and dampers in a vibrating system
• Matrix notation provides general relationships among the components

Matrix Example
• Simple linkage of two springs with spring constants k1, k2 and displacements u1, u2, u3
• The individual loads, Pk, are related to the individual displacements, uk, by the matrix equation shown below:

      | P1 |   |  k1      -k1      0  | | u1 |
  P = | P2 | = | -k1    k1+k2    -k2  | | u2 | = K U
      | P3 |   |  0       -k2     k2  | | u3 |

Matrix Basics
• A matrix is an array of numbers with n rows and m columns:

      | a11  a12  a13  ...  a1m |
      | a21  a22  a23  ...  a2m |
  A = | a31  a32  a33  ...  a3m |
      | ...  ...  ...  ...  ... |
      | an1  an2  an3  ...  anm |

• Components are written a(row)(column): aij is the component in row i, column j
• The size of a matrix, (n x m) or (n by m), is its number of rows and columns

Row and Column Matrices
• Matrices with only one row or only one column are called row or column matrices (sometimes called row or column vectors)
  – The row index 1 is usually dropped in row matrices, as is the column index 1 in column matrices:
  r = [r1  r2  r3  ...  rm]    c = [c1; c2; c3; ...; cn]

More Matrix Basics
• Two matrices are equal (e.g., A = B)
  – if both A and B have the same size (rows and columns), and
  – if each component of A is the same as the corresponding component of B (aij = bij for all i and j)
• A square matrix has the same number of rows and columns
• A diagonal matrix, D, has all zeros except on the principal diagonal

Diagonal Matrix
• A diagonal matrix A is a square matrix with nonzero components only on the principal diagonal:
  A = [a1 0 0 ... 0; 0 a2 0 ... 0; 0 0 a3 ... 0; ...; 0 0 0 ... an]
• The components of A are ai dij, where dij is the Kronecker delta: dij = 1 if i = j, and dij = 0 if i ≠ j

Matrix Operations
• Can add or subtract matrices if they are the same size
  – C = A ± B is valid only if A, B and C all have the same size (rows and columns)
  – Components of C: cij = aij ± bij
• Multiplication by a scalar: C = xA
  – C and A have the same size (rows and columns)
  – Components of C: cij = x aij
  – For scalar division, C = A/x, cij = aij/x

Null (0) and Unit (I) Matrices
• For any matrix A: A + 0 = 0 + A = A; IA = AI = A; and 0A = A0 = 0
• The unit (or identity) matrix, I, is a square matrix with ones on the principal diagonal and zeros elsewhere; the null matrix, 0, need not be square and is sometimes written 0(n x m)

Transpose of a Matrix
• The transpose of A is denoted AT
• Reverse rows and columns: for B = AT, bij = aji
  – If A is (n x m), B = AT is (m x n)
• In MATLAB the apostrophe is used to construct the transpose: B = A'
• Example: A = [3 12 6; 14 2 0], so AT = [3 14; 12 2; 6 0]

Matrix Multiplication Preview
• Not an intuitive operation.
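The definitions above (the spring-linkage equation P = K U, the transpose, and the null and unit matrix properties) can be checked numerically. The course uses MATLAB; the sketch below uses Python with NumPy as a stand-in, and the spring constants and displacements are illustrative values I have assumed, not values from the slides:

```python
import numpy as np

# Two-spring linkage, P = K u (spring constants and displacements assumed)
k1, k2 = 100.0, 200.0                      # N/m
K = np.array([[ k1,      -k1,     0.0],
              [-k1,  k1 + k2,    -k2],
              [ 0.0,     -k2,     k2]])
u = np.array([0.0, 0.01, 0.03])            # displacements in m
P = K @ u                                   # loads at the three points
print(P)                                    # [-1. -3.  4.]

# Transpose reverses rows and columns: b_ij = a_ji (MATLAB: B = A')
A = np.array([[3.0, 12.0, 6.0],
              [14.0, 2.0, 0.0]])           # 2 x 3
B = A.T                                     # 3 x 2
print(B.shape, B[0, 1] == A[1, 0])          # (3, 2) True

# Null and unit matrix properties: A + 0 = A and IA = A
print(np.array_equal(A + np.zeros((2, 3)), A))   # True
print(np.array_equal(np.eye(2) @ A, A))          # True
```

Note that the three computed loads sum to zero, as they must for a spring system in equilibrium.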
Look at two coordinate transformations as an example:
  y1 = a11 x1 + a12 x2        z1 = b11 y1 + b12 y2
  y2 = a21 x1 + a22 x2        z2 = b21 y1 + b22 y2
• Substitute the equations for the y's in terms of the x's into the equations for z1 and z2:
  z1 = b11[a11 x1 + a12 x2] + b12[a21 x1 + a22 x2]
  z2 = b21[a11 x1 + a12 x2] + b22[a21 x1 + a22 x2]

Matrix Multiplication Preview II
• Rearrange the last set of equations to get the direct transformation from x to z:
  z1 = [b11 a11 + b12 a21] x1 + [b11 a12 + b12 a22] x2 = c11 x1 + c12 x2
  z2 = [b21 a11 + b22 a21] x1 + [b21 a12 + b22 a22] x2 = c21 x1 + c22 x2

Matrix Multiplication Preview III
• Write the coefficients as matrix components:
  A = [a11 a12; a21 a22]    B = [b11 b12; b21 b22]    C = [c11 c12; c21 c22]
  c11 = b11 a11 + b12 a21    c12 = b11 a12 + b12 a22
  c21 = b21 a11 + b22 a21    c22 = b21 a12 + b22 a22
• In matrix form this is C = BA, with cij = Σ(k=1 to 2) bik akj (i = 1,2; j = 1,2); note that B, the second transformation applied, stands on the left
• Generalize this to any matrix size

General Matrix Multiplication
• For general matrix multiplication, C = AB
  – A has n rows and p columns
  – B has p rows and m columns
  – C has n rows and m columns
  – A is the left factor; B is the right factor; C is the product
  cij = Σ(k=1 to p) aik bkj    (i = 1,...,n; j = 1,...,m)
• For C = AB, we get cij by adding products of the terms in row i of A (the left matrix) by the terms in column j of B (the right matrix): (AB)ij is the product of row i of A times column j of B
  cij = ai1 b1j + ai2 b2j + ai3 b3j + ai4 b4j + ...
• In general, AB ≠ BA
• Example:
  A = [3 0 6; 4 -2 0]    B = [3 4; 1 2; -6 -1]
  AB = [3(3)+0(1)+6(-6)   3(4)+0(2)+6(-1);  4(3)+(-2)(1)+0(-6)   4(4)+(-2)(2)+0(-1)] = [-27 6; 10 12]
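The summation rule cij = Σk aik bkj translates directly into a short routine. This is a teaching sketch in Python (my choice of language here; the course itself uses MATLAB), using small illustrative 2 x 2 matrices that are not from the slides, and cross-checked against NumPy's built-in product:

```python
import numpy as np

def matmul(A, B):
    """C = AB with c_ij = sum_k a_ik * b_kj; A is n x p, B is p x m."""
    n, p = len(A), len(A[0])
    p2, m = len(B), len(B[0])
    assert p == p2, "columns of A must match rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)]
            for i in range(n)]

# Illustrative matrices showing that, in general, AB != BA
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))   # [[2, 1], [4, 3]]
print(matmul(B, A))   # [[3, 4], [1, 2]]

# Cross-check the hand-rolled loop against NumPy
assert np.array_equal(np.array(matmul(A, B)), np.array(A) @ np.array(B))
```

The triple loop (over i, j and the summation index k) is exactly the row-times-column recipe described above; real codes use the library routine, which does the same arithmetic far faster.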
Matrix Multiplication Exercise
• Consider the following matrices:
  A = [1 2 3; 0 -1 4; 1 1 0]    B = [0 2 -2 0; 1 3 1 0; 2 1 -3 1]
• Can you find AB, BA or both?
• We can find AB, because A has three columns and B has three rows
• We cannot find BA, because B has four columns and A has three rows

Matrix Multiplication Exercise II
• What is the size of C = AB? C has three rows (like A) and four columns (like B)
• What is c11? c11 = (1)(0) + (2)(1) + (3)(2) = 8
• The next chart starts the process of finding all components of AB

Matrix Multiplication Exercise III
• Find c11, c12, c13, and c14 in C = AB
• c11 = (1)(0) + (2)(1) + (3)(2) = 8
• c12 = (1)(2) + (2)(3) + (3)(1) = 11
• c13 = (1)(-2) + (2)(1) + (3)(-3) = -9
• c14 = (1)(0) + (2)(0) + (3)(1) = 3

Matrix Multiplication Exercise IV
• Find c21, c22, c23, and c24 in C = AB
• c21 = (0)(0) + (-1)(1) + (4)(2) = 7
• c22 = (0)(2) + (-1)(3) + (4)(1) = 1
• c23 = (0)(-2) + (-1)(1) + (4)(-3) = -13
• c24 = (0)(0) + (-1)(0) + (4)(1) = 4

Matrix Multiplication Exercise V
• Find c31, c32, c33, and c34 in C = AB
• c31 = (1)(0) + (1)(1) + (0)(2) = 1
• c32 = (1)(2) + (1)(3) + (0)(1) = 5
• c33 = (1)(-2) + (1)(1) + (0)(-3) = -1
• c34 = (1)(0) + (1)(0) + (0)(1) = 0

Matrix Multiplication Results
• Solution for the matrix product C = AB:
  C = [8 11 -9 3; 7 1 -13 4; 1 5 -1 0]

MATLAB/Excel Matrices
• We have seen term-by-term operations on arrays: A+B, A-B, A.*B, A./B and A.^n
• Matrix operations are given by: A+B, A-B, A*B, A/B, A\B and A^n
• Example in MATLAB, with A = [1 3; 2 4]:
  A.^2 = [1 9; 4 16]  (each term squared)
  A^2 = A*A = [1 3; 2 4][1 3; 2 4] = [7 15; 10 22]  (matrix product)
• Excel has the mmult array function to multiply two arrays

Determinants
• A determinant looks like a matrix but isn't one: it is a square array of numbers with a rule for computing a single value
• The example below shows the calculation of Det(A), the determinant of a 3 x 3 matrix A:

  Det A = | a11 a12 a13 |
          | a21 a22 a23 |
          | a31 a32 a33 |
        = a11 a22 a33 + a21 a32 a13 + a31 a12 a23
        - a11 a32 a23 - a21 a12 a33 - a31 a22 a13

More Determinants
• Determinants are useful in obtaining algebraic expressions for matrix operations, but not useful for numerical computation
  – Equation for a 2 by 2 determinant: Det A = |a11 a12; a21 a22| = a11 a22 - a21 a12
• An n x n determinant has n^2 minors, Mij, obtained by deleting row i and column j
• Cofactors, Aij = (-1)^(i+j) Mij, are used in general expressions for determinants

General Rule for Determinants
• Any size determinant can be evaluated by any of the following equations:
  Det A = Σ(i=1 to n) (-1)^(i+j) aij Mij  (expansion down any column j)
        = Σ(j=1 to n) (-1)^(i+j) aij Mij  (expansion along any row i)
        = Σ(i=1 to n) aij Aij = Σ(j=1 to n) aij Aij
• Can pick any row or any column; choose the row or column with the most zeros to simplify the calculations
• Can apply the equation recursively: evaluate a 5 x 5 determinant as a sum of 4 x 4 determinants, then get the 4 x 4's in terms of 3 x 3's

Determinant Behavior
• A determinant is zero if any row or any column contains all zeros.

Determinant Behavior II
• If one row (or one column) of a