Lecture 2J Inner Product Spaces (Pages 348-350)


So far, we have taken the essential properties of $\mathbb{R}^n$ and used them to form general vector spaces, and now we have taken the essential properties of the standard basis and created orthonormal bases. But the definition of an orthonormal basis was dependent on the properties of the dot product. What was so special about the dot product? We take the essential properties of the dot product and use them to define the more general concept of an inner product.

Definition: Let $V$ be a vector space over $\mathbb{R}$. An inner product on $V$ is a function $\langle\,,\,\rangle : V \times V \to \mathbb{R}$ such that

(1) $\langle v, v\rangle \geq 0$ for all $v \in V$, and $\langle v, v\rangle = 0$ if and only if $v = 0$ (positive definite);
(2) $\langle v, w\rangle = \langle w, v\rangle$ for all $v, w \in V$ (symmetric);
(3) $\langle v, sw + tz\rangle = s\langle v, w\rangle + t\langle v, z\rangle$ for all $s, t \in \mathbb{R}$ and $v, w, z \in V$ (bilinear).

Note that, if we combine properties (2) and (3), then we also have $\langle sv + tw, z\rangle = s\langle v, z\rangle + t\langle w, z\rangle$. In its full generality,
$$\langle s_1 x + s_2 v,\; t_1 w + t_2 z\rangle = s_1\langle x, t_1 w + t_2 z\rangle + s_2\langle v, t_1 w + t_2 z\rangle = s_1 t_1\langle x, w\rangle + s_1 t_2\langle x, z\rangle + s_2 t_1\langle v, w\rangle + s_2 t_2\langle v, z\rangle.$$

Definition: A vector space $V$ with an inner product is called an inner product space.

Let's start our study of inner product spaces by looking in $\mathbb{R}^n$.

Example: Show that the function $\langle\,,\,\rangle$ defined by $\langle \vec{x}, \vec{y}\rangle = x_1 y_1 + 4 x_2 y_2 + 9 x_3 y_3$ is an inner product on $\mathbb{R}^3$.

To show this, we verify that $\langle\,,\,\rangle$ satisfies the three properties of an inner product.

(1) For any $\vec{x} \in \mathbb{R}^3$, $\langle \vec{x}, \vec{x}\rangle = x_1 x_1 + 4 x_2 x_2 + 9 x_3 x_3 = x_1^2 + 4x_2^2 + 9x_3^2$. Since each of $x_1^2$, $x_2^2$, and $x_3^2$ is greater than or equal to zero, we see that $\langle \vec{x}, \vec{x}\rangle \geq 0$. What if $\langle \vec{x}, \vec{x}\rangle = 0$? Then $x_1^2 + 4x_2^2 + 9x_3^2 = 0$. But since the terms in this sum are all greater than or equal to zero, the only way their sum can equal zero is if each of the terms equals zero. So we get $x_1^2 = 0$, $4x_2^2 = 0$, and $9x_3^2 = 0$. This means that $x_1 = 0$, $x_2 = 0$, and $x_3 = 0$, which means that $\vec{x} = \vec{0}$. And so we see that $\langle\,,\,\rangle$ is positive definite.

(2) For any $\vec{x}, \vec{y} \in \mathbb{R}^3$, $\langle \vec{x}, \vec{y}\rangle = x_1 y_1 + 4x_2 y_2 + 9x_3 y_3 = y_1 x_1 + 4y_2 x_2 + 9y_3 x_3 = \langle \vec{y}, \vec{x}\rangle$, so $\langle\,,\,\rangle$ is symmetric.

(3) For any $\vec{x}, \vec{y}, \vec{z} \in \mathbb{R}^3$ and $s, t \in \mathbb{R}$,
$$\begin{aligned}
\langle \vec{x}, s\vec{y} + t\vec{z}\rangle &= x_1(sy_1 + tz_1) + 4x_2(sy_2 + tz_2) + 9x_3(sy_3 + tz_3)\\
&= sx_1y_1 + tx_1z_1 + 4sx_2y_2 + 4tx_2z_2 + 9sx_3y_3 + 9tx_3z_3\\
&= s(x_1y_1 + 4x_2y_2 + 9x_3y_3) + t(x_1z_1 + 4x_2z_2 + 9x_3z_3)\\
&= s\langle \vec{x}, \vec{y}\rangle + t\langle \vec{x}, \vec{z}\rangle
\end{aligned}$$
So $\langle\,,\,\rangle$ is bilinear.

Example: The function $\langle\,,\,\rangle$ defined by $\langle \vec{x}, \vec{y}\rangle = 2x_1y_1 + 3x_2y_2$ is not an inner product on $\mathbb{R}^3$ because it is not positive definite. For example, $\left\langle \begin{bmatrix}0\\0\\3\end{bmatrix}, \begin{bmatrix}0\\0\\3\end{bmatrix} \right\rangle = 2(0)(0) + 3(0)(0) = 0$, even though $\begin{bmatrix}0\\0\\3\end{bmatrix} \neq \begin{bmatrix}0\\0\\0\end{bmatrix}$. Note that the textbook shows that this function does define an inner product on $\mathbb{R}^2$, so this example is intended to show the importance of paying attention to which vector space you are working in.

Example: The function $\langle\,,\,\rangle$ defined by $\langle \vec{x}, \vec{y}\rangle = x_1y_1 + x_1y_2 + x_1y_3$ is not an inner product on $\mathbb{R}^3$ because it is not symmetric. For example, $\left\langle \begin{bmatrix}1\\2\\3\end{bmatrix}, \begin{bmatrix}2\\3\\4\end{bmatrix} \right\rangle = (1)(2) + (1)(3) + (1)(4) = 9$, but $\left\langle \begin{bmatrix}2\\3\\4\end{bmatrix}, \begin{bmatrix}1\\2\\3\end{bmatrix} \right\rangle = (2)(1) + (2)(2) + (2)(3) = 12$, so the two values are not equal. (Our function is also not positive definite. It is bilinear.)
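As a quick numerical sanity check of the weighted inner product above, here is a minimal sketch (an addition to these notes, assuming NumPy is available; the vectors and function name are illustrative). It evaluates $\langle \vec{x}, \vec{y}\rangle = x_1y_1 + 4x_2y_2 + 9x_3y_3$ and spot-checks symmetry and bilinearity on sample vectors.

```python
import numpy as np

def weighted_inner(x, y):
    """<x, y> = x1*y1 + 4*x2*y2 + 9*x3*y3 on R^3."""
    w = np.array([1.0, 4.0, 9.0])
    return float(np.sum(w * x * y))

x = np.array([1.0, -2.0, 3.0])
y = np.array([4.0, 0.5, -1.0])
z = np.array([2.0, 1.0, 5.0])
s, t = 2.0, -3.0

print(weighted_inner(x, x) >= 0)                                  # positive on this sample
print(np.isclose(weighted_inner(x, y), weighted_inner(y, x)))     # symmetric
print(np.isclose(weighted_inner(x, s*y + t*z),
                 s*weighted_inner(x, y) + t*weighted_inner(x, z)))  # bilinear in the 2nd slot
```

Of course, a finite number of checks does not prove the properties; the algebraic verification above is the actual proof.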
The dot product was only defined in $\mathbb{R}^n$, but an inner product can be defined in any vector space. What could be an inner product in a matrix space? Since our result is supposed to be a number, not a vector, matrix multiplication is not a contender. What about the determinant?

Example: The function $\langle A, B\rangle = \det(AB)$ is not an inner product on $M(2,2)$. The easiest thing to see is that it is not positive definite, since $\det\left(\begin{bmatrix}1&0\\0&0\end{bmatrix}\begin{bmatrix}1&0\\0&0\end{bmatrix}\right) = \det\begin{bmatrix}1&0\\0&0\end{bmatrix} = 0$, so $\left\langle \begin{bmatrix}1&0\\0&0\end{bmatrix}, \begin{bmatrix}1&0\\0&0\end{bmatrix} \right\rangle = 0$ even though $\begin{bmatrix}1&0\\0&0\end{bmatrix} \neq \begin{bmatrix}0&0\\0&0\end{bmatrix}$. This counterexample easily extends to show that $\langle A, B\rangle = \det(AB)$ is not an inner product on $M(n,n)$ for any $n$. It is also worth noting that the determinant is not bilinear, since in general $\det(sA) = s^n \det A \neq s \det A$. So, while the determinant is useful for many things, it is not useful for defining an inner product on matrix spaces.

There is another operation associated with matrices that results in a number, though, and that is the trace. Recall that the trace of a matrix is the sum of its diagonal entries.

Example: The function $\langle A, B\rangle = \operatorname{tr}(B^T A)$ is an inner product on $M(m,n)$. To see this, let's first look at this definition in $M(2,3)$:
$$\begin{aligned}
\left\langle \begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\end{bmatrix}, \begin{bmatrix}b_{11}&b_{12}&b_{13}\\b_{21}&b_{22}&b_{23}\end{bmatrix} \right\rangle
&= \operatorname{tr}\left(\begin{bmatrix}b_{11}&b_{21}\\b_{12}&b_{22}\\b_{13}&b_{23}\end{bmatrix}\begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\end{bmatrix}\right)\\
&= \operatorname{tr}\begin{bmatrix}b_{11}a_{11}+b_{21}a_{21} & ? & ?\\ ? & b_{12}a_{12}+b_{22}a_{22} & ?\\ ? & ? & b_{13}a_{13}+b_{23}a_{23}\end{bmatrix}\\
&= b_{11}a_{11}+b_{21}a_{21}+b_{12}a_{12}+b_{22}a_{22}+b_{13}a_{13}+b_{23}a_{23}
\end{aligned}$$

Note that I left some entries of the matrix $B^T A$ blank (marked $?$) because it was not necessary to calculate them in order to compute the trace. In fact, the entries on the diagonal are $\begin{bmatrix}b_{11}\\b_{21}\end{bmatrix}\cdot\begin{bmatrix}a_{11}\\a_{21}\end{bmatrix}$, $\begin{bmatrix}b_{12}\\b_{22}\end{bmatrix}\cdot\begin{bmatrix}a_{12}\\a_{22}\end{bmatrix}$, and $\begin{bmatrix}b_{13}\\b_{23}\end{bmatrix}\cdot\begin{bmatrix}a_{13}\\a_{23}\end{bmatrix}$. That is, the $i$-th diagonal entry is the dot product of the $i$-th column of $B$ with the $i$-th column of $A$.

As you may now be guessing, this is true in the general case. Recall that our definition of matrix multiplication was that the $ij$-th entry of $BA$ is the dot product of the $i$-th row of $B$ with the $j$-th column of $A$. But our "$B$" is currently $B^T$, and the $i$-th row of $B^T$ is the same as the $i$-th column of $B$, so the $ij$-th entry of $B^T A$ is the dot product of the $i$-th column of $B$ with the $j$-th column of $A$. Since we are looking for the trace of $B^T A$, we need to take the sum of the diagonal entries (the $ii$ entries), each of which is the dot product of the $i$-th column of $B$ with the $i$-th column of $A$. So, if $A = \begin{bmatrix}\vec{a}_1 & \cdots & \vec{a}_n\end{bmatrix}$ and $B = \begin{bmatrix}\vec{b}_1 & \cdots & \vec{b}_n\end{bmatrix}$, then
$$\langle A, B\rangle = \operatorname{tr}(B^T A) = \sum_{i=1}^n \vec{b}_i \cdot \vec{a}_i.$$

With this formula in hand, let's show that $\langle\,,\,\rangle$ is an inner product.

(1) $\langle A, A\rangle = \sum_{i=1}^n \vec{a}_i \cdot \vec{a}_i$. Since $\vec{a}_i \cdot \vec{a}_i \geq 0$ for all $i$ (as the dot product is positive definite), we know that $\sum_{i=1}^n \vec{a}_i \cdot \vec{a}_i \geq 0$. Moreover, if $\sum_{i=1}^n \vec{a}_i \cdot \vec{a}_i = 0$, then we need each $\vec{a}_i \cdot \vec{a}_i = 0$, and again using the fact that the dot product is positive definite, this means that $\vec{a}_i = \vec{0}$ for all $i$. And since the columns of $A$ are all the zero vector, we get that $A$ is the zero matrix. As such, we have shown that $\langle\,,\,\rangle$ is positive definite.

(2) We have already shown that $\langle A, B\rangle = \sum_{i=1}^n \vec{b}_i \cdot \vec{a}_i$, so now we want to look at $\langle B, A\rangle$. By the same argument we used for $\langle A, B\rangle$, we know that the $ij$-th entry of $A^T B$ is the dot product of the $i$-th column of $A$ with the $j$-th column of $B$. This means that $\operatorname{tr}(A^T B) = \sum_{i=1}^n \vec{a}_i \cdot \vec{b}_i$. Since the dot product is symmetric, we know that $\vec{a}_i \cdot \vec{b}_i = \vec{b}_i \cdot \vec{a}_i$. And thus we have the following:
$$\langle B, A\rangle = \operatorname{tr}(A^T B) = \sum_{i=1}^n \vec{a}_i \cdot \vec{b}_i = \sum_{i=1}^n \vec{b}_i \cdot \vec{a}_i = \langle A, B\rangle.$$

(3) For any $s, t \in \mathbb{R}$ and $A, B, C \in M(m,n)$,
$$\langle A, sB + tC\rangle = \sum_{i=1}^n (s\vec{b}_i + t\vec{c}_i) \cdot \vec{a}_i = \sum_{i=1}^n \left(s\,\vec{b}_i \cdot \vec{a}_i + t\,\vec{c}_i \cdot \vec{a}_i\right) = s\sum_{i=1}^n \vec{b}_i \cdot \vec{a}_i + t\sum_{i=1}^n \vec{c}_i \cdot \vec{a}_i = s\langle A, B\rangle + t\langle A, C\rangle.$$
So $\langle\,,\,\rangle$ is bilinear.
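The column formula can also be checked numerically. The following sketch is an addition to the notes (assuming NumPy; the random test matrices are arbitrary): it confirms that $\operatorname{tr}(B^T A)$ equals the sum of the dot products of corresponding columns for a $2 \times 3$ example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(2, 3)).astype(float)
B = rng.integers(-5, 6, size=(2, 3)).astype(float)

trace_form = np.trace(B.T @ A)                                     # <A, B> = tr(B^T A)
column_form = sum(B[:, i] @ A[:, i] for i in range(A.shape[1]))    # sum of column dot products

print(np.isclose(trace_form, column_form))                         # True
```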
Before we leave this example, let's take a further look at the formula $\langle A, B\rangle = \operatorname{tr}(B^T A) = \sum_{i=1}^n \vec{b}_i \cdot \vec{a}_i$. The dot product $\vec{b}_i \cdot \vec{a}_i$ is the sum $b_{1i}a_{1i} + \cdots + b_{mi}a_{mi}$, and since we cycle through all columns $i$ when we take the inner product, what we end up with is
$$\langle A, B\rangle = \sum_{i=1}^m \sum_{j=1}^n b_{ij} a_{ij}.$$
Don't let the double sum scare you! The important fact is what is inside: we are taking the products of the corresponding entries of $A$ and $B$, and adding them together. This formula is quite reminiscent of our formula for the dot product in $\mathbb{R}^n$. (And, in fact, under the standard isomorphism from $M(m,n)$ to $\mathbb{R}^{mn}$, this is the same as the dot product.) So whenever we need to calculate $\operatorname{tr}(B^T A)$, we know it is $\sum_{i=1}^m \sum_{j=1}^n b_{ij} a_{ij}$.
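A brief sketch of this last identity (again an illustrative addition, assuming NumPy): the trace form, the entrywise double sum, and the dot product of the flattened matrices all give the same number.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[2.0, -1.0, 0.0],
              [3.0,  1.0, 4.0]])

inner_trace     = np.trace(B.T @ A)          # tr(B^T A)
inner_entrywise = np.sum(A * B)              # sum over all i, j of a_ij * b_ij
inner_flat      = A.flatten() @ B.flatten()  # dot product under the isomorphism M(m,n) -> R^(mn)

print(np.isclose(inner_trace, inner_entrywise), np.isclose(inner_trace, inner_flat))
```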