CHAPTER 6  HERMITIAN, ORTHOGONAL, AND UNITARY OPERATORS

In Chapter 4, we saw advantages in using bases consisting of eigenvectors of linear operators in a number of applications. Chapter 5 illustrated the benefit of orthonormal bases. Unfortunately, eigenvectors of linear operators are not usually orthogonal, and vectors in an orthonormal basis are not likely to be eigenvectors of any pertinent linear operator. There are operators, however, for which eigenvectors are orthogonal, and hence it is possible to have a basis that is simultaneously orthonormal and consists of eigenvectors. This chapter introduces some of these operators.

§6.1 Hermitian Operators

When the basis for an n-dimensional real inner product space is orthonormal, the inner product of two vectors $u$ and $v$ can be calculated with formula 5.48. If $v$ not only represents a vector, but also denotes its representation as a column matrix, we can write the inner product as the product of two matrices, one a row matrix and the other a column matrix, $(u, v) = u^T v$. If $A$ is an $n \times n$ real matrix, the inner product of $u$ and the vector $Av$ is

$$(u, Av) = u^T(Av) = (u^T A)v = (A^T u)^T v = (A^T u, v). \tag{6.1}$$

This result,

$$(u, Av) = (A^T u, v), \tag{6.2}$$

allows us to move the matrix $A$ from the second term to the first term in the inner product, but it must be replaced by its transpose $A^T$. A similar result can be derived for complex inner product spaces. When $A$ is a complex matrix, we can use equation 5.50 to write

$$(u, Av) = u^T\overline{(Av)} = (u^T\overline{A})\,\overline{v} = (\overline{A}^T u)^T\,\overline{v} = (\overline{A}^T u, v). \tag{6.3}$$

This is the complex counterpart of equation 6.2,

$$(u, Av) = (\overline{A}^T u, v), \tag{6.4}$$

but this time, we must also take complex conjugates of the entries in $A^T$. We identify this matrix in the following definition.

Definition 6.1  The Hermitian conjugate of a complex matrix $A$ is the transpose of its complex conjugate, $\overline{A}^T$. For example, the Hermitian conjugate of

$$\begin{pmatrix} 1+2i & 3 & 3-i \\ 2-3i & 2i & 1+6i \\ -2 & 4-2i & 6+7i \end{pmatrix} \qquad\text{is}\qquad \begin{pmatrix} 1-2i & 2+3i & -2 \\ 3 & -2i & 4+2i \\ 3+i & 1-6i & 6-7i \end{pmatrix}.$$

A real matrix is symmetric if it is equal to its transpose. The complex counterpart of a symmetric matrix is contained in the next definition.

Definition 6.2  A square complex matrix $A$ is said to be Hermitian if it is equal to its Hermitian conjugate,

$$A = \overline{A}^T. \tag{6.5}$$

In a Hermitian matrix, entries on opposite sides of the diagonal are complex conjugates, and diagonal entries are real (see Exercise 7). A square real matrix is Hermitian if it is symmetric.

If $L$ is a linear operator on a complex inner product space $V$, we can associate a complex matrix $A$ with $L$ so that mapping a vector $v$ can be written as matrix multiplication, $L(v) = Av$. We use Hermitian matrices to define Hermitian operators.

Definition 6.3  A linear operator $H$ on a complex inner product space is said to be Hermitian if its matrix is Hermitian.

We use the letter $H$ to represent a Hermitian operator to emphasize that it is Hermitian. The matrix $A$ of a Hermitian operator satisfies equation 6.5, and for such an operator, equation 6.4 implies that

$$\big(u, H(v)\big) = \big(H(u), v\big). \tag{6.6}$$
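Equations 6.4 and 6.6 are easy to check numerically. The sketch below is illustrative and not part of the original text; it uses NumPy and assumes, as the derivation of equation 6.3 suggests, that the complex inner product of equation 5.50 is $(u, v) = u^T\overline{v}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def inner(u, v):
    # Complex inner product assumed here: (u, v) = u^T v-bar
    return u @ np.conj(v)

n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

A_h = np.conj(A).T  # Hermitian conjugate: transpose of the complex conjugate

# Equation 6.4: moving A to the first slot replaces it by its Hermitian conjugate
print(np.isclose(inner(u, A @ v), inner(A_h @ u, v)))  # True

# Equation 6.6: a Hermitian matrix moves between the slots unchanged
H = A + A_h  # H equals its own Hermitian conjugate, hence is Hermitian
print(np.isclose(inner(u, H @ v), inner(H @ u, v)))    # True
```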
Hermitian operators are the first of our operators for which eigenvectors are orthogonal. This is proved in the following theorem.

Theorem 6.1  Eigenvalues of a Hermitian operator on an inner product space are real, and eigenvectors corresponding to distinct eigenvalues are orthogonal.

Proof  Let $\lambda$ and $v$ be an eigenpair for a Hermitian operator $H$. Not out of necessity, but to simplify calculations, suppose that $v$ has length one. Then

$$\begin{aligned}
\overline{\lambda} &= \overline{\lambda}(v, v) && \text{(the length of $v$ is one)}\\
&= (v, \lambda v) && \text{(property 5.19b)}\\
&= \big(v, H(v)\big) && \text{($\lambda$ is an eigenvalue for $H$ with eigenvector $v$)}\\
&= \big(H(v), v\big) && \text{(equation 6.6)}\\
&= (\lambda v, v) && \text{($\lambda$ is an eigenvalue for $H$ with eigenvector $v$)}\\
&= \lambda(v, v) && \text{(property 5.19a)}\\
&= \lambda && \text{(the length of $v$ is one).}
\end{aligned}$$

Hence, $\lambda$ is real. Now suppose that $v_1$ and $v_2$ are eigenvectors corresponding to (real) distinct eigenvalues $\lambda_1$ and $\lambda_2$. Then

$$\begin{aligned}
\lambda_1(v_1, v_2) &= (\lambda_1 v_1, v_2) && \text{($\lambda_1$ is real and property 5.19a)}\\
&= \big(H(v_1), v_2\big) && \text{($\lambda_1$ is an eigenvalue for $H$ with eigenvector $v_1$)}\\
&= \big(v_1, H(v_2)\big) && \text{(equation 6.6)}\\
&= (v_1, \lambda_2 v_2) && \text{($\lambda_2$ is an eigenvalue for $H$ with eigenvector $v_2$)}\\
&= \lambda_2(v_1, v_2). && \text{($\lambda_2$ is real and property 5.19b)}
\end{aligned}$$

Since $\lambda_1 \ne \lambda_2$, this implies that $(v_1, v_2) = 0$, and the eigenvectors are orthogonal.

It is important to note that different eigenvectors corresponding to the same eigenvalue are not usually orthogonal. Indeed, if $v$ is an eigenvector of an operator, then so also is $3v$, and these vectors are certainly not orthogonal. When an eigenvalue has two, or more, linearly independent eigenvectors, it is possible to have orthogonal eigenvectors corresponding to this eigenvalue. Here are some examples to illustrate the theorem and this remark.

Example 6.1  The symmetric matrix corresponding to a Hermitian operator on a real inner product space is

$$A = \begin{pmatrix} 2 & 1 & -1 \\ 1 & 2 & 1 \\ -1 & 1 & 2 \end{pmatrix}.$$

Show that eigenvalues are real, and eigenvectors corresponding to different eigenvalues are orthogonal.

Solution  Eigenvalues are given by

$$0 = \det(A - \lambda I) = \det\begin{pmatrix} 2-\lambda & 1 & -1 \\ 1 & 2-\lambda & 1 \\ -1 & 1 & 2-\lambda \end{pmatrix} = -\lambda(\lambda - 3)^2.$$

Thus, $\lambda = 0$ and $\lambda = 3$ are the eigenvalues. Eigenvectors corresponding to $\lambda = 0$ are $v_3(1, -1, 1)$. Eigenvectors corresponding to $\lambda = 3$ are $v_2(1, 1, 0) + v_3(-1, 0, 1)$. If we take the inner product of any eigenvector $v_1 = v(1, -1, 1)$ corresponding to $\lambda = 0$, and any eigenvector $v_2 = u(1, 1, 0) + w(-1, 0, 1)$ corresponding to $\lambda = 3$, we obtain

$$(v_1, v_2) = \big(v(1, -1, 1),\, u(1, 1, 0) + w(-1, 0, 1)\big) = vu(1 - 1 + 0) + vw(-1 + 0 + 1) = 0.$$

They are orthogonal. The inner product of the eigenvectors $(1, 1, 0)$ and $(-1, 0, 1)$ corresponding to $\lambda = 3$ is $\big((1, 1, 0), (-1, 0, 1)\big) = -1$. They are not orthogonal. Notice, however, that the vector $(1, 1, 0) + 2(-1, 0, 1) = (-1, 1, 2)$ is an eigenvector corresponding to $\lambda = 3$, and it is orthogonal to the eigenvector $(1, 1, 0)$. •

Example 6.2  The Hermitian matrix corresponding to a Hermitian operator on a complex inner product space is

$$A = \begin{pmatrix} 2 & 0 & i \\ 0 & 1 & 0 \\ -i & 0 & 2 \end{pmatrix}.$$

Find its eigenvalues and eigenvectors. Demonstrate orthogonality of eigenvectors corresponding to different eigenvalues.

Solution  Eigenvalues are given by

$$0 = \det(A - \lambda I) = \det\begin{pmatrix} 2-\lambda & 0 & i \\ 0 & 1-\lambda & 0 \\ -i & 0 & 2-\lambda \end{pmatrix} = -(\lambda - 1)^2(\lambda - 3).$$

Thus, $\lambda = 1$ and $\lambda = 3$ are eigenvalues. Eigenvectors corresponding to $\lambda = 1$ are $v_2(0, 1, 0) + v_3(-i, 0, 1)$. Eigenvectors corresponding to $\lambda = 3$ are $v_3(i, 0, 1)$. If we take the inner product of any eigenvector $v = v_2(0, 1, 0) + v_3(-i, 0, 1)$ corresponding to $\lambda = 1$, and any eigenvector $w = w_3(i, 0, 1)$ corresponding to $\lambda = 3$, we obtain

$$(v, w) = \big(v_2(0, 1, 0) + v_3(-i, 0, 1),\, w_3(i, 0, 1)\big) = v_2\overline{w_3}(0) + v_3\overline{w_3}(-1 + 1) = 0.$$

They are orthogonal. •
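Both examples can be verified numerically. The following sketch is illustrative rather than part of the original text; it applies NumPy's `eigh` routine, which is designed for Hermitian matrices, to the matrix of Example 6.2, confirming that the eigenvalues are real and that eigenvectors for the distinct eigenvalues 1 and 3 are orthogonal.

```python
import numpy as np

# Hermitian matrix from Example 6.2
A = np.array([[2, 0, 1j],
              [0, 1, 0],
              [-1j, 0, 2]])

# eigh exploits Hermitian structure: it returns real eigenvalues in
# ascending order and orthonormal eigenvectors as the columns of V
eigvals, V = np.linalg.eigh(A)
print(eigvals)  # approximately [1. 1. 3.]

# An eigenvector for eigenvalue 1 and one for eigenvalue 3 are orthogonal
v, w = V[:, 0], V[:, 2]
print(np.isclose(v @ np.conj(w), 0))  # True
```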
Eigenvectors corresponding to eigenvalues with algebraic multiplicity larger than one need not be orthogonal, but from nonorthogonal eigenvectors, the Gram-Schmidt process can be used to construct orthogonal eigenvectors corresponding to the same eigenvalue. Here is an example.

Example 6.3  The symmetric (Hermitian) matrix of an operator on a 3-dimensional space is

$$A = \begin{pmatrix} 5 & 2 & -4 \\ 2 & 8 & 2 \\ -4 & 2 & 5 \end{pmatrix}.$$

Show that $A$ has two eigenvalues, one with (algebraic) multiplicity 2. Find orthonormal eigenvectors corresponding to the eigenvalue with algebraic multiplicity 2.

Solution  Eigenvalues are given by

$$0 = \det\begin{pmatrix} 5-\lambda & 2 & -4 \\ 2 & 8-\lambda & 2 \\ -4 & 2 & 5-\lambda \end{pmatrix} = -\lambda(\lambda - 9)^2.$$

Eigenvalues are $\lambda = 0$ and $\lambda = 9$. Eigenvectors corresponding to $\lambda = 9$ satisfy

$$0 = (A - 9I)v = \begin{pmatrix} -4 & 2 & -4 \\ 2 & -1 & 2 \\ -4 & 2 & -4 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}.$$

The reduced row echelon form for the augmented matrix of this system is

$$\begin{pmatrix} 1 & -1/2 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

When we convert to equations, $v_1 - \dfrac{v_2}{2} + v_3 = 0$. Eigenvectors are therefore

$$v = \begin{pmatrix} v_2/2 - v_3 \\ v_2 \\ v_3 \end{pmatrix} = \frac{v_2}{2}\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} + v_3\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}.$$

Linearly independent eigenvectors corresponding to $\lambda = 9$ are $v_1 = (1, 2, 0)$ and $v_2 = (-1, 0, 1)$. We can construct an eigenvector corresponding to $\lambda = 9$ orthogonal to $v_1$ using Gram-Schmidt,

$$v_3 = (-1, 0, 1) - \frac{\big((-1, 0, 1), (1, 2, 0)\big)}{5}(1, 2, 0) = (-1, 0, 1) + \frac{1}{5}(1, 2, 0) = \left(-\frac{4}{5}, \frac{2}{5}, 1\right).$$

Orthonormal eigenvectors corresponding to $\lambda = 9$ are

$$\hat{v}_1 = \frac{(1, 2, 0)}{\sqrt{5}} \qquad\text{and}\qquad \hat{v}_3 = \frac{(-4, 2, 5)}{3\sqrt{5}}. \quad\bullet$$

We now know that eigenvectors corresponding to different eigenvalues of a Hermitian operator are orthogonal, and should an eigenvalue with algebraic multiplicity greater than 1 have more than one linearly independent eigenvector, then orthogonal eigenvectors can be constructed from them. The following theorem shows that the geometric multiplicity of every eigenvalue of a Hermitian operator on a finite-dimensional vector space must be equal to its algebraic multiplicity. In other words, the number of linearly independent eigenvectors of a Hermitian operator on a finite-dimensional space must be equal to the dimension of the space.
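As a numerical companion to Example 6.3, the Gram-Schmidt step and the resulting orthonormal eigenvectors can be reproduced with a few lines of NumPy; this sketch uses the same data as the example but is not part of the original text.

```python
import numpy as np

# Symmetric matrix from Example 6.3
A = np.array([[ 5.0, 2.0, -4.0],
              [ 2.0, 8.0,  2.0],
              [-4.0, 2.0,  5.0]])

# Linearly independent, nonorthogonal eigenvectors for lambda = 9
v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([-1.0, 0.0, 1.0])

# One Gram-Schmidt step: remove from v2 its component along v1
v3 = v2 - (v2 @ v1) / (v1 @ v1) * v1
print(v3)  # [-0.8  0.4  1. ], i.e. (-4/5, 2/5, 1)

# Normalize both; they remain eigenvectors for lambda = 9 and are orthonormal
v1_hat = v1 / np.linalg.norm(v1)
v3_hat = v3 / np.linalg.norm(v3)
print(np.allclose(A @ v3_hat, 9 * v3_hat))  # True
print(np.isclose(v1_hat @ v3_hat, 0))       # True
```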