Chapter 6. Unitary Matrices


We assume that the reader has some experience with matrices and determinants. We can easily extend the basic theory of linear algebra by allowing complex numbers as matrix entries. However, we should pay more attention to features unique to complex matrices, especially the notion of the adjoint, which is the matrix version of the complex conjugate.

Among the first things we learn in linear algebra is the intimate relation between matrices and linear mappings. To describe this relation within our convention, we need to identify each vector in $\mathbb{C}^n$ with a column, that is, with an $n \times 1$ matrix. Thus a vector in $\mathbb{C}^n$, say $x = (x_1, x_2, \dots, x_n)$, will be considered the same as

$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \equiv [x_1 \; x_2 \; \cdots \; x_n]^\top.$$

We are safeguarded from confusion by the different types of brackets. From now on, let us adopt the following rule: things in a row surrounded by the round brackets "(" and ")" are the same things arranged in a column surrounded by the square brackets "[" and "]", e.g.

$$(\text{dog}, \text{cat}) = \begin{bmatrix} \text{dog} \\ \text{cat} \end{bmatrix}.$$

We have the following "Matrix Representation Theorem":

A map $T$ from $\mathbb{C}^n$ to $\mathbb{C}^m$ is linear if and only if there exists an $m \times n$ matrix $A$ such that $Tx = Ax$ for all $x \in \mathbb{C}^n$. Furthermore, the matrix $A$ here is uniquely determined by $T$.

(Recall that a mapping $T$ from $\mathbb{C}^n$ to $\mathbb{C}^m$ (we write $T: \mathbb{C}^n \to \mathbb{C}^m$) is linear if the identity $T(\alpha x + \beta y) = \alpha Tx + \beta Ty$ holds for all vectors $x, y$ in $\mathbb{C}^n$ and all scalars $\alpha, \beta$.)

Given a complex matrix $A$, we define the adjoint of $A$, denoted by $A^*$, to be the conjugate transpose of $A$. In other words, $A^*$ is obtained by taking the complex conjugate of all entries of $A$, followed by taking the transpose: $A^* = \overline{A}^{\,\top}$. Thus

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \implies A^* = \begin{bmatrix} \overline{a_{11}} & \overline{a_{21}} & \cdots & \overline{a_{m1}} \\ \overline{a_{12}} & \overline{a_{22}} & \cdots & \overline{a_{m2}} \\ \vdots & \vdots & & \vdots \\ \overline{a_{1n}} & \overline{a_{2n}} & \cdots & \overline{a_{mn}} \end{bmatrix}.$$

As we have mentioned, the adjoint is the matrix version of the complex conjugate.

Example 6.1. Regarding a vector $v = (a_1, a_2, \dots, a_n)$ in $\mathbb{C}^n$ as a matrix, we have

$$v = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}, \qquad v^* = [\overline{a_1} \; \overline{a_2} \; \cdots \; \overline{a_n}], \qquad vv^* = \begin{bmatrix} a_1\overline{a_1} & a_1\overline{a_2} & \cdots & a_1\overline{a_n} \\ a_2\overline{a_1} & a_2\overline{a_2} & \cdots & a_2\overline{a_n} \\ \vdots & \vdots & & \vdots \\ a_n\overline{a_1} & a_n\overline{a_2} & \cdots & a_n\overline{a_n} \end{bmatrix},$$

and $v^*v = |a_1|^2 + |a_2|^2 + \cdots + |a_n|^2 = \langle v, v \rangle$.

For $n \times n$ matrices $A$ and $B$, and for a complex number $\alpha$, we have

$$(A + B)^* = A^* + B^*, \qquad (\alpha A)^* = \overline{\alpha}\,A^*, \qquad (AB)^* = B^*A^*.$$

The last identity tells us that in general $(AB)^* = A^*B^*$ is false.

The following identity is the most basic feature concerning the adjoint of a matrix: for every $n \times n$ matrix $A$ and all vectors $x, y$ in the complex vector space $\mathbb{C}^n$, we have

$$\langle Ax, y \rangle = \langle x, A^*y \rangle.$$

We check this identity only for $2 \times 2$ matrices. Suppose

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \qquad x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}.$$

Then

$$Ax = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 \\ a_{21}x_1 + a_{22}x_2 \end{bmatrix} \qquad\text{and}\qquad A^*y = \begin{bmatrix} \overline{a_{11}}y_1 + \overline{a_{21}}y_2 \\ \overline{a_{12}}y_1 + \overline{a_{22}}y_2 \end{bmatrix}.$$

So $\langle Ax, y \rangle = a_{11}x_1\overline{y_1} + a_{12}x_2\overline{y_1} + a_{21}x_1\overline{y_2} + a_{22}x_2\overline{y_2}$ and $\langle x, A^*y \rangle = x_1a_{11}\overline{y_1} + x_1a_{21}\overline{y_2} + x_2a_{12}\overline{y_1} + x_2a_{22}\overline{y_2}$. Comparing them, we see that $\langle Ax, y \rangle = \langle x, A^*y \rangle$.

We say that an $n \times n$ matrix $A$ is self-adjoint or Hermitian if $A^* = A$. This identity can be regarded as the matrix version of $\overline{z} = z$. So being Hermitian is the matrix analogue of being real for numbers. We say that a matrix $A$ is unitary if $A^*A = AA^* = I$, that is, the adjoint $A^*$ is equal to the inverse of $A$. The identity $A^*A = AA^* = I$ is the matrix analogue of $\overline{z}z = 1$, or $|z| = 1$. Thus, being unitary is the matrix analogue of having unit modulus for complex numbers. Denote by $U(n)$ the set of all $n \times n$ unitary matrices. It is easy to check that $U(n)$ forms a group under the usual matrix multiplication. For example, $A, B \in U(n)$ implies $A^*A = AA^* = I$ and $B^*B = BB^* = I$, and hence

$$(AB)(AB)^* = ABB^*A^* = AIA^* = AA^* = I,$$

etc. The group $U(n)$ is called the unitary group. It plays a basic role in the geometry of the complex vector space $\mathbb{C}^n$.
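These identities are easy to test numerically. The following is a small sketch in Python with NumPy (our own choice of tool, not part of the text; the helpers `adjoint` and `inner` are ours). It checks the adjoint identity, the reversal rule $(AB)^* = B^*A^*$, and the closure of $U(n)$ under multiplication:

```python
import numpy as np

rng = np.random.default_rng(0)

def adjoint(A):
    """Conjugate transpose: A* = conj(A)^T."""
    return A.conj().T

def inner(u, v):
    """Inner product <u, v> = sum_i u_i * conj(v_i), the convention used above."""
    return np.sum(u * v.conj())

# A random complex matrix and two random complex vectors.
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# The basic identity <Ax, y> = <x, A*y>.
print(np.isclose(inner(A @ x, y), inner(x, adjoint(A) @ y)))   # True

# (AB)* = B*A*, not A*B*.
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
print(np.allclose(adjoint(A @ B), adjoint(B) @ adjoint(A)))    # True

# The QR factorization of a random complex matrix yields a unitary Q;
# the product of two unitary matrices is again unitary, so U(n) is
# closed under multiplication.
Q1, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
Q2, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
P = Q1 @ Q2
print(np.allclose(adjoint(P) @ P, np.eye(3)))                  # True
```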
Let $A$ be an $n \times n$ unitary matrix and denote by $v_1, v_2, \dots, v_n$ its column vectors. Thus we have $A = [v_1 \; v_2 \; \dots \; v_n]$ and hence

$$A^* = \begin{bmatrix} v_1^* \\ v_2^* \\ \vdots \\ v_n^* \end{bmatrix}, \qquad A^*A = \begin{bmatrix} v_1^*v_1 & v_1^*v_2 & \cdots & v_1^*v_n \\ v_2^*v_1 & v_2^*v_2 & \cdots & v_2^*v_n \\ \vdots & \vdots & & \vdots \\ v_n^*v_1 & v_n^*v_2 & \cdots & v_n^*v_n \end{bmatrix} = \begin{bmatrix} \langle v_1, v_1 \rangle & \langle v_1, v_2 \rangle & \cdots & \langle v_1, v_n \rangle \\ \langle v_2, v_1 \rangle & \langle v_2, v_2 \rangle & \cdots & \langle v_2, v_n \rangle \\ \vdots & \vdots & & \vdots \\ \langle v_n, v_1 \rangle & \langle v_n, v_2 \rangle & \cdots & \langle v_n, v_n \rangle \end{bmatrix}.$$

Thus $A^*A = I$ tells us that $\langle v_j, v_k \rangle = \delta_{jk}$, meaning that the columns $v_1, v_2, \dots, v_n$ form an orthonormal basis of $\mathbb{C}^n$. We have shown that the columns of a unitary matrix form an orthonormal basis. It turns out that the converse is also true. We have arrived at the following characterization of unitary matrices:

An $n \times n$ matrix is unitary iff its columns form an orthonormal basis of $\mathbb{C}^n$.

Here "iff" stands for "if and only if", a shorthand invented by Paul Halmos. We also have the "real version" of the above statement: a real $n \times n$ matrix is orthogonal iff its columns form an orthonormal basis of $\mathbb{R}^n$.

Now we give examples of unitary matrices which are used in practice, for instance in communication theory (exactly how they are used is too lengthy to explain here).

Example 6.2. The matrix

$$H_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \qquad\text{with columns}\qquad v_1 = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}, \quad v_2 = \begin{bmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{bmatrix}$$

is an orthogonal matrix, since we can check that its columns $v_1, v_2$ form an orthonormal basis of $\mathbb{R}^2$.

Now we describe a process to define the Hadamard matrix $H_n$. Let

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$$

be a $2 \times 2$ matrix and let $B$ be an $n \times n$ matrix. We define their tensor product $A \otimes B$ to be the $2n \times 2n$ matrix given by

$$A \otimes B = \begin{bmatrix} a_{11}B & a_{12}B \\ a_{21}B & a_{22}B \end{bmatrix}.$$

We have the following basic identities about tensor products of matrices:

$$aA \otimes bB = ab\,(A \otimes B), \qquad (A \otimes B)^* = A^* \otimes B^*, \qquad (A \otimes B)(C \otimes D) = AC \otimes BD. \tag{6.1}$$

A consequence of these identities is: if $A$ and $B$ are unitary (or orthogonal), then so is $A \otimes B$. For example,

$$H_2 \equiv H_1 \otimes H_1 = \frac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \otimes \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}.$$

We can define $H_n$ inductively by putting

$$H_n = H_1 \otimes H_{n-1} = \frac{1}{\sqrt{2}}\begin{bmatrix} H_{n-1} & H_{n-1} \\ H_{n-1} & -H_{n-1} \end{bmatrix},$$

which is a $2^n \times 2^n$ orthogonal matrix, called the Hadamard matrix. We remark that tensoring is an important operation used in many areas, such as quantum information and quantum computation.

Example 6.3. Let $\omega = e^{2\pi i/n}$. The columns of the following matrix form the orthonormal basis of $\mathbb{C}^n$ described in Example 5.2 of the last chapter, and hence it is a unitary matrix:

$$F = \frac{1}{\sqrt{n}}\begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega & \omega^2 & \cdots & \omega^{n-1} \\ 1 & \omega^2 & \omega^4 & \cdots & \omega^{2(n-1)} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \omega^{n-1} & \omega^{2(n-1)} & \cdots & \omega^{(n-1)(n-1)} \end{bmatrix}.$$

The linear mapping associated with this matrix is called the finite Fourier transform. Speeding up this transform by special methods is closely related to reducing the cost of communication networks. The rediscovery of the so-called FFT (Fast Fourier Transform) has had great practical significance; historians can now trace the FFT method back as far as Gauss.
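The constructions of Examples 6.2 and 6.3 are easy to reproduce numerically; the tensor product $A \otimes B$ defined above is exactly NumPy's Kronecker product `np.kron`. A minimal sketch (the function names `hadamard` and `fourier` are ours):

```python
import numpy as np

def hadamard(n):
    """H_n = H_1 ⊗ H_{n-1}, a 2^n x 2^n orthogonal matrix."""
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    H = H1
    for _ in range(n - 1):
        H = np.kron(H1, H)   # tensor product A ⊗ B is np.kron(A, B)
    return H

def fourier(n):
    """The finite Fourier transform matrix F with entries ω^{jk}/√n."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    omega = np.exp(2j * np.pi / n)
    return omega ** (j * k) / np.sqrt(n)

H3 = hadamard(3)                                  # 8 x 8
print(np.allclose(H3.T @ H3, np.eye(8)))          # orthogonal: True

F = fourier(5)
print(np.allclose(F.conj().T @ F, np.eye(5)))     # unitary: True
```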
The material in the rest of the present chapter is optional.

We say that an $n \times n$ complex matrix $A$ is orthogonally diagonalizable if there is an orthonormal basis $\mathcal{E} = \{e_1, e_2, \dots, e_n\}$ consisting of eigenvectors of $A$, that is, for each $k$, $Ae_k = \lambda_k e_k$, where $\lambda_k$ is the eigenvalue corresponding to the eigenvector $e_k$. Now we use the basis vectors (considered as columns) in $\mathcal{E}$ to form the unitary matrix $U = [e_1 \; e_2 \; \dots \; e_n]$.

In the next step, we make use of $Ae_k = \lambda_k e_k$, but there is a small formal snag: if we regard the scalar $\lambda_k$ as a $1 \times 1$ matrix, it sits on the wrong side of the $n \times 1$ vector $e_k$ for the matrix product to make sense. To adjust this, we rewrite $\lambda_k e_k$ as $e_k \lambda_k$. Thus we have $Ae_k = e_k\lambda_k$. Now the way is clear for the following matrix manipulation:

$$AU = A[e_1 \; e_2 \; \dots \; e_n] = [Ae_1 \; Ae_2 \; \dots \; Ae_n] = [e_1\lambda_1 \; e_2\lambda_2 \; \dots \; e_n\lambda_n] = [e_1 \; e_2 \; \dots \; e_n]D = UD,$$

where $D$ is the diagonal matrix given by

$$D = \begin{bmatrix} \lambda_1 & 0 & 0 & \cdots & 0 \\ 0 & \lambda_2 & 0 & \cdots & 0 \\ 0 & 0 & \lambda_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_n \end{bmatrix}.$$

Thus we have $A = UDU^{-1}$. The above steps can go backward. So we have proved:

Fact. $A$ is orthogonally diagonalizable if and only if $A = UDU^{-1} \equiv UDU^*$ for some unitary $U$ and diagonal $D$.
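As an illustration of the Fact, here is a sketch (again Python/NumPy, our choice) that diagonalizes a Hermitian matrix, a standard example of an orthogonally diagonalizable matrix. The routine `np.linalg.eigh`, which is designed for Hermitian input, returns the eigenvalues together with an orthonormal eigenbasis, so its output matrix plays the role of $U$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a Hermitian matrix A = M + M*; such a matrix is orthogonally
# (unitarily) diagonalizable.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M + M.conj().T

# eigh returns the eigenvalues and an orthonormal basis of eigenvectors;
# the columns of U are the e_k, so U is unitary.
eigvals, U = np.linalg.eigh(A)
D = np.diag(eigvals)

print(np.allclose(U.conj().T @ U, np.eye(4)))   # U*U = I: True
print(np.allclose(A @ U, U @ D))                # AU = UD: True
print(np.allclose(A, U @ D @ U.conj().T))       # A = UDU*: True
```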