Complex Conjugation of Group Representations by Inner Automorphisms
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
-
Math 651 Homework 1 - Algebras and Groups Due 2/22/2013
1) Consider the Lie group SU(2), the group of 2 × 2 complex matrices A with $\bar{A}^T A = I$ and det(A) = 1. The underlying set is

$$\left\{ \begin{pmatrix} z & -w \\ \bar{w} & \bar{z} \end{pmatrix} : |z|^2 + |w|^2 = 1 \right\} \quad (1)$$

with the standard $S^3$ topology. The usual basis for su(2) is

$$X = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} \quad (2)$$

(which are each i times the Pauli matrices: $X = i\sigma_x$, etc.).
a) Show this is the algebra of purely imaginary quaternions, under the commutator bracket.
b) Extend X to a left-invariant field and Y to a right-invariant field, and show by computation that the Lie bracket between them is zero.
c) Extending X, Y, Z to global left-invariant vector fields, give SU(2) the metric g(X, X) = g(Y, Y) = g(Z, Z) = 1 and all other inner products zero. Show this is a bi-invariant metric.
d) Pick $\epsilon > 0$ and set $g(X, X) = \epsilon^2$, leaving g(Y, Y) = g(Z, Z) = 1. Show this is left-invariant but not bi-invariant.

2) The realification of an n × n complex matrix $A + \sqrt{-1}\,B$ is its assignment to the 2n × 2n matrix

$$\begin{pmatrix} A & -B \\ B & A \end{pmatrix} \quad (3)$$

Any n × n quaternionic matrix can be written A + Bk where A and B are complex matrices. Its complexification is the 2n × 2n complex matrix

$$\begin{pmatrix} A & -B \\ \bar{B} & \bar{A} \end{pmatrix} \quad (4)$$

a) Show that the realification of complex matrices and complexification of quaternionic matrices are algebra homomorphisms.
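A minimal sketch (not part of the homework) of the realification map from problem 2, assuming NumPy; the function name is mine, and the block layout follows equation (3). It also spot-checks the homomorphism property asked for in 2a:

```python
import numpy as np

def realify(M):
    """Map an n x n complex matrix M = A + iB to the 2n x 2n real
    block matrix [[A, -B], [B, A]], as in equation (3)."""
    A, B = M.real, M.imag
    return np.block([[A, -B], [B, A]])

# Sanity check: realification should respect matrix multiplication.
M = np.array([[1 + 2j, 0], [3j, 4]])
N = np.array([[0, 1j], [1, 1 - 1j]])
assert np.allclose(realify(M) @ realify(N), realify(M @ N))
```

-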
Inner Product Spaces
CHAPTER 6: Inner Product Spaces
[Figure: woman teaching geometry, from a fourteenth-century edition of Euclid's geometry book.]
In making the definition of a vector space, we generalized the linear structure (addition and scalar multiplication) of R² and R³. We ignored other important features, such as the notions of length and angle. These ideas are embedded in the concept we now investigate, inner products. Our standing assumptions are as follows: 6.1 Notation F, V: F denotes R or C; V denotes a vector space over F. LEARNING OBJECTIVES FOR THIS CHAPTER: Cauchy–Schwarz Inequality; Gram–Schmidt Procedure; linear functionals on inner product spaces; calculating minimum distance to a subspace. (Linear Algebra Done Right, third edition, by Sheldon Axler.) 6.A Inner Products and Norms. Inner Products. To motivate the concept of inner product, think of vectors in R² and R³ as arrows with initial point at the origin. The length of a vector x in R² or R³ is called the norm of x, denoted ‖x‖. Thus for $x = (x_1, x_2) \in R^2$, we have $\|x\| = \sqrt{x_1^2 + x_2^2}$. Similarly, if $x = (x_1, x_2, x_3) \in R^3$, then $\|x\| = \sqrt{x_1^2 + x_2^2 + x_3^2}$. Even though we cannot draw pictures in higher dimensions, the generalization to $R^n$ is obvious: we define the norm of $x = (x_1, \dots, x_n) \in R^n$ by $\|x\| = \sqrt{x_1^2 + \cdots + x_n^2}$. The norm is not linear on $R^n$.
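A quick illustration of the norm definition above (plain Python; the function name is mine):

```python
import math

def norm(x):
    """Euclidean norm of x = (x_1, ..., x_n) in R^n:
    sqrt(x_1^2 + ... + x_n^2)."""
    return math.sqrt(sum(t * t for t in x))

print(norm((3.0, 4.0)))        # 5.0
print(norm((1.0, 2.0, 2.0)))   # 3.0

# The norm is not linear: norm((1,0)) + norm((0,1)) = 2.0,
# but norm((1,1)) = sqrt(2).
```

-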
Chapter 1: Complex Numbers Lecture Notes (Math)
Section 1.1: Definition of Complex Numbers. Definition of a complex number: a complex number is a number that can be expressed in the form z = a + bi, where a and b are real numbers and i is the imaginary unit, which satisfies the equation i² = −1. In this expression, a is the real part Re(z) and b is the imaginary part Im(z) of the complex number. The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number.

Ex. 1 Understanding complex numbers. Write the real part and the imaginary part of the following complex numbers and plot each number in the complex plane. (1) i (2) 4 + 2i (3) 1 − 3i (4) −2

Section 1.2: Operations with Complex Numbers. Addition and subtraction of two complex numbers: to add/subtract two complex numbers we add/subtract each part separately: (a + bi) + (c + di) = (a + c) + (b + d)i and (a + bi) − (c + di) = (a − c) + (b − d)i.

Ex. 1 Addition and subtraction of complex numbers. (1) (9 + i) + (2 − 3i) (2) (−2 + 4i) − (6 + 3i) (3) (i) − (−11 + 2i) (4) (1 + i) + (4 + 9i)

Multiplication of two complex numbers: to multiply two complex numbers we proceed as follows: (a + bi)(c + di) = ac + adi + bci + bdi² = ac + adi + bci − bd = (ac − bd) + (ad + bc)i.

Ex. 2 Multiplication of complex numbers. (1) (3 + 2i)(1 + 7i) (2) (i + 1)² (3) (−4 + 3i)(2 − 5i)

Conjugate of a complex number: the complex conjugate of the complex number z = a + bi is defined to be z̄ = a − bi.
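The exercises above can be checked mechanically with Python's built-in complex type, which follows exactly these addition, multiplication, and conjugation rules:

```python
# Python writes the imaginary unit as 1j.
z1, z2 = 9 + 1j, 2 - 3j
print(z1 + z2)               # (11-2j), matching Ex.1 (1)
print((3 + 2j) * (1 + 7j))   # (-11+23j): (3*1 - 2*7) + (3*7 + 2*1)i
print((1 + 1j) ** 2)         # 2j, matching Ex.2 (2)
print((4 + 2j).conjugate())  # (4-2j)
```

-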
Speeding Up SpMV for Power-Law Graph Analytics by Enhancing Locality & Vectorization
Speeding Up SpMV for Power-Law Graph Analytics by Enhancing Locality & Vectorization. Serif Yesil (Dept. of Computer Science, University of Illinois at Urbana-Champaign, [email protected]); Azin Heidarshenas (Dept. of Computer Science, University of Illinois at Urbana-Champaign, [email protected]); Adam Morrison (Blavatnik School of Computer Science, Tel Aviv University, [email protected]); Josep Torrellas (Dept. of Computer Science, University of Illinois at Urbana-Champaign, [email protected]).

Abstract: Graph analytics applications often target large-scale web and social networks, which are typically power-law graphs. Graph algorithms can often be recast as generalized Sparse Matrix-Vector multiplication (SpMV) operations, making SpMV optimization important for graph analytics. However, executing SpMV on large-scale power-law graphs results in highly irregular memory access patterns with poor cache utilization. Worse, we find that existing SpMV locality and vectorization optimizations are largely ineffective on modern out-of-order (OOO) processors: they are not faster (or only marginally so) than the standard Compressed Sparse Row (CSR) SpMV implementation. To improve performance for power-law graphs on modern OOO processors, we propose Locality-Aware Vectorization (LAV).

The data-dependent behavior of some accesses makes them hard to predict and optimize for. As a result, SpMV on large power-law graphs becomes memory bound. To address this challenge, previous work has focused on increasing SpMV's Memory-Level Parallelism (MLP) using vectorization [9], [10] and/or on improving memory access locality by rearranging the order of computation. The main techniques for improving locality are binning [11], [12], which translates indirect memory accesses into efficient sequential accesses, and cache blocking [13], which processes the matrix...
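For reference, a minimal sketch of the standard CSR SpMV baseline mentioned in the abstract (illustrative Python; function and variable names are mine, and this is not the authors' LAV code):

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix A in Compressed Sparse Row form.
    row_ptr[i]:row_ptr[i+1] delimits the nonzeros of row i; col_idx
    holds their column indices and values their numeric entries.
    The x[col] gathers are the irregular, hard-to-cache accesses
    discussed in the abstract."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# A = [[10, 0, 2], [0, 3, 0]]
print(spmv_csr([10.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3],
               [1.0, 1.0, 1.0]))  # [12.0, 3.0]
```

-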
Complex Inner Product Spaces
MATH 355 Supplemental Notes: Complex Inner Product Spaces. The Cⁿ spaces. The prototypical (and most important) real vector spaces are the Euclidean spaces Rⁿ. Any study of complex vector spaces will similarly begin with Cⁿ. As a set, Cⁿ contains vectors of length n whose entries are complex numbers. Thus, $\begin{bmatrix} 2+i \\ 3-5i \\ i \end{bmatrix} \in \mathbb{C}^3$; the vector (5, −1) is an element found both in R² and C² (and, indeed, all of Rⁿ is found in Cⁿ); and (0, 0, 0, 0) serves as the zero element in C⁴. Addition and scalar multiplication in Cⁿ are done in the analogous way to how they are performed in Rⁿ, except that now the scalars are allowed to be nonreal numbers. Thus, to rescale the vector (3 + i, −2 − 3i) by 1 − 3i, we have $(1-3i)\begin{bmatrix} 3+i \\ -2-3i \end{bmatrix} = \begin{bmatrix} (1-3i)(3+i) \\ (1-3i)(-2-3i) \end{bmatrix} = \begin{bmatrix} 6-8i \\ -11+3i \end{bmatrix}$. Given the notation $\overline{3+2i}$ for the complex conjugate 3 − 2i of 3 + 2i, we adopt a similar notation when we want to take the complex conjugate simultaneously of all entries in a vector. Thus, if $z = \begin{bmatrix} 3-4i \\ 2i \\ -2+5i \\ -1 \end{bmatrix}$, then $\bar{z} = \begin{bmatrix} 3+4i \\ -2i \\ -2-5i \\ -1 \end{bmatrix}$. Both z and z̄ are vectors in C⁴. In general, if the entries of z are all real numbers, then z̄ = z. The inner product in Cⁿ. In Rⁿ, the length ‖x‖ = √(x · x) of a vector x is a real, nonnegative number.
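A small sketch (illustrative, not from the notes; helper names are mine) checking the C² rescaling and the entrywise conjugation above with Python's complex type:

```python
def scale(c, v):
    """Scalar multiplication in C^n."""
    return [c * t for t in v]

def conj(v):
    """Entrywise complex conjugate of a vector in C^n."""
    return [t.conjugate() for t in v]

print(scale(1 - 3j, [3 + 1j, -2 - 3j]))  # [(6-8j), (-11+3j)]
print(conj([3 - 4j, 2j, -2 + 5j, -1]))   # [(3+4j), -2j, (-2-5j), -1]
```

-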
MATH 304 Linear Algebra Lecture 25: Complex Eigenvalues and Eigenvectors
MATH 304 Linear Algebra, Lecture 25: Complex eigenvalues and eigenvectors. Orthogonal matrices. Rotations in space. Complex numbers. C: complex numbers. Complex number: z = x + iy, where x, y ∈ R and i² = −1. i = √−1: imaginary unit. Alternative notation: z = x + yi. x = real part of z, iy = imaginary part of z. y = 0 ⇒ z = x (real number); x = 0 ⇒ z = iy (purely imaginary number). We add, subtract, and multiply complex numbers as polynomials in i (but keep in mind that i² = −1). If z₁ = x₁ + iy₁ and z₂ = x₂ + iy₂, then z₁ + z₂ = (x₁ + x₂) + i(y₁ + y₂), z₁ − z₂ = (x₁ − x₂) + i(y₁ − y₂), z₁z₂ = (x₁x₂ − y₁y₂) + i(x₁y₂ + x₂y₁). Given z = x + iy, the complex conjugate of z is z̄ = x − iy. The modulus of z is |z| = √(x² + y²). zz̄ = (x + iy)(x − iy) = x² − (iy)² = x² + y² = |z|². z⁻¹ = z̄/|z|², (x + iy)⁻¹ = (x − iy)/(x² + y²). Geometric representation: any complex number z = x + iy is represented by the vector/point (x, y) ∈ R². [Figure: the point z in the plane, at distance r from 0 and angle φ from the positive x-axis.] x = r cos φ, y = r sin φ ⇒ z = r(cos φ + i sin φ) = re^{iφ}. If z₁ = r₁e^{iφ₁} and z₂ = r₂e^{iφ₂}, then z₁z₂ = r₁r₂e^{i(φ₁+φ₂)}, z₁/z₂ = (r₁/r₂)e^{i(φ₁−φ₂)}. Fundamental Theorem of Algebra: any polynomial of degree n ≥ 1, with complex coefficients, has exactly n roots (counting with multiplicities).
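A brief sketch verifying these identities numerically with Python's cmath module (illustrative only):

```python
import cmath

z = 3 + 4j
print(abs(z))             # 5.0: the modulus |z| = sqrt(x^2 + y^2)
print(z * z.conjugate())  # (25+0j): z times z-bar equals |z|^2
print(1 / z, z.conjugate() / abs(z) ** 2)  # both (0.12-0.16j)

# Polar form: z = r e^{i phi}
r, phi = cmath.polar(z)
print(cmath.isclose(cmath.rect(r, phi), z))  # True
```

-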
The Integration Problem for Complex Lie Algebroids
The integration problem for complex Lie algebroids. Alan Weinstein* (Department of Mathematics, University of California, Berkeley CA, 94720-3840 USA, [email protected]). July 17, 2018. arXiv:math/0601752v1 [math.DG] 31 Jan 2006.

Abstract: A complex Lie algebroid is a complex vector bundle over a smooth (real) manifold M with a bracket on sections and an anchor to the complexified tangent bundle of M which satisfy the usual Lie algebroid axioms. A proposal is made here to integrate analytic complex Lie algebroids by using analytic continuation to a complexification of M and integration to a holomorphic groupoid. A collection of diverse examples reveal that the holomorphic stacks presented by these groupoids tend to coincide with known objects associated to structures in complex geometry. This suggests that the object integrating a complex Lie algebroid should be a holomorphic stack.

1 Introduction. It is a pleasure to dedicate this paper to Professor Hideki Omori. His work over many years, introducing ILH manifolds [30], Weyl manifolds [32], and blurred Lie groups [31] has broadened the notion of what constitutes a "space." The problem of "integrating" complex vector fields on real manifolds seems to lead to yet another kind of space, which is investigated in this paper. (*Research partially supported by NSF grant DMS-0204100.)

Recall that a Lie algebroid over a smooth manifold M is a real vector bundle E over M with a Lie algebra structure (over R) on its sections and a bundle map ρ (called the anchor) from E to the tangent bundle TM, satisfying the Leibniz rule [a, fb] = f[a, b] + (ρ(a)f)b for sections a and b and smooth functions f : M → R.
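Not from the paper, but a useful sanity check on the definition just quoted: the tangent bundle itself, with identity anchor, satisfies the axioms, and the Leibniz rule reduces to the familiar product rule for vector fields.

```latex
% Basic example (standard fact, not the paper's construction):
% E = TM with anchor rho = id. Sections of TM are vector fields;
% for vector fields a, b and a smooth function f, the Lie algebroid
% Leibniz rule is exactly the classical bracket identity, since
% rho(a) f = a f is the directional derivative of f along a.
\[
  [a, fb] = f\,[a,b] + (af)\,b, \qquad \rho = \mathrm{id}_{TM}.
\]
```

-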
Complex Numbers and Coordinate Transformations (WHOI Math Review)
Complex Numbers and Coordinate Transformations. WHOI Math Review. Isabela Le Bras. July 27, 2015. Class outline: 1. Complex number algebra 2. Complex number application 3. Rotation of coordinate systems 4. Polar and spherical coordinates.

1 Complex number algebra. Complex numbers are a combination of real and imaginary numbers. Imaginary numbers are based around the definition of i, i = √−1. They are useful for solving differential equations; they carry twice as much information as a real number and there exists a useful framework for handling them. To add and subtract complex numbers, group together the real and imaginary parts. For example, (4 + 3i) + (3 + 2i) = 7 + 5i. Try a few examples:
• (9 + 3i) − (4 + 7i) =
• (6i) + (8 + 2i) =
• (7 + 7i) − (9 − 9i) =
To multiply, multiply all components by each other, and use the fact that i² = −1 to simplify. For example, (3 − 2i)(4 + 3i) = 12 + 9i − 8i − 6i² = 18 + i. To divide, multiply the numerator and denominator by the complex conjugate of the denominator. The complex conjugate is formed by multiplying the imaginary part of the complex number by −1, and is often denoted by a star, i.e. (6 + 3i)* = (6 − 3i). The reason this works is easier to see in the complex plane. Here is an example of division: (4 + 2i)/(3 − i) = (4 + 2i)(3 + i)/((3 − i)(3 + i)) = (12 + 4i + 6i + 2i²)/10 = (10 + 10i)/10 = 1 + i. Try a few examples of multiplication and division:
• (4 + 7i)(2 + i) =
• (5 + 3i)(5 − 2i) =
• (6 − 2i)/(4 + 3i) =
• (3 + 2i)/(6i) =
We often think of complex numbers as living on the plane of real and imaginary numbers, and can write them as re^{iθ} = r(cos(θ) + i sin(θ)).
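A short sketch (illustrative; the helper name is mine) checking the divide-by-conjugate recipe against Python's built-in complex division:

```python
def divide(z, w):
    """Divide complex z by w by multiplying numerator and denominator
    by the conjugate of w, which makes the denominator the real
    number |w|^2."""
    wc = w.conjugate()
    return (z * wc) / (w * wc).real

print(divide(4 + 2j, 3 - 1j))  # (1+1j), matching the worked example
print((4 + 2j) / (3 - 1j))     # Python's built-in division agrees
```

-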
Sparse-Matrix Representation of Spiking Neural P Systems for Gpus
Sparse-matrix Representation of Spiking Neural P Systems for GPUs. Miguel Á. Martínez-del-Amor¹, David Orellana-Martín¹, Francis G.C. Cabarle², Mario J. Pérez-Jiménez¹, Henry N. Adorna². ¹Research Group on Natural Computing, Department of Computer Science and Artificial Intelligence, Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla, Spain. E-mail: [email protected], [email protected], [email protected]. ²Algorithms and Complexity Laboratory, Department of Computer Science, University of the Philippines Diliman, Diliman 1101 Quezon City, Philippines. E-mail: [email protected], [email protected].

Summary. Current parallel simulation algorithms for Spiking Neural P (SNP) systems are based on a matrix representation. This helps to harness the inherent parallelism in algebraic operations, such as vector-matrix multiplication. Although it has been convenient for the first parallel simulators running on Graphics Processing Units (GPUs), such as CuSNP, there are some bottlenecks to cope with. For example, the matrix representation of SNP systems with a low-connectivity-degree graph leads to sparse matrices, i.e. matrices containing more zeros than actual values. Having to deal with sparse matrices degrades the performance of the simulators because it wastes memory and time. However, sparse matrices are a well-known problem in parallel computing with GPUs, and several solutions and algorithms are available in the literature. In this paper, we briefly analyse some of these ideas and apply them to represent some variants of SNP systems. We also conclude which variant better suits a sparse-matrix representation. Keywords: Spiking Neural P systems, Simulation Algorithm, Sparse Matrix Representation, GPU computing, CUDA.

1 Introduction. Spiking Neural P (SNP) systems [9] are a type of P systems [16] composed of a directed graph inspired by how neurons are interconnected by axons and synapses.
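To make the memory argument concrete, here is a rough back-of-the-envelope sketch (illustrative only: the byte sizes and the generic CSR layout are my assumptions, not the paper's actual encoding of SNP systems):

```python
def dense_bytes(n_rows, n_cols, value_bytes=8):
    """Storage for a dense matrix (assumes 8-byte values)."""
    return n_rows * n_cols * value_bytes

def csr_bytes(n_rows, nnz, value_bytes=8, index_bytes=4):
    """Storage for CSR: one value + one column index per nonzero,
    plus a row-pointer array of length n_rows + 1."""
    return nnz * (value_bytes + index_bytes) + (n_rows + 1) * index_bytes

# A hypothetical low-connectivity SNP system:
# 10,000 neurons, ~5 synapses each.
n, nnz = 10_000, 50_000
print(dense_bytes(n, n) / 1e6, "MB dense")  # 800.0 MB dense
print(csr_bytes(n, nnz) / 1e6, "MB CSR")    # ~0.64 MB CSR
```

-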
A Framework for Efficient Execution of Matrix Computations
A Framework for Efficient Execution of Matrix Computations. Doctoral Thesis, May 2006. José Ramón Herrero. Advisor: Prof. Juan J. Navarro. UNIVERSITAT POLITÈCNICA DE CATALUNYA, Departament d'Arquitectura de Computadors.

To Joan and Albert, my children. To Eugènia, my wife. To Ramón and Gloria, my parents.

"Stillicidi casus lapidem cavat." Lucretius (c. 99 B.C.-c. 55 B.C.), De Rerum Natura¹ (¹"Continual dropping wears away a stone." Titus Lucretius Carus, On the Nature of Things.)

Abstract. Matrix computations lie at the heart of most scientific computational tasks. The solution of linear systems of equations is a very frequent operation in many fields in science, engineering, surveying, physics and others. Other matrix operations occur frequently in many other fields such as pattern recognition and classification, or multimedia applications. Therefore, it is important to perform matrix operations efficiently. The work in this thesis focuses on the efficient execution on commodity processors of matrix operations which arise frequently in different fields. We study some important operations which appear in the solution of real world problems: some sparse and dense linear algebra codes and a classification algorithm. In particular, we focus our attention on the efficient execution of the following operations: sparse Cholesky factorization; dense matrix multiplication; dense Cholesky factorization; and Nearest Neighbor Classification. A lot of research has been conducted on the efficient parallelization of numerical algorithms. However, the efficiency of a parallel algorithm depends ultimately on the performance obtained from the computations performed on each node. The work presented in this thesis focuses on the sequential execution on a single processor.
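For reference, one of the kernels listed in the abstract, dense Cholesky factorization, in a minimal unblocked sketch (illustrative only; the thesis studies far more carefully tuned variants):

```python
import math

def cholesky(A):
    """Unblocked Cholesky factorization of a symmetric positive-definite
    matrix A (list of lists); returns lower-triangular L with A = L L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(s)
        for i in range(j + 1, n):
            L[i][j] = (A[i][j]
                       - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

print(cholesky([[4.0, 2.0], [2.0, 5.0]]))  # [[2.0, 0.0], [1.0, 2.0]]
```

-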
Lecture No 9: COMPLEX INNER PRODUCT SPACES (Supplement)
1. Some Facts about Complex Numbers. In this chapter, complex numbers come more to the forefront of our discussion. Here we give enough background in complex numbers to study vector spaces where the scalars are the complex numbers. Complex numbers, denoted by ℂ, are the numbers of the form a + bi with a, b ∈ ℝ, where i = √−1. To plot complex numbers, one uses a two-dimensional grid. Complex numbers are usually represented as z, where z = a + ib, with a, b ∈ ℝ. The arithmetic operations are what one would expect; for example, (a + ib) + (c + id) = (a + c) + i(b + d). If a complex number z = a + ib, where a, b ∈ ℝ, then its complex conjugate is defined as z̄ = a − ib. [Course schedule note: 9, Lecture 2: Complex Euclidean space; axioms of the complex inner product. 10, Practical 2: Computing the complex inner product.] The following relationships occur (for example, the conjugate of z̄ is z again, and conjugation distributes over sums and products). Also, z is real if and only if z̄ = z. If the vector space is ℂⁿ, the formula for the usual dot product breaks down in that it does not necessarily give the length of a vector. For example, with the definition of the Euclidean dot product, the length of the vector (3i, 2i) would be √−13. In ℂⁿ, the length of the vector (a₁, …, aₙ) should be the distance from the origin to the terminal point of the vector, which is √(|a₁|² + ⋯ + |aₙ|²).

2. Complex Inner Product Spaces. This section considers vector spaces over the complex field ℂ. Note: the definition of inner product given in Lecture 6 is not useful for complex vector spaces because no nonzero complex vector space has such an inner product.
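The (3i, 2i) example can be checked directly; a small sketch (illustrative, with hypothetical helper names) contrasting the real-style dot product with the Hermitian one:

```python
import math

def naive_dot(u, v):
    """Real-style dot product: fails to give lengths in C^n."""
    return sum(a * b for a, b in zip(u, v))

def hermitian_dot(u, v):
    """Complex inner product <u, v> = sum of u_k * conj(v_k)."""
    return sum(a * b.conjugate() for a, b in zip(u, v))

v = [3j, 2j]
print(naive_dot(v, v))       # (-13+0j): would give "length" sqrt(-13)
print(hermitian_dot(v, v))   # (13+0j): length sqrt(13), as it should be
print(math.sqrt(hermitian_dot(v, v).real))  # 3.605551...
```

-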
Automatic Selection of Sparse Matrix Representation on GPUs
Automatic Selection of Sparse Matrix Representation on GPUs. Naser Sedaghati, Te Mu, Louis-Noël Pouchet, Srinivasan Parthasarathy, P. Sadayappan. The Ohio State University, Columbus, OH, USA. {sedaghat,mut,pouchet,srini,saday}@cse.ohio-state.edu

ABSTRACT: Sparse matrix-vector multiplication (SpMV) is a core kernel in numerous applications, ranging from physics simulation and large-scale solvers to data analytics. Many GPU implementations of SpMV have been proposed, targeting several sparse representations and aiming at maximizing overall performance. No single sparse matrix representation is uniformly superior, and the best performing representation varies for sparse matrices with different sparsity patterns. In this paper, we study the inter-relation between GPU architecture, sparse matrix representation and the sparse dataset. We perform extensive characterization of pertinent sparsity features of around 700 sparse matrices, and their SpMV performance with a number of sparse representations implemented in the NVIDIA CUSP and cuSPARSE libraries.

From the introduction: ... applications in terms of a dataset-algorithm-platform triangle. This is a complex relationship that is not yet well characterized even for much studied core kernels like SpMV. In this paper, we make a first attempt at understanding the dataset-algorithm dependences for SpMV on GPUs. The optimization of SpMV for GPUs has been a much researched topic over the last few years, including auto-tuning approaches to deliver very high performance [22, 11] by tuning the block size to the sparsity features of the computation. For CPUs, the compressed sparse row (CSR) representation (or the dual compressed sparse column) is the dominant representation, effective across domains from which the sparse matrices arise. In contrast, for GPUs no single representation has been found to be effective across a range of sparse matrices.
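The paper's premise, that the best representation depends on measurable sparsity features, can be illustrated with a toy selector (the feature, thresholds, and decision rule here are invented for illustration and are not the paper's method):

```python
from statistics import mean, stdev

def choose_format(row_lengths):
    """Toy heuristic: pick a sparse format from per-row nonzero counts.
    Uniform short rows suit ELL; heavy-tailed (power-law) rows suit CSR.
    Thresholds are illustrative, not from the paper."""
    mu = mean(row_lengths)
    cv = stdev(row_lengths) / mu if mu > 0 else 0.0  # coefficient of variation
    if cv < 0.5:
        return "ELL"  # padding waste is small when rows are uniform
    return "CSR"      # irregular rows: store exactly the nonzeros

print(choose_format([4, 5, 4, 5, 4]))          # ELL
print(choose_format([1, 1, 2, 1, 250, 1, 1]))  # CSR
```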