A New Parallel Matrix Multiplication Algorithm on Distributed-Memory Concurrent Computers


Jaeyoung Choi
School of Computing, Soongsil University
1-1, Sangdo-Dong, Dongjak-Ku, Seoul 156-743, KOREA

Abstract

We present a new fast and scalable matrix multiplication algorithm, called DIMMA (Distribution-Independent Matrix Multiplication Algorithm), for block cyclic data distribution on distributed-memory concurrent computers. The algorithm is based on two new ideas: it uses a modified pipelined communication scheme to overlap computation and communication effectively, and it exploits the LCM block concept to obtain the maximum performance of the sequential BLAS routine in each processor even when the block size is very small as well as very large. The algorithm is implemented and compared with SUMMA on the Intel Paragon computer.

1. Introduction

A number of algorithms are currently available for multiplying two matrices A and B to yield the product matrix C = A·B on distributed-memory concurrent computers [12, 16]. Two classic algorithms are Cannon's algorithm [4] and Fox's algorithm [11]. They are based on a P × P square processor grid with a block data distribution, in which each processor holds a large consecutive block of data. Two efforts to implement Fox's algorithm on general 2-D grids have been made: Choi, Dongarra and Walker developed 'PUMMA' [7] for block cyclic data decompositions, and Huss-Lederman, Jacobson, Tsao and Zhang developed 'BiMMeR' [15] for the virtual 2-D torus wrap data layout. The differences in these data layouts result in different algorithms. These two algorithms have been compared on the Intel Touchstone Delta [14].

Recent efforts to implement numerical algorithms for dense matrices on distributed-memory concurrent computers are based on a block cyclic data distribution [6], in which an M × N matrix A consists of m_b × n_b blocks of data, and the blocks are distributed by wrapping around both the row and column directions on an arbitrary P × Q processor grid. The distribution can reproduce most data distributions used in linear algebra computations; for details, see Section 2.2. We limit the distribution of data matrices to the block cyclic data distribution.

PUMMA requires a minimum number of communications and computations. It consists of only Q − 1 shifts for A, LCM(P, Q) broadcasts for B, and LCM(P, Q) local multiplications, where LCM(P, Q) is the least common multiple of P and Q. It multiplies the largest possible matrices of A and B at each computation step, so that the performance of the routine depends very weakly on the block size of the matrix. However, PUMMA makes it difficult to overlap computation with communication, since it always deals with the largest possible matrices for both computation and communication, and it requires a large amount of memory to store them temporarily, which makes it impractical in real applications.

Agrawal, Gustavson and Zubair [1] proposed another matrix multiplication algorithm that efficiently overlaps computation with communication on the Intel iPSC/860 and Delta systems. Van de Geijn and Watts [18] independently developed the same algorithm on the Intel Paragon and called it SUMMA. Also independently, PBLAS [5], which is a major building block of ScaLAPACK [3], uses the same scheme in implementing its matrix multiplication routine, PDGEMM.
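The scheme shared by these three implementations can be viewed, stripped of its communication, as a sequence of rank-k_b updates: at each step one column panel of A meets the matching row panel of B in a local DGEMM call. The following NumPy sketch illustrates only that panel-wise structure (the function name and test sizes are ours; the broadcasts along processor rows and columns of the real distributed codes are omitted):

```python
import numpy as np

def panelwise_multiply(A, B, kb):
    """Serial illustration of the panel scheme used by SUMMA-like codes:
    C is accumulated as a sequence of rank-kb updates, one per column
    panel of A and matching row panel of B. In the distributed algorithm
    each panel is broadcast along a processor row or column before the
    local DGEMM call."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N))
    for k in range(0, K, kb):
        A_panel = A[:, k:k + kb]   # column panel of A
        B_panel = B[k:k + kb, :]   # row panel of B
        C += A_panel @ B_panel     # local rank-kb update (DGEMM)
    return C

# sanity check against a direct product
A = np.random.rand(12, 8)
B = np.random.rand(8, 10)
assert np.allclose(panelwise_multiply(A, B, kb=3), A @ B)
```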
In this paper, we present a new fast and scalable matrix multiplication algorithm, called DIMMA (Distribution-Independent Matrix Multiplication Algorithm), for block cyclic data distribution on distributed-memory concurrent computers. The algorithm incorporates SUMMA with two new ideas. It uses a modified pipelined communication scheme, which makes the algorithm the most efficient by overlapping computation and communication effectively. It also exploits the LCM concept, which maintains the maximum performance of the sequential BLAS routine, DGEMM, in each processor even when the block size is very small as well as very large. The details of the LCM concept are explained in Section 2.2. DIMMA and SUMMA are implemented and compared on the Intel Paragon computer.

The parallel matrix multiplication requires O(N^3) flops and O(N^2) communications, i.e., it is computation intensive. For a large matrix, the performance difference between SUMMA and DIMMA may be marginal and negligible. But for a small matrix of N = 1000 on a 16 × 16 processor grid, the performance difference is approximately 10%.

2. Design Principles

2.1. Level 3 BLAS

Current advanced architecture computers possess hierarchical memories, in which access to data in the upper levels of the memory hierarchy (registers, cache, and/or local memory) is faster than access to data in the lower levels (shared or off-processor memory). One technique to exploit the power of such machines more efficiently is to develop algorithms that maximize reuse of data held in the upper levels. This can be done by partitioning the matrix or matrices into blocks and by performing the computation with matrix-matrix operations on the blocks. The Level 3 BLAS [9] perform a number of commonly used matrix-matrix operations, and are available in optimized form on most computing platforms ranging from workstations up to supercomputers.

The Level 3 BLAS have been successfully used as the building blocks of a number of applications, including LAPACK [2], a software library that uses block-partitioned algorithms for performing dense linear algebra computations on vector and shared memory computers. On shared memory machines, block-partitioned algorithms reduce the number of times that data must be fetched from shared memory, while on distributed-memory machines, they reduce the number of messages required to get the data from other processors. Thus, there has been much interest in developing versions of the Level 3 BLAS for distributed-memory concurrent computers [5, 8, 10].

The most important routine in the Level 3 BLAS is DGEMM, which performs matrix-matrix multiplication. This general purpose routine performs the following operation:

    C ← α·op(A)·op(B) + β·C,

where op(X) = X, X^T, or X^H, and "·" denotes matrix-matrix multiplication. A, B and C are matrices, and α and β are scalars. This paper focuses on the design and implementation of the non-transposed matrix multiplication routine, C ← α·A·B + β·C, but the idea can easily be extended to the transposed multiplication routines, C ← α·A^T·B + β·C and C ← α·A·B^T + β·C.

2.2. Block Cyclic Data Distribution

For performing the matrix multiplication C = A·B, we assume that A, B and C are M × K, K × N, and M × N, respectively. The distributed routine also requires a condition on the block size to ensure compatibility: if the block size of A is m_b × k_b, then that of B and C must be k_b × n_b and m_b × n_b, respectively. So the number of blocks of matrices A, B, and C are M_g × K_g, K_g × N_g, and M_g × N_g, respectively, where M_g = ⌈M/m_b⌉, N_g = ⌈N/n_b⌉, and K_g = ⌈K/k_b⌉.
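As a small worked example of the block-size rule and the block counts just given, the following Python sketch (the helper name and sample sizes are ours, not from the paper) derives the block dimensions of A, B, and C from the global matrix sizes:

```python
import math

def block_counts(M, N, K, mb, nb, kb):
    """Numbers of blocks per dimension under the block cyclic distribution.
    A uses mb x kb blocks, so B and C must use kb x nb and mb x nb blocks;
    A is then Mg x Kg blocks, B is Kg x Ng, and C is Mg x Ng."""
    Mg = math.ceil(M / mb)
    Ng = math.ceil(N / nb)
    Kg = math.ceil(K / kb)
    return (Mg, Kg), (Kg, Ng), (Mg, Ng)

# e.g. M = N = K = 1000 with mb = nb = kb = 64:
# each matrix is split into a 16 x 16 array of blocks
print(block_counts(1000, 1000, 1000, 64, 64, 64))
```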
Figure 1: Block cyclic data distribution. A matrix with 12 × 12 blocks is distributed over a 2 × 3 processor grid. (a) Matrix point-of-view: the shaded and unshaded areas represent different grids. (b) Processor point-of-view: each processor has 6 × 4 blocks, and it is easier to see the distribution from this point-of-view when implementing algorithms.

The way in which a matrix is distributed over the processors has a major impact on the load balance and communication characteristics of the concurrent algorithm, and hence largely determines its performance and scalability. The block cyclic distribution provides a simple, general-purpose way of distributing a block-partitioned matrix on distributed-memory concurrent computers. Figure 1(a) shows an example of the block cyclic data distribution, where a matrix with 12 × 12 blocks is distributed over a 2 × 3 grid. The numbered squares represent blocks of elements, and the number indicates the location in the processor grid: all blocks labeled with the same number are stored in the same processor. The slanted numbers, on the left and on the top of the matrix, represent indices of a row of blocks and of a column of blocks, respectively. Figure 1(b) reflects the distribution from a processor point-of-view, where each processor has 6 × 4 blocks. Denoting the least common multiple of P and Q by LCM, we refer to a square of LCM × LCM blocks as an LCM block. Thus, the matrix in Figure 1 may be viewed as a 2 × 2 array of LCM blocks.
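The mapping shown in Figure 1 can be reproduced in a few lines. The sketch below (hypothetical helper names, not code from the paper) assigns block (i, j) to processor (i mod P, j mod Q), confirms that each of the six processors holds 6 × 4 blocks, and shows that the 12 × 12 block matrix forms a 2 × 2 array of 6 × 6 LCM blocks:

```python
import math
from collections import Counter

P, Q = 2, 3                        # processor grid from Figure 1
row_blocks = col_blocks = 12       # the matrix has 12 x 12 blocks

# block cyclic mapping: block (i, j) is stored on processor (i mod P, j mod Q)
owner = {(i, j): (i % P, j % Q)
         for i in range(row_blocks) for j in range(col_blocks)}

blocks_per_proc = Counter(owner.values())
assert len(blocks_per_proc) == P * Q
assert all(n == (row_blocks // P) * (col_blocks // Q)   # 6 * 4 blocks each
           for n in blocks_per_proc.values())

lcm = math.lcm(P, Q)               # an LCM block is lcm x lcm = 6 x 6 blocks
print(row_blocks // lcm, "x", col_blocks // lcm, "array of LCM blocks")  # 2 x 2
```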