Mathematical Software for Linear Algebra in the 1970's

Total pages: 16 | File type: PDF | Size: 1020 KB

Slides 1-2: Title and Overview

The Impact of RISC on Linear Algebra Software

Jack Dongarra
Computer Science Department, University of Tennessee
Mathematical Sciences Section, Oak Ridge National Laboratory
http://www.netlib.org/utk/people/JackDongarra.html

Overview:
- Motivation and history: supercomputers, vector computers; Level 1 BLAS and Linpack; unrolling to let the compiler recognize optimization opportunities; tricks like the BABE algorithm
- RISC: why; Level 2 and 3 BLAS; compilers; memory hierarchy; Linpack on RISC
- Parallel RISC systems: MP; Scalapack; bandwidth and latency; iterative methods; bombardment
- Future

Slides 3-4: Mathematical Software for Linear Algebra in the 1970's

The 1970's produced software packages for eigenvalue problems and linear equations, with little thought given to high performance computers; the CDC 7600 was the state-of-the-art supercomputer.

EISPACK (early 70's): Fortran routines for the eigenvalue problem - a Fortran translation of the Algol algorithms from Num. Math. (the Wilkinson-Reinsch Handbook for Automatic Computation).

LINPACK: Fortran routines for systems of linear equations.

Software issues:
- Portability
- Robustness
- Accuracy
- Uniform, well documented routines
- Collective ideas

Slides 5-6: Level 1 BLAS

Basic Linear Algebra Subprograms for Fortran Usage, by C. Lawson, R. Hanson, D. Kincaid, and F. Krogh:
- A conceptual aid in design and coding
- An aid to readability and documentation
- Promote efficiency, through optimized or assembly language versions
- Improve robustness and reliability
- Improve portability through standardization

Operations covered:
- Dot products
- 'axpy' operations: y ← αx + y
- Multiply a vector by a constant
- Set up and apply Givens rotations
- Copy and swap vectors
- Vector norms

Calling sequences (name, dimension, scalars, vectors):

      SUBROUTINE AXPY (N, ALPHA, X, INCX, Y, INCY)
      FUNCTION   DOT  (N, X, INCX, Y, INCY)
      SUBROUTINE SCAL (N, ALPHA, X, INCX)
      SUBROUTINE ROTG (A, B, C, S)
      SUBROUTINE ROT  (N, X, INCX, Y, INCY, C, S)
      SUBROUTINE COPY (N, X, INCX, Y, INCY)
      SUBROUTINE SWAP (N, X, INCX, Y, INCY)
      FUNCTION   NRM2 (N, X, INCX)
      FUNCTION   ASUM (N, X, INCX)

Slides 7-8: LINPACK (late 70's)

- Used current ideas - not a translation.
- The first vector supercomputer, the CRAY-1, had arrived.
- The BLAS standardized the basic vector operations.
- LINPACK embraced the BLAS for modularity and efficiency.
- Algorithms were reworked and column oriented.

Here is the body of the code of the LINPACK routine SPOFA (Cholesky factorization), which implements the above method:

      DO 30 J = 1, N
         INFO = J
         S = 0.0E0
         JM1 = J - 1
         IF (JM1 .LT. 1) GO TO 20
         DO 10 K = 1, JM1
            T = A(K,J) - SDOT(K-1,A(1,K),1,A(1,J),1)
            T = T/A(K,K)
            A(K,J) = T
            S = S + T*T
   10    CONTINUE
   20    CONTINUE
         S = A(J,J) - S
C     ......EXIT
         IF (S .LE. 0.0E0) GO TO 40
         A(J,J) = SQRT(S)
   30 CONTINUE

Slides 9-10: BLAS Basics and Loop Unrolling

Level 1 BLAS - vector operations (late '70s): y ← y + αx, y ← αx, y ← x, xᵀy, ‖x‖₂, y ↔ x.

Example: SAXPY operates with vectors, y ← y + αx, i.e. y(1:n) = y(1:n) + alfa*x(1:n). In the scalar case the loop can be unrolled:

      do 10 i = 1, n, 4
         y(i)   = y(i)   + alfa*x(i)
         y(i+1) = y(i+1) + alfa*x(i+1)
         y(i+2) = y(i+2) + alfa*x(i+2)
         y(i+3) = y(i+3) + alfa*x(i+3)
   10 continue
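As a concrete illustration of the axpy operation and of depth-4 unrolling, here is a hedged C sketch (the function names are mine, not BLAS names; unlike the slide's loop it also handles an n that is not a multiple of 4):

```c
#include <stddef.h>

/* Plain axpy: y <- y + alpha*x, one element per iteration. */
void saxpy(size_t n, float alpha, const float *x, float *y) {
    for (size_t i = 0; i < n; i++)
        y[i] += alpha * x[i];
}

/* Axpy unrolled to depth 4: less loop-control overhead per element,
   and four independent multiply-adds the hardware can overlap. */
void saxpy_unrolled4(size_t n, float alpha, const float *x, float *y) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        y[i]     += alpha * x[i];
        y[i + 1] += alpha * x[i + 1];
        y[i + 2] += alpha * x[i + 2];
        y[i + 3] += alpha * x[i + 3];
    }
    for (; i < n; i++)   /* remainder when n is not a multiple of 4 */
        y[i] += alpha * x[i];
}
```

Both versions perform the same 2n flops against 3n memory references; unrolling only reduces the loop overhead around them.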
Why unroll?
- Reduce the loop overhead.
- Allow pipelined functional units to operate.
- Allow independent functional units to operate.
- Vector instructions.

Which machines might benefit? It was thought it would work well on all scalar machines, but... vector machines.

The SAXPY operation y ← y + αx costs:
- 2 vector loads and 1 vector store
- 2 vector operations
- Operations to data references: 2n operations against 3n data transfers.

That is too much memory traffic - O(n) memory references for O(n) floating point operations - so there is little chance to use the memory hierarchy.

Slides 11-12: Other Tricks for Performance - the Burn At Both Ends (BABE) Algorithm

[figure: a tridiagonal matrix, and the nonzero patterns produced when it is factored from the top only and from both ends at once]

The usual algorithm for factoring a tridiagonal matrix is sequential in nature, with not much scope for pipelining. In an attempt to increase the speed of execution, the BABE algorithm was adopted: the factorization is started at the top of the matrix as well as at the bottom of the matrix at the same time. This
- reduced loop overhead by a factor of 2,
- gave scope for independent operations, and
- resulted in 25% faster execution.

Slides 13-14: Level 1, 2, and 3 BLAS - Why Higher Level BLAS?

- Level 1 BLAS, vector-vector operations: y ← y + αx, also dot product, ...
- Level 2 BLAS, matrix-vector operations: y ← y + Ax, also triangular solve, rank-1 update
- Level 3 BLAS, matrix-matrix operations: A ← A + BC, also block triangular solve, rank-k update

IBM SP2 memory hierarchy, smallest and fastest at the top, largest and slowest at the bottom:

      Registers           256 B    2128 MB/s
      Data cache          256 KB   2128 MB/s
      Local memory        128 MB   2128 MB/s
      Remote memory       24 GB    32 MB/s
      Secondary memory    485 GB   10 MB/s

We can only do arithmetic on data at the top, so keep active data as close to the top of the hierarchy as possible. The higher level BLAS let us do this:

                                    mem refs   ops     ops/mem ref
      Level 1 BLAS   y ← y + αx     3n         2n      2/3
      Level 2 BLAS   y ← y + Ax     n²         2n²     2
      Level 3 BLAS   A ← A + BC     4n²        2n³     n/2

On parallel machines the higher level BLAS also increase granularity, which lowers the synchronization cost.

Slides 15-16: Matrix-Vector Product

DOT version - 25 Mflops in cache (Model 530, 50 Mflop/s peak):

      DO 20 I = 1, M
         DO 10 J = 1, N
            Y(I) = Y(I) + A(I,J)*X(J)
   10    CONTINUE
   20 CONTINUE

From cache: 22.7 Mflops. From memory: 12.4 Mflops.

Unrolled to depth 2 (3 loads, 4 ops in the inner loop):

      DO 20 I = 1, M, 2
         T1 = Y(I)
         T2 = Y(I+1)
         DO 10 J = 1, N
            T1 = T1 + A(I,J)*X(J)
            T2 = T2 + A(I+1,J)*X(J)
   10    CONTINUE
         Y(I)   = T1
         Y(I+1) = T2
   20 CONTINUE

Speed of y ← y + Aᵀx, N = 48, Model 530, 50 Mflop/s peak:

      Depth       1      2      3      4      ∞
      Speed      25     33.3   37.5   40     50
      Measured   22.7   30.5   34.3   36.5    -
      Memory     12.4   12.7   12.7   12.6    -

Slides 17-18: Matrix-Matrix Multiply - How to Get 50 Mflops

DOT version - 25 Mflops in cache:

      DO 30 J = 1, M
         DO 20 I = 1, M
            DO 10 K = 1, L
               C(I,J) = C(I,J) + A(I,K)*B(K,J)
   10       CONTINUE
   20    CONTINUE
   30 CONTINUE

Unrolled 2 by 2:

      DO 30 J = 1, M, 2
         DO 20 I = 1, M, 2
            T11 = C(I,J)
            T12 = C(I,J+1)
            T21 = C(I+1,J)
            T22 = C(I+1,J+1)
            DO 10 K = 1, L
               T11 = T11 + A(I,K)*B(K,J)
               T12 = T12 + A(I,K)*B(K,J+1)
               T21 = T21 + A(I+1,K)*B(K,J)
               T22 = T22 + A(I+1,K)*B(K,J+1)
   10       CONTINUE
            C(I,J)     = T11
            C(I,J+1)   = T12
            C(I+1,J)   = T21
            C(I+1,J+1) = T22
   20    CONTINUE
   30 CONTINUE

The inner loop does 4 loads and 8 operations - optimal.
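The same 2-by-2 register-blocked kernel can be sketched in C - an illustrative translation, not LINPACK code; it assumes column-major storage in flat arrays and, like the Fortran, even dimensions for the blocked loops:

```c
/* Naive matrix multiply: C <- C + A*B, with A (m x l), B (l x m),
   C (m x m), all stored column-major in flat arrays. */
void matmul_naive(int m, int l, const double *A, const double *B, double *C) {
    for (int j = 0; j < m; j++)
        for (int i = 0; i < m; i++)
            for (int k = 0; k < l; k++)
                C[i + j*m] += A[i + k*m] * B[k + j*l];
}

/* 2x2 register-blocked version: each inner-loop pass does 4 loads
   (A(i,k), A(i+1,k), B(k,j), B(k,j+1)) and 8 flops, keeping the four
   running sums in scalars the compiler can hold in registers.
   Assumes m is even, as in the Fortran version above. */
void matmul_2x2(int m, int l, const double *A, const double *B, double *C) {
    for (int j = 0; j + 1 < m; j += 2)
        for (int i = 0; i + 1 < m; i += 2) {
            double t11 = C[i   + j*m], t12 = C[i   + (j+1)*m];
            double t21 = C[i+1 + j*m], t22 = C[i+1 + (j+1)*m];
            for (int k = 0; k < l; k++) {
                double a0 = A[i + k*m], a1 = A[i+1 + k*m];
                double b0 = B[k + j*l], b1 = B[k + (j+1)*l];
                t11 += a0*b0;  t12 += a0*b1;
                t21 += a1*b0;  t22 += a1*b1;
            }
            C[i   + j*m] = t11;  C[i   + (j+1)*m] = t12;
            C[i+1 + j*m] = t21;  C[i+1 + (j+1)*m] = t22;
        }
}
```

The payoff is the operation-to-load ratio: 8 flops per 4 loads instead of 2 flops per 2 loads in the naive inner loop.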
In practice we have measured 48.1 Mflops (M = N = 16, L = 128).

Slides 19-20: BLAS Basics and References

[figure: Level 1, 2 and 3 BLAS performance on an RS/6000-550 - speed in Megaflops against the order of the vectors/matrices (0 to 600), with Level 3 fastest, Level 2 next, and Level 1 slowest]

BLAS software and documentation can be obtained via:
- WWW: http://www.netlib.org/blas
- anonymous ftp to netlib2.cs.utk.edu: cd blas; get index
- email to [email protected] with the message: send index from blas

Comments and questions can be addressed to [email protected].

C. Lawson, R. Hanson, D. Kincaid, and F. Krogh, Basic Linear Algebra Subprograms for Fortran Usage, ACM Transactions on Mathematical Software, 5:308-325, 1979.
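The slides show no code for the BABE trick, so here is a hedged C sketch of the two-ended idea applied to solving a tridiagonal system Tx = b (my own formulation, not the original algorithm's code; no pivoting, so it assumes elimination is stable, e.g. T diagonally dominant):

```c
/* Solve T*x = b for tridiagonal T by eliminating from both ends toward
   the middle ("burn at both ends").  lo[i] = T(i,i-1), d[i] = T(i,i),
   up[i] = T(i,i+1); d and b are overwritten.  No pivoting: assumes the
   eliminations are stable (e.g. T diagonally dominant). */
void babe_solve(int n, const double *lo, double *d, const double *up,
                double *b, double *x) {
    int mid = n / 2;
    /* Sweep down from the top: eliminate lo[i] using row i-1. */
    for (int i = 1; i <= mid; i++) {
        double m = lo[i] / d[i - 1];
        d[i] -= m * up[i - 1];
        b[i] -= m * b[i - 1];
    }
    /* Sweep up from the bottom: eliminate up[i] using row i+1.  This
       sweep is independent of the one above until they meet at row
       mid - the independence the BABE trick exploits. */
    for (int i = n - 2; i >= mid; i--) {
        double m = up[i] / d[i + 1];
        d[i] -= m * lo[i + 1];
        b[i] -= m * b[i + 1];
    }
    /* Row mid now holds only a diagonal entry; substitute outward. */
    x[mid] = b[mid] / d[mid];
    for (int i = mid - 1; i >= 0; i--)
        x[i] = (b[i] - up[i] * x[i + 1]) / d[i];
    for (int i = mid + 1; i < n; i++)
        x[i] = (b[i] - lo[i] * x[i - 1]) / d[i];
}
```

The two elimination loops touch disjoint rows until the middle, so they can proceed concurrently or be interleaved to fill independent functional units, which is where the reported speedup comes from.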