The NAS Parallel Benchmarks D
Recommended publications
-
Computer Architectures An Overview
Contents: Microarchitecture; x86; PowerPC; IBM POWER; MIPS architecture; SPARC; ARM architecture; DEC Alpha; AlphaStation; AlphaServer; Very long instruction word; Instruction-level parallelism; Explicitly parallel instruction computing.
Microarchitecture. In computer engineering, microarchitecture (sometimes abbreviated to µarch or uarch), also called computer organization, is the way a given instruction set architecture (ISA) is implemented on a processor. A given ISA may be implemented with different microarchitectures [1]. Implementations might vary due to different goals of a given design or due to shifts in technology [2]. Computer architecture is the combination of microarchitecture and instruction set design. Relation to instruction set architecture: the ISA is roughly the same as the programming model of a processor as seen by an assembly language programmer or compiler writer. The ISA includes the execution model, processor registers, and address and data formats, among other things. (Figure: the Intel Core microarchitecture.) The microarchitecture includes the constituent parts of the processor and how these interconnect and interoperate to implement the ISA. The microarchitecture of a machine is usually represented as (more or less detailed) diagrams that describe the interconnections of the various microarchitectural elements of the machine, which may be everything from single gates and registers to complete arithmetic logic units (ALUs) and even larger elements.
-
Efficient Multicore Implementation of NAS Benchmarks with FastFlow
University of Pisa and Scuola Superiore Sant'Anna, Master Degree in Computer Science and Networking. Master Thesis. Candidate: Dean De Leo. Supervisor: Prof. Marco Danelutto. Academic Year 2011/2012. To my family … and to all of the people who have supported me.
Index: 1 Introduction; 2 Context (2.1 NAS Parallel Benchmarks, 2.2 FastFlow, 2.3 C++ vs Fortran 77); 3 Methodology (3.1 Minimising the memory faults, 3.2 Optimisation, 3.3 Parallel algorithm); 4 NPB Kernels (4.1 Kernel EP, 4.2 Kernel CG, 4.3 Kernel MG); 5 Experiments (5.1 Kernel EP, 5.2 Kernel CG, 5.3 Kernel MG); 6 Conclusions; A Source code.
Chapter 1 Introduction. The thesis describes an efficient implementation of a subset of the NAS Parallel Benchmarks (NPB) for the multicore architecture with the FastFlow framework. The NPB is a specification of numeric benchmarks to compare different environments and implementations. FastFlow is a framework, targeted to shared memory systems, supporting the development of parallel algorithms based on structured parallel programming. Starting from the NPB specification, the thesis selects a subset of the NPB algorithms and derives an efficient implementation of both the sequential and parallel algorithms, through FastFlow.
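To make the structured-parallel style concrete, here is a minimal sketch assuming FastFlow's ParallelFor interface (ff/parallel_for.hpp); the one-dimensional relaxation sweep is only an illustration in the spirit of the MG kernel, not code from the thesis.

```cpp
// Illustrative map pattern with FastFlow's ParallelFor (assumed header
// ff/parallel_for.hpp): a 1-D relaxation sweep whose iterations are
// independent, so they can be split among worker threads.
#include <ff/parallel_for.hpp>
#include <vector>
#include <cstdio>

int main() {
    const long N = 1 << 20;
    std::vector<double> x(N, 1.0), y(N, 0.0);

    ff::ParallelFor pf(4);                        // 4 worker threads
    pf.parallel_for(1, N - 1, [&](const long i) {
        y[i] = 0.5 * (x[i - 1] + x[i + 1]);       // independent iterations
    });

    std::printf("y[1] = %f\n", y[1]);
    return 0;
}
```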
-
Performance Assessment of Parallel Techniques
Data Mining VIII: Data, Text and Web Mining and their Business Applications. T. Grytsenko & A. Peratta, Wessex Institute of Technology, UK. Abstract: The goal of this work is to evaluate and compare the computational performance of the most common parallel libraries, such as Message Passing Interface (MPI), High Performance Fortran (HPF), OpenMP and DVM, for further implementations. Evaluation is based on the NAS Parallel Benchmark suite (NPB), which includes the simulated applications BT, SP and LU and the kernel benchmarks FT, CG and MG. A brief introduction of the four parallel techniques under study (MPI, HPF, OpenMP and DVM) and their models is provided, together with the benchmarks used and the test results. Finally, corresponding recommendations are given for the different approaches depending on the number of processors. Keywords: MPI, HPF, DVM, OpenMP, parallel programming, performance, parallel calculations. 1 Introduction. This section provides a brief introduction of the four parallel techniques under study: MPI, HPF, OpenMP and DVM, as well as their models. 1.1 The Message-Passing Interface programming model (MPI). Programming models are generally categorised according to the way in which memory is used. In the shared memory model each process accesses a shared address space, while in the message passing model an application runs as a collection of autonomous processes, each with its own local memory. In the message passing model processes communicate with other processes by sending and receiving messages (see Figure 1). When data is passed in a message, the sending and receiving processes must work to transfer the data from the local memory of one to the local memory of the other.
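As a concrete illustration of the message-passing model just described, here is a minimal two-process exchange using the standard MPI C API; it is a generic sketch, not code from the paper.

```cpp
// Minimal two-process message exchange in the message-passing model:
// rank 0 copies data from its local memory into a message, rank 1
// receives it into its own local memory. Build with mpicxx and run
// with: mpirun -np 2 ./a.out
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double payload = 3.14;
    if (rank == 0) {
        MPI_Send(&payload, 1, MPI_DOUBLE, 1, /*tag=*/0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double received = 0.0;
        MPI_Recv(&received, 1, MPI_DOUBLE, 0, /*tag=*/0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        std::printf("rank 1 received %f\n", received);
    }

    MPI_Finalize();
    return 0;
}
```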
-
Performance Evaluation of VMware and VirtualBox
2012 IACSIT Hong Kong Conferences, IPCSIT vol. 29 (2012) © (2012) IACSIT Press, Singapore. Deepak K Damodaran, Biju R Mohan, Vasudevan M S and Dinesh Naik, Information Technology, National Institute of Technology Karnataka, India. Abstract: Virtualization is a framework for dividing the resources of a computer into multiple execution environments. More specifically, it is a layer of software that provides the illusion of a real machine to multiple instances of virtual machines. Virtualization offers many benefits, including flexibility, security, ease of configuration and management, and reduction of cost, but at the same time it also brings a certain degree of performance overhead. Furthermore, the Virtual Machine Monitor (VMM) is the core component of a virtual machine (VM) system, and its effectiveness greatly impacts the performance of the whole system. In this paper, we measure and analyze the performance of two virtual machine monitors, VMware and VirtualBox, using LMbench and IOzone, and provide a quantitative and qualitative comparison of both virtual machine monitors. Keywords: Virtualization, Virtual machine monitors, Performance. 1. Introduction. Years ago, a problem arose: how to run multiple operating systems on the same machine at the same time. The solution to this problem was virtual machines. The virtual machine monitor (VMM), the core part of a virtual machine, sits between one or more operating systems and the hardware, and gives each running operating system the illusion that it controls the machine. Behind the scenes, however, the VMM is actually in control of the hardware, and must multiplex the running operating systems across the physical resources of the machine.
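Benchmarks like LMbench estimate such overheads by timing OS primitives. The following toy probe, in that spirit, times a bare syscall round trip; it is Linux-specific and illustrative only, not part of the paper's methodology.

```cpp
// Toy latency probe in the spirit of LMbench's null-syscall benchmark:
// time a minimal kernel round trip. Running it natively and inside a
// guest gives a rough feel for virtualization overhead. Linux-specific.
#include <chrono>
#include <cstdio>
#include <sys/syscall.h>
#include <unistd.h>

int main() {
    const long iters = 1000000;
    auto t0 = std::chrono::steady_clock::now();
    for (long i = 0; i < iters; ++i)
        (void)syscall(SYS_getpid);       // forces an actual syscall each time
    auto t1 = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
    std::printf("average syscall latency: %.1f ns\n", ns / iters);
    return 0;
}
```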
-
The OpenMP Implementation of NAS Parallel Benchmarks and Its Performance
H. Jin, M. Frumkin and J. Yan, NASA Ames Research Center, Moffett Field, CA 94035-1000. Abstract: As the new ccNUMA architecture became popular in recent years, parallel programming with compiler directives on these machines has evolved to accommodate new needs. In this study, we examine the effectiveness of OpenMP directives for parallelizing the NAS Parallel Benchmarks. Implementation details will be discussed and performance will be compared with the MPI implementation. We have demonstrated that OpenMP can achieve very good results for parallelization on a shared memory system, but effective use of memory and cache is very important. 1. Introduction. Over the past decade, high performance computers based on commodity microprocessors have been introduced in rapid succession from many vendors. These machines can be classified into two major categories: distributed memory (DMP) and shared memory (SMP) multiprocessing systems. While both categories consist of physically distributed memories, the programming model on an SMP presents a globally shared address space. SMPs further include hardware-supported cache coherence protocols across the processors. The ccNUMA (cache coherent Non-Uniform Memory Access) architecture, such as the SGI Origin 2000, has the potential to scale up to thousands of processors. While all these architectures support some form of message passing (e.g. MPI, the de facto standard today), SMPs further support program annotation standards, or directives, that allow the user to supply information to the compiler to assist in the parallelization of the code. Currently, there are two accepted standards for annotating programs for parallel execution: HPF and OpenMP.
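To make the directive-based approach concrete, here is a minimal OpenMP parallel loop with a reduction, the idiom used throughout directive-parallelized codes; it is an illustrative C++ fragment, not NPB source.

```cpp
// Directive-based parallelization: the pragma asks the compiler to
// distribute loop iterations across threads, and 'reduction' gives
// each thread a private partial sum that is combined at loop exit.
#include <cstdio>
#include <omp.h>

int main() {
    const int n = 1 << 20;
    double sum = 0.0;

    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i)
        sum += 1.0 / (i + 1.0);          // partial harmonic sum per thread

    std::printf("sum = %f (threads available: %d)\n",
                sum, omp_get_max_threads());
    return 0;
}
```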
-
A Media Player Based on ARM by Porting of Linux
[Agrawal, 3(12): December, 2014] ISSN: 2277-9655, IJESRT, International Journal of Engineering Sciences & Research Technology. Archan Agrawal, Mrs. Monika S. Joshi, Department of Electronics & Comm. Engg., Marathwada Institute of Technology, Aurangabad, India. Abstract: This paper describes the porting of embedded Linux to the ARM9 platform for designing and implementing an embedded media player on the S3Cmini2440 development board. A novel transplanting method for the Linux kernel is presented; the Linux kernel, together with the process of cutting, compiling, and porting it to the ARM platform, is introduced. An optimized Linux operating system is installed on the processor, and the SDL_FFMPEG library is transplanted to the S3Cmini2440 after cross compilation. All parts of the system come together to play audio, video and picture files smoothly and effectively. Keywords: Embedded Linux, ARM9, Porting, S3Cmini2440, Compilation, Embedded Media Player. Introduction: With the wide application of embedded systems in consumer electronics, industrial control, aerospace, automotive electronics, health care, network communications and other fields, embedded systems have become familiar to people from all walks of life; they have entered people's lives and are changing production and lifestyles in a variety of forms. Embedded systems include executable programs and scripts. An operating system provides applications with a platform where they can run, managing their access to the CPU and system memory.
-
Measuring Cache and TLB Performance and Their Effect on Benchmark Run Times
Rafael H. Saavedra and Alan Jay Smith. Abstract: In previous research, we have developed and presented a model for measuring machines and analyzing programs, and for accurately predicting the running time of any analyzed program on any measured machine. That work is extended here by: (a) developing a high level program to measure the design and performance of the cache and TLB for any machine; (b) using those measurements, along with published miss ratio data, to improve the accuracy of our run time predictions; (c) using our analysis tools and measurements to study and compare the design of several machines, with particular reference to their cache and TLB performance. As part of this work, we describe the design and performance of the cache and TLB for ten machines. The work presented in this paper extends a powerful technique for the evaluation and analysis of both computer systems and their workloads; this methodology is valuable both to computer users and computer system designers. 1. Introduction. The performance of a computer system is a function of the speed of the individual functional units, such as the integer, branch, and floating-point units, caches, bus, memory system, and I/O units, and of the workload presented to the system. In our previous research [Saav89, 90, 92b, 92c], described below, we have measured the performance of the parts of the CPU on corresponding portions of various workloads, but this work has not explicitly considered the behavior and performance of the cache memory.
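A high-level cache probe of the kind the paper describes can be sketched in a few lines: walk a large buffer at growing strides and watch the per-access time jump at cache-line and capacity boundaries. This is an illustrative reconstruction of the general technique, not the authors' program.

```cpp
// Stride-walk probe: touching a buffer at increasing strides exposes
// jumps in access time at cache-line and cache/TLB boundaries.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t bytes = 64 * 1024 * 1024;
    std::vector<char> buf(bytes, 1);
    volatile char sink = 0;                       // defeats dead-code removal

    for (size_t stride = 16; stride <= 4096; stride *= 2) {
        auto t0 = std::chrono::steady_clock::now();
        for (size_t i = 0; i < bytes; i += stride)
            sink = buf[i];                        // one memory touch per stride
        auto t1 = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        std::printf("stride %5zu: %.2f ns/access\n",
                    stride, ns / (bytes / stride));
    }
    return 0;
}
```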
-
What Every Programmer Should Know About Memory
Ulrich Drepper, Red Hat, Inc. [email protected] November 21, 2007. Abstract: As CPU cores become both faster and more numerous, the limiting factor for most programs is now, and will be for some time, memory access. Hardware designers have come up with ever more sophisticated memory handling and acceleration techniques, such as CPU caches, but these cannot work optimally without some help from the programmer. Unfortunately, neither the structure nor the cost of using the memory subsystem of a computer or the caches on CPUs is well understood by most programmers. This paper explains the structure of memory subsystems in use on modern commodity hardware, illustrating why CPU caches were developed, how they work, and what programs should do to achieve optimal performance by utilizing them. 1 Introduction. In the early days computers were much simpler. The various components of a system, such as the CPU, memory, mass storage, and network interfaces, were developed together and, as a result, were quite balanced in their performance. For example, the memory and network interfaces were not (much) faster than the CPU at providing data. This situation changed once the basic structure of computers stabilized and hardware developers concentrated on optimizing individual subsystems. Suddenly the performance of some components of the computer fell significantly behind. Today these changes mainly come in the following forms: • RAM hardware design (speed and parallelism). • Memory controller designs. • CPU caches. • Direct memory access (DMA) for devices. For the most part, this document will deal with CPU caches and some effects of memory controller design. In the process of exploring these topics, we will explore DMA and bring it into the larger picture.
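The cache effect the paper explains can be demonstrated in a dozen lines: traverse the same matrix in row-major and column-major order. Row order walks memory sequentially and reuses each cache line; column order touches a new line on almost every step. A minimal illustrative sketch, not from the paper:

```cpp
// Row-major vs column-major traversal of one matrix: same work, very
// different cache behavior, hence very different run times.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t n = 4096;
    std::vector<double> m(n * n, 1.0);
    double sum = 0.0;

    auto time_it = [&](bool row_major) {
        auto t0 = std::chrono::steady_clock::now();
        for (size_t i = 0; i < n; ++i)
            for (size_t j = 0; j < n; ++j)
                sum += row_major ? m[i * n + j] : m[j * n + i];
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    };

    std::printf("row-major:    %.1f ms\n", time_it(true));
    std::printf("column-major: %.1f ms\n", time_it(false));
    std::printf("(checksum %f)\n", sum);  // keeps the loops from being elided
    return 0;
}
```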
-
Efficient NAS Benchmark Kernels with C++ Parallel Programming
Dalvan Griebler, Junior Loff, Luiz G. Fernandes, Pontifical Catholic University of Rio Grande do Sul (PUCRS), Parallel Application Modeling Group (GMAP), Porto Alegre – Brazil. Email: {dalvan.griebler, junior.loff}@acad.pucrs.br, [email protected]. Gabriele Mencagli, Marco Danelutto, University of Pisa (UNIPI), Computer Science Department, Pisa – Italy. Email: {mencagli, marcod}@di.unipi.it. Abstract—Benchmarking is a way to study the performance of new architectures and parallel programming frameworks. Well-established benchmark suites such as the NAS Parallel Benchmarks (NPB) comprise legacy codes that still lack portability to the C++ language. As a consequence, a set of high-level and easy-to-use C++ parallel programming frameworks cannot be tested in NPB. Our goal is to describe a C++ porting of the NPB kernels and to analyze the performance achieved by different parallel implementations written using the Intel TBB, OpenMP and FastFlow frameworks for multi-cores. The experiments show an efficient code porting from Fortran to C++ and an efficient parallelization on average. Index Terms—Parallel Programming, NAS Benchmark, Performance. Our goal is to describe, in a comprehensive way, the porting of the five kernels from the NAS benchmark suite to C++ code as well as the kernels' parallelization with Intel TBB [8], OpenMP (translation only) [9], and FastFlow [10]. After, we aim at evaluating and comparing the performance of our implementations with the C++ parallel programming frameworks. Besides the implementation issues, this work has a valuable impact regarding research for the following reasons that also represent our major contributions: • According to Sec. II, NPB has been ported in different languages.
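For comparison with the FastFlow sketch earlier, here is the same kind of data-parallel loop expressed with the standard Intel TBB parallel_for API; it is an illustrative fragment, not the authors' ported kernels.

```cpp
// Intel TBB map pattern: the iteration space is split into blocked
// ranges, and each chunk is processed by one worker thread.
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cstdio>

int main() {
    const size_t n = 1 << 20;
    std::vector<double> x(n, 1.0), y(n, 2.0), z(n);

    tbb::parallel_for(tbb::blocked_range<size_t>(0, n),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                z[i] = 3.0 * x[i] + y[i];   // chunk assigned to one worker
        });

    std::printf("z[0] = %f\n", z[0]);
    return 0;
}
```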
-
The NAS Parallel Benchmarks
D. Bailey, E. Barszcz, J. Barton, D. Browning, R. Carter, L. Dagum, R. Fatoohi, S. Fineberg, P. Frederickson, T. Lasinski, R. Schreiber, H. Simon, V. Venkatakrishnan and S. Weeratunga. Abstract: A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of five parallel kernels and three simulated application benchmarks. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their "pencil and paper" specification: all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided. This paper is an updated version of a RNR technical report originally issued January 1991. Bailey, Barszcz, Barton, Carter, and Lasinski are with NASA Ames Research Center. Dagum, Fatoohi, Fineberg, Simon and Weeratunga are with Computer Sciences Corp. (work supported through NASA Contract NAS 2-12961). Schreiber is with RIACS (work funded via NASA cooperative agreement NCC 2-387). Browning is now in the Netherlands (work originally done with CSC). Frederickson is now with Cray Research, Inc. in Los Alamos, NM (work originally done with RIACS). Venkatakrishnan is now with ICASE in Hampton, VA (work originally done with CSC). Address for all authors except Browning, Frederickson and Venkatakrishnan: NASA Ames Research Center, Mail Stop T27A-1, Moffett Field, CA 94035-1000. 1 GENERAL REMARKS. D. Bailey, D. Browning, R. Carter, S. Fineberg, and H. Simon. 1.1 Introduction. The Numerical Aerodynamic Simulation (NAS) Program, which is based at NASA Ames Research Center, is a large scale effort to advance the state of computational aerodynamics.
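Of the five kernels, EP ("embarrassingly parallel") is the simplest to sketch: tabulate Gaussian deviates produced by the Marsaglia polar method from uniform pairs. The code below only illustrates the computational pattern; the real benchmark pins down a specific linear congruential generator, problem sizes, and verification sums.

```cpp
// EP-flavored computation: draw uniform pairs in (-1,1)^2, keep those
// inside the unit disk, and convert each accepted pair into two
// independent Gaussian deviates via the Marsaglia polar method.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937_64 rng(42);                       // stand-in generator, not NPB's LCG
    std::uniform_real_distribution<double> uni(-1.0, 1.0);

    const long pairs = 1'000'000;
    long accepted = 0;
    double sx = 0.0, sy = 0.0;                     // sums of the deviates

    for (long k = 0; k < pairs; ++k) {
        double x = uni(rng), y = uni(rng);
        double t = x * x + y * y;
        if (t > 1.0 || t == 0.0) continue;         // reject points outside the disk
        double f = std::sqrt(-2.0 * std::log(t) / t);
        sx += f * x;
        sy += f * y;
        ++accepted;
    }
    std::printf("accepted %ld pairs, sums (%f, %f)\n", accepted, sx, sy);
    return 0;
}
```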
-
The NAS Parallel Benchmarks
TITLE: The NAS Parallel Benchmarks. AUTHOR: David H. Bailey. ACRONYMS: NAS, NPB. DEFINITION: The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym "NAS" originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains "NAS." The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. DISCUSSION: The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing.
-
Stream Processing Systems Benchmark: StreamBench
Aalto University, School of Science, Degree Programme in Computer Science and Engineering. Yangjun Wang. Stream Processing Systems Benchmark: StreamBench. Master's Thesis, Espoo, May 26, 2016. Pages: 59. Major: Foundations of Advanced Computing. Code: SCI3014. Supervisor: Assoc. Prof. Aristides Gionis. Advisor: D.Sc. Gianmarco De Francisci Morales. Abstract: Batch processing technologies (such as MapReduce, Hive, Pig) have matured and been widely used in industry. These systems successfully solve the problem of processing big volumes of data. However, the data must first be collected and stored in a database or file system, which is very time-consuming, and batch analysis jobs then take time to finish before any results are available, while there are many cases that need analysed results from an unbounded sequence of data in seconds or sub-seconds. To satisfy the increasing demand for processing such streaming data, several stream processing systems have been implemented and widely adopted, such as Apache Storm, Apache Spark, IBM InfoSphere Streams, and Apache Flink. They all support online stream processing, high scalability, and task monitoring. How to evaluate stream processing systems before choosing one for production development, however, is an open question. In this thesis, we introduce StreamBench, a benchmark framework to facilitate performance comparisons of stream processing systems. A common API component and a core set of workloads are defined.
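The systems StreamBench targets (Storm, Spark, Flink) are JVM-based and distribute computation across nodes; the standalone sketch below only illustrates the kind of windowed-aggregation workload such a benchmark exercises, over hypothetical in-memory data.

```cpp
// Toy tumbling-window word count: the classic workload shape for a
// stream-processing benchmark. A real system would shard the stream
// across nodes and manage windows and state fault-tolerantly.
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
    std::vector<std::string> stream = {
        "spark", "storm", "flink", "storm", "storm", "flink",
        "spark", "spark", "flink", "storm", "flink", "spark"};

    const size_t window = 4;                     // tumbling window size
    std::unordered_map<std::string, int> counts;

    for (size_t i = 0; i < stream.size(); ++i) {
        ++counts[stream[i]];                     // update running counts
        if ((i + 1) % window == 0) {             // window boundary: emit & reset
            std::printf("window ending at element %zu:\n", i);
            for (const auto& [word, n] : counts)
                std::printf("  %s -> %d\n", word.c_str(), n);
            counts.clear();
        }
    }
    return 0;
}
```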