A Performance Improvement Approach for CPU-GPU Heterogeneous Computing

International Journal of Recent Technology and Engineering (IJRTE), ISSN: 2277-3878, Volume-8, Issue-1, May 2019

Raju K, Niranjan N Chiplunkar

Revised Manuscript Received on May 24, 2019.
Raju K, Department of Computer Science and Engineering, NMAM Institute of Technology, Nitte, Karkala, Karnataka, India.
Niranjan N Chiplunkar, Department of Computer Science and Engineering, NMAM Institute of Technology, Nitte, Karkala, Karnataka, India.

Abstract: The heterogeneous computing system involving a Central Processing Unit and a Graphics Processing Unit (CPU-GPU) is widely used for accelerating compute-intensive applications that exhibit data parallelism. In the CPU-GPU execution model, the CPU cores remain idle while the GPU performs the computation, wasting enormous computational power. The performance of an application on the GPU can be further improved by efficiently utilizing the computational power of the CPU cores along with that of the GPU. In this paper we propose an approach to simultaneously utilize the computational power of both CPU and GPU to perform a task. We execute different independent data-parallel portions of an application concurrently on the CPU and the GPU, using the CUDA framework on the GPU side and POSIX threads (Pthreads) on the CPU side. Through several experiments we demonstrate that by judiciously allocating different kernels to suitable processors and executing them concurrently, our approach can improve the performance of a CUDA-based application compared to the GPU-only execution of that application.

Index Terms: Heterogeneous computing; GPU; Multicore CPU; Concurrent execution; CUDA kernels.

I. INTRODUCTION

The CPU-GPU heterogeneous system provides a powerful architecture for accelerating computation-intensive data-parallel applications. The availability of thousands of cores makes the GPU a suitable architecture for executing applications that exhibit massive parallelism, and modern CPUs likewise provide considerable computational power through their multiple processor cores. GPUs were originally developed for rendering images; however, with the emergence of the Compute Unified Device Architecture (CUDA), a programming framework based on the C language, GPUs are now widely used for general-purpose computation. CUDA has reduced the programmer's effort in parallelizing applications on CPU-GPU heterogeneous systems.

In CUDA, the CPU along with its memory is referred to as the host, and the GPU along with its memory is referred to as the device. The code that runs on the device is known as the kernel. Threads in CUDA are organized into thread blocks and grids: a thread block is a group of threads, and a grid is a group of thread blocks. Each thread of the grid executes an instance of the kernel code. Each thread of a block has a thread id, and each thread block within a grid has a unique block id. Threads belonging to the same block share data among themselves using the shared memory associated with that block, and thread synchronization within a block is achieved through a barrier synchronization mechanism. A global memory is available to all threads for reading and writing purposes.

The control flow during the execution of a CUDA program is shown in Fig. 1. A CUDA program is a combination of host code and device code, executed by the CPU and the GPU respectively. The address spaces of the host memory and the device memory are different. Before the device code, or kernel, begins its execution, the required input data resides in the host memory. Since the host memory is not directly accessible to the device, the input data is transferred from host to device memory through PCI-Express. After the host-to-device data transfer, the kernel function is invoked. During the kernel execution, each thread in the grid executes an instance of the kernel code but processes a different portion of the input data. On completion of the kernel execution, the computed results are transferred from device to host memory.

Fig. 1: Execution control flow of a CUDA program.
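The following minimal CUDA C program illustrates this control flow and the thread organization described above. It is a sketch of our own, not code from the paper: the kernel scale2x, the array size, and the use of block-level shared memory are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

#define BLOCK 256

// Illustrative kernel (not from the paper): each thread derives a
// unique global index from its block id and thread id, stages its
// element in the block's shared memory, waits at a barrier, and
// then writes one element of the output.
__global__ void scale2x(const int *in, int *out, int n) {
    __shared__ int tile[BLOCK];                     // shared memory of this block
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) tile[threadIdx.x] = in[i];
    __syncthreads();                                // barrier for all threads of the block
    if (i < n) out[i] = 2 * tile[threadIdx.x];
}

int main(void) {
    const int n = 1 << 20;
    size_t size = n * sizeof(int);

    int *in = (int *)malloc(size), *out = (int *)malloc(size);
    for (int i = 0; i < n; i++) in[i] = i;          // input starts in host memory

    int *d_in, *d_out;                              // separate device address space
    cudaMalloc((void **)&d_in, size);
    cudaMalloc((void **)&d_out, size);

    cudaMemcpy(d_in, in, size, cudaMemcpyHostToDevice);    // host-to-device transfer

    scale2x<<<(n + BLOCK - 1) / BLOCK, BLOCK>>>(d_in, d_out, n); // kernel invocation

    cudaMemcpy(out, d_out, size, cudaMemcpyDeviceToHost);  // results back to the host

    printf("out[7] = %d\n", out[7]);
    cudaFree(d_in); cudaFree(d_out);
    free(in); free(out);
    return 0;
}

Compiled with nvcc, the device-to-host cudaMemcpy on the default stream also serves as the point where the host waits for the kernel to finish.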
In the execution flow described above, invoking a kernel is an asynchronous, or non-blocking, operation. That is, instead of waiting for the device to complete the kernel execution, control returns to the host immediately after the kernel is launched. Even though the host reacquires control, it typically only performs host-to-device data transfers and launches new kernels by using CUDA streams. The multiple cores of the CPU are used neither for kernel execution nor for executing any other independent computations of the current application. Hence the enormous computational power of the CPU cores is wasted.
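A minimal sketch of this non-blocking behaviour (our illustration, with assumed names busy_kernel and cpu_task, not the paper's code) shows that the host is free to compute between the kernel launch and the synchronization point:

#include <stdio.h>
#include <cuda_runtime.h>

// Stand-in device work (illustrative).
__global__ void busy_kernel(int *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] = i * i;
}

// Stand-in for an independent host-side computation.
static long cpu_task(int n) {
    long s = 0;
    for (int i = 0; i < n; i++) s += i;
    return s;
}

int main(void) {
    const int n = 1 << 20;
    int *d;
    cudaMalloc((void **)&d, n * sizeof(int));

    busy_kernel<<<(n + 255) / 256, 256>>>(d, n); // returns to the host immediately

    long s = cpu_task(n);     // runs on the CPU while the kernel may still be executing

    cudaDeviceSynchronize();  // block here until the device work has completed
    printf("cpu result: %ld\n", s);
    cudaFree(d);
    return 0;
}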
The computational power of the idle CPU cores can be effectively utilized by concurrently executing different independent computational tasks of an application on both CPU and GPU cores. That is, in an application with multiple independent tasks, while the GPU is running a kernel, other independent tasks can be assigned to the CPU cores. This method of execution improves the utilization of the computational resources in the CPU-GPU heterogeneous system, which in turn can improve performance.

In this paper, we present our approach for the concurrent execution of independent kernels of a CUDA application on the CPU and the GPU. For a set of test applications we evaluate the effectiveness of our approach by orchestrating the CPU-GPU concurrent execution. Through experiments we analyze the suitability of a processing device (CPU or GPU) for the execution of a given computational task and thereby determine an optimal assignment of tasks to processors. Finally, we compare the performance of CPU-GPU concurrent execution to the GPU-only execution of the application.

The rest of this paper is organized as follows. Section II provides the implementation details of concurrent execution of two independent kernels of a CUDA application. Section III presents the details of concurrent execution of multiple CUDA kernels. In Section IV, we analyze the results of our experiments. The related work is discussed in Section V, and we conclude our work in Section VI.

II. CONCURRENT EXECUTION OF TWO INDEPENDENT KERNELS

In this section we present the method to concurrently execute two independent kernels, one each on the CPU and the GPU. Fig. 2 depicts the GPU-only execution flow of two independent kernels of a CUDA application.

Fig. 2: GPU-only execution of two CUDA kernels.

It can be observed from Fig. 2 that the CPU cores remain idle when a kernel is executing on the GPU. As the kernel launch is a non-blocking operation, control returns to the host thread immediately after a kernel is invoked. The host thread could therefore execute kernel-2 on the CPU while kernel-1 is executing on the GPU, as shown in Fig. 3. However, the concurrent execution of the two kernels is possible only when kernel-2 does not depend on kernel-1, i.e. kernel-2 does not require the results produced by kernel-1 for its execution. For an application having two independent CUDA kernels, say kernel-1 and kernel-2, we perform two separate runs to determine which kernel is to be executed on which processor (CPU or GPU) so that the overall concurrent execution time is minimal. Accordingly, the combination for the first run is kernel-1 on the GPU and kernel-2 on the CPU, and vice versa for the second run.

Fig. 3: CPU-GPU concurrent execution of two CUDA kernels.

For the first run, we implement the concurrent execution of the two kernels using the following steps:
1) Declare the host and device copies of the input and output data. Initialize the host copy of the input data values.
2) Send the input data required for the execution of kernel-1 from the host memory to the device memory.
3) Launch kernel-1 on the GPU.
4) Soon after the kernel launch, the host thread spawns a child thread which in turn invokes the CPU function equivalent to kernel-2. Among the various APIs and frameworks available for threading on the host side, we opted to use Pthreads (POSIX threads) due to the low overhead associated with thread creation and management.
5) After the execution of kernel-1 on the GPU, the output data is transferred back to the CPU memory.
6) The CPU-GPU execution time for the two kernels is measured, including the input and output data transfer time between CPU and GPU.

Similarly, a second run is performed by interchanging the assignment of kernels to processors. Of these two runs, the one which yields the lower execution time is considered for computing the execution speedup. The pseudocode that translates the above six steps is shown below. In this pseudocode we have taken a 1D stencil operation as kernel-1 and a binary search operation as kernel-2.

Step 1:
// For the 1D stencil, allocate space for host and
// device copies of the input and output data (in, out,
// d_in, d_out) of size bytes, and set up values.
int *in, *out;
int *d_in, *d_out;
in = (int *)malloc(size);
out = (int *)malloc(size);
cudaMalloc((void **)&d_in, size);
cudaMalloc((void **)&d_out, size);
// For binary search, declare the input parameters
// in a structure inpar and initialize values.

Step 2:
cudaEventRecord(start, 0);
// Copy input data from CPU to GPU.

We have executed seven different CUDA C test applications, labeled A1 through A7, listed in Table 1. In each
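Putting the six steps above together, the following self-contained program sketches the first run (kernel-1 as a 1D stencil on the GPU, kernel-2 as a binary search on the CPU). It is our reconstruction, not the paper's full listing: the kernel body, the fields of the structure inpar, and the helper names stencil1d and cpu_binary_search are assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <cuda_runtime.h>

#define RADIUS 3
#define BLOCK 256

// Assumed kernel-1: a simple 1D stencil (sum over a 2*RADIUS+1 window).
__global__ void stencil1d(const int *in, int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= RADIUS && i < n - RADIUS) {
        int sum = 0;
        for (int k = -RADIUS; k <= RADIUS; k++) sum += in[i + k];
        out[i] = sum;
    }
}

// Assumed layout of the paper's parameter structure inpar.
struct inpar { const int *arr; int n; int key; int pos; };

// CPU function equivalent to kernel-2, run in a Pthread (step 4).
static void *cpu_binary_search(void *p) {
    struct inpar *q = (struct inpar *)p;
    int lo = 0, hi = q->n - 1;
    q->pos = -1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (q->arr[mid] == q->key) { q->pos = mid; return NULL; }
        if (q->arr[mid] < q->key) lo = mid + 1; else hi = mid - 1;
    }
    return NULL;
}

int main(void) {
    const int n = 1 << 20;
    size_t size = n * sizeof(int);

    // Step 1: host and device copies of the input and output data.
    int *in = (int *)malloc(size), *out = (int *)malloc(size);
    int *d_in, *d_out;
    cudaMalloc((void **)&d_in, size);
    cudaMalloc((void **)&d_out, size);
    for (int i = 0; i < n; i++) in[i] = i;   // sorted input suits the search
    struct inpar bs = { in, n, 123456, -1 };

    cudaEvent_t start, stop;                 // events for step 6's timing
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);

    // Step 2: transfer kernel-1's input from host to device memory.
    cudaMemcpy(d_in, in, size, cudaMemcpyHostToDevice);

    // Step 3: launch kernel-1 on the GPU (asynchronous).
    stencil1d<<<(n + BLOCK - 1) / BLOCK, BLOCK>>>(d_in, d_out, n);

    // Step 4: spawn a child thread that runs the CPU equivalent of kernel-2.
    pthread_t tid;
    pthread_create(&tid, NULL, cpu_binary_search, &bs);

    // Step 5: copy kernel-1's output back (waits for the kernel to finish).
    cudaMemcpy(out, d_out, size, cudaMemcpyDeviceToHost);
    pthread_join(tid, NULL);

    // Step 6: elapsed time includes both transfers and both kernels.
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("first run: %.3f ms, search position = %d\n", ms, bs.pos);

    cudaFree(d_in); cudaFree(d_out);
    free(in); free(out);
    return 0;
}

The second run would swap the assignment (stencil on the CPU thread, search on the GPU); the lower of the two measured times is the one compared against GPU-only execution.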