Heterogeneous Computing with OpenCL 2.0, Third Edition

Total Pages: 16

File Type: PDF, Size: 1020 KB

Heterogeneous Computing with OpenCL 2.0, Third Edition

David Kaeli, Perhaad Mistry, Dana Schaa, Dong Ping Zhang

AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Morgan Kaufmann is an imprint of Elsevier, 225 Wyman Street, Waltham, MA 02451, USA

Acquiring Editor: Todd Green
Editorial Project Manager: Charlie Kent
Project Manager: Priya Kumaraguruparan
Cover Designer: Matthew Limbert

Copyright © 2015, 2013, 2012 Advanced Micro Devices, Inc. Published by Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies, and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices: Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

ISBN: 978-0-12-801414-1

British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library.
Library of Congress Cataloging-in-Publication Data: A catalog record for this book is available from the Library of Congress.
For information on all MK publications visit our website at www.mkp.com.

Contents

List of Figures
List of Tables
Foreword
Acknowledgments

CHAPTER 1 Introduction
1.1 Introduction to Heterogeneous Computing
1.2 The Goals of This Book
1.3 Thinking Parallel
1.4 Concurrency and Parallel Programming Models
1.5 Threads and Shared Memory
1.6 Message-Passing Communication
1.7 Different Grains of Parallelism
  1.7.1 Data Sharing and Synchronization
  1.7.2 Shared Virtual Memory
1.8 Heterogeneous Computing with OpenCL
1.9 Book Structure
References

CHAPTER 2 Device Architectures
2.1 Introduction
2.2 Hardware Trade-offs
  2.2.1 Performance Increase with Frequency, and Its Limitations
  2.2.2 Superscalar Execution
  2.2.3 Very Long Instruction Word
  2.2.4 SIMD and Vector Processing
  2.2.5 Hardware Multithreading
  2.2.6 Multicore Architectures
  2.2.7 Integration: Systems-on-Chip and the APU
  2.2.8 Cache Hierarchies and Memory Systems
2.3 The Architectural Design Space
  2.3.1 CPU Designs
  2.3.2 GPU Architectures
  2.3.3 APU and APU-like Designs
2.4 Summary
References

CHAPTER 3 Introduction to OpenCL
3.1 Introduction
  3.1.1 The OpenCL Standard
  3.1.2 The OpenCL Specification
3.2 The OpenCL Platform Model
  3.2.1 Platforms and Devices
3.3 The OpenCL Execution Model
  3.3.1 Contexts
  3.3.2 Command-Queues
  3.3.3 Events
  3.3.4 Device-Side Enqueuing
3.4 Kernels and the OpenCL Programming Model
  3.4.1 Compilation and Argument Handling
  3.4.2 Starting Kernel Execution on a Device
3.5 OpenCL Memory Model
  3.5.1 Memory Objects
  3.5.2 Data Transfer Commands
  3.5.3 Memory Regions
  3.5.4 Generic Address Space
3.6 The OpenCL Runtime with an Example
  3.6.1 Complete Vector Addition Listing
3.7 Vector Addition Using an OpenCL C++ Wrapper
3.8 OpenCL for CUDA Programmers
3.9 Summary
Reference

CHAPTER 4 Examples
4.1 OpenCL Examples
4.2 Histogram
4.3 Image Rotation
4.4 Image Convolution
4.5 Producer-Consumer
4.6 Utility Functions
  4.6.1 Reporting Compilation Errors
  4.6.2 Creating a Program String
4.7 Summary

CHAPTER 5 OpenCL Runtime and Concurrency Model
5.1 Commands and the Queuing Model
  5.1.1 Blocking Memory Operations
  5.1.2 Events
  5.1.3 Command Barriers and Markers
  5.1.4 Event Callbacks
  5.1.5 Profiling Using Events
  5.1.6 User Events
  5.1.7 Out-of-Order Command-Queues
5.2 Multiple Command-Queues
5.3 The Kernel Execution Domain: Work-Items, Work-Groups, and NDRanges
  5.3.1 Synchronization
  5.3.2 Work-Group Barriers
  5.3.3 Built-In Work-Group Functions
  5.3.4 Predicate Evaluation Functions
  5.3.5 Broadcast Functions
  5.3.6 Parallel Primitive Functions
5.4 Native and Built-In Kernels
  5.4.1 Native Kernels
  5.4.2 Built-In Kernels
5.5 Device-Side Queuing
  5.5.1 Creating a Device-Side Queue
  5.5.2 Enqueuing Device-Side Kernels
5.6 Summary
Reference

CHAPTER 6 OpenCL Host-Side Memory Model
6.1 Memory Objects
  6.1.1 Buffers
  6.1.2 Images
  6.1.3 Pipes
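The preview's table of contents breaks off here, mid-chapter. As a point of reference for Sections 3.6 and 3.7, which build the book's running vector addition example, here is a minimal sketch of what an OpenCL C vector addition kernel looks like. This is a generic illustration, not the book's actual listing; the kernel and argument names are placeholders:

    // OpenCL C: each work-item adds one element of the input vectors.
    // Launched with a 1-D NDRange whose global size equals the vector length.
    __kernel void vecadd(__global const float *a,
                         __global const float *b,
                         __global float *c)
    {
        int gid = get_global_id(0);   // this work-item's index in the NDRange
        c[gid] = a[gid] + b[gid];
    }

On the host side, the program is compiled with clBuildProgram, arguments are bound with clSetKernelArg, and the kernel is launched with clEnqueueNDRangeKernel, the flow the Chapter 3 entries above describe.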
Recommended publications
  • Other APIs: What's Wrong with OpenMP?
    Threaded Programming: Other APIs

    What's wrong with OpenMP?
    • OpenMP is designed for programs where you want a fixed number of threads, and you always want the threads to be consuming CPU cycles: you cannot arbitrarily start/stop threads, and you cannot put threads to sleep and wake them up later.
    • OpenMP is good for programs where each thread is doing (more or less) the same thing.
    • Although OpenMP supports C++, it is not especially OO-friendly, though it is gradually getting better.
    • OpenMP doesn't support other popular base languages, e.g. Java, Python.

    [Figure, "What's wrong with OpenMP? (cont.)": diagrams contrasting thread structures OpenMP can express with one it cannot.]

    Threaded programming APIs
    • Essential features: a way to create threads; a way to wait for a thread to finish its work; a mechanism to support thread-private data; some basic synchronisation methods, at least a mutex lock or atomic operations.
    • Optional features: support for tasks; more synchronisation methods, e.g. condition variables, barriers; higher levels of abstraction, e.g. parallel loops, reductions.

    What are the alternatives?
    • POSIX threads, C++ threads, Intel TBB, Cilk, OpenCL, Java (not an exhaustive list!)

    POSIX threads
    • POSIX threads (or Pthreads) is a standard library for shared-memory programming without directives, part of the ANSI/IEEE 1003.1 standard (1996).
    • The interface is a C library: there is no standard Fortran interface, and it can be used with C++ but is not OO-friendly.
    • Widely available, even for Windows; typically installed as part of the OS; code is pretty portable.
    • Lots of low-level control over the behaviour of threads, but it lacks a proper memory consistency model.

    Thread forking

        #include <pthread.h>

        int pthread_create(pthread_t *thread,
                           const pthread_attr_t *attr,
                           void *(*start_routine)(void *),
                           void *arg);

    • Creates a new thread; the first argument returns a pointer to a thread descriptor.
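    To make the forking call concrete, here is a minimal, self-contained sketch of the create/join pattern the slides describe. It is my own illustration rather than part of the original lecture; the worker function and thread count are arbitrary:

        #include <pthread.h>
        #include <stdio.h>

        #define NUM_THREADS 4

        /* Each thread receives its id through the void* argument. */
        static void *worker(void *arg)
        {
            long id = (long)arg;
            printf("hello from thread %ld\n", id);
            return NULL;   /* a return value could be collected by pthread_join */
        }

        int main(void)
        {
            pthread_t threads[NUM_THREADS];

            /* Fork: create the threads, passing each its id. */
            for (long i = 0; i < NUM_THREADS; i++)
                pthread_create(&threads[i], NULL, worker, (void *)i);

            /* Join: wait for every thread to finish its work. */
            for (int i = 0; i < NUM_THREADS; i++)
                pthread_join(threads[i], NULL);

            return 0;
        }

    Compile and link with the pthreads library, e.g. gcc -pthread fork.c.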
  • GPTPU: Accelerating Applications Using Edge Tensor Processing Units. Kuan-Chieh Hsu and Hung-Wei Tseng, University of California, Riverside ({khsu037, htseng}@ucr.edu)
    GPTPU: Accelerating Applications using Edge Tensor Processing Units
    Kuan-Chieh Hsu and Hung-Wei Tseng, University of California, Riverside ({khsu037, htseng}@ucr.edu)

    This paper is a pre-print of a paper in the 2021 SC, the International Conference for High Performance Computing, Networking, Storage and Analysis. Please refer to the conference proceedings for the most complete version.

    ABSTRACT: Neural network (NN) accelerators have been integrated into a wide spectrum of computer systems to accommodate the rapidly growing demands for artificial intelligence (AI) and machine learning (ML) applications. NN accelerators share the idea of providing native hardware support for operations on multidimensional tensor data. Therefore, NN accelerators are theoretically tensor processors that can improve system performance for any problem that uses tensors as inputs/outputs. Unfortunately, commercially available NN accelerators only expose computation capabilities through AI/ML-

    Two decades ago, graphics processing units (GPUs) were just domain-specific accelerators used for shading and rendering. But intensive research into high-performance algorithms, architectures, systems, and compilers [3-12] and the availability of frameworks like CUDA [13] and OpenCL [14], have revolutionized GPUs and transformed them into high-performance, general-purpose vector processors. We expect a similar revolution to take place with NN accelerators—a revolution that will create general-purpose matrix processors for a broader spectrum of applications. However, democratizing these NN accelerators for non-AI/ML workloads will require the system framework and the programmer to tackle the following issues: (1) The microarchitectures and instructions of NN accelerators are optimized for NN workloads, instead of general matrix/tensor algebra.
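    The abstract's central claim, that a matrix multiply engine can in principle serve any computation whose inputs and outputs are tensors, is easy to illustrate. The sketch below is my own illustration, not code from the paper: it recasts a sliding-window filter (cross-correlation) as a matrix-vector product by packing the filter taps into a Toeplitz matrix, the kind of rewriting a framework in this space would perform so that non-NN work can run on a matmul accelerator:

        #include <stdio.h>

        #define N 6              /* input length */
        #define K 3              /* filter length */
        #define M (N - K + 1)    /* output length ("valid" windows) */

        int main(void)
        {
            float x[N] = {1, 2, 3, 4, 5, 6};
            float w[K] = {0.25f, 0.5f, 0.25f};

            /* Build an M x N Toeplitz matrix T so that y = T * x equals the
               sliding-window filter of x with w. */
            float T[M][N] = {{0}};
            for (int i = 0; i < M; i++)
                for (int j = 0; j < K; j++)
                    T[i][i + j] = w[j];

            /* Matrix-vector product: the only primitive a matmul accelerator
               needs in order to execute this "non-NN" workload. */
            for (int i = 0; i < M; i++) {
                float acc = 0.0f;
                for (int j = 0; j < N; j++)
                    acc += T[i][j] * x[j];
                printf("y[%d] = %g\n", i, acc);
            }
            return 0;
        }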
  • OpenCL on Shared Memory Multicore CPUs
    OpenCL on shared memory multicore CPUs
    Akhtar Ali, Usman Dastgeer and Christoph Kessler

    Book chapter. N.B.: When citing this work, cite the original article. Part of: Proceedings of the 5th Workshop on MULTIPROG-2012, E. Ayguade, B. Gaster, L. Howes, P. Stenström, O. Unsal (eds), 2012. Copyright: HiPEAC Network of Excellence. Available at: Linköping University Electronic Press, http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93951. Published in: Proc. Fifth Workshop on Programmability Issues for Multi-Core Computers (MULTIPROG-2012) at the HiPEAC-2012 conference, Paris, France, January 2012.

    OpenCL for programming shared memory multicore CPUs
    Akhtar Ali, Usman Dastgeer, and Christoph Kessler
    PELAB, Dept. of Computer and Information Science, Linköping University, Sweden
    [email protected] {usman.dastgeer,christoph.kessler}@liu.se

    Abstract. Shared memory multicore processor technology is pervasive in mainstream computing. This new architecture challenges programmers to write code that scales over these many cores to exploit the full computational power of these machines. OpenMP and Intel Threading Building Blocks (TBB) are two of the popular frameworks used to program these architectures. Recently, OpenCL has been defined as a standard by the Khronos group which focuses on programming a possibly heterogeneous set of processors with many cores such as CPU cores, GPUs, and DSP processors. In this work, we evaluate the effectiveness of OpenCL for programming multicore CPUs in a comparative case study with OpenMP and Intel TBB for five benchmark applications: matrix multiply, LU decomposition, 2D image convolution, Pi value approximation and image histogram generation. The evaluation includes the effect of compiler optimizations for different frameworks, OpenCL performance on different vendors' platforms and the performance gap between CPU-specific and GPU-specific OpenCL algorithms for execution on a modern GPU.
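    For context on what such a benchmark kernel looks like, here is a minimal naive OpenCL C matrix multiply with one work-item per output element. This is a generic sketch under my own naming, not the chapter's benchmark code; the chapter's point is precisely that a fine-grained mapping like this, natural on a GPU, may need restructuring (coarser work-items, cache-friendly loop order) to perform well on a multicore CPU:

        // Naive OpenCL C kernel: work-item (col, row) computes C[row][col].
        // A, B, C are N x N matrices stored in row-major order.
        __kernel void matmul(__global const float *A,
                             __global const float *B,
                             __global float *C,
                             const int N)
        {
            int col = get_global_id(0);
            int row = get_global_id(1);
            if (row < N && col < N) {
                float acc = 0.0f;
                for (int k = 0; k < N; k++)
                    acc += A[row * N + k] * B[k * N + col];
                C[row * N + col] = acc;
            }
        }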
  • Dell Latitude 3330 Owner's Manual
    Dell Latitude 3330 Owner's Manual
    Regulatory Model: P18S
    Regulatory Type: P18S002

    Notes, Cautions, and Warnings
    NOTE: A NOTE indicates important information that helps you make better use of your computer.
    CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
    WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

    © 2013 Dell Inc. Trademarks used in this text: Dell™, the Dell logo, Dell Boomi™, Dell Precision™, OptiPlex™, Latitude™, PowerEdge™, PowerVault™, PowerConnect™, OpenManage™, EqualLogic™, Compellent™, KACE™, FlexAddress™, Force10™ and Vostro™ are trademarks of Dell Inc. Intel®, Pentium®, Xeon®, Core® and Celeron® are registered trademarks of Intel Corporation in the U.S. and other countries. AMD® is a registered trademark and AMD Opteron™, AMD Phenom™ and AMD Sempron™ are trademarks of Advanced Micro Devices, Inc. Microsoft®, Windows®, Windows Server®, Internet Explorer®, MS-DOS®, Windows Vista® and Active Directory® are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Red Hat® and Red Hat® Enterprise Linux® are registered trademarks of Red Hat, Inc. in the United States and/or other countries. Novell® and SUSE® are registered trademarks of Novell Inc. in the United States and other countries. Oracle® is a registered trademark of Oracle Corporation and/or its affiliates. Citrix®, Xen®, XenServer® and XenMotion® are either registered trademarks or trademarks of Citrix Systems, Inc. in the United States and/or other countries. VMware®, Virtual SMP®, vMotion®, vCenter® and vSphere® are registered trademarks or trademarks of VMware, Inc. in the United States or other countries.
  • AMD Form 10-K, Filed February 24, 2009 (Period: December 27, 2008)
    FORM 10-K, ADVANCED MICRO DEVICES INC (AMD)
    Filed: February 24, 2009 (period: December 27, 2008)
    Annual report which provides a comprehensive overview of the company for the past year.

    Table of Contents
    PART I
      Item 1. Business
      Item 1A. Risk Factors
      Item 1B. Unresolved Staff Comments
      Item 2. Properties
      Item 3. Legal Proceedings
      Item 4. Submission of Matters to a Vote of Security Holders
    PART II
      Item 5. Market for Registrant's Common Equity, Related Stockholder Matters and Issuer Purchases of Equity Securities
      Item 6. Selected Financial Data
      Item 7. Management's Discussion and Analysis of Financial Condition and Results of Operations
      Item 7A. Quantitative and Qualitative Disclosure About Market Risk
      Item 8. Financial Statements and Supplementary Data
      Item 9. Changes in and Disagreements with Accountants on Accounting and Financial Disclosure
      Item 9A. Controls and Procedures
      Item 9B. Other Information
    PART III
      Item 10. Directors, Executive Officers and Corporate Governance
      Item 11. Executive Compensation
      Item 12. Security Ownership of Certain Beneficial Owners and Management and Related Stockholder Matters
      Item 13. Certain Relationships and Related Transactions and Director Independence
      Item 14. Principal Accountant Fees and Services
    PART IV
      Item 15. Exhibits, Financial Statement Schedules
    Signatures
    Exhibits: EX-10.5(A) (Outside Director Equity Compensation Policy); EX-10.19 (Separation Agreement and General Release); EX-21 (List of AMD Subsidiaries); EX-23.A (Consent of Ernst & Young LLP, Advanced Micro Devices); EX-23.B
  • AMD Accelerated Parallel Processing OpenCL Programming Guide
    AMD Accelerated Parallel Processing OpenCL Programming Guide
    November 2013, rev. 2.7

    © 2013 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, AMD Accelerated Parallel Processing, the AMD Accelerated Parallel Processing logo, ATI, the ATI logo, Radeon, FireStream, FirePro, Catalyst, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Microsoft, Visual Studio, Windows, and Windows Vista are registered trademarks of Microsoft Corporation in the U.S. and/or other jurisdictions. Other names are for informational purposes only and may be trademarks of their respective owners. OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.

    The contents of this document are provided in connection with Advanced Micro Devices, Inc. ("AMD") products. AMD makes no representations or warranties with respect to the accuracy or completeness of the contents of this publication and reserves the right to make changes to specifications and product descriptions at any time without notice. The information contained herein may be of a preliminary or advance nature and is subject to change without notice. No license, whether express, implied, arising by estoppel or otherwise, to any intellectual property rights is granted by this publication. Except as set forth in AMD's Standard Terms and Conditions of Sale, AMD assumes no liability whatsoever, and disclaims any express or implied warranty, relating to its products including, but not limited to, the implied warranty of merchantability, fitness for a particular purpose, or infringement of any intellectual property right.

    AMD's products are not designed, intended, authorized or warranted for use as components in systems intended for surgical implant into the body, or in other applications intended to support or sustain life, or in any other application in which the failure of AMD's product could create a situation where personal injury, death, or severe property or environmental damage may occur.
  • Parallel Processing in CUDA Applied to the Synthetic Streamflow and Energy Scenario Generation Model (GEVAZP)
    PARALLEL PROCESSING IN CUDA APPLIED TO THE SYNTHETIC STREAMFLOW AND ENERGY SCENARIO GENERATION MODEL (GEVAZP)

    André Emanoel Rabello Quadros

    M.Sc. dissertation presented to the Graduate Program in Electrical Engineering, COPPE, Federal University of Rio de Janeiro, as part of the requirements for the degree of Master of Science in Electrical Engineering. Advisor: Carmen Lucia Tancredo Borges. Rio de Janeiro, March 2016.

    Dissertation submitted to the faculty of the Alberto Luiz Coimbra Institute for Graduate Studies and Research in Engineering (COPPE) of the Federal University of Rio de Janeiro in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering. Examined by: Prof. Carmen Lucia Tancredo Borges, D.Sc.; Prof. Antônio Carlos Siqueira de Lima, D.Sc.; Dr. Maria Elvira Piñeiro Maceira, D.Sc.; Prof. Sérgio Barbosa Villas-Boas, Ph.D. Rio de Janeiro, RJ, Brazil, March 2016.

    Catalog entry: Quadros, André Emanoel Rabello. Parallel Processing in CUDA Applied to the Synthetic Streamflow and Energy Scenario Generation Model (GEVAZP). Rio de Janeiro: UFRJ/COPPE, 2016. XII, 101 pp.; 29.7 cm. Advisor: Carmen Lucia Tancredo Borges. M.Sc. dissertation, UFRJ/COPPE, Electrical Engineering Program, 2016. Bibliography: pp. 96-99. Keywords: 1. Parallel programming. 2. GPU. 3. CUDA. 4. Energy operation planning. 5. Synthetic scenario generation. I. Borges, Carmen Lucia Tancredo. II. Federal University of Rio de Janeiro, COPPE, Electrical Engineering Program.
  • On Heterogeneous Compute and Memory Systems
    ON HETEROGENEOUS COMPUTE AND MEMORY SYSTEMS
    by Jason Lowe-Power

    A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Sciences) at the UNIVERSITY OF WISCONSIN–MADISON, 2017. Date of final oral examination: 05/31/2017. The dissertation is approved by the following members of the Final Oral Committee: Mark D. Hill, Professor, Computer Sciences; Dan Negrut, Professor, Mechanical Engineering; Jignesh M. Patel, Professor, Computer Sciences; Karthikeyan Sankaralingam, Associate Professor, Computer Sciences; David A. Wood, Professor, Computer Sciences.

    © Copyright by Jason Lowe-Power 2017. All Rights Reserved.

    Acknowledgments

    I would like to acknowledge all of the people who helped me along the way to completing this dissertation. First, I would like to thank my advisors, Mark Hill and David Wood. Often, when students have multiple advisors they find there is high "synchronization overhead" between the advisors. However, Mark and David complement each other well. Mark is a high-level thinker, focusing on the structure of the argument and distilling ideas to their essentials; David loves diving into the details of microarchitectural mechanisms. Although ever busy, at least one of Mark or David was available to meet with me, and they always took the time to help when I needed it. Together, Mark and David taught me how to be a researcher, and they have given me a great foundation to build my career.

    I thank my committee members: Jignesh Patel for his collaborations, and for the fact that each time I walked out of his office after talking to him, I felt a unique excitement about my research.
  • Evaluation of AMD EPYC
    Evaluation of AMD EPYC
    Chris Hollowell <[email protected]>
    HEPiX Fall 2018, PIC, Spain

    What is EPYC?
    • EPYC is a new line of x86_64 server CPUs from AMD based on their Zen microarchitecture, the same microarchitecture used in their Ryzen desktop processors. Released June 2017.
    • First new high-performance series of server CPUs offered by AMD since 2012. The last were Piledriver-based Opterons; Steamroller Opteron products were cancelled, as AMD had focused on low-power server CPUs instead (x86_64 Jaguar APUs, ARM-based Opteron A CPUs).
    • Many vendors are now offering EPYC-based servers, including Dell, HP and Supermicro.

    How Does EPYC Differ From Skylake-SP?
    • Intel's Skylake-SP Xeon x86_64 server CPU line was also released in 2017. Both Skylake-SP and EPYC CPU dies are manufactured using a 14 nm process.
    • Skylake-SP introduced AVX512 vector instruction support in Xeon; AVX512 is not available in EPYC. The official HS06 GCC compilation options exclude autovectorization, and stock SL6/7 GCC doesn't support AVX512 (support was added in GCC 4.9+), so it is not heavily used (yet) in HEP/NP offline computing.
    • Both have models supporting 2666 MHz DDR4 memory. Skylake-SP: 6 memory channels per processor, up to 3 TB (2-socket system, extended memory models). EPYC: 8 memory channels per processor, up to 4 TB (2-socket system).

    How Does EPYC Differ From Skylake (Cont)?
    • Some Skylake-SP processors include built-in Omnipath networking or FPGA coprocessors; these are not available in EPYC.
    • Both Skylake-SP and EPYC have SMT (HT) support: 2 logical cores per physical core (absent in some Xeon Bronze models).
    • Maximum core count (per socket): Skylake-SP, 28 physical / 56 logical (Xeon Platinum 8180M); EPYC, 32 physical / 64 logical (EPYC 7601).
    • Maximum socket count: Skylake-SP, 8 (Xeon Platinum); EPYC, 2.
    • Processor interconnect: Skylake-SP, UltraPath Interconnect (UPI); EPYC, Infinity Fabric (IF).
    • PCIe lanes (2-socket system): Skylake-SP, 96; EPYC, 128 (some used by SoC functionality; the same number is available in a single-socket configuration).

    EPYC: MCM/SoC Design
    • EPYC utilizes an SoC design: many functions normally found in the motherboard chipset are on the CPU (SATA controllers, USB controllers, etc.)
  • Multiprocessing
    Multiprocessing

    Contents
    1 Multiprocessing
      1.1 Pre-history
      1.2 Key topics
        1.2.1 Processor symmetry
        1.2.2 Instruction and data streams
        1.2.3 Processor coupling
        1.2.4 Multiprocessor Communication Architecture
      1.3 Flynn's taxonomy
        1.3.1 SISD multiprocessing
        1.3.2 SIMD multiprocessing
        1.3.3 MISD multiprocessing
        1.3.4 MIMD multiprocessing
      1.4 See also
      1.5 References
    2 Computer multitasking
      2.1 Multiprogramming
      2.2 Cooperative multitasking
      2.3 Preemptive multitasking
      2.4 Real time
      2.5 Multithreading
      2.6 Memory protection
      2.7 Memory swapping
      2.8 Programming
      2.9 See also
      2.10 References
  • AMD Athlon II X2 270 Manual
    AMD Athlon II X2 270 Manual

    Specifications: please visit the AMD Athlon II X2 215 (rev. C3) and AMD Athlon II X2 270 pages for more detailed specifications. For reviews, differences, benchmarks, specifications and comments, CPUBoss recommends the AMD Athlon II X2 270 (see full details). Comparison pages are also available for the AMD Athlon II X2 270 versus the AMD Athlon II X2 280, AMD FX-6300 and AMD FX-4300.

    Specifications: Model: AMD Athlon II X2 270. CPU clock speed: 3.4 GHz. Cores: 2. Total L2 cache: 2 MB. Sockets supported: Socket AM2+, Socket AM3.

    The low-power AMD Athlon II X2 270u is a 2 GHz dual-core part. Regarding the AMD Athlon II X2 260 (3.2 GHz): although the specifications of this CPU list 74 °C as the max temperature, some users prefer to stick to the old "65 °C max" rule of thumb for AMD CPUs. Example user system specifications: AMD Athlon II X2 270 CPU, AMD 760G graphics, 4 GB RAM (running Far Cry 4 and Grand Theft Auto V).
  • Multiprocessing and Scalability
    Multiprocessing and Scalability
    A.R. Hurson, Computer Science and Engineering, The Pennsylvania State University

    • Large-scale multiprocessor systems have long held the promise of substantially higher performance than traditional uniprocessor systems. However, due to a number of difficult problems, the potential of these machines has been hard to realize. This is because of:
    • Advances in technology: the rate of increase in performance of uniprocessors.
    • The complexity of multiprocessor system design: this drastically affected the cost and implementation cycle.
    • The programmability of multiprocessor systems: the design complexity of parallel algorithms and parallel programs.
    • Programming a parallel machine is more difficult than programming a sequential one. In addition, it takes much more effort to port an existing sequential program to a parallel machine than to a newly developed sequential machine. The lack of good parallel programming environments and standard parallel languages has further aggravated this issue.
    • As a result, the absolute performance of many early concurrent machines was not significantly better than available or soon-to-be-available uniprocessors.
    • Recently, there has been increased interest in large-scale or massively parallel processing systems. This interest stems from many factors, including: advances in integrated technology. Very