Efficient Management of Last-Level Caches in Graphics Processors for 3D Scene Rendering Workloads


Jayesh Gaur†, Raghuram Srinivasan‡∗, Sreenivas Subramoney†, Mainak Chaudhuri♯
† Intel Architecture Group, Bangalore 560103, INDIA
‡ The Ohio State University, Columbus, OH 43210, USA
♯ Indian Institute of Technology, Kanpur 208016, INDIA
∗ Contributed to this work as an intern at Intel India.

MICRO '46, December 7-11, 2013, Davis, CA, USA. Copyright 2013 ACM. http://dx.doi.org/10.1145/2540708.2540742

ABSTRACT

Three-dimensional (3D) scene rendering is implemented in the form of a pipeline in graphics processing units (GPUs). In different stages of the pipeline, different types of data get accessed. These include, for instance, vertex, depth, stencil, render target (same as pixel color), and texture sampler data. The GPUs traditionally include small caches for vertex, render target, depth, and stencil data as well as multi-level caches for the texture sampler units. Recent introduction of reasonably large last-level caches (LLCs) shared among these data streams in discrete as well as integrated graphics hardware architectures has opened up new opportunities for improving 3D rendering. The GPUs equipped with such large LLCs can enjoy far-flung intra- and inter-stream reuses. However, there is no comprehensive study that can help graphics cache architects understand how to effectively manage a large multi-megabyte LLC shared between different 3D graphics streams.

In this paper, we characterize the intra-stream and inter-stream reuses in 52 frames captured from eight DirectX game titles and four DirectX benchmark applications spanning three different frame resolutions. Based on this characterization, we propose graphics stream-aware probabilistic caching (GSPC) that dynamically learns the reuse probabilities and accordingly manages the LLC of the GPU. Our detailed trace-driven simulation of a typical GPU equipped with 768 shader thread contexts, twelve fixed-function texture samplers, and an 8 MB 16-way LLC shows that GSPC saves up to 29.6% and on average 13.1% LLC misses across 52 frames compared to the baseline state-of-the-art two-bit dynamic re-reference interval prediction (DRRIP) policy. These savings in the LLC misses result in a speedup of up to 18.2% and on average 8.0%. On a 16 MB LLC, the average speedup achieved by GSPC further improves to 11.8% compared to DRRIP.

Categories and Subject Descriptors

B.3 [Memory Structures]: Design Styles

General Terms

Algorithms, design, measurement, performance

Keywords

Caches, graphics processing units, 3D scene rendering

1. INTRODUCTION

High-quality and high-performance 3D scene rendering is central to the success of several important graphics applications. A general 3D rendering pipeline found in a typical GPU consists of a front-end that processes the input geometry by transforming the vertices of each primitive polygon from the local co-ordinates to the perspective view co-ordinates. The back-end of the pipeline assigns color to each pixel within each transformed primitive by possibly blending multiple render targets in the screen-space frame buffer (also known as the back buffer in DirectX¹), applies texture maps to each pixel to bring realism to the scene, and resolves the visible pixels in the frame buffer through a depth test. Today's GPUs routinely carry out hierarchical depth tests and early depth tests to reduce depth buffer bandwidth and eliminate shading of pixels belonging to the occluded parts of a surface [12, 35, 36, 37]. Also, these processors incorporate stencil tests to apply per-pixel masks for realizing sophisticated control over the set of retained or discarded pixels in the rendering pipeline. The stencil buffer stores these masks and is often used together with the depth buffer.

¹ Rendering takes place in the back buffer. A color or texture surface that needs to be rendered is referred to as a render target. When a frame is completely rendered and ready for presentation, the back buffer is swapped with the front buffer and the front buffer pixels get displayed.
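As a concrete, much simplified illustration of these per-pixel tests, the C++ sketch below applies a stencil mask and a depth comparison to a single fragment. The less-than depth test, the AND-mask stencil test, and all structure and function names are assumptions made for illustration; real GPUs expose these comparison functions and stencil operations as configurable pipeline state, and this is not the pipeline configuration modeled in the paper.

```cpp
#include <cstdint>
#include <vector>

// Minimal per-fragment depth/stencil test sketch (illustrative only).
// Assumes a "less-than" depth comparison and an AND-mask stencil test.
struct DepthStencilBuffers {
    std::vector<float>   z;        // one depth value per pixel
    std::vector<uint8_t> stencil;  // one 8-bit stencil value per pixel
    int width = 0;
};

// Returns true if the fragment at (x, y) survives and should be shaded.
bool depth_stencil_test(DepthStencilBuffers& b, int x, int y,
                        float frag_z, uint8_t stencil_mask) {
    const std::size_t idx = static_cast<std::size_t>(y) * b.width + x;

    // Stencil test: discard fragments whose stencil bits are masked off.
    if ((b.stencil[idx] & stencil_mask) == 0)
        return false;

    // Depth test: keep the fragment only if it is closer than the stored depth.
    if (frag_z >= b.z[idx])
        return false;

    // Fragment passes: update the depth buffer (early-Z style).
    b.z[idx] = frag_z;
    return true;
}
```

Hierarchical depth (HiZ) testing applies the same idea at a coarser granularity, rejecting whole tiles of fragments before this per-pixel check is reached.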
In summary, a 3D rendering pipeline generates access streams to different data structures such as vertices of the geometry primitives, hierarchical depth buffer (HiZ buffer), regular depth buffer (Z buffer), render targets (the pixel colors of the surfaces being rendered), texture maps, and stencil buffer. Traditionally, the GPUs have included small independent on-die caches for each access stream type. For example, a single level of vertex and vertex index cache, Z cache, render target cache (also known as the color cache), stencil cache, HiZ cache, and multiple levels of texture caches can be found in any typical GPU. We will refer to these caches collectively as render caches. While these small render caches (a few tens to a few hundreds of KB) improve performance by exploiting near-term temporal locality, they fail to offer much in terms of exploiting far-flung reuses even within a frame of animation. Recent discrete and integrated graphics hardware architectures from Nvidia, AMD, and Intel have included reasonably large last-level caches that can be shared by all the data streams. For example, the Fermi and Kepler architectures from Nvidia respectively include 768 KB and up to 1.5 MB of shared L2 cache [34, 49]. The GPUs used in the AMD Radeon 7900, 7800, and 7700 series (Tahiti, Pitcairn, and Cape Verde of the Southern Islands family), designed based on the AMD Graphics Core Next architecture, have 64 KB to 128 KB of shared read/write L2 cache attached to each memory controller channel [32]. Finally, Intel's integrated GPUs found in Sandy Bridge, Ivy Bridge, and Haswell share a large multi-megabyte LLC with the CPU cores [9, 21, 22, 39, 44, 52].

Data from different 3D graphics streams can co-exist in such a large shared graphics LLC. An efficiently-managed LLC can offer far-flung intra-stream reuses and significantly accelerate inter-stream reuses, which was impossible in the architectures with an independent cache hierarchy for each different stream. Efficient management of the LLC shared among the 3D graphics streams is the central focus of this paper. This is the first study that characterizes the reuse profile of 3D scene rendering workloads and exploits the reuse behavior of different graphics data to design policies for the LLC of a GPU.
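As a rough illustration of the bookkeeping behind such a reuse characterization, the C++ sketch below tags every cache block with the graphics stream that last touched it and counts intra-stream versus inter-stream reuses over an access trace. The stream enumeration, the 64-byte block size, and all names are assumptions for illustration and do not reflect the methodology described in Section 2.

```cpp
#include <cstdint>
#include <unordered_map>

// Sketch of intra- vs. inter-stream reuse counting over an access trace.
enum class Stream : uint8_t { Vertex, HiZ, Z, Color, Texture, Stencil };

struct ReuseCounters {
    uint64_t intra_stream = 0;  // block re-touched by the stream that last touched it
    uint64_t inter_stream = 0;  // block re-touched by a different stream (e.g., color -> texture)
};

class ReuseProfiler {
public:
    void access(uint64_t addr, Stream s) {
        const uint64_t block = addr >> 6;  // assume 64 B cache blocks
        auto it = last_touch_.find(block);
        if (it != last_touch_.end())
            (it->second == s ? counts_.intra_stream : counts_.inter_stream)++;
        last_touch_[block] = s;
    }
    const ReuseCounters& counts() const { return counts_; }

private:
    std::unordered_map<uint64_t, Stream> last_touch_;  // block address -> last stream
    ReuseCounters counts_;
};
```

An inter-stream count dominated by color-to-texture transitions, for example, would point to the dynamic texture mapping reuse discussed below.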
[Figure 1: Number of LLC misses in twelve DirectX applications (DirectX 10 and DirectX 11) for NRU and Belady's optimal policy, normalized to two-bit DRRIP in an 8 MB 16-way LLC. Y-axis: normalized LLC miss count.]

To understand the potential of improving the LLC performance for 3D graphics workloads, Figure 1 shows the number of LLC misses for two different LLC replacement policies, namely, single-bit not-recently-used (NRU) and Belady's optimal [2, 33], normalized to the baseline two-bit dynamic re-reference interval prediction (DRRIP) policy. NRU increases the LLC miss count by 6.2% on average. Belady's optimal policy saves 36.6% of LLC misses averaged over all the frames compared to the baseline DRRIP policy, pointing to the large opportunity of saving LLC misses, DRAM traffic, and system bandwidth.

Inspired by the data in Figure 1, we characterize the reuses observed within each graphics data stream as well as across different streams (Section 2). This characterization shows that Belady's optimal policy benefits from both intra-stream and inter-stream reuses. The inter-stream reuses primarily stem from the process of dynamic texture mapping, where the dynamically produced render targets (i.e., color surfaces) get consumed by the texture samplers and reused for texturing other surfaces [14]. Based on this detailed characterization, we systematically derive our proposal of graphics stream-aware probabilistic caching (GSPC), which dynamically learns the reuse probability of a block belonging to a 3D graphics stream and accordingly modulates the re-reference prediction value (RRPV) of the block (Section 3). Our detailed trace-driven simulation results (Sections 4 and 5) show that GSPC saves 13.1% LLC misses averaged over 52 frames drawn from twelve DirectX 3D rendering workloads compared to the baseline DRRIP policy. The LLC miss savings achieved by our proposal lead to an average speedup of 8.0% compared to the baseline. For a 16 MB 16-way LLC, the speedup further improves to 11.8%.

1.1 Related Work

In this section, we review the contributions on the management of LLCs in general-purpose processors and studies exploring memory management in 3D rendering hardware.

1.1.1 General-purpose LLC Management

We focus on three classes of general-purpose LLC management algorithms that most closely relate to our proposal. We discuss algorithms for deciding the age of a block on insertion and on subsequent hits in the LLC, algorithms for predicting dead blocks, and algorithms for dynamically partitioning the LLC among multiple independent threads in a multi-core setting to meet certain performance, fairness, or quality goals.
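The first of these classes covers RRIP-style policies such as the two-bit DRRIP baseline used throughout this paper, and GSPC operates on the same RRPV substrate. The C++ sketch below illustrates the general flavor of stream-aware probabilistic insertion on top of a two-bit RRIP set; it is not the GSPC algorithm of Section 3, and the externally supplied per-stream reuse probability, the class and variable names, and the specific insertion RRPV choices are all assumptions made for illustration.

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Illustrative sketch only: stream-aware probabilistic insertion layered on a
// two-bit RRIP cache set. The per-stream reuse probability is assumed to be
// learned elsewhere (e.g., by sampling a few sets); this is not GSPC itself.
constexpr uint8_t kMaxRrpv = 3;  // two-bit re-reference prediction value (RRPV)

struct Line {
    uint64_t tag   = 0;
    bool     valid = false;
    uint8_t  rrpv  = kMaxRrpv;
};

class StreamAwareRripSet {
public:
    explicit StreamAwareRripSet(int associativity) : ways_(associativity) {}

    // Returns true on a hit; a hit promotes the block to RRPV 0 as in RRIP.
    bool access(uint64_t tag) {
        for (Line& l : ways_) {
            if (l.valid && l.tag == tag) { l.rrpv = 0; return true; }
        }
        return false;
    }

    // On a miss, insert the block with an RRPV drawn from the reuse
    // probability of its graphics stream: likely-reused streams are retained
    // longer (RRPV 0), unlikely ones are inserted near eviction.
    void insert(uint64_t tag, double stream_reuse_prob) {
        Line& victim = select_victim();
        victim.tag   = tag;
        victim.valid = true;
        std::bernoulli_distribution retain(stream_reuse_prob);
        victim.rrpv  = retain(rng_) ? 0 : static_cast<uint8_t>(kMaxRrpv - 1);
    }

private:
    // Standard RRIP victim search: take an invalid line or one at max RRPV,
    // aging the whole set and retrying when no candidate exists.
    Line& select_victim() {
        for (;;) {
            for (Line& l : ways_) {
                if (!l.valid || l.rrpv == kMaxRrpv) return l;
            }
            for (Line& l : ways_) ++l.rrpv;
        }
    }

    std::vector<Line> ways_;
    std::mt19937 rng_{0xC0FFEE};
};
```

In a real design the reuse probability would be tracked per stream (and refreshed as the frame progresses) by dedicated hardware counters; the sketch simply accepts it as an argument to keep the replacement logic in focus.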