Cache-Aware Roofline Model: Upgrading the Loft


Aleksandar Ilic, Frederico Pratas, and Leonel Sousa
INESC-ID/IST, Technical University of Lisbon, Portugal
{ilic,fcpp,las}@inesc-id.pt

Abstract—The Roofline model graphically represents the attainable upper bound performance of a computer architecture. This paper analyzes the original Roofline model and proposes a novel approach to provide a more insightful performance modeling of modern architectures by introducing cache-awareness, thus significantly improving the guidelines for application optimization. The proposed model was experimentally verified for different architectures by taking advantage of built-in hardware counters with a curve fitness above 90%.

Index Terms—Multicore computer architectures, Performance modeling, Application optimization

1 INTRODUCTION

Driven by the increasing complexity of modern applications, microprocessors provide a huge diversity of computational characteristics and capabilities. While this diversity is important to fulfill the existing computational needs, it imposes challenges to fully exploit architectures' potential. A model that provides insights into the system performance capabilities is a valuable tool to assist in the development and optimization of applications and future architectures. In order to model the interrelation between architecture capabilities and application characteristics, it is usually required to develop architecture-specific testing and simulation environments, which result in accurate but complex, difficult to develop and hard to use models. However, simpler "bound and bottleneck" approaches can provide valuable insights into the primary factors that affect the system performance, and give useful guidelines for improving applications and architectures [3]. In particular, the Roofline model [10] provides insights into inherent architectural bottlenecks and potential application optimizations. Its usefulness is patent in several works [9], both at the application [4], [6], [8] and at the architectural level [5], [7].

Modern multi-core systems can be represented as a set of central processing units (Cores) with an on-chip memory hierarchy connected to a DRAM memory (see Fig. 1). Hence, the original Roofline model [10] shows the maximum attainable performance of a computer architecture as a line in the form of a roof. It relates the peak floating-point (FP) performance (Fp in flops/s), the peak DRAM memory bandwidth (BD in βD/s), and the application's operational intensity (I in flops/βD), where βD represents the data traffic in DRAM bytes accessed. It is based on the assumption that memory transfers and computation can overlap, so the application execution time, T(I), is limited by Fp or BD:

    T(I) = T(φ/βD) = φ × max{1/(BD × I), 1/Fp}    (1)

where φ is the number of performed FP operations (flops). Thus, the original Roofline model defines the maximum attainable performance, Fa(I), as:

    Fa(I) = φ/T(I) = 1/max{1/(BD × I), 1/Fp} = min{BD × I, Fp}    (2)

Fig. 1: General computer architecture.

Figure 2 shows the original Roofline model for the quad-core Intel 3770K processor with the characteristics presented in Tab. 1. The horizontal part of the Roofline model corresponds to the compute bound region where Fp can be achieved. The slanted part of the model represents the memory bound region where the performance is limited by BD. The ridge point, where the two lines meet, marks the minimum I required to achieve Fp. In general, the original Roofline modeling concept [10] ties Fp to the theoretical bandwidth of a single memory level, with I expressed in φ per byte accessed at that memory level. For example, if working sets fit into the L2 cache, this concept considers the peak L2 bandwidth (BL2) and I is defined as φ per L2 bytes accessed (i.e., between the L1 and L2 caches). Hence, when improving the memory access pattern of applications, it is required to construct and simultaneously use several instances of the model (one for each memory level) in order to assess the optimization gains.

Fig. 2: Original Roofline model (3770K), plotting performance [GFlops/s] against operational intensity [flops/DRAM byte].

As shown in this paper, the original Roofline modeling concept is usually not sufficient to fully describe the performance of modern applications and architectures which rely on the on-chip memory hierarchy. This paper proposes the Cache-aware Roofline model, which is based on a different Core-centric concept where FP operations, data traffic and memory bandwidth at different levels are perceived from a consistent architectural point of view, i.e., as seen by the Core. These changes introduce a fundamentally different view on the attainable performance and behavior of modern applications and architectures, thus providing more insightful guidelines for application optimization. Moreover, the proposed model is a single-plot model that translates into an increase of the maximum attainable performance when directly compared to the original DRAM roof, which can be considered as an upgrade to the loft.
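To make the bounds in (1) and (2) concrete, the following minimal sketch sweeps the operational intensity and evaluates Fa(I). The peak values Fp and BD are placeholder assumptions for illustration only; they are not the measured Tab. 1 characteristics of the 3770K, which are not reproduced in this excerpt.

```c
#include <stdio.h>

/* Attainable performance bound of the original Roofline model (Eq. 2):
 * Fa(I) = min(BD * I, Fp). Peak values are illustrative placeholders. */
static double attainable(double I, double Fp, double BD)
{
    double mem = BD * I;          /* memory-bound (slanted) roof */
    return mem < Fp ? mem : Fp;   /* compute-bound (flat) roof   */
}

int main(void)
{
    const double Fp = 112e9;   /* assumed peak FP performance [flops/s] */
    const double BD = 25.6e9;  /* assumed peak DRAM bandwidth [bytes/s] */

    for (double I = 0.0078125; I <= 16384.0; I *= 4.0)
        printf("I = %10.4f flops/byte -> Fa = %7.2f Gflops/s\n",
               I, attainable(I, Fp, BD) / 1e9);

    printf("ridge point at I = Fp/BD = %.3f flops/byte\n", Fp / BD);
    return 0;
}
```

On a log-log plot this sweep traces exactly the two roofs of Fig. 2: the slanted memory line for small I and the flat compute line for large I, meeting at the ridge point I = Fp/BD.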
2 CACHE-AWARE ROOFLINE MODEL

As stated before, in contrast to the original modeling concept that ties Fp with the bandwidth and data traffic at a single memory hierarchy level, the proposed Cache-aware Roofline modeling concept differs in how memory traffic and bandwidth are observed. There are two fundamental differences: i) we account for all memory operations, including accesses to the different cache levels; and ii) the bandwidth depends on the accessed memory levels and is defined relatively to the Core. Since the proposed model considers the complete volume of memory traffic, i.e., the total number of transferred bytes (β), the I in the Cache-aware Roofline model is uniquely defined for all levels of the memory hierarchy, thus resulting in a single-plot model.

In fact, in the proposed model, I can be the arithmetic intensity (φ/β). However, we use the term operational intensity to reflect the possibility of applying the proposed model to other types of operations (not necessarily arithmetic).
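The practical consequence of the single-plot property can be sketched with a toy kernel: φ is fixed by the instruction stream, and so is the total byte count β as seen by the Core, whereas the DRAM traffic βD of the original model shrinks whenever the working set is cached. The reuse factor below is a made-up assumption standing in for cache hits, used only to show how the two intensity definitions diverge.

```c
#include <stdio.h>

/* Toy kernel: sum += a[i] over n doubles, i.e. phi = n flops.
 * Cache-aware I counts every byte moved to the Core; the original
 * I counts only DRAM bytes. The reuse factor is an assumed stand-in
 * for cache hits (real values come from hardware counters). */
int main(void)
{
    const double n         = 1048576.0;        /* array elements          */
    const double phi       = n;                /* one FP add per element  */
    const double beta_core = 8.0 * n;          /* all loads, Core's view  */
    const double reuse     = 16.0;             /* assumed cached re-traversals */
    const double beta_dram = 8.0 * n / reuse;  /* only misses reach DRAM  */

    printf("cache-aware I = phi/beta  = %.4f flops/byte (fixed by the code)\n",
           phi / beta_core);
    printf("original    I = phi/betaD = %.4f flops/byte (varies with caching)\n",
           phi / beta_dram);
    return 0;
}
```

Because the cache-aware I does not move when caching behavior changes, an application keeps a fixed abscissa and only its achieved performance varies, which is what allows a single plot to cover all memory levels.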
In order to experimentally assess the bandwidth and performance, we executed a series of tests based on Alg. 1. In particular, the bandwidth was characterized by varying the number of memory operations (test code A) to hit different memory levels by accessing contiguous and increasing memory addresses. Figure 3 shows the results for the quad-core Intel 3770K architecture (see Tab. 1). The highest (peak) bandwidth is attained when the data is fetched from the L1 cache (BL1→C), whose theoretical value can be obtained from the processor specifications by multiplying the number of cores, the L1 bus width, and the clock frequency (see Tab. 1). Moreover, the insightfulness of the Cache-aware Roofline model can be further extended by introducing additional memory ceilings depending on the achievable bandwidth for each memory level (see Fig. 3 and 4). It is worth mentioning that the bandwidth from the DRAM to the Core (BD→C) is lower than the bandwidth considered in the original Roofline model (BD) due to the fact that the data goes through all cache levels before arriving at the processor Cores (see Fig. 3 and 4). Overall, the proposed model reveals a previously unexplored area, thus allowing even better insights on the attainable performance.

Fig. 3: Performance and bandwidth variation on the Intel 3770K (measured vs. theoretical, for 1, 2 and 4 cores, from L1, L2, LLC and DRAM to the Core).
Fig. 4: Cache-aware Roofline model (3770K), with memory ceilings from L1, L2, L3 and DRAM to the Core, for MAD, ADD and MUL operations.
Fig. 5: Model validation for the Intel 3770K (Ivy Bridge): rRMSE = 0.0819, fitness = 92.43%.
Fig. 6: Cache-aware Roofline model validation for different general-purpose machines: Intel 2600K (Sandy Bridge, rRMSE = 0.0850, fitness = 92.17%), Intel 3820 (Sandy Bridge-E, rRMSE = 0.0871, fitness = 91.98%), and Intel 3930K (Sandy Bridge-E, rRMSE = 0.0766, fitness = 92.88%).
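Alg. 1 itself is not reproduced in this excerpt; the sketch below only approximates the spirit of test code A in portable C (the original relies on tuned micro-benchmarks and hardware counters). Sweeping the working-set size over contiguous addresses makes successive sizes spill from L1 to L2, the LLC and DRAM, which is what produces the bandwidth steps of Fig. 3.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Rough, portable stand-in for test code A: sequentially traverse a
 * working set of a given size and report the apparent bandwidth.
 * Unlike the original micro-benchmarks, this uses wall-clock time
 * rather than hardware counters, so results are only indicative. */
static double bandwidth_gbs(double *buf, size_t n, int reps)
{
    volatile double sink = 0.0;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < reps; r++)
        for (size_t i = 0; i < n; i++)   /* contiguous, increasing addresses */
            sink += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    return (double)reps * n * sizeof(double) / sec / 1e9;
}

int main(void)
{
    /* 4 KB (fits in L1) up to 64 MB (forces DRAM traffic). */
    for (size_t kb = 4; kb <= 65536; kb *= 4) {
        size_t n = kb * 1024 / sizeof(double);
        double *buf = calloc(n, sizeof(double));
        if (!buf) return 1;
        int reps = kb < 4096 ? 256 : 8;  /* keep runtimes comparable */
        printf("%8zu KB : %6.2f GB/s\n", kb, bandwidth_gbs(buf, n, reps));
        free(buf);
    }
    return 0;
}
```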