Observing the Invisible: Live Cache Inspection for High-Performance Embedded Systems

"This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible."

arXiv:2007.12271v1 [cs.OH] 23 Jul 2020

Dharmesh Tarapore∗, Shahin Roozkhosh∗, Steven Brzozowski∗ and Renato Mancuso∗
∗Boston University, USA — {dharmesh, shahin, sbrz, [email protected]

Abstract—The vast majority of high-performance embedded systems implement multi-level CPU cache hierarchies. But the exact behavior of these CPU caches has historically been opaque to system designers. Absent expensive hardware debuggers, an understanding of cache makeup remains tenuous at best. This enduring opacity further obscures the complex interplay among applications and OS-level components, particularly as they compete for the allocation of cache resources. Notwithstanding the relegation of cache comprehension to proxies such as static cache analysis, performance counter-based profiling, and cache hierarchy simulations, the underpinnings of cache structure and evolution continue to elude software-centric solutions. In this paper, we explore a novel method of studying cache contents and their evolution via snapshotting. Our method complements extant approaches for cache profiling to better formulate, validate, and refine hypotheses on the behavior of modern caches. We leverage cache introspection interfaces provided by vendors to perform live cache inspections without the need for external hardware. We present CacheFlow, a proof-of-concept Linux kernel module which snapshots cache contents on an NVIDIA Tegra TX1 SoC (system on chip).

Index Terms—cache, cache snapshotting, ramindex, cacheflow, cache debugging

1 INTRODUCTION

The burgeoning demand for high-performance embedded systems among a diverse range of applications such as telemetry, embedded machine vision, and vector processing has outstripped the capabilities of traditional microcontrollers. For manufacturers, this has engendered a discernible shift to system-on-chip modules (SoCs). Coupled with their extensibility and improved mean time between failures, SoCs offer improved reliability and functionality. To bridge the gap between increasingly faster CPU speeds and comparatively slower main memory technologies (e.g., DRAM), most SoCs feature cache-based architectures. Indeed, caches allow modern embedded CPUs to meet the performance requirements of emerging data-intensive workloads. At the same time, the strong need for predictability in embedded applications has rendered analyses of caches and their contents vital for system design, validation, and certification.

Unfortunately, this interest runs counter to the general desire to abstract complexity. Cache mechanisms and policies are ensconced entirely in hardware, to eliminate software interference and encourage portability. Consequently, software-based techniques used to study caches suffer from several shortcomings, which we detail in Section 5.5.

In contrast, we propose CacheFlow: a technique that can be implemented in software on existing high-performance SoCs to extract and analyze the contents of cache memories. CacheFlow can be deployed in a live system without the need for an external hardware debugger. By periodically sampling the cache state, we show that we can reconstruct the behavior of multiple applications in the system; observe the impact of scheduling policies; and study how multi-core contention affects the composition of cached content. Importantly, our technique is not meant to replace other cache analysis approaches. Rather, it seeks to supplement them with insights on the exact behavior of applications and system components that were not previously possible. While in this work we specifically focus on last-level (shared) cache analysis, the same technique can be used to profile private cache levels, TLBs, and the internal states of coherence controllers. In summary, we make the following contributions:

1) This is the first paper to describe in detail an interface, namely RAMINDEX, available on modern embedded CPUs that can be used to inspect the content of CPU caches. Despite its usefulness, the interface has received little to no attention in the research community thus far;
2) We present a technique called CacheFlow to perform cache content analysis via event-driven snapshotting;
3) We demonstrate that the technique can be implemented on modern hardware by leveraging the RAMINDEX interface and propose a proof-of-concept open-source Linux implementation¹;
4) We describe how to correlate information retrieved via cache snapshotting to user-level and kernel software components deployed on the system under analysis;
5) We evaluate some of the insights provided by the proposed technique using real and synthetic benchmarks.

The rest of this paper is organized as follows: in Section II, we document related research that inspired and informed this paper. In Sections III and IV, we provide a bird's-eye view of the concepts necessary to understand the mechanics of CacheFlow. In Sections V and VI, we detail the fundamentals of CacheFlow and its implementation. Section VII outlines the experiments we performed and documents their results. We also examine the implications of those results. Section VIII concludes the paper with an outlook on future research.

¹ Our implementation can be found at: https://github.com/weirdindiankid/cacheflow
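To make the interface named in contribution 1 concrete, the sketch below shows how an AArch64 kernel module might issue a single L2 tag-RAM read through RAMINDEX. This is an illustration, not the paper's actual module: RAMINDEX and its companion data-staging registers are IMPLEMENTATION DEFINED, and the system-register encodings, the RAMID value, and the way/set bit layout used here are assumptions modeled on Cortex-A53/A57-class documentation that must be verified against the technical reference manual of the target core (the helper name read_l2_tag is ours).

```c
/*
 * Illustrative sketch only: one L2 tag-RAM read via RAMINDEX on an
 * ARMv8-A core such as the Cortex-A57 cluster of a Tegra TX1.
 * All encodings and bit positions below are ASSUMED from
 * Cortex-A53/A57-style TRMs; check them for your exact part.
 */
#include <linux/kernel.h>
#include <linux/types.h>

#define RAMINDEX  "s3_0_c15_c4_0"  /* index/select register (write-only) */
#define DL1DATA0  "s3_0_c15_c1_0"  /* data staging register 0            */

static u64 read_l2_tag(u32 way, u32 set)
{
	/* Assumed layout: RAMID (0x10 = L2 tag RAM) in bits [31:24],
	 * way in [21:18], set index starting at bit 6. */
	u64 sel = (0x10ULL << 24) | ((u64)way << 18) | ((u64)set << 6);
	u64 tag;

	asm volatile("msr " RAMINDEX ", %1\n\t"  /* launch the RAM read  */
		     "dsb sy\n\t"
		     "isb\n\t"
		     "mrs %0, " DL1DATA0 "\n\t"  /* fetch the stored tag */
		     : "=r" (tag)
		     : "r" (sel)
		     : "memory");
	return tag;
}
```

Dumping a full snapshot is then just a loop over every (way, set) pair of the target cache, with each returned tag translated back to the physical address of the cached line.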
2 RELATED WORK

Caches have a significant impact on the temporal behavior of embedded applications. But their design—oriented toward programming transparency and average-case optimization—makes performance impact analysis difficult. A plethora of techniques have approached cache analysis from multiple angles. We hereby provide a brief overview of the research in this space.

Static Cache Analysis derives bounds on the access time of memory operations when caches are present [1]–[3]. Works in this domain study the set of possible cache states in the control-flow graph (CFG) of applications. Abstract interpretation is widely employed for static cache analysis, as first proposed in [4] and [5]. For static analysis to be carried out, a precise model of the cache behavior is required. Techniques that consider Least-Recently Used (LRU), Pseudo-LRU, and FIFO replacement policies [3], [6]–[9] have been studied.

Symbolic Execution is a software technique for feasible-path exploration and WCET analysis [10], [11] of a program subject to variable input vectors. It proffers a middle ground between simulation and static analysis. An interpreter follows the program; if the execution path depends on an unknown, a new symbolic executor is forked. Each symbolic executor stands for many actual program runs whose concrete values satisfy the path condition.

As systems grow more complex, Cache Simulation tools are essential to study new designs and evaluate existing ones. Simulation of the entire processor — including cores, cache hierarchy, and on-chip interconnect — was proposed in [12]–[15]. Simulators that focus only on the cache hierarchy were studied in [16]–[18]. Depending on the component under analysis, simulations abound. In the (i) execution-driven approach, the program to be traced runs locally on the host platform; in the (ii) emulation-driven approach, the target program runs on an emulated platform and environment created in software.

For all the value of the approaches reviewed thus far, a few important limitations are worth noting. Techniques that rely on cache models — i.e., static analysis, symbolic execution, simulation — work under the assumption that the employed models accurately represent the true behavior of the hardware. Unfortunately, complex modern hardware often deviates from textbook models in unpredictable ways. Access to event counters, similarly, only reveals partial information on the actual state of the cache hierarchy.

The technique proposed in this paper is meant to complement the analysis and profiling strategies reviewed thus far. It does so by allowing system designers to snapshot the actual content of CPU caches. This, in turn, enables a new set of strategies to extract/validate hardware models, or to conduct application- and system-level analysis on the utilization of cache resources. Unlike works that proposed cache snapshotting by means of hardware modifications [28]–[30], our technique can be entirely implemented in software and leverages hardware support that already exists in a broad line of high-performance embedded CPUs.
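The paper characterizes CacheFlow's sampling as event-driven snapshotting. As one plausible shape for the trigger logic (a sketch under stated assumptions, not the paper's implementation), the module below arms a periodic Linux high-resolution timer whose callback captures one snapshot per expiry; cacheflow_snapshot() is a hypothetical placeholder for the set/way walk sketched earlier, and the 10 ms period is an arbitrary example value.

```c
/*
 * Hedged sketch of an event-driven snapshot trigger: a periodic
 * hrtimer fires a snapshot on every expiry. In a real tool the
 * heavy lifting would likely be deferred out of hard-IRQ context.
 */
#include <linux/module.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer snap_timer;
static ktime_t snap_period;

static void cacheflow_snapshot(void)
{
	/* hypothetical: walk all (way, set) pairs via RAMINDEX
	 * and log the tags for offline correlation */
}

static enum hrtimer_restart snap_cb(struct hrtimer *t)
{
	cacheflow_snapshot();
	hrtimer_forward_now(t, snap_period);  /* re-arm for next period */
	return HRTIMER_RESTART;
}

static int __init snap_init(void)
{
	snap_period = ms_to_ktime(10);  /* arbitrary sampling period */
	hrtimer_init(&snap_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	snap_timer.function = snap_cb;
	hrtimer_start(&snap_timer, snap_period, HRTIMER_MODE_REL);
	return 0;
}

static void __exit snap_exit(void)
{
	hrtimer_cancel(&snap_timer);
}

module_init(snap_init);
module_exit(snap_exit);
MODULE_LICENSE("GPL");
```

Other event sources (context switches, scheduler hooks, performance-counter overflow interrupts) could drive the same callback; the timer is simply the easiest to demonstrate.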
3 BACKGROUND

In this section, we introduce a few fundamental concepts required to understand CacheFlow's inner workings. First, we review the structure and functioning of multi-level set-associative caches. Next, we briefly explore the organization of virtual memory in the target class of SoCs.

3.1 Multi-Level Caches

Modern high-performance embedded processors implement multiple levels of caching. The first level (L1) is the closest to the CPU and its contents are usually private to the local processor. Cache misses in L1 trigger a look-up in the next cache level (L2), which can be still private, shared among a subset (cluster) of cores, or globally shared by all the cores. Additional levels (L3, L4, etc.) may also be present. The last act before a look-up in main memory is to query the Last-Level Cache (LLC). Without loss of generality, and to be more in line with our implementation, we consider the typical cache layout of ARM-based processors. That is, we assume private L1 caches and a globally shared L2, which serves as the LLC.
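To ground the set-associative terminology above, the snippet below decomposes a physical address into the tag, set index, and line offset a look-up uses. The geometry (2 MiB, 16-way, 64-byte lines, hence 2048 sets) is an assumed example in the spirit of a TX1-class shared L2, not a figure quoted from this excerpt.

```c
/*
 * Address decomposition for a set-associative cache. The geometry
 * is an assumed example; substitute the parameters of your own LLC.
 */
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE   64u                   /* bytes per cache line  */
#define NUM_WAYS    16u
#define CACHE_SIZE  (2u * 1024u * 1024u)  /* total capacity, bytes */
#define NUM_SETS    (CACHE_SIZE / (LINE_SIZE * NUM_WAYS))  /* 2048 */

struct cache_coord {
	uint64_t tag;     /* identifies the line within its set */
	uint32_t set;     /* which set the address maps to      */
	uint32_t offset;  /* byte offset inside the cache line  */
};

static struct cache_coord decompose(uint64_t paddr)
{
	struct cache_coord c;

	c.offset = paddr % LINE_SIZE;
	c.set    = (paddr / LINE_SIZE) % NUM_SETS;
	c.tag    = paddr / (LINE_SIZE * (uint64_t)NUM_SETS);
	return c;
}

int main(void)
{
	struct cache_coord c = decompose(0x80001040ULL);

	printf("tag=0x%llx set=%u offset=%u\n",
	       (unsigned long long)c.tag, c.set, c.offset);
	return 0;
}
```

A snapshot read back through an introspection interface works in the opposite direction: given a (set, way) pair and the stored tag, the physical address of the cached line is tag * NUM_SETS * LINE_SIZE + set * LINE_SIZE, which is what makes correlating cache contents with software components possible.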