Lord of the Ring(s): Side Channel Attacks on the CPU On-Chip Ring Interconnect Are Practical

Riccardo Paccagnella, Licheng Luo, and Christopher W. Fletcher, University of Illinois at Urbana-Champaign

https://www.usenix.org/conference/usenixsecurity21/presentation/paccagnella

This paper is included in the Proceedings of the 30th USENIX Security Symposium, August 11–13, 2021. ISBN 978-1-939133-24-3. Open access to the Proceedings of the 30th USENIX Security Symposium is sponsored by USENIX.

Abstract

We introduce the first microarchitectural side channel attacks that leverage contention on the CPU ring interconnect. There are two challenges that make it uniquely difficult to exploit this channel. First, little is known about the ring interconnect's functioning and architecture. Second, information that can be learned by an attacker through ring contention is noisy by nature and has coarse spatial granularity. To address the first challenge, we perform a thorough reverse engineering of the sophisticated protocols that handle communication on the ring interconnect. With this knowledge, we build a cross-core covert channel over the ring interconnect with a capacity of over 4 Mbps from a single thread, the largest to date for a cross-core channel not relying on shared memory. To address the second challenge, we leverage the fine-grained temporal patterns of ring contention to infer a victim program's secrets. We demonstrate our attack by extracting key bits from vulnerable EdDSA and RSA implementations, as well as inferring the precise timing of keystrokes typed by a victim user.

1 Introduction

Modern computers use multicore CPUs that comprise several heterogeneous, interconnected components often shared across computing units. While such resource sharing has offered significant benefits to efficiency and cost, it has also created an opportunity for new attacks that exploit CPU microarchitectural features. One class of these attacks consists of software-based covert channels and side channel attacks. Through these attacks, an adversary exploits unintended effects (e.g., timing variations) in accessing a particular shared resource to surreptitiously exfiltrate data (in the covert channel case) or infer a victim program's secrets (in the side channel case). These attacks have been shown to be capable of leaking information in numerous contexts. For example, many cache-based side channel attacks have been demonstrated that can leak sensitive information (e.g., cryptographic keys) in cloud environments [48, 62, 82, 105, 112], web browsers [37, 58, 73, 86] and smartphones [59, 90].

Fortunately, recent years have also seen an increase in the awareness of such attacks, and the availability of countermeasures to mitigate them. To start with, a large number of existing attacks (e.g., [6, 7, 15, 25, 26, 35, 36, 77]) can be mitigated by disabling simultaneous multi-threading (SMT) and cleansing the CPU microarchitectural state (e.g., the cache) when context switching between different security domains. Second, cross-core cache-based attacks (e.g., [20, 38, 62]) can be blocked by partitioning the last-level cache (e.g., with Intel CAT [61, 71]), and disabling shared memory between processes in different security domains [99]. The only known attacks that would still work in such a restrictive environment (e.g., DRAMA [79]) exist outside of the CPU chip.

In this paper, we present the first on-chip, cross-core side channel attack that works despite the above countermeasures. Our attack leverages contention on the ring interconnect, which is the component that enables communication between the different CPU units (cores, last-level cache, system agent, and graphics unit) on many modern Intel processors. There are two main reasons that make our attack uniquely challenging. First, the ring interconnect is a complex architecture with many moving parts. As we show, understanding how these often-undocumented components interact is an essential prerequisite of a successful attack. Second, it is difficult to learn sensitive information through the ring interconnect. Not only is the ring a contention-based channel—requiring precise measurement capabilities to overcome noise—but also it only sees contention due to spatially coarse-grained events such as private cache misses. Indeed, at the outset of our investigation it was not clear to us whether leaking sensitive information over this channel would even be possible.
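To make the measurement challenge concrete: contention shifts a load's latency by only a handful of cycles, so the attacker needs cycle-accurate timing of individual loads. Below is a minimal sketch of such a timed load in C (our own illustration, not code from the paper; the helper name and fencing choices are ours):

```c
#include <stdint.h>
#include <x86intrin.h>   /* __rdtscp, _mm_lfence */

/* Time one load with the TSC. The lfences keep the timestamp
 * reads and the monitored load from reordering around each
 * other, so the few extra cycles caused by ring contention are
 * not drowned out by measurement noise. */
static inline uint64_t timed_load(volatile uint8_t *addr)
{
    unsigned aux;
    _mm_lfence();
    uint64_t t0 = __rdtscp(&aux);
    _mm_lfence();
    (void)*addr;                 /* the monitored load */
    _mm_lfence();
    uint64_t t1 = __rdtscp(&aux);
    _mm_lfence();
    return t1 - t0;
}
```

For the timed load to traverse the ring at all, `addr` must first be evicted from the attacker's private L1/L2 caches (e.g., with an eviction set), so that each measurement is serviced by an LLC slice across the interconnect.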
To address the first challenge, we perform a thorough reverse engineering of Intel's "sophisticated ring protocol" [57, 87] that handles communication on the ring interconnect. Our work reveals what physical resources are allocated to what ring agents (cores, last-level cache slices, and system agent) to handle different protocol transactions (loads from the last-level cache and loads from DRAM), and how those physical resources arbitrate between multiple in-flight transaction packets. Understanding these details is necessary for an attacker to measure victim program behavior. For example, we find that the ring prioritizes in-flight over new traffic, and that it consists of two independent lanes (each with four physical sub-rings to service different packet types) that service interleaved subsets of agents. Contrary to our initial hypothesis, this implies that two agents communicating "in the same direction, on overlapping ring segments" is not sufficient to create contention. Putting our analysis together, we formulate for the first time the necessary and sufficient conditions for two or more processes to contend with each other on the ring interconnect, as well as plausible explanations for what the ring microarchitecture may look like to be consistent with our observations. We expect the latter to be a useful tool for future work that relies on the CPU uncore.
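The "same direction, on overlapping ring segments" condition can be made concrete with a toy model. The sketch below is our own simplification with a hypothetical ring size; it deliberately ignores the two-lane interleaving and the in-flight-over-new priority rule that, per the analysis above, make this overlap necessary but not sufficient for contention:

```c
#include <stdbool.h>

#define NUM_STOPS 8                /* hypothetical number of ring stops */

typedef struct {
    bool clockwise;                /* direction of travel */
    bool seg[NUM_STOPS];           /* seg[i]: segment between stops i and i+1 */
} ring_path;

/* Packets take the shorter way around the ring from src to dst. */
static ring_path path_of(int src, int dst)
{
    ring_path p = { .clockwise = true };   /* seg[] zero-initialized */
    int cw_hops = (dst - src + NUM_STOPS) % NUM_STOPS;
    p.clockwise = cw_hops <= NUM_STOPS - cw_hops;
    for (int s = src; s != dst; ) {
        if (p.clockwise) { p.seg[s] = true; s = (s + 1) % NUM_STOPS; }
        else             { s = (s + NUM_STOPS - 1) % NUM_STOPS; p.seg[s] = true; }
    }
    return p;
}

/* Overlap in the same direction: necessary, but per the paper
 * NOT sufficient, for two transactions to contend. */
static bool may_contend(const ring_path *a, const ring_path *b)
{
    if (a->clockwise != b->clockwise) return false;
    for (int i = 0; i < NUM_STOPS; i++)
        if (a->seg[i] && b->seg[i]) return true;
    return false;
}
```

A real attack additionally needs to know which of the two lanes serves each agent, and whether the victim's in-flight traffic is prioritized over the attacker's newly injected packets at the shared segments.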
Next, we investigate the security implications of our findings. First, leveraging the facts that i) when a process's loads are subject to contention their mean latency is larger than that of regular loads, and ii) an attacker with knowledge of our reverse engineering efforts can set itself up in such a way that its loads are guaranteed to contend with the first process's loads, we build the first cross-core covert channel on the ring interconnect. Our covert channel does not require shared memory (as, e.g., [38, 108]), nor shared access to any uncore structure (e.g., the RNG [23]). We show that our covert channel achieves a capacity of up to 4.14 Mbps (518 KBps) from a single thread which, to our knowledge, is faster than all prior channels that do not rely on shared memory (e.g., [79]), and within the same order of magnitude as state-of-the-art covert channels that do rely on shared memory (e.g., [38]).

Finally, we show examples of side channel attacks that exploit ring contention. The first attack extracts key bits from vulnerable RSA and EdDSA implementations. Specifically, it abuses mitigations to preemptive scheduling cache attacks to cause the victim's loads to miss in the cache, monitors ring contention while the victim is computing, and employs a standard machine learning classifier to de-noise traces and leak bits. The second attack targets keystroke timing information (which can be used to infer, e.g., passwords [56, 88, 111]). In particular, we discover that keystroke events cause spikes in ring contention that can be detected by an attacker, even in the presence of background noise.

Figure 1: Logical block diagram of the ring interconnect on client-class Intel CPUs. Ring agents are represented as white boxes, the interconnect is in red and the ring stops are in blue. While the positioning of cores and slices on the die varies [21], the ordering of their respective ring stops in the ring is fixed.

Caches The last-level cache (LLC) is divided into LLC slices of equal size, one per core. Caches of many Intel CPUs are inclusive, meaning that data contained in the L1 and L2 caches must also reside in the LLC [47], and set-associative, meaning that they are divided into a fixed number of cache sets, each of which contains a fixed number of cache ways. Each cache way can fit one cache line, which is typically 64 bytes in size and represents the basic unit for cache transactions. The cache sets and the LLC slice in which each cache line is stored are determined by its address bits. Since the private caches generally have fewer sets than the LLC, it is possible for cache lines that map to different LLC sets to map to the same L2 or L1 set.

When a core performs a load from a memory address, it first looks up if the data associated with that address is available in the L1 and L2. If available, the load results in a hit in the private caches, and the data is serviced locally. If not, it results in a miss in the private caches, and the core checks if the data is available in the LLC. In case of an LLC miss, the data needs to be copied from DRAM through the memory controller, which is integrated in the system agent to manage communication between the main memory and the CPU [47]. Intel also implements hardware prefetching, which may result in additional cache lines being copied from memory or from the LLC to the private caches during a single load.

Ring Interconnect The ring interconnect, often referred to as ring bus, is a high-bandwidth on-die interconnect which was introduced by Intel with the Nehalem-EX microarchitecture.
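The background above notes that a line's cache set and LLC slice are determined by its address bits. A minimal sketch of that indexing for a hypothetical LLC geometry (64-byte lines, 2048 sets per slice); the slice-selection hash on real Intel parts is undocumented, so the version here is an explicitly made-up stand-in:

```c
#include <stdint.h>

#define LINE_OFFSET_BITS 6         /* 64-byte cache lines */
#define SETS_PER_SLICE   2048      /* hypothetical geometry */

/* Set index: the address bits directly above the line offset. */
static unsigned llc_set_index(uint64_t paddr)
{
    return (unsigned)((paddr >> LINE_OFFSET_BITS) & (SETS_PER_SLICE - 1));
}

/* Slice index: Intel uses an undocumented hash of the upper
 * physical-address bits. This XOR-fold is a placeholder only;
 * real attacks recover the hash experimentally. */
static unsigned llc_slice_index(uint64_t paddr, unsigned n_slices)
{
    uint64_t line = paddr >> LINE_OFFSET_BITS;
    line ^= line >> 32;
    line ^= line >> 16;
    line ^= line >> 8;
    return (unsigned)(line % n_slices);
}
```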
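Tying the pieces together, here is a hedged sketch of the receiver end of a ring covert channel built on the latency effect described above: the sender creates ring traffic (loads that miss its private caches) to signal a 1 and stays idle for a 0, and the receiver decides each bit from the mean latency of its own cross-ring loads. `timed_load` is the helper sketched earlier; the sample count and threshold are illustrative and must be calibrated per machine (the real channel uses far fewer samples per bit to reach Mbps rates):

```c
#include <stdint.h>

uint64_t timed_load(volatile uint8_t *addr);   /* from the earlier sketch */

enum {
    SAMPLES_PER_BIT   = 64,    /* illustrative; lower in a real channel */
    CONTENTION_CYCLES = 180    /* latency threshold, machine-specific */
};

/* Receive one bit: average the latency of loads that cross the
 * ring to a monitored LLC slice. Sender-induced ring traffic
 * raises the mean by a few cycles, which the threshold detects. */
static int receive_bit(volatile uint8_t *probe)
{
    uint64_t total = 0;
    for (int i = 0; i < SAMPLES_PER_BIT; i++)
        total += timed_load(probe);
    return (total / SAMPLES_PER_BIT) > CONTENTION_CYCLES;
}
```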