White Paper | ADVANCED POWER MANAGEMENT HELPS BRING IMPROVED PERFORMANCE TO HIGHLY INTEGRATED X86 PROCESSORS


TABLE OF CONTENTS

THE IMPORTANCE OF POWER MANAGEMENT
THE X86 EXAMPLE
ESTABLISH A REALISTIC WORST-CASE FOR POWER
POWER LIMITS CAN TRANSLATE TO PERFORMANCE LIMITS
AMD TACKLES THE UNDERUSED TDP HEADROOM ISSUE
GOING ABOVE TDP
INTELLIGENT BOOST
CONFIGURABLE TDP
SUMMARY

Complex heterogeneous processors have the potential to leave a large amount of performance headroom untapped when workloads don't utilize all cores. Advanced power management techniques for x86 processors are designed to reduce the power of underutilized cores while also allowing for dynamic allocation of the thermal budget between cores for improved performance.

THE IMPORTANCE OF POWER MANAGEMENT

Those with experience implementing microprocessors know the importance of proper power management. Whether for simple applications processors or high-end server processors, the ability to down-clock, clock-gate, power off, or in some manner disable unused or underused hardware blocks is crucial in limiting power consumption.

Better power management benefits range from energy savings within the data center to improved battery life in mobile devices. But don't underestimate the value of reducing power and increasing efficiency. In fact, power reduction and increased efficiency are even more important today, as processors integrate more and varied functional blocks.

THE X86 EXAMPLE

Typical x86 processors widely used in both consumer and embedded applications are a perfect example: integration of network and security engines, memory controllers, graphics processing units (GPUs), and video encode/decode engines has effectively turned them into heterogeneous compute units that excel at a wide variety of workloads.

The notable thing about traditional reduction-based power management is that a particular functional block is only turned off when unused, or down-clocked when higher performance is not needed by the application. What about applications that desire more performance? Shouldn't saving power in one area allow you to utilize it in another?

Specifying power usage is complex, particularly with highly integrated processors. If the worst-case power for each individual hardware block in a heterogeneous processor were added together, the resulting total could be several times the achievable worst-case power for the device. One reason is that it is nearly impossible to write software that will simultaneously utilize all functional blocks to their fullest extent. Simply feeding the various compute engines and I/O ports with enough data to keep them all 100% utilized would likely exceed the available bandwidth of internal buses. Central processing unit (CPU) cores manage data movement, and time spent there is less time spent executing higher-power instructions.

Another issue is that different instruction sequences can incur vastly different power usage, which can further complicate specifying processor power. For instance, complex floating-point instructions burn much more power than a simple I/O data read due to the significant difference in the transistor logic they activate during execution. The combination of varying instruction types and utilized hardware blocks makes the actual power usage of the processor highly workload-dependent, and explains why it is rare to see a "typical" power specification for this device type. Still, implementers expect a maximum power specification on which to base their design.

ESTABLISH A REALISTIC WORST-CASE FOR POWER

The pragmatic approach for silicon providers is to survey real-world application software to establish a more realistic worst-case power and add some guard-band for safety. Both AMD and Intel use this type of methodology and specify it as thermal design power (TDP). TDP is essentially the maximum sustained power a processor can draw with "real world" software while operating under defined temperature and voltage limits.
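To make the contrast concrete, the short sketch below compares a naive per-block summation against a TDP-style figure. It is an illustration only: the block names, wattages, and guard-band are invented for this example, not taken from this white paper or from any datasheet.

    # Hypothetical per-block worst-case figures in watts; every value is invented.
    per_block_max_watts = {
        "cpu_cores": 28.0,
        "gpu": 22.0,
        "video_codec": 4.0,
        "memory_controller": 5.0,
        "io": 6.0,
    }

    # Naive specification: assume every block hits its worst case at the same time.
    naive_worst_case = sum(per_block_max_watts.values())

    # TDP-style specification: survey real workloads for a realistic sustained
    # maximum, then add a guard-band for safety (both numbers are assumptions).
    measured_realistic_max = 30.0
    guard_band = 0.10
    tdp = measured_realistic_max * (1 + guard_band)

    print(f"Sum of per-block maxima: {naive_worst_case:.1f} W")  # 65.0 W
    print(f"TDP-style specification: {tdp:.1f} W")               # 33.0 W

Designing the cooling solution and voltage regulation to the smaller, survey-based figure is what makes the TDP approach valuable, provided real software never sustains the naive worst case.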
POWER LIMITS CAN TRANSLATE TO PERFORMANCE LIMITS

Most embedded x86-based systems are power-constrained in some way. Designers will look for the best performance they can get in a given power envelope, at a price they can afford. The worst-case power limit can translate directly into a performance limit for a given processor product by effectively defining the maximum operating frequency.

Using TDP as a worst-case power specification instead of the cumulative per-block maximum power helps to increase that operating frequency, but it's also based on an assumption of the software workload. Applications using fewer hardware blocks, or using them to a lesser extent, use less power and effectively leave performance headroom on the table.
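The link between a power envelope and a frequency ceiling can be sketched with the familiar dynamic-power relationship P = C × V² × f plus a static term. The snippet below only illustrates that reasoning; the capacitance values, leakage figure, and voltage-frequency curve are assumptions chosen for readability, not the characteristics of any real processor.

    # Fixed platform envelope the design must respect (hypothetical).
    PLATFORM_LIMIT_W = 35.0
    P_STATIC = 3.0  # static (leakage) power, watts (hypothetical)

    def voltage_at(freq_hz):
        """Hypothetical linear voltage-frequency curve: 0.8 V at idle, 1.2 V at 4 GHz."""
        return 0.8 + 0.1 * (freq_hz / 1e9)

    def power(freq_hz, c_eff):
        """Dynamic power C * V^2 * f plus the static term."""
        v = voltage_at(freq_hz)
        return c_eff * v * v * freq_hz + P_STATIC

    # Two worst-case models of switched capacitance (both invented):
    # every block switching at its maximum at once, versus the heaviest
    # behavior actually observed in real software.
    C_CUMULATIVE = 1.6e-8
    C_REALISTIC = 1.0e-8

    def max_freq(c_eff, step_hz=100e6):
        """Highest frequency (in 100 MHz steps) whose modeled power fits the envelope."""
        freq = 0.0
        while power(freq + step_hz, c_eff) <= PLATFORM_LIMIT_W:
            freq += step_hz
        return freq

    for label, c in (("cumulative per-block maximum", C_CUMULATIVE),
                     ("realistic (TDP-style) maximum", C_REALISTIC)):
        print(f"{label}: about {max_freq(c) / 1e9:.1f} GHz within {PLATFORM_LIMIT_W:.0f} W")

Rating the part against the realistic worst case rather than the cumulative one allows a higher maximum frequency inside the same envelope, which is the trade-off described above. Real firmware works with per-block activity monitors and binned voltage-frequency tables rather than a single closed-form curve, but the shape of the trade-off is the same.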
AMD TACKLES THE UNDERUSED TDP HEADROOM ISSUE

AMD Turbo CORE technology¹ was launched several years ago to address underutilized TDP headroom. AMD Turbo CORE began with a simple core-counting mechanism that allowed some CPU cores to use higher-frequency "boost" states while other CPU cores were idle. This approach only affected the CPU cores, and was primarily targeted at accelerating single-threaded applications that didn't leverage a multi-core architecture.

Generational improvements have increased the granularity and effectiveness of the technology by adding more boost states for CPU and GPU cores, adding real-time power and temperature monitors, and enabling dynamic power budget allocation between cores.

[Figure: APU block diagram showing two "Piledriver" dual-core x86 modules (each with 2MB L2 cache), the northbridge, PCI Express®, the memory interface, DP and VGA display outputs, and the graphics cores and multimedia block. Caption: Integration of large GPU cores, as done in AMD R-Series APUs, increases the potential for unused power budget.]

AMD's recent move to integrate discrete-class GPUs with x86 processor cores in accelerated processing units (APUs) underscores this power management challenge. Some APUs contain a GPU that accounts for more than half of the silicon die and a proportional amount of the power budget. A much larger potential for under-utilization of the APU's power envelope exists in this scenario if the software workload is highly CPU-centric or GPU-centric. The trend toward integration of these complex, heterogeneous cores is likely to continue and necessitates a means of harnessing the excess thermal headroom.

Increasing performance by boosting to higher frequencies is relatively simple, since the use of multiple performance states (voltage and frequency operating points) has been around for a while. However, the complexity lies in determining when and which cores to boost. For AMD Embedded R-Series APUs, the process starts by dividing the processor into separate thermal entities: one for each CPU core-pair and one for the GPU. I/O power is small by comparison, so it is defined as a fixed value based on characterization to reduce complexity.

An integrated microcontroller manages AMD Turbo CORE calculations, allowing a more complex and therefore more effective algorithm. In deciding whether boosting a given core is possible, the power usage of each thermal entity must be determined. On-die analog power measurement at many amps is not practical in a 32nm silicon-on-insulator (SOI) process, and external measurement is not possible because the various cores share power rails.

Alternatively, proprietary activity monitors that are integrated throughout the processor architecture model current logic activity as an AC capacitance (CAC). The CAC monitors effectively profile the running application to determine if it is one of those "worst-case" workloads that defines TDP or something less laborious. Static power of the core will be explained later. Total instantaneous power of the thermal entity can then be calculated as P = CAC × V² × f + Pstatic, and total power for the APU equals the summation of the power for each thermal entity and the I/O power offset.

The instantaneous power calculation result is compared to an allocated power budget for the thermal entity, as well as to the device's thermal design current specification, to ensure that current demand does not exceed what the voltage regulator can provide. If either value is too close to its limit, firmware can impose throttling by reducing the core's performance state. The ability to boost the performance state is maintained when headroom exists on both parameters.

[Figure: APU power and die temperature for two applications, APP 1 (high CAC) and APP 2 (low CAC), showing per-CPU-core and I/O power stacked against the TDP budget, die temperature against the maximum die temperature limit, and the unused power budget. Caption: Applications with a low CAC can leave unused TDP and temperature headroom. New power management techniques can exploit both for improved performance.]

GOING ABOVE TDP

Even if an application with a high CAC drives the APU to consume the full TDP, operation at this level may occur in bursts or be preceded by idle time such that the die temperature at the start of the high-CAC period is far below the maximum specification. The latest version of AMD Turbo CORE also takes the opportunity to boost in this scenario by allowing brief excursions above TDP when there is adequate temperature headroom. After all, the purpose of a TDP limit is only to ensure die temperature stays in check.

Real-time temperature values from
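Putting these pieces together, the decision flow described above (a CAC-based power estimate per thermal entity, a fixed I/O offset, comparisons against the entity's power budget and the thermal design current, and a die-temperature check that can permit brief excursions above TDP) can be sketched as follows. This is a minimal sketch: the class, the thresholds, and every number are hypothetical, and it is not AMD's firmware algorithm.

    from dataclasses import dataclass

    @dataclass
    class ThermalEntity:
        name: str
        cac: float        # modeled AC capacitance from the activity monitors, farads
        voltage: float    # current operating voltage, volts
        freq: float       # current operating frequency, hertz
        p_static: float   # modeled static (leakage) power, watts
        budget: float     # power budget allocated to this entity, watts

        def power(self) -> float:
            # Total instantaneous power of the entity: P = CAC * V^2 * f + Pstatic
            return self.cac * self.voltage ** 2 * self.freq + self.p_static

    IO_POWER_OFFSET = 2.0    # fixed, characterization-based I/O power, watts (hypothetical)
    TDC_LIMIT_AMPS = 30.0    # thermal design current limit, amps (hypothetical)
    TDP_WATTS = 35.0         # package TDP, watts (hypothetical)
    MAX_DIE_TEMP_C = 100.0   # maximum die temperature, Celsius (hypothetical)
    MARGIN = 0.95            # "too close to the limit" threshold (hypothetical)

    def decide(entity: ThermalEntity, apu_power: float, die_temp_c: float) -> str:
        """Throttle when power or current nears a limit; otherwise allow boosting,
        including brief excursions above TDP while the die is still cool."""
        p = entity.power()
        amps = p / entity.voltage  # rough current estimate for the TDC check
        if p >= MARGIN * entity.budget or amps >= MARGIN * TDC_LIMIT_AMPS:
            return "throttle to a lower performance state"
        if apu_power < TDP_WATTS or die_temp_c < MARGIN * MAX_DIE_TEMP_C:
            return "boost is permitted"
        return "hold the current performance state"

    entities = [
        ThermalEntity("cpu_module_0", 5.0e-9, 1.10, 3.0e9, 1.5, 18.0),
        ThermalEntity("cpu_module_1", 2.0e-9, 0.90, 1.6e9, 1.0, 18.0),
        ThermalEntity("gpu",          6.0e-9, 1.00, 0.8e9, 2.0, 25.0),
    ]

    apu_power = sum(e.power() for e in entities) + IO_POWER_OFFSET
    print(f"Total APU power: {apu_power:.1f} W (TDP {TDP_WATTS:.0f} W)")
    for e in entities:
        print(f"{e.name}: {e.power():.1f} W -> {decide(e, apu_power, die_temp_c=62.0)}")

In this sketch the first CPU module exceeds its allocated budget and would be throttled, while the lightly loaded module and the GPU retain boost headroom; the integrated microcontroller described above performs this kind of evaluation continuously across all thermal entities.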
Recommended publications
  • CFD Analyses of a Notebook Computer Thermal Management System and a Proposed Passive Cooling Alternative
  • Power Management (Chapter 24, Embedded Pentium® Processor Family)
  • Power Management Using FPGA Architectural Features (Abu Eghan, Xilinx Inc.)
  • Clock Gating for Power Optimization in ASIC Design Cycle: Theory & Practice
  • Desktop 3rd Generation Intel® Core™ Processor Family, Desktop Intel® Pentium® Processor Family, Desktop Intel® Celeron® Processor Family, and LGA1155 Socket: Thermal Mechanical Specifications and Design Guidelines (TMSDG)
  • Computer Architecture Techniques for Power-Efficiency
  • Thermal Guide: Intel® Xeon® Processor E5 v4 Product Family
  • Dynamic Voltage/Frequency Scaling and Power-Gating of Network-on-Chip with Machine Learning
  • Power Reduction Techniques for Microprocessor Systems
  • HaPPy: Hyperthread-Aware Power Profiling Dynamically
  • Learning-Directed Dynamic Voltage and Frequency Scaling Scheme with Adjustable Performance for Single-Core and Multi-Core Embedded and Mobile Systems
  • Summarizing CPU and GPU Design Trends with Product Data