Unlocking Energy
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Approaches in Green Computing
Special Issue - 2015, International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, NSRCL-2015 Conference Proceedings. Reena Thomas and Fedrina J Manjaly, 3rd BCA, Department of Computer Science, Carmel College, Mala, Thrissur.

Abstract: In a 2008 article San Murugesan defined green computing as "the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems — such as monitors, printers, storage devices, and networking and communications systems — efficiently and effectively with minimal or no impact on the environment." Murugesan lays out four paths along which he believes the environmental effects of computing should be addressed: green use, green disposal, green design, and green manufacturing. Green computing can also develop solutions that offer benefits by "aligning all IT processes and practices with the core principles of sustainability, which are to reduce, reuse, and recycle; and finding innovative ways to use IT in business processes to deliver sustainability benefits across the enterprise and beyond." (Figure 1: Green Computing Migration Framework.)

I. INTRODUCTION: In 1992, the U.S. Environmental Protection Agency launched Energy Star, a voluntary labeling program designed to promote and recognize energy efficiency in monitors, climate-control equipment, and other technologies. This resulted in the widespread adoption of sleep mode among consumer electronics.

II. APPROACHES: A. Product longevity. Gartner maintains that the PC manufacturing process accounts for 70% of the natural resources used in the life cycle of a PC. More recently, Fujitsu released a Life Cycle Assessment (LCA) of a desktop that shows that ...
Power-Aware Design Methodologies for FPGA-Based Implementation of Video Processing Systems
Old Dominion University, ODU Digital Commons, Electrical & Computer Engineering Theses & Dissertations, Winter 2007. Hau Trung Ngo (B.S. May 2001, M.S. May 2003, Old Dominion University). Recommended citation: Ngo, Hau T. "Power-Aware Design Methodologies for FPGA-Based Implementation of Video Processing Systems" (2007). Doctor of Philosophy (PhD) Dissertation, Electrical & Computer Engineering, Old Dominion University, DOI: 10.25777/j6kw-q685, https://digitalcommons.odu.edu/ece_etds/185.

A dissertation submitted to the faculty of Old Dominion University in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering, December 2007. Approved by: Vijayan K. Asari (Director), Shirshak K. Dhali (Member), Min Song (Member), Ravi Mukkamala (Member).

ABSTRACT: The increasing capacity and capabilities of FPGA devices in recent years provide an attractive option for performance-hungry applications in the image and video processing domain. ...
A Dynamic Voltage Scaling Algorithm for Sporadic Tasks
In: Proceedings of the 24th IEEE Real-Time Systems Symposium, Cancun, Mexico, December 2003, pp. 52-62. Ala Qadi and Steve Goddard (Computer Science & Engineering) and Shane Farritor (Mechanical Engineering), University of Nebraska-Lincoln.

Abstract: Dynamic voltage scaling (DVS) algorithms save energy by scaling down the processor frequency when the processor is not fully loaded. Many algorithms have been proposed for periodic and aperiodic task models, but none support the canonical sporadic task model. A DVS algorithm, called DVSST, is presented that can be used with sporadic tasks in conjunction with preemptive EDF scheduling. The algorithm is proven to guarantee that each task meets its deadline while saving the maximum amount of energy possible with processor frequency scaling.

In CMOS circuits the power consumed by a CMOS gate is proportional to the square of the voltage applied to the circuit, as shown by Equation (1), where CL is the gate load capacitance (output capacitance), VDD is the supply voltage, and f is the clock frequency [29]. The circuit delay td is given by Equation (2), where k is a constant depending on the output gate size and the output capacitance and VT is the threshold voltage [29]. The clock frequency is inversely proportional to the circuit delay; it is expressed using td and the logic depth of a critical path as in Equation (3), where Ld is the depth of the critical path [29].
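Equations (1)-(3) referred to in the excerpt are not reproduced above. A plausible reconstruction, using the standard CMOS dynamic-power and gate-delay models together with the symbols the excerpt defines (the exact forms in the paper may differ slightly), is:

```latex
% Standard CMOS dynamic power and delay relations (reconstruction, not copied
% from the paper); symbols as defined in the excerpt above.
\begin{align*}
  P   &= C_L \, V_{DD}^{2} \, f                            \tag{1}\\[2pt]
  t_d &= \frac{k \, V_{DD}}{\left(V_{DD} - V_T\right)^{2}} \tag{2}\\[2pt]
  f   &= \frac{1}{L_d \, t_d}                              \tag{3}
\end{align*}
```

Lowering VDD therefore cuts power quadratically, at the cost of a larger gate delay td and hence a lower maximum clock frequency f, which is exactly the trade-off DVS algorithms such as DVSST exploit.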
CompTIA FC0-GR1 Exam Questions & Answers
Exam number: FC0-GR1. Passing score: 800. Time limit: 120 min. File version: 31.4. http://www.gratisexam.com/ Exam name: CompTIA Strata Green IT Exam (Visualexams).

QUESTION 1: A small business currently has a server room with a large cooling system that is appropriate for its size. The server room is located on the top level of a building and is filled with incandescent lighting that needs to stay on continuously for security purposes. Which of the following would be the MOST cost-effective way for the company to reduce the server room's energy footprint?
A. Replace all incandescent lighting with energy-saving neon lighting.
B. Set an auto-shutoff policy for all the lights in the room to reduce energy consumption after hours.
C. Replace all incandescent lighting with energy-saving fluorescent lighting.
D. Consolidate server systems into a lower number of racks, centralizing airflow and cooling in the room.
Correct answer: C

QUESTION 2: Which of the following methods effectively removes data from a hard drive prior to disposal? (Select TWO.)
A. Use the remove-hardware OS feature
B. Formatting the hard drive
C. Physical destruction
D. Degauss the drive
E. Overwriting data with 1s and 0s by utilizing software
Correct answers: C, E

QUESTION 3: Which of the following terms is used when printing data on both the front and the back of paper?
A. Scaling
B. Copying
C. Duplex
D. Simplex
Correct answer: C

QUESTION 4: A user reports that their cell phone battery is dead and cannot hold a charge. ...
Vertigo: Automatic Performance-Setting for Linux
USENIX Association. In: Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI), Boston, Massachusetts, USA, December 9-11, 2002. Krisztián Flautner, ARM Limited, 110 Fulbourn Road, Cambridge, UK CB1 9NJ; Trevor Mudge, The University of Michigan, 1301 Beal Avenue, Ann Arbor, MI 48109-2122.

Abstract: Combining high performance with low power consumption is becoming one of the primary objectives of processor designs. Instead of relying just on sleep mode for conserving power, an increasing number of processors take advantage of the fact that reducing the clock frequency and corresponding operating voltage of the CPU can yield a quadratic decrease in energy use.

[Introduction, mid-sentence fragment:] ... player, game machine, camera, GPS, even the wallet, into a single device. This requires processors that are capable of high performance and modest power consumption. Moreover, to be power efficient, the processors for the next-generation communicator need to take advantage of the highly variable performance requirements of the applications they are likely to run. For example, an MPEG video player requires about an order ...
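The "quadratic decrease in energy use" mentioned in the abstract follows from the usual CMOS dynamic-power model; the short derivation below is a standard argument and is not taken from the paper itself:

```latex
% Energy per fixed workload under frequency/voltage scaling
% (standard argument; N_cycles is the amount of work to be done).
\begin{align*}
  P_{\mathrm{dyn}}  &= C \, V^{2} f \\
  E_{\mathrm{task}} &= P_{\mathrm{dyn}} \cdot \frac{N_{\mathrm{cycles}}}{f}
                     = C \, V^{2} \, N_{\mathrm{cycles}} \\
  V \propto f \;\Longrightarrow\; E_{\mathrm{task}} &\propto f^{2}
\end{align*}
```

Running slower does not by itself save energy for a fixed amount of work; the saving comes from the lower supply voltage that the lower clock frequency permits.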
Scalable NUMA-Aware Blocking Synchronization Primitives
Sanidhya Kashyap, Changwoo Min, Taesoo Kim (presentation slides).

Slide: The rise of big NUMA machines.

Slide: Importance of NUMA awareness. (Diagram contrasting a NUMA-oblivious lock, where workers on both NUMA nodes contend directly for the lock protecting a shared file, with a NUMA-aware/hierarchical lock, where workers are grouped by node before competing for the lock.) Idea: make synchronization primitives NUMA-aware!

Slide: Lock research efforts and their use (adoption/modification in the Linux kernel): Dekker's algorithm (1962), semaphore (1965), Lamport's bakery algorithm (1974), backoff lock (1989), ticket lock (1991), MCS lock (1991), HBO lock (2003), hierarchical lock HCLH (2006), flat combining NUMA lock (2011), remote core locking (2012), cohort lock (2012), RW cohort lock (2013), Malthusian lock (2014), HMCS lock (2015), AHMCS lock (2016). On the second version of the slide, a bracket marks the more recent entries (around the cohort lock, RW cohort lock, HMCS lock, and AHMCS lock) as NUMA-aware locks.
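To make the hierarchical idea concrete, here is a minimal two-level NUMA-aware spinlock sketch in C11. It illustrates the general cohort-lock idea only, not the primitive proposed in the slides: each node has a local lock, so at most one thread per node ever contends for the global lock, and cross-node cache-line traffic stays low. Real cohort locks additionally hand the global lock to a same-node successor for a bounded number of acquisitions.

```c
#include <stdatomic.h>

#define MAX_NODES 4                      /* assumption: fixed, small node count */

/* Plain test-and-set spinlock; initialize each 'held' flag with ATOMIC_FLAG_INIT. */
typedef struct { atomic_flag held; } spin_t;

static void spin_lock(spin_t *l)
{
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
        ;  /* spin */
}

static void spin_unlock(spin_t *l)
{
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}

typedef struct {
    spin_t global;                       /* contended by at most one thread per node */
    spin_t local[MAX_NODES];             /* contended only within one NUMA node */
} numa_lock_t;

/* 'node' is the caller's NUMA node, e.g. obtained via getcpu(). */
void numa_lock(numa_lock_t *l, int node)
{
    spin_lock(&l->local[node]);          /* step 1: win the node-local lock   */
    spin_lock(&l->global);               /* step 2: then take the global lock */
}

void numa_unlock(numa_lock_t *l, int node)
{
    spin_unlock(&l->global);
    spin_unlock(&l->local[node]);
}
```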
Power Reduction Techniques for Microprocessor Systems
Vasanth Venkatachalam and Michael Franz, University of California, Irvine.

Abstract: Power consumption is a major factor that limits the performance of computers. We survey the "state of the art" in techniques that reduce the total power consumed by a microprocessor system over time. These techniques are applied at various levels ranging from circuits to architectures, architectures to system software, and system software to applications. They also include holistic approaches that will become more important over the next decade. We conclude that power management is a multifaceted discipline that is continually expanding, with new techniques being developed at every level. These techniques may eventually allow computers to break through the "power wall" and achieve unprecedented levels of performance, versatility, and reliability. Yet it remains too early to tell which techniques will ultimately solve the power problem.

Categories and Subject Descriptors: C.5.3 [Computer System Implementation]: Microcomputers - Microprocessors; D.2.10 [Software Engineering]: Design - Methodologies; I.m [Computing Methodologies]: Miscellaneous. General Terms: Algorithms, Design, Experimentation, Management, Measurement, Performance. Additional Key Words and Phrases: Energy dissipation, power reduction.

1. INTRODUCTION: Computer scientists have always tried to improve the performance of computers. But although today's computers are much faster and far more versatile than their predecessors, they also consume a lot of power; so much power, in fact, that their power densities and concomitant heat generation are rapidly approaching levels comparable to nuclear reactors (Figure 1). These high power densities impair chip reliability and life expectancy, increase cooling costs, and, for large ... (Footnote: parts of this effort have been sponsored by the National Science Foundation under ITR grant CCR-0205712 and by the Office of Naval Research under grant N00014-01-1-0854.)
Vertigo: Automatic Performance-Setting for Linux
Presentation slides on the paper by Krisztián Flautner (ARM Limited, Cambridge, UK) and Trevor Mudge (The University of Michigan); presented by Choi Hojung.

Background (1): In 2002 there was a need for processors that combine low power with high performance, from embedded computers to servers, including high-performance devices that are battery operated.

Background (2), Intel SpeedStep: SpeedStep has no built-in performance-setting policy; it takes a simple approach driven by the usage model. On AC power the processor runs at the higher speed; on battery power it runs at a slower speed, saving battery power.

Background (3)-(4), LongRun: LongRun, for Transmeta's Crusoe, is power management that dynamically manages frequency and voltage levels at runtime. It uses historical utilization to guide clock-rate selection, is implemented in the processor's firmware, and is an interval-based algorithm. Flaws and questions: because it runs in the processor's firmware, utilization patterns can be obscured when all tasks are observed in the aggregate; it has no information about interactive performance at the operating-system level; and it is doubtful that a single algorithm can perform well under all conditions.

Background (5), DVS: Dynamic voltage scaling (also called dynamic voltage and frequency scaling, DVFS) reduces the power consumed by a processor by lowering its operating voltage: P = C * V^2 * f, where P is the power consumption, V the supply voltage, C the capacitance, and f the operating frequency.

Proposed: Vertigo is implemented at the OS kernel level so that it can use a richer set of data for prediction, and it reduces the processor's performance ...
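Since both LongRun and Vertigo are interval-based policies at heart, the following C sketch shows the general shape of such a policy; it is a generic illustration under assumed frequency steps and headroom factor, not the actual LongRun or Vertigo algorithm:

```c
#include <stddef.h>

/* Generic interval-based DVS policy sketch: at the end of each interval, pick
 * the next clock frequency from the utilization observed during the interval,
 * leaving some headroom.  Frequency table and headroom factor are assumptions. */
static const unsigned freqs_mhz[] = { 300, 600, 800, 1000 };
#define NFREQ (sizeof(freqs_mhz) / sizeof(freqs_mhz[0]))

/* 'cur' indexes freqs_mhz; 'utilization' is the busy fraction (0.0..1.0) of the
 * last interval, measured while running at freqs_mhz[cur]. */
size_t next_freq(size_t cur, double utilization)
{
    /* Demand normalized to the fastest setting. */
    double demand = utilization * freqs_mhz[cur] / freqs_mhz[NFREQ - 1];

    demand *= 1.2;                       /* headroom so we do not fall behind */
    if (demand > 1.0)
        demand = 1.0;

    /* Choose the slowest frequency that still covers the demand. */
    for (size_t i = 0; i < NFREQ; i++)
        if (freqs_mhz[i] >= demand * freqs_mhz[NFREQ - 1])
            return i;
    return NFREQ - 1;
}
```

Vertigo's contribution, per the slides, is that an in-kernel implementation can feed such a decision with richer information (for example, which task is running and whether it is interactive) than firmware observing only aggregate utilization can.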
ECE 571 – Advanced Microprocessor-Based Design Lecture 28
ECE 571: Advanced Microprocessor-Based Design, Lecture 28. Vince Weaver, http://web.eece.maine.edu/~vweaver, [email protected], 9 November 2020.

Announcements: HW#9 will be posted; read the AMD Zen 3 article. Remember, no class on Wednesday.

When can we scale the CPU down? When the system is idle; when the system is memory- or I/O-bound; when poor multi-threaded code is spinning in spin locks; during a thermal emergency; or by user preference (wanting the fans to run less).

Non-CPU power saving: RAM, GPU, Ethernet/wireless, disk, PCI, USB.

GPU power saving (from Intel's lesswatts.org): framebuffer compression, backlight control, minimized vertical-blank interrupts, auto display brightness. From LWN (http://lwn.net/Articles/318727/): clock gating or reclocking; fewer memory accesses via compression (a simpler background image means lower power); a moving mouse costs about 15 W versus 2 W for a blinking cursor; powering off unneeded output ports saves about 0.5 W; lowering the refresh rate of the LVDS (low-voltage differential signaling) interface saves about 0.5 W (artifacts start to appear).

More LCD: when an LCD cell is not powered it is not twisted, and light comes through. An active-matrix display has a transistor and capacitor at each pixel (often with 255 levels of brightness) and must be refreshed like memory, usually one row at a time.

Ethernet: the PHY (transmitter) can take several watts; Wake-on-LAN can draw power even when the system is turned off; gigabit draws 2-4 W more than 100-megabit, and 10-gigabit 10-20 W more than 100-megabit; renegotiating speeds takes up to 2 seconds; Green Ethernet is IEEE 802.3az.

WLAN: power-save poll (go to sleep and have queued-up packets delivered later, at a latency cost); auto-association (how aggressively the card searches for access points); the RF-kill switch; unnecessary Bluetooth.

Disks: SATA Aggressive Link Power Management shuts the link down when there has been no I/O for a while, saving up to 1.5 W; filesystem atime updates; disk power management (spin-down, which affects the lifetime of the drive); VM writeback (queuing up writes saves power, but a power failure is potentially worse).

Soundcards: low-power mode.

USB: autosuspend. ...
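As a concrete illustration of CPU frequency scaling from user space, here is a minimal sketch that inspects the cpufreq state of cpu0 through sysfs on a typical Linux system. This is an assumption about a common setup, not something the lecture provides; the exact files available depend on the kernel and cpufreq driver.

```c
/* Minimal sketch: read cpufreq state for cpu0 via sysfs on a typical Linux
 * system.  Paths and available files vary with the kernel and driver. */
#include <stdio.h>

static void show(const char *label, const char *path)
{
    char buf[256];
    FILE *f = fopen(path, "r");
    if (!f) { printf("%s: (not available)\n", label); return; }
    if (fgets(buf, sizeof(buf), f)) printf("%s: %s", label, buf);
    fclose(f);
}

int main(void)
{
    const char *base = "/sys/devices/system/cpu/cpu0/cpufreq/";
    char path[512];

    snprintf(path, sizeof(path), "%s%s", base, "scaling_governor");
    show("governor", path);
    snprintf(path, sizeof(path), "%s%s", base, "scaling_cur_freq");
    show("current kHz", path);
    snprintf(path, sizeof(path), "%s%s", base, "scaling_available_frequencies");
    show("available kHz", path);
    return 0;
}
```

Writing a different governor name into scaling_governor (as root) is the usual way to switch policies, which is how several of the power-saving decisions above get applied in practice.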
Non-Scalable Locks Are Dangerous
Silas Boyd-Wickizer, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich, MIT CSAIL.

Abstract: Several operating systems rely on non-scalable spin locks for serialization. For example, the Linux kernel uses ticket spin locks, even though scalable locks have better theoretical properties. Using Linux on a 48-core machine, this paper shows that non-scalable locks can cause dramatic collapse in the performance of real workloads, even for very short critical sections. The nature and sudden onset of collapse are explained with a new Markov-based performance model. Replacing the offending non-scalable spin locks with scalable spin locks avoids the collapse and requires modest changes to source code.

1 Introduction: It is well known that non-scalable locks, such as simple spin locks, have poor performance when highly con[tended] [...] more cores. The paper presents a predictive model of non-scalable lock performance that explains this phenomenon. A third element of the argument is that critical sections which appear to the eye to be very short, perhaps only a few instructions, can nevertheless trigger performance collapses. The paper's model explains this phenomenon as well. Naturally we argue that one should use scalable locks [1, 7, 9], particularly in operating system kernels where the workloads and levels of contention are hard to control. As a demonstration, we replaced Linux's spin locks with scalable MCS locks [9] and re-ran the software that caused performance collapse. For 3 of the 4 benchmarks the changes in the kernel were simple. For the 4th case the changes were more involved because the directory cache uses a complicated locking plan and the directory cache in general is complicated.
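For reference, here is a minimal C11 sketch of the kind of ticket spin lock the paper discusses (a generic illustration, not the Linux kernel implementation). Every waiter spins reading the same now_serving word, so each release forces a cache-line transfer to every waiter, which is the traffic the paper's Markov model captures.

```c
#include <stdatomic.h>

/* Minimal ticket spin lock sketch (illustration, not the Linux kernel code). */
typedef struct {
    atomic_uint next_ticket;   /* ticket dispenser */
    atomic_uint now_serving;   /* ticket currently allowed into the critical section */
} ticket_lock_t;

void ticket_lock(ticket_lock_t *l)
{
    unsigned me = atomic_fetch_add_explicit(&l->next_ticket, 1, memory_order_relaxed);
    /* All waiters spin on the same cache line: every unlock invalidates it in
     * every waiter's cache, so handoff time grows with the number of waiters. */
    while (atomic_load_explicit(&l->now_serving, memory_order_acquire) != me)
        ;  /* spin */
}

void ticket_unlock(ticket_lock_t *l)
{
    atomic_store_explicit(&l->now_serving,
                          atomic_load_explicit(&l->now_serving, memory_order_relaxed) + 1,
                          memory_order_release);
}
```

A scalable lock such as MCS instead has each waiter spin on its own queue node, so a release touches only the successor's cache line regardless of how many threads are waiting.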
Dynamic Thermal Management for High-Performance Microprocessors
David Brooks and Margaret Martonosi, Department of Electrical Engineering, Princeton University ([email protected], [email protected]).

Abstract: With the increasing clock rate and transistor count of today's microprocessors, power dissipation is becoming a critical component of system design complexity. Thermal and power-delivery issues are becoming especially critical for high-performance computing systems. In this work, we investigate dynamic thermal management as a technique to control CPU power dissipation. With the increasing usage of clock gating techniques, the average power dissipation typically seen by common applications is becoming much less than the chip's rated maximum power dissipation. However, system designers still must design thermal heat sinks to withstand the worst-case scenario. We define and investigate the major components of any dynamic thermal management scheme.

[Introduction, fragment:] ... [23]. The second major source of design complexity involves power delivery, specifically the on-chip decoupling capacitances required by the power distribution network. Unfortunately, these cooling techniques must be designed to withstand the maximum possible power dissipation of the microprocessor, even if these cases rarely occur in typical applications. For example, while the Alpha 21264 processor is rated as having a maximum power dissipation of 95 W when running "max-power" benchmarks, the average power dissipation was found to be only 72 W for typical applications [10]. The increased use of clock gating and other power management techniques that target average power dissipation will expand this gap even further in future processors. This disparity between the maximum ...
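In its simplest form, a dynamic thermal management scheme is a feedback loop: a trigger (a temperature estimate crossing a threshold), a response (for example, lowering frequency and voltage or gating the clock aggressively), and an exit condition. The sketch below illustrates that loop under assumed thresholds and platform hooks; it is not the mechanism evaluated in the paper.

```c
/* Minimal DTM control-loop sketch (illustrative assumptions, not the paper's
 * mechanism): throttle when the temperature crosses a trigger threshold and
 * restore full speed once it drops below a lower release threshold. */

#define TRIGGER_C  85.0   /* start throttling above this temperature */
#define RELEASE_C  78.0   /* stop throttling below this temperature (hysteresis) */

enum dtm_state { DTM_NORMAL, DTM_THROTTLED };

/* Assumed platform hooks. */
extern double read_die_temp_celsius(void);
extern void   set_speed_percent(int pct);

/* Call periodically, e.g. from a timer interrupt or kernel thread. */
void dtm_tick(enum dtm_state *state)
{
    double t = read_die_temp_celsius();

    if (*state == DTM_NORMAL && t >= TRIGGER_C) {
        set_speed_percent(50);        /* response: halve clock (and voltage if possible) */
        *state = DTM_THROTTLED;
    } else if (*state == DTM_THROTTLED && t <= RELEASE_C) {
        set_speed_percent(100);       /* exit: thermal headroom restored */
        *state = DTM_NORMAL;
    }
}
```

The hysteresis gap between the trigger and release thresholds keeps the controller from oscillating rapidly between the two states.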
Recoverable Mutual Exclusion for Non-Volatile Main Memory Systems
RGLock: Recoverable Mutual Exclusion for Non-Volatile Main Memory Systems, by Aditya Ramaraju. A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master of Applied Science in Electrical and Computer Engineering (Computer Software). Waterloo, Ontario, Canada, 2015. © Aditya Ramaraju 2015.

Abstract: Mutex locks have traditionally been the most popular concurrent programming mechanism for inter-process synchronization in the rapidly advancing field of concurrent computing systems that support high-performance applications. However, the recoverability of these algorithms in the event of a crash failure has not been studied thoroughly. Popular techniques like transaction rollback are widely known for providing fault tolerance in modern database management systems, whereas in the context of mutual exclusion in shared-memory systems, none of the prominent lock algorithms (e.g., Lamport's bakery algorithm, the MCS lock, etc.) are designed to tolerate crash failures, especially in operations carried out in the critical sections. Each of these algorithms may fail to maintain mutual exclusion, or sacrifice some of the liveness guarantees, in the presence of crash failures. Storing application data and recovery information in primary storage built from conventional volatile memory limits the development of efficient crash-recovery mechanisms, since a failure of any component in the system causes a loss of program data.