The Role of Performance
Ming-Hwa Wang, Ph.D.
COEN 210 Computer Architecture, Department of Computer Engineering, Santa Clara University

Introduction
- hardware performance is often key to the effectiveness of an entire system of hardware and software
- accurately measuring and comparing different machines is critical to purchasers, and therefore to designers
- for different types of applications, different performance metrics may be appropriate, and different aspects of a computer system may be the most significant in determining overall performance
- an individual computer user is interested in reducing response/execution time (the time between the start and completion of a task); computer center managers are often interested in increasing throughput (the total amount of work done in a given time)
- for a machine X, performance_X = 1 / execution time_X
- balancing performance and cost:
  - high-performance design (e.g., supercomputer) – performance is the primary goal and cost is secondary
  - low-cost design (e.g., PC clones, embedded computers) – cost takes precedence over performance
  - cost/performance design (e.g., workstation) – balances cost against performance

Measuring Performance
- time is the measure of computer performance
- execution/response/wall-clock/elapsed time is the total time to complete a task, measured in seconds
- CPU time is the time the CPU spends computing for a task and does not include time spent waiting for I/O or running other programs (system performance refers to elapsed time on an unloaded system)
  - user CPU time, or CPU performance: the CPU time spent in the program
  - system CPU time: the CPU time spent in the OS
  - the Unix time command reports these times
- clock rate is the inverse of the clock cycle (or tick, clock tick, clock period, clock, cycle), usually published as part of the documentation for a machine

Relating the Metrics
- CPU execution time for a program = CPU clock cycles for a program * clock cycle time = CPU clock cycles for a program / clock rate, measured by running the program
- CPU clock cycles = instructions for a program * average CPI (clock cycles per instruction)
- measuring the instruction count (which depends on the architecture but not on the exact implementation) can be done with hardware counters, with software tools that profile the execution, or with a simulator of the architecture
- CPI = CPU clock cycles / instruction count provides one way of comparing two different implementations of the same instruction set architecture; CPI varies by application as well as among implementations with the same instruction set
- CPU time = instruction count * CPI * clock cycle time = instruction count * CPI / clock rate
- CPU execution time = (instructions / program) * (clock cycles / instruction) * (seconds / clock cycle)
- CPU clock cycles = sum over i = 1..n of (CPI_i * C_i), where C_i is the count of the number of instructions of class i executed, CPI_i is the average number of cycles per instruction for that instruction class, and n is the number of instruction classes
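The relations above can be checked with a short calculation. The following is a minimal Python sketch; the instruction classes, counts, CPIs, and the 500 MHz clock rate are invented for illustration and are not taken from these notes or any measured machine.

```python
# Sketch of the CPU performance equation with hypothetical numbers.

def cpu_time(instruction_count, cpi, clock_rate_hz):
    """CPU time = instruction count * CPI / clock rate (seconds)."""
    return instruction_count * cpi / clock_rate_hz

def clock_cycles_by_class(class_counts, class_cpis):
    """CPU clock cycles = sum over classes i of CPI_i * C_i."""
    return sum(c * cpi for c, cpi in zip(class_counts, class_cpis))

if __name__ == "__main__":
    # Hypothetical program: 2 billion instructions in three classes
    # (ALU, load/store, branch) on a 500 MHz machine.
    counts = [1.2e9, 0.6e9, 0.2e9]
    cpis = [1.0, 2.0, 3.0]

    cycles = clock_cycles_by_class(counts, cpis)
    total_instructions = sum(counts)
    average_cpi = cycles / total_instructions      # 1.5 for these numbers
    clock_rate = 500e6                             # clock cycle time = 2 ns

    print(f"average CPI        = {average_cpi:.2f}")
    print(f"CPU clock cycles   = {cycles:.3e}")
    print(f"CPU execution time = {cpu_time(total_instructions, average_cpi, clock_rate):.3f} s")
```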
Choosing Programs to Evaluate Performance
- performance measurements should be reproducible, by listing everything another experimenter would need to duplicate the results
- workload – the set of programs run
- benchmarks – programs specifically chosen to measure performance; the benchmarks form a workload that the user hopes will predict the performance of the actual workload
- the best type of programs to use for benchmarks are real applications that the user employs regularly, or simply applications that are typical
- using real applications as benchmarks makes it much more difficult to find trivial ways to speed up the execution of the benchmark; and when techniques are found to improve performance, such techniques are much more likely to help other programs in addition to the benchmark
- the use of benchmarks whose performance depends on very small code segments encourages optimizations in either the architecture or the compiler that target those segments
  - a designer might try to make some sequence of instructions run especially fast because the sequence occurs in the benchmarks, or add specific compiler options for special-purpose optimizations
  - sometimes, in the quest to produce highly optimized code for benchmarks, engineers introduce erroneous optimizations
- small benchmarks are attractive when beginning a design and are more easily standardized
- CPU benchmarks – the SPEC (System Performance Evaluation Cooperative) suite
  - the first release in 1989 consists of 4 integer and 6 floating-point benchmarks; matrix300 was meant to exercise the computer's memory system, but optimization by blocking transformations substantially lowered the number of memory accesses required and transformed the inner loops from having a high cache miss rate to having an almost negligible cache miss rate, reorganizing the program to minimize memory usage, so it was eliminated from later releases
  - the 1992 release (SPEC92) separates integer and floating-point programs (SPECint and SPECfp), and SPECbase disallows program-specific optimization flags
  - the 1995 release (the SPEC95 suite) consists of 8 integer (SPECint95) and 10 floating-point (SPECfp95) programs
    - SPEC ratio: normalize the execution time by dividing the execution time on a Sun SPARCstation 10/40 by the execution time on the measured machine
  - the SDM (System Development Multitasking) benchmark
  - the SFS (System-level File Server) benchmark
  - the 1996 release adds SPEChpc96 for high-end scientific workloads

Comparing and Summarizing Performance
- the simplest approach to summarizing relative performance is to use total execution time as a performance metric
- the average of the execution times that is directly proportional to total execution time is the arithmetic mean, AM = (sum over i = 1..n of Time_i) / n
- weighted arithmetic mean: assign a weighting factor w_i to each program to indicate the frequency of the program in that workload
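The summary statistics above are easy to demonstrate with toy numbers. The sketch below compares two hypothetical machines on two programs; the machine names, times, and workload weights are made up for illustration. It shows that the arithmetic mean tracks total execution time, while a weighted mean reflects the workload mix.

```python
# Sketch of summarizing performance with total execution time,
# the arithmetic mean, and a weighted arithmetic mean.

def arithmetic_mean(times):
    return sum(times) / len(times)

def weighted_arithmetic_mean(times, weights):
    # weights w_i reflect how often each program occurs in the workload;
    # they are assumed to sum to 1.
    return sum(w * t for w, t in zip(weights, times))

if __name__ == "__main__":
    # Execution times (seconds) of two programs on machines A and B.
    times_a = [1.0, 1000.0]
    times_b = [10.0, 100.0]
    weights = [0.9, 0.1]   # program 1 is 90% of this workload

    print("total time A:", sum(times_a), " total time B:", sum(times_b))
    print("AM A:", arithmetic_mean(times_a), " AM B:", arithmetic_mean(times_b))
    print("weighted AM A:", weighted_arithmetic_mean(times_a, weights))
    print("weighted AM B:", weighted_arithmetic_mean(times_b, weights))
```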
Performance of Recent Processors
- for a given instruction set architecture, increases in CPU performance can come from 3 sources:
  - increases in clock rate
  - improvements in processor organization that lower the CPI
  - compiler enhancements that lower the instruction count or generate instructions with a lower average CPI
- the memory system has a significant effect on performance – when the clock rate is increased by a certain factor, processor performance increases by a lower factor because of the performance loss in the memory system; since the speed of main memory is not increased, increasing the processor speed will exacerbate the bottleneck at the memory system, especially on the floating-point benchmarks (Amdahl's law)
- Amdahl's law – speedup is the measure of how a machine performs after some enhancement relative to how it performed previously
  - the earlier version of Amdahl's law: execution time after improvement = execution time affected by improvement / amount of improvement + execution time unaffected
  - speedup = performance after improvement / performance before improvement = execution time before improvement / execution time after improvement

Fallacies and Pitfalls
- pitfall: expecting the improvement of one aspect of a machine to increase performance by an amount proportional to the size of the improvement
  - execution time after improvement = execution time affected by improvement / amount of improvement + execution time unaffected
- a corollary of Amdahl's law: make the common case fast – making the common case fast will tend to enhance performance better than optimizing the rare case, and the common case is often simpler and easier to enhance than the rare case
- fallacy: hardware-independent metrics predict performance (e.g., using code size as a measure of speed)
  - the size of the compiled program is important when memory space is at a premium
  - today, the fastest machines tend to have instruction sets that lead to larger programs but can be executed faster with less hardware
- pitfall: using MIPS as a performance metric (native MIPS = instruction count / (execution time * 10^6))
  - MIPS specifies the instruction execution rate but does not take into account the capabilities of the instructions
  - MIPS varies between programs on the same computer
  - MIPS can vary inversely with performance
- fallacy: synthetic benchmarks predict performance
  - a synthetic benchmark is a single benchmark program created so that the execution frequency of statements in the benchmark matches the statement frequency in a large set of benchmarks
  - Whetstone – for scientific and engineering environments, quoted in Whetstones per second (the number of executions of one iteration of the Whetstone benchmark)
  - Dhrystone – for systems programming environments
  - no user would ever run a synthetic benchmark as an application
  - synthetic benchmarks usually do not reflect program behavior
  - special-purpose optimizations can inflate the performance
- pitfall: using the arithmetic mean of normalized execution times to predict performance
  - geometric mean = (product over i = 1..n of execution time ratio_i)^(1/n)
  - geometric mean(X_i) / geometric mean(Y_i) = geometric mean(X_i / Y_i)
- fallacy: the geometric mean of execution time ratios is proportional to total execution time
  - geometric means do not track total execution time and thus cannot be used to predict relative execution time for a workload

Historical Perspective
- instruction mix – measures the relative frequency of instructions in a computer across many programs
- average instruction execution time – multiply the time for each instruction by its weight in the mix
- peak MIPS – choose an instruction mix that minimizes the CPI, even if that instruction mix is totally impractical
- MFLOPS/MOPS (millions of floating-point/integer operations per second), or megaFLOPS/OPS
  - MFLOPS = number of floating-point operations in a program / (execution time * 10^6)
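To make the Amdahl's-law and geometric-mean items above concrete, here is a small sketch with hypothetical numbers (the 70 s / 30 s split, the 5x improvement, and the reference and measured times are invented). The last two printed values illustrate why a geometric mean of normalized ratios need not track total execution time.

```python
# Sketch of Amdahl's law and the geometric mean of normalized execution times.

def amdahl_execution_time(time_affected, time_unaffected, improvement):
    # execution time after improvement =
    #   execution time affected / amount of improvement + execution time unaffected
    return time_affected / improvement + time_unaffected

def speedup(time_before, time_after):
    return time_before / time_after

def geometric_mean(ratios):
    # geometric mean = (product of execution time ratios) ** (1/n)
    product = 1.0
    for r in ratios:
        product *= r
    return product ** (1.0 / len(ratios))

if __name__ == "__main__":
    # A program spends 70 s in floating point out of 100 s total;
    # the floating-point unit is made 5x faster.
    before = 100.0
    after = amdahl_execution_time(70.0, 30.0, 5.0)
    print(f"time after improvement = {after:.1f} s, speedup = {speedup(before, after):.2f}")

    # SPEC-style ratios: reference-machine time divided by measured-machine time.
    ref = [2.0, 4.0]       # reference machine times for two programs
    measured = [1.0, 8.0]  # measured machine times
    ratios = [r / m for r, m in zip(ref, measured)]
    print(f"geometric mean of ratios = {geometric_mean(ratios):.2f}")
    print(f"total time ratio         = {sum(ref) / sum(measured):.2f}")
```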