Implications of Historical Trends in the Electrical Efficiency of Computing

Jonathan G. Koomey, Stanford University
Stephen Berard, Microsoft
Marla Sanchez, Carnegie Mellon University
Henry Wong, Intel

The electrical efficiency of computation has doubled roughly every year and a half for more than six decades, a pace of change comparable to that for computer performance and electrical efficiency in the microprocessor era. These efficiency improvements enabled the creation of laptops, smart phones, wireless sensors, and other mobile computing devices, with many more such innovations yet to come.

Valentine's Day 1946 was a pivotal date in human history. It was on that day that the US War Department formally announced the existence of the Electronic Numerical Integrator and Computer (ENIAC).1 ENIAC's computational engine had no moving parts and used electrical pulses for its logical operations. Earlier computing devices relied on mechanical relays and possessed computational speeds three orders of magnitude slower than ENIAC.

Moving electrons is inherently faster than moving atoms, and shifting to electronic digital computing began a march toward ever-greater and cheaper computational power that continues even to this day. These trends proceed at easily measurable, remarkably predictable, and unusually rapid rates. For example, the number of transistors on a chip has doubled more or less every two years for decades, a trend that is popularly (but often imprecisely) encapsulated as Moore's law (see Figure 1).

Moore's law has seen several incarnations, some more accurate than others. In its original form, it was not a physical law but an "empirical observation" that described economic trends in chip production.2 (It has also sometimes served as a benchmark for progress in chip design, which we discuss later in this article.) As Gordon Moore put it in his original article, "The complexity [of integrated circuits] for minimum component costs has increased at a rate of roughly a factor of two per year,"3 where complexity is defined as the number of components (not just transistors) per chip.

The original statement of Moore's law has been modified over the years in several different ways, as previous research has established.4,5 The trend, as Moore initially defined it, relates to the minimum component costs at current levels of technology. All other things being equal, the cost per component decreases as more components are added to a chip, but because of defects, the yield of chips goes down with increasing complexity.6 As semiconductor technology improves, the cost curve shifts down, making increased component densities cheaper (see Figure 1 in Moore's 1965 paper3).
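Moore's cost argument can be made concrete with a simple illustrative model (our notation and functional form for intuition only, not Moore's own). Suppose a processed wafer costs C_w and carries N components, and suppose yield falls exponentially with complexity, Y(N) = e^{-N/N_0}, where N_0 reflects the defect density of the current process. The cost per good component is then

    c(N) = \frac{C_w}{N \, Y(N)} = \frac{C_w}{N} \, e^{N/N_0}, \qquad \frac{dc}{dN} = 0 \;\Rightarrow\; N^{*} = N_0

The cost per component falls as 1/N while the chip is small but rises once defect losses dominate, so there is a cost-minimizing complexity N* for each process generation. As fabrication improves, N_0 grows and the minimum shifts toward higher component counts, which is the downward-shifting cost curve Moore described.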
In 1975, Moore modified his observation to a doubling of complexity every two years,7 which reflected a change in the economics and technology of chip production at that time as well as a change in his conceptualization of the law. That rate of increase in chip complexity has held for more than three decades, which is a reflection mainly of the underlying characteristics of semiconductor manufacturing during that period. As Ethan Mollick explained,5 Moore's law is in some sense a self-fulfilling prophecy: the industry's engineers have used Moore's law as a benchmark to which they calibrated their rate of innovation. This result is partly driven by business dynamics, as David E. Liddle points out: "Moore's law expresses that rate of semiconductor process improvement which transfers the maximum profit from the computer industry to the semiconductor industry."2

[Figure 1. Transistor counts for microprocessors over time (thousands). The doubling time from 1971 to 2006 is approximately 1.8 years. (Data courtesy of James Larus, Microsoft Corporation)]

The striking predictive power of Moore's law has prompted many to draw links between chip complexity and other aspects of computer systems. One example is the popular summary of Moore's law ("computing performance doubles every 18 months"), which correlates well with trends in personal computer systems to date but is a statement that Moore never made. Another is "Moore's law for power," coined by Wu-chun Feng to describe changes in the electricity used by computing nodes in supercomputer installations during a period of rapid growth in power use for servers ("power consumption of compute nodes doubles every 18 months").8

This article describes the implications of the relationship between the processing power of computers (which in the microprocessor era has been driven by Moore's law) and the electricity required to deliver that performance. Over the past 65 years, the steps taken to improve computing performance also invariably increased the electrical efficiency of computing, whether the logical gates consisted of vacuum tubes and diodes, discrete transistors, or microprocessors. The most important historical effect of this relationship has been to enable the creation of mobile computing devices such as laptop computers. If these trends continue, they will have important implications for more widespread future use of mobile computing, sensors, and controls.

Methods for Deriving Trends

Analyzing long-term trends is a tricky business. Ideally we'd have performance and energy use data for all types of computers in all applications since 1946. In practice, such data do not exist, so we compiled available data to piece together the long-term trends in the electrical efficiency of computation.

To estimate computations per kilowatt-hour, we focused on the full-load computational capacity and the direct active electrical power for each machine, dividing the number of computations possible per hour at full computational load by the number of kilowatt-hours consumed over that same hour. This metric says nothing about the power used by computers when they are idle or running at less than full load, but it is a well-defined measure of the efficiency of this technology, and it helps show how the technology has changed over time.
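As a concrete illustration of this metric, here is a minimal sketch in Python (the numbers are hypothetical placeholders, not values from our dataset):

    # Computations per kilowatt-hour at full load: computations completed
    # in one hour divided by the kilowatt-hours drawn over that same hour.
    def computations_per_kwh(comps_per_second, full_load_watts):
        comps_per_hour = comps_per_second * 3600.0   # computations in one hour
        kwh_per_hour = full_load_watts / 1000.0      # kW drawn for one hour = kWh
        return comps_per_hour / kwh_per_hour

    # Hypothetical machine: 1e9 computations/second at 200 W full load.
    print("%.3e computations/kWh" % computations_per_kwh(1e9, 200.0))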
Measuring computing performance has always been controversial, and this article will not settle those issues. In 2007, William D. Nordhaus published a sophisticated and comprehensive historical analysis of computing performance over time,9 and that's the source for performance data on which we relied most heavily. We combined those data with measured data on the power use of each computer when operating at full load to calculate computations per kilowatt-hour. (More details on the data and methods are available in our Web Extra appendix, see http://doi.ieeecomputersociety.org/10.1109/MAHC.2010.28.)

Historical Trend Results

Figure 2 shows performance per computer for all the computers included in Nordhaus' analysis from 1946 onward. We also added the 40 additional machines from this analysis for which measured power and performance were available. The figure does not include performance estimates for recent large-scale supercomputers (for example, those at www.top500.org), but it does include measurements for server models that are often used as computing nodes for those machines.

[Figure 2. Computational capacity over time (computations/second per computer). These data are based on William D. Nordhaus' 2007 work,9 with additional data added post-1985 for computers not considered in his study. Doubling time for personal computers only (1975 to 2009) is 1.5 years.]

The trends for microprocessor-based computers are clear. The performance per unit for PCs, regressed over time, shows a doubling time of 1.50 years from 1975 (the introduction date of the Altair 8800 kit PC) to 2009.10 This rate corresponds to the popular interpretation of Moore's law, but not to its exact 1975 formulation.

Figure 3 shows the results in terms of the number of calculations per kilowatt-hour of electricity consumed for the computers for which both performance and measured power data are available. Computers built with discrete transistors used less power than those made with vacuum tubes and diodes, and the transition to transistors also led to a period of great technological innovation as engineers experimented with different ways to build these machines to maximize performance and improve reliability.

Computations per kilowatt-hour doubled every 1.57 years over the entire analysis period, a rate of improvement only slightly slower than that for PCs, which saw efficiency double every 1.52 years from 1975 to 2009 (see Figure 4). The data show significant increases in computational efficiency even during the vacuum tube and discrete-transistor eras. From 1946 (ENIAC) to 1958 (when the last of the primarily tube-based computers in our sample came on line), computations per kilowatt-hour doubled every 1.35 years. Computations per kilowatt-hour increased even more rapidly during the shift from tubes to transistors, but the pace of change slowed during the era of discrete transistors.

In the recent years for which we have more than a few data points (2001, 2004, 2008, and 2009), there is a factor of two or three separating the lowest and highest estimates of computations per kilowatt-hour, which indicates substantial variation in the data in any given year. This variation is partly the result of including different types of computers in the sample (desktops, servers, laptops, and supercomputers), but the differences tend to be swamped by the rapid increase in performance per computer over time, which drives the results.
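The doubling times reported here follow from fitting an exponential growth curve, that is, regressing the logarithm of the metric on calendar year. A minimal sketch of that calculation (ordinary least squares via NumPy, with made-up illustrative data; our actual series and fitting details are in the Web Extra appendix):

    # Doubling time from a log-linear fit: if log2(metric) grows by
    # `slope` per year, the metric doubles every 1/slope years.
    import numpy as np

    years = np.array([1975.0, 1985.0, 1995.0, 2005.0, 2009.0])
    comps_per_kwh = np.array([1e6, 1e8, 1e10, 1e12, 1e13])  # illustrative only

    slope, intercept = np.polyfit(years, np.log2(comps_per_kwh), 1)
    doubling_time_years = 1.0 / slope

    print("doubling time ~ %.2f years" % doubling_time_years)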
Explaining These Trends

Even current computing technology is far from the minimum theoretically possible energy used per computation.12 In 1985, the physicist Richard Feynman analyzed the electricity needed for computers that use electrons for switching and estimated that there was a factor of 10^11 improvement that was theoretically possible.
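For perspective, a back-of-envelope calculation (our arithmetic, combining Feynman's factor with the 1.57-year doubling time above; it assumes the historical trend simply continues, which nothing guarantees): a factor of 10^11 corresponds to

    \log_2 10^{11} = 11 \log_2 10 \approx 36.5 \ \text{doublings}, \qquad 36.5 \times 1.57 \approx 57 \ \text{years}

so, at one doubling every 1.57 years, roughly six decades of continued improvement would remain from the mid-1980s technology Feynman analyzed before that limit were reached.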