The Future of Computing Performance: Game Over or Next Level?
Recommended publications
-
Annual Reports of FCCSET Subcommittee Annual Trip Reports To
Annual Reports of FCCSET Subcommittee. Annual trip reports to supercomputer manufacturers trace the changes in technology and in the industry, 1985-1989. FY 1986 Annual Report of the Federal Coordinating Council on Science, Engineering and Technology (FCCSET), by the FCCSET Committee on High Performance Computing. Summary: During the past year, the Committee met on a regular basis to review government and industry supported programs in research, development, and application of new supercomputer technology. The Committee maintains an overview of commercial developments in the U.S. and abroad. It regularly receives briefings from Government agency sponsored R&D efforts and makes such information available, where feasible, to industry and universities. In addition, the Committee coordinates agency supercomputer access programs and promotes cooperation with particular emphasis on aiding the establishment of new centers and new communications networks. The Committee made its annual visit to supercomputer manufacturers in August and found that substantial progress had been made by Cray Research and ETA Systems toward developing their next generations of machines. The Cray-2 and expanded Cray X-MP series supercomputers are now being marketed commercially; the Committee was briefed on plans for the next generation of Cray machines. ETA Systems is beyond the prototype stage for the ETA-10 and planning to ship one machine this year. The supercomputer vendors continue to have difficulty in obtaining high-performance ICs from U.S. chip makers, leaving them dependent on Japanese suppliers. In some cases, the Japanese chip suppliers are the same companies, e.g., Fujitsu, that provide the strongest foreign competition in the supercomputer market. -
Computer Organization and Architecture Designing for Performance Ninth Edition
COMPUTER ORGANIZATION AND ARCHITECTURE: DESIGNING FOR PERFORMANCE, Ninth Edition. William Stallings. Boston, Columbus, Indianapolis, New York, San Francisco, Upper Saddle River, Amsterdam, Cape Town, Dubai, London, Madrid, Milan, Munich, Paris, Montréal, Toronto, Delhi, Mexico City, São Paulo, Sydney, Hong Kong, Seoul, Singapore, Taipei, Tokyo. Editorial Director: Marcia Horton; Executive Editor: Tracy Dunkelberger; Associate Editor: Carole Snyder; Director of Marketing: Patrice Jones; Marketing Manager: Yez Alayan; Marketing Coordinator: Kathryn Ferranti; Marketing Assistant: Emma Snider; Director of Production: Vince O’Brien; Managing Editor: Jeff Holcomb; Production Project Manager: Kayla Smith-Tarbox; Production Editor: Pat Brown; Manufacturing Buyer: Pat Brown; Creative Director: Jayne Conte; Designer: Bruce Kenselaar; Manager, Visual Research: Karen Sanatar; Manager, Rights and Permissions: Mike Joyce; Text Permission Coordinator: Jen Roach; Cover Art: Charles Bowman/Robert Harding; Lead Media Project Manager: Daniel Sandin; Full-Service Project Management: Shiny Rajesh/Integra Software Services Pvt. Ltd.; Composition: Integra Software Services Pvt. Ltd.; Printer/Binder: Edward Brothers; Cover Printer: Lehigh-Phoenix Color/Hagerstown; Text Font: Times Ten-Roman. Credits: Figure 2.14 reprinted with permission from The Computer Language Company, Inc. Figure 17.10: Buyya, Rajkumar, High-Performance Cluster Computing: Architectures and Systems, Vol I, 1st edition, ©1999, reprinted and electronically reproduced by permission of Pearson Education, Inc., Upper Saddle River, New Jersey. Figure 17.11: reprinted with permission from Ethernet Alliance. Credits and acknowledgments borrowed from other sources and reproduced, with permission, in this textbook appear on the appropriate page within text. Copyright © 2013, 2010, 2006 by Pearson Education, Inc., publishing as Prentice Hall. All rights reserved. Manufactured in the United States of America. -
Trends in Electrical Efficiency in Computer Performance
ASSESSING TRENDS IN THE ELECTRICAL EFFICIENCY OF COMPUTATION OVER TIME. Jonathan G. Koomey*, Stephen Berard†, Marla Sanchez††, Henry Wong**. *Lawrence Berkeley National Laboratory and Stanford University; †Microsoft Corporation; ††Lawrence Berkeley National Laboratory; **Intel Corporation. Contact: [email protected], http://www.koomey.com. Final report to Microsoft Corporation and Intel Corporation. Submitted to IEEE Annals of the History of Computing: August 5, 2009. Released on the web: August 17, 2009. EXECUTIVE SUMMARY: Information technology (IT) has captured the popular imagination, in part because of the tangible benefits IT brings, but also because the underlying technological trends proceed at easily measurable, remarkably predictable, and unusually rapid rates. The number of transistors on a chip has doubled more or less every two years for decades, a trend that is popularly (but often imprecisely) encapsulated as “Moore’s law”. This article explores the relationship between the performance of computers and the electricity needed to deliver that performance. As shown in Figure ES-1, computations per kWh grew about as fast as performance for desktop computers starting in 1981, doubling every 1.5 years, a pace of change in computational efficiency comparable to that from 1946 to the present. Computations per kWh grew even more rapidly during the vacuum tube computing era and during the transition from tubes to transistors but more slowly during the era of discrete transistors. As expected, the transition from tubes to transistors shows a large jump in computations per kWh. In 1985, the physicist Richard Feynman identified a factor of one hundred billion (10^11) possible theoretical improvement in the electricity used per computation. -
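The doubling claim in this excerpt is easy to turn into arithmetic: a fixed doubling time implies exponential growth in computations per kWh. A minimal sketch of that relation (function names are illustrative, not from the article):

```python
import math

def efficiency_gain(years: float, doubling_time: float = 1.5) -> float:
    """Multiplicative gain in computations/kWh after `years`,
    assuming the fixed 1.5-year doubling time cited in the article."""
    return 2.0 ** (years / doubling_time)

def years_to_reach(target_factor: float, doubling_time: float = 1.5) -> float:
    """Years needed to improve efficiency by `target_factor` at that pace."""
    return doubling_time * math.log2(target_factor)

# A decade at this pace yields roughly two orders of magnitude:
print(round(efficiency_gain(10)))        # ~102x
# Feynman's 1e11 theoretical headroom would take ~55 years at this pace:
print(round(years_to_reach(1e11), 1))    # ~54.8 years
```

At a 1.5-year doubling time, exhausting Feynman's eleven-orders-of-magnitude headroom would take on the order of half a century, which is why the article treats it as long-run headroom rather than a near-term target.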
Computer Performance Evaluation and Benchmarking
Computer Performance Evaluation and Benchmarking. EE 382M, Dr. Lizy Kurian John.

Evolution of Single-Chip Microprocessors:

                       1970s       1980s      1990s        2010s
  Transistor Count     10K-100K    100K-1M    1M-100M      100M-10B
  Clock Frequency      0.2-2MHz    2-20MHz    20MHz-1GHz   0.1-4GHz
  Instructions/Cycle   <0.1        0.1-0.9    0.9-2.0      1-100
  MIPS/MFLOPS          <0.2        0.2-20     20-2,000     100-10,000

[Slide examples from Hot Chips 2014 (August 2014): AMD Kaveri; NVIDIA.]

[Figure: power density in microprocessors (W/cm^2), 1970-2010, for processors from the 4004, 8008, 8080, 8085, 8086, 286, 386, and 486 through the Pentium and Core 2, rising past "hot plate" toward nuclear-reactor, rocket-nozzle, and sun's-surface levels. Source: Intel.]

Why performance evaluation? For better processor designs, for better code on existing designs, for better compilers, and for better OS and runtimes.

Lord Kelvin: "To measure is to know." "If you can not measure it, you can not improve it." "I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be." [PLA, vol. 1, "Electrical Units of Measurement", 1883-05-03]

Designs evolve based on analysis: good designs are impossible without good analysis, spanning workload analysis and processor analysis. -
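The rows of this excerpt's table are tied together by the basic performance identity: instruction throughput equals instructions per cycle times clock rate. A small sketch of that relation (illustrative, not from the course slides):

```python
def mips(ipc: float, clock_hz: float) -> float:
    """Million instructions per second = IPC x clock (Hz) / 1e6."""
    return ipc * clock_hz / 1e6

# A late-1970s microprocessor, ~0.1 IPC at 2 MHz, lands near the
# "<0.2 MIPS" cell of the table:
print(mips(0.1, 2e6))     # ~0.2 MIPS
# A 2010s core, ~2 IPC at 4 GHz, lands in the 100-10,000 range
# (per core; multicore chips multiply this further):
print(mips(2.0, 4e9))     # ~8000 MIPS
```

Checking a metric against two columns of the table this way is a useful sanity test when reading vendor MIPS/MFLOPS claims: any two of IPC, clock, and throughput determine the third.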
Hillview: a Trillion-Cell Spreadsheet for Big Data
Hillview: A trillion-cell spreadsheet for big data. Mihai Budiu, Parikshit Gopalan, Lalith Suresh, Udi Wieder, Marcos K. Aguilera (VMware Research); Han Kruiger (University of Utrecht). [email protected], [email protected], [email protected], [email protected], [email protected]. ABSTRACT: Hillview is a distributed spreadsheet for browsing very large datasets that cannot be handled by a single machine. As a spreadsheet, Hillview provides a high degree of interactivity that permits data analysts to explore information quickly along many dimensions while switching visualizations on a whim. To provide the required responsiveness, Hillview introduces visualization sketches, or vizketches, as a simple idea to produce compact data visualizations. Vizketches combine algorithmic techniques for data summarization with computer graphics principles for efficient rendering. While simple, vizketches are effective at scaling the spreadsheet by parallelizing computation, reducing communication, providing... Unfortunately, enterprise data is growing dramatically, and current spreadsheets do not work with big data, because they are limited in capacity or interactivity. Centralized spreadsheets such as Excel can handle only millions of rows. More advanced tools such as Tableau can scale to larger data sets by connecting a visualization front-end to a general-purpose analytics engine in the back-end. Because the engine is general-purpose, this approach is either slow for a big data spreadsheet or complex to use as it requires users to carefully choose queries that the system is able to execute quickly. For example, Tableau can use Amazon Redshift as the analytics back-end but users must understand lengthy documentation to navigate around bad query types that are too slow to execute [17]. -
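The vizketch idea described in the abstract, compute a small mergeable summary on each machine and render only the summary, can be illustrated with a toy histogram sketch. Everything here (names, bucketing scheme, merge strategy) is an illustrative assumption, not Hillview's actual API:

```python
from collections import Counter

def sketch(partition, bucket_edges):
    """Per-partition summary: counts per fixed bucket.
    The summary is small and mergeable, regardless of partition size.
    Values below the first edge are dropped."""
    counts = Counter()
    for x in partition:
        # place x in the last bucket whose left edge is <= x
        for i in range(len(bucket_edges) - 1, -1, -1):
            if x >= bucket_edges[i]:
                counts[i] += 1
                break
    return counts

def merge(a, b):
    """Combine two summaries; commutative and associative,
    so workers can be merged in any order."""
    return a + b

# Two "machines" each sketch their shard; only tiny summaries travel
# over the network, never the raw rows.
edges = [0, 10, 20, 30]
shard1, shard2 = [1, 12, 25, 3], [15, 29, 29]
result = merge(sketch(shard1, edges), sketch(shard2, edges))
print(sorted(result.items()))   # bucket index -> count
```

The key property this sketch shares with the paper's vizketches is that the summary's size depends only on the rendering resolution (number of buckets), not on the data size, which is what makes interactive browsing of very large datasets feasible.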
Performance Scalability of N-Tier Application in Virtualized Cloud Environments: Two Case Studies in Vertical and Horizontal Scaling
PERFORMANCE SCALABILITY OF N-TIER APPLICATION IN VIRTUALIZED CLOUD ENVIRONMENTS: TWO CASE STUDIES IN VERTICAL AND HORIZONTAL SCALING. A Thesis Presented to The Academic Faculty by Junhee Park, In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the School of Computer Science, Georgia Institute of Technology, May 2016. Copyright © 2016 by Junhee Park. Approved by: Professor Dr. Calton Pu, Advisor, School of Computer Science, Georgia Institute of Technology; Professor Dr. Ling Liu, School of Computer Science, Georgia Institute of Technology; Professor Dr. Shamkant B. Navathe, School of Computer Science, Georgia Institute of Technology; Professor Dr. Edward R. Omiecinski, School of Computer Science, Georgia Institute of Technology; Professor Dr. Qingyang Wang, School of Electrical Engineering and Computer Science, Louisiana State University. Date Approved: December 11, 2015. To my parents, my wife, and my daughter. ACKNOWLEDGEMENTS: My Ph.D. journey was a precious roller-coaster ride that uniquely accelerated my personal growth. I am extremely grateful to anyone who has supported and walked down this path with me. I want to apologize in advance in case I miss anyone. First and foremost, I am extremely thankful and lucky to work with my advisor, Dr. Calton Pu, who guided me throughout my Masters and Ph.D. programs. He first gave me the opportunity to start work in a research environment when I was a fresh Master student and gave me an admission to one of the best computer science Ph.D. -
An Algorithmic Theory of Caches by Sridhar Ramachandran
An algorithmic theory of caches, by Sridhar Ramachandran. Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Science at the Massachusetts Institute of Technology, December 1999. © Massachusetts Institute of Technology 1999. All rights reserved. Certified by: Charles E. Leiserson, Professor of Computer Science and Engineering, Thesis Supervisor. Accepted by: Arthur C. Smith, Chairman, Departmental Committee on Graduate Students. Abstract: The ideal-cache model, an extension of the RAM model, evaluates the referential locality exhibited by algorithms. The ideal-cache model is characterized by two parameters: the cache size Z and the line length L. As suggested by its name, the ideal-cache model practices an automatic, optimal, omniscient replacement algorithm. The performance of an algorithm on the ideal-cache model consists of two measures: the RAM running time, called work complexity, and the number of misses on the ideal cache, called cache complexity. This thesis proposes the ideal-cache model as a "bridging" model for caches in the sense proposed by Valiant [49]. A bridging model for caches serves two purposes. It can be viewed as a hardware "ideal" that influences cache design. On the other hand, it can be used as a powerful tool to design cache-efficient algorithms. -
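The two-parameter model in this abstract lends itself to a quick worked example: a sequential scan of N contiguous words has work complexity Θ(N) but cache complexity only Θ(1 + N/L), since each line of L words is missed exactly once under any replacement policy, including the ideal one. A small sketch of that line count (illustrative, not from the thesis):

```python
def scan_cache_misses(n_words: int, line_len: int, start_offset: int = 0) -> int:
    """Distinct cache lines touched by scanning n_words contiguous words
    starting at word address start_offset. Each new line is one cold miss
    under any replacement policy, including the ideal (optimal) one,
    as long as the cache holds at least one line."""
    if n_words == 0:
        return 0
    first_line = start_offset // line_len
    last_line = (start_offset + n_words - 1) // line_len
    return last_line - first_line + 1

# Scanning N = 1000 words with L = 16 words per line: ceil(1000/16) = 63 misses.
print(scan_cache_misses(1000, 16))      # 63
# A misaligned start can touch one extra line, hence the "+1" in Theta(1 + N/L):
print(scan_cache_misses(1000, 16, 9))   # 64
```

This is the sense in which cache complexity is a separate measure from work complexity: both scans do identical work, yet the miss count is governed entirely by L and alignment.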
Information Age Anthology Vol II
DoD C4ISR Cooperative Research Program. ASSISTANT SECRETARY OF DEFENSE (C3I): Mr. Arthur L. Money. SPECIAL ASSISTANT TO THE ASD(C3I) & DIRECTOR, RESEARCH AND STRATEGIC PLANNING: Dr. David S. Alberts. Opinions, conclusions, and recommendations expressed or implied within are solely those of the authors. They do not necessarily represent the views of the Department of Defense, or any other U.S. Government agency. Cleared for public release; distribution unlimited. Portions of this publication may be quoted or reprinted without further permission, with credit to the DoD C4ISR Cooperative Research Program, Washington, D.C. Courtesy copies of reviews would be appreciated. Library of Congress Cataloging-in-Publication Data: Alberts, David S. (David Stephen), 1942-. Volume II of Information Age Anthology: National Security Implications of the Information Age / David S. Alberts, Daniel S. Papp. p. cm. (CCRP publication series). Includes bibliographical references. ISBN 1-893723-02-X. 97-194630 CIP. August 2000. VOLUME II, INFORMATION AGE ANTHOLOGY: National Security Implications of the Information Age, edited by David S. Alberts and Daniel S. Papp. TABLE OF CONTENTS: Acknowledgments, v. Preface, vii. Chapter 1—National Security in the Information Age: Setting the Stage, Daniel S. Papp and David S. Alberts, 1. Part One, Introduction, 55. Chapter 2—Bits, Bytes, and Diplomacy, Walter B. Wriston, 61. Chapter 3—Seven Types of Information Warfare, Martin C. Libicki, 77. Chapter 4—America’s Information Edge, Joseph S. Nye, Jr. and William A. Owens, 115. Chapter 5—The Internet and National Security: Emerging Issues, David Halperin, 137. Chapter 6—Technology, Intelligence, and the Information Stream: The Executive Branch and National Security Decision Making, Loch K. -
Member Renewal Guide
MEMBERSHIP DUES (All prices indicated are in U.S. dollars)
Professional Member: $99
Professional Member PLUS Digital Library ($99 + $99): $198
Lifetime Membership: prices vary. Available to ACM Professional Members in three age tiers; pricing is determined as a multiple of current professional rates. The pricing structure is listed at www.acm.org/membership/life.
Student to Professional Transition: $49
Student to Professional Transition PLUS Digital Library ($49 + $50): $99
Student Membership: $19. Access to online books and courses; print subscription to XRDS; online subscription to Communications of the ACM; and more.
Student Membership PLUS Digital Library: $42

PAYMENT: Payment accepted by Visa, MasterCard, American Express, Discover, check, or money order in U.S. dollars. For residents outside the US, our preferred method of payment is credit card, but we do accept checks drawn in foreign currency at the current monetary exchange rate.

CHANGING YOUR MEMBERSHIP STATUS... From Student to Professional (Special Student Transition Dues of $49): cross out "Student" in the Member Class section, and write "Student Transition -
The Marketing of Information in the Information Age Hofacker and Goldsmith
THE MARKETING OF INFORMATION IN THE INFORMATION AGE. CHARLES F. HOFACKER, Florida State University; RONALD E. GOLDSMITH, Florida State University. The long-standing distinction made between goods and services must now be joined by the distinction between these two categories and “information products.” The authors propose that marketing management, practice, and theory could benefit by broadening the scope of what marketers market by adding a third category besides tangible goods and intangible services. It is apparent that information is an unusual and fundamental phenomenon, physically and logically distinct from either of these. From this the authors propose that information products differ from both tangible goods and perishable services in that information products have three properties derived from their fundamental abstractness: they are scalable, mutable, and public. Moreover, an increasing number of goods and services now contain information as a distinct but integral element in the way they deliver benefits. Thus, the authors propose that marketing theory should revise the concept of “product” to explicitly include an informational component and that the implications of this revised concept be discussed. This paper presents some thoughts on the issues such discussions should address, focusing on strategic management implications for marketing information products in the information age. INTRODUCTION: A major revolution in marketing management and theory occurred when scholars established that “products” should not be conceptualized solely as tangible goods, but also as intangible services. ... “information,” in addition to goods and services (originally proposed by Freiden et al. 1998), while identifying unique characteristics of pure information products. To support this position, the authors identify key attributes that set apart an information product from a good or a service. -
War Gaming in the Information Age—Theory and Purpose Paul Bracken
Naval War College Review, Volume 54, Number 2 (Spring 2001), Article 6. War Gaming in the Information Age—Theory and Purpose. Paul Bracken and Martin Shubik. Recommended Citation: Bracken, Paul and Shubik, Martin (2001) "War Gaming in the Information Age—Theory and Purpose," Naval War College Review: Vol. 54 : No. 2, Article 6. Available at: https://digital-commons.usnwc.edu/nwc-review/vol54/iss2/6. WAR GAMING IN THE INFORMATION AGE: Theory and Purpose. Paul Bracken and Martin Shubik. Over twenty years ago, a study was carried out under the sponsorship of the Defense Advanced Research Projects Agency and in collaboration with the General Accounting Office to survey and critique the models, simulations, and war games then in use by the Department of Defense.1 From some points of view, twenty years ago means ancient history; changes in communication technology and computers since then can be measured only in terms of orders of magnitudes. The new world of the networked battlefield, super-accurate weapons, and the information technology (IT) revolution, with its... [Author note: Paul Bracken, professor of management and of political science at Yale University, specializes in national security and management issues. He is the author of Command and Control of Nuclear Forces (1983) and of Fire in the East: The Rise of Asian Military Power and the Second Nuclear Age (HarperCollins, 1999).] -
Technology and Economic Growth in the Information Age Introduction
Technology and Economic Growth in the Information Age. POLICY BACKGROUNDER No. 147. For Immediate Release, March 12, 1998. Introduction: The idea of rapid progress runs counter to well-publicized reports of an American economy whose growth rate has slipped. Pessimists, citing statistics on weakening productivity and gross domestic product growth, contend the economy isn’t strong enough to keep improving Americans’ standard of living. They offer a dour view: the generation coming of age today will be the first in American history not to live better than its parents.1 Yet there is plenty of evidence that this generation is better off than any that have gone before, and given the technologies likely to shape the next quarter century, there are reasons to believe that progress will be faster than ever, a stunning display of capitalism’s ability to lift living standards. To suppose otherwise would be to exhibit the shortsightedness of Charles H. Duell, commissioner of the U.S. Office of Patents, who in 1899 said, “Everything that can be invented has been invented.” Ironically, though, our economic statistics may miss the show. The usual measures of progress — output and productivity — are losing relevance in this age of increasingly rapid technological advances. As the economy evolves, it is delivering not only a greater quantity of goods and services, but also improved quality and greater variety. Workers and their families will be better off because they will have more time off, better working conditions, more enjoyable jobs and other benefits that raise our living standards but aren’t easily measured and therefore often aren’t counted in gross domestic product (GDP).