The Future of Computing Performance: Game Over Or Next Level?

Total Pages: 16

File Type: PDF, Size: 1020 KB

The Future of Computing Performance: Game Over or Next Level? This PDF is available from The National Academies Press at http://www.nap.edu/catalog.php?record_id=12980

Samuel H. Fuller and Lynette I. Millett, Editors; Committee on Sustaining Growth in Computing Performance; National Research Council. ISBN 978-0-309-15951-7, 200 pages, 6 x 9, paperback (2011).

Visit the National Academies Press online and register for instant access to free PDF downloads of titles from the National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council; 10% off print titles; custom notification of new releases in your field of interest; and special offers and discounts. Distribution, posting, or copying of this PDF is strictly prohibited without written permission of the National Academies Press. Unless otherwise indicated, all materials in this PDF are copyrighted by the National Academy of Sciences. Request reprint permission for this book. Copyright © National Academy of Sciences. All rights reserved.

THE FUTURE OF COMPUTING PERFORMANCE: Game Over or Next Level? Samuel H. Fuller and Lynette I. Millett, Editors. Committee on Sustaining Growth in Computing Performance, Computer Science and Telecommunications Board, Division on Engineering and Physical Sciences.

THE NATIONAL ACADEMIES PRESS, 500 Fifth Street, N.W., Washington, DC 20001

NOTICE: The project that is the subject of this report was approved by the Governing Board of the National Research Council, whose members are drawn from the councils of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. The members of the committee responsible for the report were chosen for their special competences and with regard for appropriate balance. Support for this project was provided by the National Science Foundation under award CNS-0630358. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the organization that provided support for the project.

International Standard Book Number-13: 978-0-309-15951-7. International Standard Book Number-10: 0-309-15951-2. Library of Congress Control Number: 2011923200.

Additional copies of this report are available from The National Academies Press, 500 Fifth Street, N.W., Lockbox 285, Washington, D.C. 20055; 800 624-6242 or 202 334-3313 (in the Washington metropolitan area); http://www.nap.edu. Copyright 2011 by the National Academy of Sciences. All rights reserved. Printed in the United States of America.

The National Academy of Sciences is a private, nonprofit, self-perpetuating society of distinguished scholars engaged in scientific and engineering research, dedicated to the furtherance of science and technology and to their use for the general welfare. Upon the authority of the charter granted to it by the Congress in 1863, the Academy has a mandate that requires it to advise the federal government on scientific and technical matters. Dr. Ralph J. Cicerone is president of the National Academy of Sciences.

The National Academy of Engineering was established in 1964, under the charter of the National Academy of Sciences, as a parallel organization of outstanding engineers. It is autonomous in its administration and in the selection of its members, sharing with the National Academy of Sciences the responsibility for advising the federal government. The National Academy of Engineering also sponsors engineering programs aimed at meeting national needs, encourages education and research, and recognizes the superior achievements of engineers. Dr. Charles M. Vest is president of the National Academy of Engineering.

The Institute of Medicine was established in 1970 by the National Academy of Sciences to secure the services of eminent members of appropriate professions in the examination of policy matters pertaining to the health of the public. The Institute acts under the responsibility given to the National Academy of Sciences by its congressional charter to be an adviser to the federal government and, upon its own initiative, to identify issues of medical care, research, and education. Dr. Harvey V. Fineberg is president of the Institute of Medicine.

The National Research Council was organized by the National Academy of Sciences in 1916 to associate the broad community of science and technology with the Academy's purposes of furthering knowledge and advising the federal government. Functioning in accordance with general policies determined by the Academy, the Council has become the principal operating agency of both the National Academy of Sciences and the National Academy of Engineering in providing services to the government, the public, and the scientific and engineering communities. The Council is administered jointly by both Academies and the Institute of Medicine. Dr. Ralph J. Cicerone and Dr. Charles M. Vest are chair and vice chair, respectively, of the National Research Council. www.national-academies.org

COMMITTEE ON SUSTAINING GROWTH IN COMPUTING PERFORMANCE: SAMUEL H. FULLER, Analog Devices Inc., Chair; LUIZ ANDRÉ BARROSO, Google, Inc.; ROBERT P. COLWELL, Independent Consultant; WILLIAM J. DALLY, NVIDIA Corporation and Stanford University; DAN DOBBERPUHL, P.A. Semi; PRADEEP DUBEY, Intel Corporation; MARK D. HILL, University of Wisconsin–Madison; MARK HOROWITZ, Stanford University; DAVID KIRK, NVIDIA Corporation; MONICA LAM, Stanford University; KATHRYN S. McKINLEY, University of Texas at Austin; CHARLES MOORE, Advanced Micro Devices; KATHERINE YELICK, University of California, Berkeley. Staff: LYNETTE I. MILLETT, Study Director; SHENAE BRADLEY, Senior Program Assistant.

COMPUTER SCIENCE AND TELECOMMUNICATIONS BOARD: ROBERT F. SPROULL, Sun Labs, Chair; PRITHVIRAJ BANERJEE, Hewlett Packard Company; STEVEN M. BELLOVIN, Columbia University; WILLIAM J. DALLY, NVIDIA Corporation and Stanford University; SEYMOUR E. GOODMAN, Georgia Institute of Technology; JOHN E. KELLY, III, IBM; JON M. KLEINBERG, Cornell University; ROBERT KRAUT, Carnegie Mellon University; SUSAN LANDAU, Radcliffe Institute for Advanced Study; PETER LEE, Microsoft Corporation; DAVID LIDDLE, US Venture Partners; WILLIAM H. PRESS, University of Texas; PRABHAKAR RAGHAVAN, Yahoo! Research; DAVID E. SHAW, Columbia University; ALFRED Z. SPECTOR, Google, Inc.; JOHN SWAINSON, Silver Lake Partners; PETER SZOLOVITS, Massachusetts Institute of Technology; PETER J. WEINBERGER, Google, Inc.; ERNEST J. WILSON, University of Southern California. Staff: JON EISENBERG, Director; RENEE HAWKINS, Financial and Administrative Manager; HERBERT S. LIN, Chief Scientist; LYNETTE I. MILLETT, Senior Program Officer; EMILY ANN MEYER, Program Officer; ENITA A. WILLIAMS, Associate Program Officer; VIRGINIA BACON TALATI, Associate Program Officer; SHENAE BRADLEY, Senior Program Assistant; ERIC WHITAKER, Senior Program Assistant.

For more information on CSTB, see its website at http://www.cstb.org, write to CSTB, National Research Council, 500 Fifth Street, N.W., Washington, D.C. 20001, call (202) 334-2605, or e-mail the CSTB at [email protected].

Preface

Fast, inexpensive computers are now essential for nearly all human endeavors and have been a critical factor in increasing economic productivity, enabling new defense systems, and advancing the frontiers of science. But less well understood is the need for ever-faster computers at ever-lower costs. For the last half-century, computers have been doubling in performance and capacity every couple of years. This remarkable, continuous, exponential growth in computing performance has resulted in an increase by a factor of over 100 per decade and more than a million in the last 40 years. For example, the raw performance of a 1970s supercomputer is now available in a typical modern cell phone. That uninterrupted exponential growth in computing throughout the lifetimes of most people has resulted in the expectation that such phenomenal progress, often called Moore's law, will continue well into the future. Indeed, societal expectations for increased technology performance continue apace and show no signs of slowing, a trend that underscores the need to find ways to sustain exponentially increasing performance in multiple dimensions. The essential engine that made that exponential growth possible is now in considerable danger. Thermal-power challenges and increasingly expensive energy demands pose threats to the historical rate of increase in processor performance. The implications of a dramatic slowdown in how quickly computer performance is increasing—for our economy, our military, our research institutions, and our way of life—are substantial. That obstacle to continuing growth in computing performance is by now well
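The preface's growth figures follow from simple compounding, and the arithmetic is worth making explicit. The sketch below is my illustration, not the report's: a 1.5-year doubling time reproduces the "over 100 per decade" figure, while the oft-quoted 2-year doubling gives roughly a million-fold gain over 40 years.

```python
# Back-of-the-envelope check of the preface's growth figures:
# performance that doubles every T years grows by 2**(years / T).

def growth_factor(years: float, doubling_time: float) -> float:
    """Total growth after `years`, doubling every `doubling_time` years."""
    return 2.0 ** (years / doubling_time)

for t in (1.5, 2.0):
    per_decade = growth_factor(10, t)
    per_40_years = growth_factor(40, t)
    print(f"doubling every {t} yr: x{per_decade:,.0f} per decade, "
          f"x{per_40_years:,.0f} over 40 years")

# doubling every 1.5 yr: ~x102 per decade, ~x1.1e8 over 40 years
# doubling every 2.0 yr:  x32 per decade,  ~x1.05e6 over 40 years
```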
Recommended publications
  • Annual Reports of FCCSET Subcommittee Annual Trip Reports To
    Annual Reports of FCCSET Subcommittee. Annual trip reports to supercomputer manufacturers trace the changes in technology and in the industry, 1985-1989. FY 1986 Annual Report of the Federal Coordinating Council on Science, Engineering and Technology (FCCSET), by the FCCSET Committee on High Performance Computing. Summary: During the past year, the Committee met on a regular basis to review government and industry supported programs in research, development, and application of new supercomputer technology. The Committee maintains an overview of commercial developments in the U.S. and abroad. It regularly receives briefings from Government agency sponsored R&D efforts and makes such information available, where feasible, to industry and universities. In addition, the committee coordinates agency supercomputer access programs and promotes cooperation with particular emphasis on aiding the establishment of new centers and new communications networks. The Committee made its annual visit to supercomputer manufacturers in August and found that substantial progress had been made by Cray Research and ETA Systems toward developing their next generations of machines. The Cray II and expanded Cray XMP series supercomputers are now being marketed commercially; the Committee was briefed on plans for the next generation of Cray machines. ETA Systems is beyond the prototype stage for the ETA-10 and planning to ship one machine this year. The supercomputer vendors continue to have difficulty in obtaining high performance IC's from U.S. chip makers, leaving them dependent on Japanese suppliers. In some cases, the Japanese chip suppliers are the same companies, e.g., Fujitsu, that provide the strongest foreign competition in the supercomputer market.
  • Computer Organization and Architecture Designing for Performance Ninth Edition
    COMPUTER ORGANIZATION AND ARCHITECTURE: DESIGNING FOR PERFORMANCE, NINTH EDITION. William Stallings. Boston, Columbus, Indianapolis, New York, San Francisco, Upper Saddle River, Amsterdam, Cape Town, Dubai, London, Madrid, Milan, Munich, Paris, Montréal, Toronto, Delhi, Mexico City, São Paulo, Sydney, Hong Kong, Seoul, Singapore, Taipei, Tokyo. Editorial Director: Marcia Horton; Executive Editor: Tracy Dunkelberger; Associate Editor: Carole Snyder; Director of Marketing: Patrice Jones; Marketing Manager: Yez Alayan; Marketing Coordinator: Kathryn Ferranti; Marketing Assistant: Emma Snider; Director of Production: Vince O'Brien; Managing Editor: Jeff Holcomb; Production Project Manager: Kayla Smith-Tarbox; Production Editor: Pat Brown; Manufacturing Buyer: Pat Brown; Creative Director: Jayne Conte; Designer: Bruce Kenselaar; Manager, Visual Research: Karen Sanatar; Manager, Rights and Permissions: Mike Joyce; Text Permission Coordinator: Jen Roach; Cover Art: Charles Bowman/Robert Harding; Lead Media Project Manager: Daniel Sandin; Full-Service Project Management: Shiny Rajesh/Integra Software Services Pvt. Ltd.; Composition: Integra Software Services Pvt. Ltd.; Printer/Binder: Edward Brothers; Cover Printer: Lehigh-Phoenix Color/Hagerstown; Text Font: Times Ten-Roman. Credits: Figure 2.14: reprinted with permission from The Computer Language Company, Inc. Figure 17.10: Buyya, Rajkumar, High-Performance Cluster Computing: Architectures and Systems, Vol I, 1st edition, ©1999, reprinted and electronically reproduced by permission of Pearson Education, Inc., Upper Saddle River, New Jersey. Figure 17.11: reprinted with permission from Ethernet Alliance. Credits and acknowledgments borrowed from other sources and reproduced, with permission, in this textbook appear on the appropriate page within text. Copyright © 2013, 2010, 2006 by Pearson Education, Inc., publishing as Prentice Hall. All rights reserved. Manufactured in the United States of America.
  • Trends in Electrical Efficiency in Computer Performance
    ASSESSING TRENDS IN THE ELECTRICAL EFFICIENCY OF COMPUTATION OVER TIME Jonathan G. Koomey*, Stephen Berard†, Marla Sanchez††, Henry Wong** * Lawrence Berkeley National Laboratory and Stanford University †Microsoft Corporation ††Lawrence Berkeley National Laboratory **Intel Corporation Contact: [email protected], http://www.koomey.com Final report to Microsoft Corporation and Intel Corporation Submitted to IEEE Annals of the History of Computing: August 5, 2009 Released on the web: August 17, 2009 EXECUTIVE SUMMARY Information technology (IT) has captured the popular imagination, in part because of the tangible benefits IT brings, but also because the underlying technological trends proceed at easily measurable, remarkably predictable, and unusually rapid rates. The number of transistors on a chip has doubled more or less every two years for decades, a trend that is popularly (but often imprecisely) encapsulated as “Moore’s law”. This article explores the relationship between the performance of computers and the electricity needed to deliver that performance. As shown in Figure ES-1, computations per kWh grew about as fast as performance for desktop computers starting in 1981, doubling every 1.5 years, a pace of change in computational efficiency comparable to that from 1946 to the present. Computations per kWh grew even more rapidly during the vacuum tube computing era and during the transition from tubes to transistors but more slowly during the era of discrete transistors. As expected, the transition from tubes to transistors shows a large jump in computations per kWh. In 1985, the physicist Richard Feynman identified a factor of one hundred billion (1011) possible theoretical improvement in the electricity used per computation.
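Feynman's factor of 10^11 can be tied back to the paper's observed pace: at one doubling of computations per kWh every 1.5 years, that theoretical headroom corresponds to roughly 36 doublings, or on the order of 55 years of continued improvement. A small sketch of that arithmetic (mine, not the report's):

```python
import math

# Feynman's theoretical headroom: a factor of 1e11 improvement in
# electricity used per computation (equivalently, computations per kWh).
headroom = 1e11

# Doublings needed to exhaust the headroom, and the years that takes
# at the paper's observed doubling time of ~1.5 years.
doublings = math.log2(headroom)   # ~36.5 doublings
years = doublings * 1.5           # ~55 years

print(f"{doublings:.1f} doublings -> about {years:.0f} years "
      f"at one doubling per 1.5 years")
```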
  • Computer Performance Evaluation and Benchmarking
    Computer Performance Evaluation and Benchmarking. EE 382M, Dr. Lizy Kurian John.

    Evolution of Single-Chip Microprocessors:

                           1970s       1980s      1990s          2010s
    Transistor Count       10K-100K    100K-1M    1M-100M        100M-10B
    Clock Frequency        0.2-2 MHz   2-20 MHz   20 MHz-1 GHz   0.1-4 GHz
    Instruction/Cycle      < 0.1       0.1-0.9    0.9-2.0        1-100
    MIPS/MFLOPS            < 0.2       0.2-20     20-2,000       100-10,000

    [Slides: Hot Chips 2014 (August 2014) - AMD Kaveri; NVIDIA]

    [Figure: Power Density in Microprocessors, W/cm² versus year (1970-2010), for processors from the 4004, 8008, 8080, 8085, 8086, 286, 386, and 486 through Pentium and Core 2, with reference levels for a hot plate, a nuclear reactor, a rocket nozzle, and the Sun's surface. Source: Intel]

    Why Performance Evaluation?
    • For better processor designs
    • For better code on existing designs
    • For better compilers
    • For better OS and runtimes

    Lord Kelvin: "To measure is to know." "If you can not measure it, you can not improve it." "I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be." [PLA, vol. 1, "Electrical Units of Measurement", 1883-05-03]

    Designs evolve based on analysis: good designs are impossible without good analysis, of both workloads and processors.
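In practice, Kelvin's "to measure is to know" reduces to a simple loop: run a kernel with a known operation count, time it, and report a rate. A minimal, self-contained harness in Python (an illustrative sketch only; the course materials do not prescribe this code):

```python
import time

def daxpy(a, x, y):
    """y <- a*x + y, a simple floating-point kernel: 2 flops per element."""
    return [a * xi + yi for xi, yi in zip(x, y)]

n, reps = 1_000_000, 10
x = [1.0] * n
y = [2.0] * n

start = time.perf_counter()
for _ in range(reps):
    y = daxpy(3.0, x, y)
elapsed = time.perf_counter() - start

flops = 2 * n * reps   # 1 multiply + 1 add per element per repetition
print(f"{elapsed:.3f} s -> {flops / elapsed / 1e6:.1f} MFLOPS")
```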
  • Hillview: a Trillion-Cell Spreadsheet for Big Data
    Hillview: A trillion-cell spreadsheet for big data. Mihai Budiu, Parikshit Gopalan, Lalith Suresh, Udi Wieder, and Marcos K. Aguilera (VMware Research); Han Kruiger (University of Utrecht). ABSTRACT: Hillview is a distributed spreadsheet for browsing very large datasets that cannot be handled by a single machine. As a spreadsheet, Hillview provides a high degree of interactivity that permits data analysts to explore information quickly along many dimensions while switching visualizations on a whim. To provide the required responsiveness, Hillview introduces visualization sketches, or vizketches, as a simple idea to produce compact data visualizations. Vizketches combine algorithmic techniques for data summarization with computer graphics principles for efficient rendering. While simple, vizketches are effective at scaling the spreadsheet by parallelizing computation, reducing communication, providing... Unfortunately, enterprise data is growing dramatically, and current spreadsheets do not work with big data, because they are limited in capacity or interactivity. Centralized spreadsheets such as Excel can handle only millions of rows. More advanced tools such as Tableau can scale to larger data sets by connecting a visualization front-end to a general-purpose analytics engine in the back-end. Because the engine is general-purpose, this approach is either slow for a big data spreadsheet or complex to use as it requires users to carefully choose queries that the system is able to execute quickly. For example, Tableau can use Amazon Redshift as the analytics back-end but users must understand lengthy documentation to navigate around bad query types that are too slow to execute [17].
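The vizketch idea (each worker reduces its shard of the data to a small, mergeable summary sized to the final chart) can be illustrated with a fixed-bucket histogram. This is a sketch of the general technique only, not Hillview's actual API:

```python
def histogram_sketch(shard, lo, hi, buckets):
    """Summarize one data shard into a fixed-size bucket-count vector."""
    counts = [0] * buckets
    width = (hi - lo) / buckets
    for v in shard:
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
        elif v == hi:            # include the right endpoint in the last bucket
            counts[-1] += 1
    return counts

def merge(a, b):
    """Merging is pointwise addition, so shards combine in any order."""
    return [x + y for x, y in zip(a, b)]

# Two "workers", each summarizing its own shard; the merged result equals
# the histogram of the full dataset and is tiny to ship over the network.
shard1, shard2 = [1.0, 2.5, 7.2], [3.3, 9.9, 0.1]
h = merge(histogram_sketch(shard1, 0, 10, 5),
          histogram_sketch(shard2, 0, 10, 5))
print(h)   # [2, 2, 0, 1, 1]
```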
  • Performance Scalability of N-Tier Application in Virtualized Cloud Environments: Two Case Studies in Vertical and Horizontal Scaling
    PERFORMANCE SCALABILITY OF N-TIER APPLICATION IN VIRTUALIZED CLOUD ENVIRONMENTS: TWO CASE STUDIES IN VERTICAL AND HORIZONTAL SCALING. A Thesis Presented to The Academic Faculty by Junhee Park. In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the School of Computer Science, Georgia Institute of Technology, May 2016. Copyright © 2016 by Junhee Park. Approved by: Professor Dr. Calton Pu, Advisor, School of Computer Science, Georgia Institute of Technology; Professor Dr. Ling Liu, School of Computer Science, Georgia Institute of Technology; Professor Dr. Qingyang Wang, School of Electrical Engineering and Computer Science, Louisiana State University; Professor Dr. Shamkant B. Navathe, School of Computer Science, Georgia Institute of Technology; Professor Dr. Edward R. Omiecinski, School of Computer Science, Georgia Institute of Technology. Date Approved: December 11, 2015. To my parents, my wife, and my daughter. ACKNOWLEDGEMENTS: My Ph.D. journey was a precious roller-coaster ride that uniquely accelerated my personal growth. I am extremely grateful to anyone who has supported and walked down this path with me. I want to apologize in advance in case I miss anyone. First and foremost, I am extremely thankful and lucky to work with my advisor, Dr. Calton Pu who guided me throughout my Masters and Ph.D. programs. He first gave me the opportunity to start work in research environment when I was a fresh Master student and gave me an admission to one of the best computer science Ph.D.
  • An Algorithmic Theory of Caches by Sridhar Ramachandran
    An algorithmic theory of caches, by Sridhar Ramachandran. Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Science at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY, December 1999. © Massachusetts Institute of Technology 1999. All rights reserved. Author: Department of Electrical Engineering and Computer Science, Jan 31, 1999. Certified by: Charles E. Leiserson, Professor of Computer Science and Engineering, Thesis Supervisor. Accepted by: Arthur C. Smith, Chairman, Departmental Committee on Graduate Students. Abstract: The ideal-cache model, an extension of the RAM model, evaluates the referential locality exhibited by algorithms. The ideal-cache model is characterized by two parameters: the cache size Z and the line length L. As suggested by its name, the ideal-cache model practices an automatic, optimal, omniscient replacement algorithm. The performance of an algorithm on the ideal-cache model consists of two measures: the RAM running time, called work complexity, and the number of misses on the ideal cache, called cache complexity. This thesis proposes the ideal-cache model as a "bridging" model for caches in the sense proposed by Valiant [49]. A bridging model for caches serves two purposes. It can be viewed as a hardware "ideal" that influences cache design. On the other hand, it can be used as a powerful tool to design cache-efficient algorithms.
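The model's two parameters make it easy to simulate offline: cache complexity is the number of misses incurred by a cache of Z words with lines of L words under optimal omniscient replacement, which Belady's classic rule realizes by evicting the line whose next use lies farthest in the future. A small simulator under those assumptions (my sketch, not code from the thesis):

```python
def cache_complexity(trace, Z, L):
    """Count misses of an ideal cache: capacity Z words, lines of L words,
    optimal omniscient (Belady) replacement on an offline address trace."""
    lines = [addr // L for addr in trace]   # map each access to its cache line
    num_lines = Z // L                      # how many lines the cache holds
    # Precompute, for each access, the next position the same line is used.
    next_use = [float("inf")] * len(lines)
    last_seen = {}
    for i in range(len(lines) - 1, -1, -1):
        next_use[i] = last_seen.get(lines[i], float("inf"))
        last_seen[lines[i]] = i
    cache = {}   # resident line -> position of its next use
    misses = 0
    for i, line in enumerate(lines):
        if line not in cache:
            misses += 1
            if len(cache) >= num_lines:     # evict line used farthest ahead
                victim = max(cache, key=cache.get)
                del cache[victim]
        cache[line] = next_use[i]
    return misses

# Sequential scan of 16 words with Z=8, L=4: one miss per line -> 4 misses.
print(cache_complexity(list(range(16)), Z=8, L=4))
```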
  • Information Age Anthology Vol II
    DoD C4ISR Cooperative Research Program ASSISTANT SECRETARY OF DEFENSE (C3I) Mr. Arthur L. Money SPECIAL ASSISTANT TO THE ASD(C3I) & DIRECTOR, RESEARCH AND STRATEGIC PLANNING Dr. David S. Alberts Opinions, conclusions, and recommendations expressed or implied within are solely those of the authors. They do not necessarily represent the views of the Department of Defense, or any other U.S. Government agency. Cleared for public release; distribution unlimited. Portions of this publication may be quoted or reprinted without further permission, with credit to the DoD C4ISR Cooperative Research Program, Washington, D.C. Courtesy copies of reviews would be appreciated. Library of Congress Cataloging-in-Publication Data Alberts, David S. (David Stephen), 1942- Volume II of Information Age Anthology: National Security Implications of the Information Age David S. Alberts, Daniel S. Papp p. cm. -- (CCRP publication series) Includes bibliographical references. ISBN 1-893723-02-X 97-194630 CIP August 2000 VOLUME II INFORMATION AGE ANTHOLOGY: National Security Implications of the Information Age EDITED BY DAVID S. ALBERTS DANIEL S. PAPP TABLE OF CONTENTS Acknowledgments ................................................ v Preface ................................................................ vii Chapter 1—National Security in the Information Age: Setting the Stage—Daniel S. Papp and David S. Alberts .................................................... 1 Part One Introduction......................................... 55 Chapter 2—Bits, Bytes, and Diplomacy—Walter B. Wriston ................................................................ 61 Chapter 3—Seven Types of Information Warfare—Martin C. Libicki ................................. 77 Chapter 4—America’s Information Edge— Joseph S. Nye, Jr. and William A. Owens....... 115 Chapter 5—The Internet and National Security: Emerging Issues—David Halperin .................. 137 Chapter 6—Technology, Intelligence, and the Information Stream: The Executive Branch and National Security Decision Making— Loch K.
  • Member Renewal Guide
    MEMBERSHIP DUES (All prices indicated are in U.S. dollars)
    Professional Member: $99
    Professional Member PLUS Digital Library ($99 + $99): $198
    Lifetime Membership: prices vary. Available to ACM Professional Members in three age tiers; pricing is determined as a multiple of current professional rates. The pricing structure is listed at www.acm.org/membership/life.
    Student to Professional Transition: $49; PLUS Digital Library ($49 + $50): $99
    Student Membership: $19. Access to online books and courses; print subscription to XRDS; online subscription to Communications of the ACM; and more.
    Student Membership PLUS Digital Library: $42
    PAYMENT: Payment accepted by Visa, MasterCard, American Express, Discover, check, or money order in U.S. dollars. For residents outside the US our preferred method of payment is credit card, but we do accept checks drawn in foreign currency at the current monetary exchange rate.
    CHANGING YOUR MEMBERSHIP STATUS: From Student to Professional (Special Student Transition Dues of $49): Cross out "Student" in the Member Class section, and write "Student Transition"
  • The Marketing of Information in the Information Age Hofacker and Goldsmith
    THE MARKETING OF INFORMATION IN THE INFORMATION AGE. CHARLES F. HOFACKER, Florida State University; RONALD E. GOLDSMITH, Florida State University. The long-standing distinction made between goods and services must now be joined by the distinction between these two categories and "information products." The authors propose that marketing management, practice, and theory could benefit by broadening the scope of what marketers market by adding a third category besides tangible goods and intangible services. It is apparent that information is an unusual and fundamental physical and logical phenomenon distinct from either of these. From this the authors propose that information products differ from both tangible goods and perishable services in that information products have three properties derived from their fundamental abstractness: they are scalable, mutable, and public. Moreover, an increasing number of goods and services now contain information as a distinct but integral element in the way they deliver benefits. Thus, the authors propose that marketing theory should revise the concept of "product" to explicitly include an informational component and that the implications of this revised concept be discussed. This paper presents some thoughts on the issues such discussions should address, focusing on strategic management implications for marketing information products in the information age. INTRODUCTION: A major revolution in marketing management and theory occurred when scholars established that "products" should not be conceptualized solely as tangible goods, but also as intangible services. ... "information," in addition to goods and services (originally proposed by Freiden et al. 1998), while identifying unique characteristics of pure information products. To support this position, the authors identify key attributes that set apart an information product from a good or a
  • War Gaming in the Information Age—Theory and Purpose Paul Bracken
    Naval War College Review, Volume 54, Number 2 (Spring 2001), Article 6. War Gaming in the Information Age—Theory and Purpose. Paul Bracken and Martin Shubik. Follow this and additional works at: https://digital-commons.usnwc.edu/nwc-review Recommended Citation: Bracken, Paul and Shubik, Martin (2001) "War Gaming in the Information Age—Theory and Purpose," Naval War College Review: Vol. 54 : No. 2 , Article 6. Available at: https://digital-commons.usnwc.edu/nwc-review/vol54/iss2/6 This Article is brought to you for free and open access by the Journals at U.S. Naval War College Digital Commons. It has been accepted for inclusion in Naval War College Review by an authorized editor of U.S. Naval War College Digital Commons. For more information, please contact [email protected]. WAR GAMING IN THE INFORMATION AGE: Theory and Purpose. Paul Bracken and Martin Shubik. Over twenty years ago, a study was carried out under the sponsorship of the Defense Advanced Research Projects Agency and in collaboration with the General Accounting Office to survey and critique the models, simulations, and war games then in use by the Department of Defense.1 From some points of view, twenty years ago means ancient history; changes in communication technology and computers since then can be measured only in terms of orders of magnitude. The new world of the networked battlefield, super-accurate weapons, and the information technology (IT) revolution, with its ... [Author note: Paul Bracken, professor of management and of political science at Yale University, specializes in national security and management issues. He is the author of Command and Control of Nuclear Forces (1983) and of Fire in the East: The Rise of Asian Military Power and the Second Nuclear Age (HarperCollins, 1999).]
  • Technology and Economic Growth in the Information Age Introduction
    POLICY BACKGROUNDER No. 147. For Immediate Release: March 12, 1998. Technology and Economic Growth in the Information Age. Introduction: The idea of rapid progress runs counter to well-publicized reports of an American economy whose growth rate has slipped. Pessimists, citing statistics on weakening productivity and gross domestic product growth, contend the economy isn't strong enough to keep improving Americans' standard of living. They offer a dour view: the generation coming of age today will be the first in American history not to live better than its parents.1 Yet there is plenty of evidence that this generation is better off than any that have gone before, and given the technologies likely to shape the next quarter century, there are reasons to believe that progress will be faster than ever — a stunning display of capitalism's ability to lift living standards. To suppose otherwise would be to exhibit the shortsightedness of Charles H. Duell, commissioner of the U.S. Office of Patents, who in 1899 said, "Everything that can be invented has been invented." Ironically, though, our economic statistics may miss the show. The usual measures of progress — output and productivity — are losing relevance in this age of increasingly rapid technological advances. As the economy evolves, it is delivering not only a greater quantity of goods and services, but also improved quality and greater variety. Workers and their families will be better off because they will have more time off, better working conditions, more enjoyable jobs and other benefits that raise our living standards but aren't easily measured and therefore often aren't counted in gross domestic product (GDP).