Simics: A Full System Simulation Platform


COVER FEATURE

A full system simulator attempts to strike a balance between accuracy and performance by modeling the complete final application and providing a unified framework for hardware and software design within that context.

Peter S. Magnusson, Magnus Christensson, Jesper Eskilson, Daniel Forsgren, Gustav Hållberg, Johan Högberg, Fredrik Larsson, Andreas Moestedt, and Bengt Werner
Virtutech AB, Stockholm

That all computers can simulate each other is an immediate consequence of the theoretical work of Alan Turing and Alonzo Church. Computer architects made direct use of this property as early as the EDSAC project in the 1950s [1], and simulation in its various shapes and guises has been used to support the design of computers ever since. Simulation offers the traditional benefits of software construction: It can arbitrarily parameterize, control, and inspect the system it is modeling—the target system. Its measurements are nonintrusive and deterministic. Further, it provides a basis for automation: Multiple simulator sessions can run in parallel, and sessions can be fully scripted.
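As a concrete illustration of that last point, a batch of fully scripted sessions can be driven from a small harness like the one sketched below. This is illustrative only: the "sim" binary name, its flags, and the configuration and script file names are hypothetical placeholders, not the actual Simics invocation.

    # Minimal sketch: run several fully scripted simulator sessions in parallel.
    # The "sim" binary and its flags are hypothetical placeholders, not the
    # real Simics command line.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    SESSIONS = [
        {"config": "web-server.conf", "script": "boot_and_benchmark.txt"},
        {"config": "db-server.conf",  "script": "boot_and_benchmark.txt"},
        {"config": "client.conf",     "script": "browse_and_purchase.txt"},
    ]

    def run_session(s):
        # Each session is batch-driven by a script file and logs to its own file.
        with open(s["config"] + ".log", "w") as log:
            return subprocess.call(
                ["sim", "-config", s["config"], "-script", s["script"]],
                stdout=log, stderr=subprocess.STDOUT)

    with ThreadPoolExecutor(max_workers=len(SESSIONS)) as pool:
        print("exit codes:", list(pool.map(run_session, SESSIONS)))

Because each session is deterministic, re-running the same scripts later reproduces the same behavior, which is what makes this style of batch automation useful for regression testing.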
Naturally, we wish to simulate an entire system and to do so with total accuracy—a perfect model. There are obvious problems with seeking perfection, including cost, time to completion, specification inaccuracies, and implementation errors. But most important is the problem of workload realism. In most cases, we do not know how to implement an accurate model with performance sufficient to run realistic workloads. So, in practice, models that attempt to be highly accurate end up running very small “toy” workloads. The result is accurate answers to irrelevant questions.

Simics is a platform for full system simulation, which attempts to strike a balance between accuracy and performance. That is, it is sufficiently abstract to achieve tolerable performance levels with, at the same time, sufficient functional accuracy to run commercial workloads and sufficient timing accuracy to interface to detailed hardware models. Simics was one of the first academic projects in this area. It is the first commercial full system simulator, and it is just beginning to demonstrate the possibilities for system development.

FULL SYSTEM SIMULATION

Increasingly, we must design computer hardware or software within the context of the final application. A component’s value lies in its contribution to that application. For example, no particular signal or executed instruction provides value at Amazon.com’s Web site. Instead, the value lies in letting a customer expediently execute a search and make a purchase decision. An end-user service like this is usually built from a mixture of different manufacturers’ equipment, in turn running a mixture of standard software and proprietary components. The availability, performance, and reliability of that end-user service motivate the entire digital value chain up to that point.

Large projects aimed at developing high-end digital systems employ a variety of simulation-oriented tools and methodologies. We can classify these along two dimensions: scope (what is being modeled) and level of abstraction (the abstraction level at which it is modeled). Abstraction level, in turn, is best viewed from two perspectives: the functional behavior (“what”) and the timing behavior (“when”).

If the goal is to model realistic workloads, then the scope must be the full system or we will not be able to represent modern scenarios at all. The abstraction level must be functionally low enough to boot and run unmodified commercial operating systems and industry benchmarks, and temporally low enough to support hardware engineering. However, the descent into more detailed levels of abstraction must not result in an overall simulation performance that precludes realistic workload scale, in terms of data set sizes and execution lengths. Today, a high-end workload scenario has a total code base of 10^5 to 10^8 lines, with execution lengths of 10^9 to 10^12 instructions operating on a physical memory of 10^8 to 10^11 bytes, with backing storage of 10^10 to 10^13 bytes.

Full system simulation supports the design, development, and testing of computer hardware and software within a simulation framework that approximates the final application context. In this case, “system” does not mean some arbitrary subset of digital components running simple test code. Referring to the Amazon.com example, it would include multiple Windows/Linux desktop clients connected over a network to a cluster of workstations and servers running Web software, databases, and various application-specific tasks related to the value proposition.

SIMICS OVERVIEW

Simics provides such a simulation platform. We designed it from the ground up to be sufficiently detailed to run unmodified operating systems (including both embedded systems such as VxWorks and general-purpose desktop/server systems such as Solaris, Linux, Tru64, and Windows XP). It is fast enough to run realistic workloads, including the SPEC CPU2000 benchmark suite, database benchmarks such as TPC-C, interactive desktop applications, and games. Simics is also sufficiently generic to model embedded systems, desktop or set-top boxes, telecom switches, multiprocessor systems, clusters, and networks of all these items. At the same time, Simics is flexible enough to support a broad variety of tasks throughout the product development cycle, including such seemingly disparate activities as microprocessor design, operating system development, fault injection studies, and hardware design verification.

Simics simulates processors at the instruction-set level, including the full supervisor state. Currently, Simics supports models for UltraSparc, Alpha, x86, x86-64 (Hammer), PowerPC, IPF (Itanium), MIPS, and ARM. Simics is pure software, and current ports include Linux (x86, PowerPC, and Alpha), Solaris/UltraSparc, Tru64/Alpha, and Windows 2000/x86.
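To make "instruction-set level, including the full supervisor state" concrete: such a simulator interprets one instruction at a time against architectural state (registers, program counter, privilege mode) rather than modeling pipelines or gates. The sketch below shows the general shape of that loop for a tiny invented three-instruction machine; it is an illustration of the technique, not Simics source code.

    # Toy instruction-set-level simulator core: it models architectural state
    # (registers, PC, privilege mode) and a trap mechanism, but no pipeline or
    # timing. The three-instruction ISA is invented purely for illustration.
    USER, SUPERVISOR = 0, 1

    class Cpu:
        def __init__(self, memory):
            self.regs = [0] * 8
            self.pc = 0
            self.mode = SUPERVISOR        # firmware starts in supervisor mode
            self.trap_vector = 0x40
            self.mem = memory             # maps address -> (op, a, b)

        def step(self):
            op, a, b = self.mem[self.pc]  # fetch and decode
            self.pc += 1
            if op == "addi":              # regs[a] += b, unprivileged
                self.regs[a] += b
            elif op == "syscall":         # trap into supervisor mode
                self.regs[7] = self.pc    # save return address
                self.mode = SUPERVISOR
                self.pc = self.trap_vector
            elif op == "rfe":             # return from exception, privileged
                if self.mode != SUPERVISOR:
                    raise RuntimeError("privilege violation")
                self.mode = USER
                self.pc = self.regs[7]
            else:
                raise RuntimeError("illegal instruction")

    # Run a user-mode program that makes a system call and returns from it.
    prog = {0: ("addi", 1, 5), 1: ("syscall", 0, 0), 0x40: ("rfe", 0, 0)}
    cpu = Cpu(prog)
    cpu.mode = USER
    for _ in range(3):
        cpu.step()
    print(cpu.regs[1], cpu.mode)          # -> 5 0  (back in user mode)

A full system simulator wraps memory management, exceptions, and device models around a core of this shape, which is what allows real firmware and unmodified operating systems to boot on it.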
Figure 1 shows multiple instances of Simics simulating target systems based on a variety of different processor architectures, each running a corresponding operating system:

• An x86 (Pentium II) machine running Red Hat 6.2 and a KDE desktop (large window in the center), showing two Netscape sessions connected to actual, live Web servers;
• A second x86 machine (top right) showing the Windows NT login screen;
• An UltraSparc II machine running Solaris 8 and MySQL (middle left);
• A Simics command line for an UltraSparc III model before “powering on” (bottom left);
• An IPF (Itanium) model running Red Hat 7.2 (top left);
• A PowerPC machine running VxWorks (top center);
• An x86-64 (Hammer) machine running Windows XP (the simulated processor is running in 32-bit legacy mode), in the bottom-right window.

Figure 1. Simics simulation of target systems based on several processor architectures. Each simulated system can run unmodified operating systems and applications.

The window in the bottom-left corner of Figure 1 shows the Simics command line. All other Simics command windows are hidden. The screen shot is taken from a dual-processor 933-MHz Pentium III system with 512 Mbytes of memory, running Red Hat Linux 7.2. All the Simics processes are running on that same system.

In addition to processor models, Simics includes device models accurate enough to run the real firmware and device drivers. For example, Simics/UltraSparc III will run the real OpenBoot PROM, and Simics/x86 will correctly install and run Windows XP from the installation disks.

Simics views each target machine as a node, representing a resource such as a Web server, a database engine, a router, or a client.

Figure 2. Sample network setup for Simics simulation. The simulation is distributed over four host workstations, and two clients are talking to the Web server, which has a database back end. (In the figure, client A is Simics/x86 running Red Hat Linux 6.2 with KDE and Netscape; client B is Simics/Hammer running Windows XP with Explorer; the Web server is Simics/Alpha running Red Hat Linux 6.0 with Apache and mwforum, alongside Simics Central; and the database server is Simics/UltraSparc III running Solaris 8 and MySQL. The simulated network is overlaid on the network of simulators.)

[...] enough to use interactively. Through the mouse or a keyboard, users can give input to clients A or B, which run within windows on the respective host window systems. Simics Central acts as a router that lets users traceroute into the simulated network from the host environment (and vice versa). With this setup, we can interactively browse the different discussion groups on the Web server and write new messages with acceptable response. Retrieving the first Web page that includes a list of all discussion groups takes approximately 30 seconds.

The point here, of course, is that this is a fully simulated setup. For example, any Simics session can be stopped to single step, inspect state, and so on, in which case the other Simics processes automatically pause pending simulated global time progress. The simulation can access memory traffic anywhere, set breakpoints anywhere, and modify any of the systems (such as adding new instructions or caches). It can record and time-stamp all user input—for example, keyboard and mouse—and play back the entire session. The simulation can also save the entire setup to a checkpoint and bring it up again in repeat sessions.

APPLICATIONS FOR FULL SYSTEM SIMULATION

Figure 3 shows the major task dependencies in developing a complete high-end digital system. Note that full system simulation relaxes many dependencies by providing a single platform across the development cycle. Each task can move from an existing system model toward the intended next-generation model at its own pace. In other words, each task can begin within an abstract and—by design—incorrect context, which gradually becomes more representative of the real future system. The time-to-market gains and risk reductions [...]
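As noted above for the networked setup, stopping any one Simics process causes the others to pause pending simulated global time progress. A common way to obtain that behavior is to let each simulator advance virtual time only in bounded quanta and synchronize at a barrier before proceeding. The sketch below illustrates the idea only; the names and the one-millisecond quantum are assumptions, and this is not the Simics Central protocol.

    # Conceptual sketch of how a network of simulator processes can keep a
    # common notion of virtual time: each node runs at most one quantum ahead,
    # then blocks at a barrier. Stopping any one node therefore stalls global
    # virtual time for all of them.
    import threading

    QUANTUM_NS = 1_000_000      # each node runs 1 ms of virtual time, then syncs

    class GlobalClock:
        def __init__(self, num_nodes):
            self.now_ns = 0
            # the barrier action runs exactly once each time all nodes arrive
            self.barrier = threading.Barrier(num_nodes, action=self._advance)

        def _advance(self):
            self.now_ns += QUANTUM_NS

        def end_of_quantum(self):
            self.barrier.wait()  # block until every node has finished the quantum

    def node(name, clock, run_enabled, quanta):
        for _ in range(quanta):
            run_enabled.wait()   # a node stopped for inspection never gets here,
                                 # so every other node stalls at the barrier
            # ... execute QUANTUM_NS worth of simulated instructions for `name` ...
            clock.end_of_quantum()

    names = ["client-a", "client-b", "web-server", "db-server"]
    clock = GlobalClock(len(names))
    enables = {n: threading.Event() for n in names}
    for e in enables.values():
        e.set()                  # e.clear() on any node would pause the whole net

    threads = [threading.Thread(target=node, args=(n, clock, enables[n], 5))
               for n in names]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("global virtual time:", clock.now_ns, "ns")   # 5 quanta -> 5,000,000 ns

Because every node blocks until the slowest (or a deliberately stopped) participant catches up, single-stepping one machine effectively freezes virtual time for the entire simulated network, which is what makes the distributed setup behave like a single deterministic system.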