CACM Observations on Supercomputing Alternatives: Did the MPP Bandwagon Lead to a Cul-de-Sac?

Viewpoint. Gordon Bell. Photo illustration by Robert Vizzini, 1995.

Observations on Supercomputing Alternatives: Did the MPP Bandwagon Lead to a Cul-de-Sac?

For over a decade, government and the technical computing community have focused on achieving a teraflop-speed supercomputer. In 1989, I predicted this goal would be reached in mid-1995 for a $30 million computer by using interconnected, "killer" complementary metal oxide semiconductor (CMOS) microprocessors [3–5]. The goal is likely to be reached in 1996 in a much more dramatic fashion than predicted because it is likely to be based on PC technology. Furthermore, by clustering PCs using System Area Nets (SANs), scalable computing can be widely available at low cost.

During 1995, Cray Research, Fujitsu, IBM, Intel, NEC, and Silicon Graphics introduced new technical computers. Intel announced the P6, a PC-compatible chip with a peak advertised performance (PAP) of 133Mflops, to be raised to 266Mflops. In September, Sandia ordered a $45.6 million, 9,072-processor, 1.8Tflops computer using the chip, scheduled to be installed in November 1996, that will provide 39Kflops/dollar, or 1.2Tflops at the $30 million supercomputer price of 1989. Adjusting for inflation allows the 1996 supercomputer price to rise to $40 million and gets 1.6Tflops. Compaq Computer and Tandem Computers announced scalable computer clusters based on the P6 for the commercial market. Dongarra's Survey of Technical Computing Sites shows that the world's top 10 have an installed peak capacity of about 850Gflops, all of which contain hundreds of computers.
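These figures hang together arithmetically. A back-of-the-envelope check (assuming, as the comparison in the text does, that peak advertised performance scales linearly with price):

```python
# Sandia's P6-based machine, as announced (figures from the paragraph above)
price = 45.6e6            # dollars
processors = 9_072
peak = 1.8e12             # flops (peak advertised performance)

flops_per_dollar = peak / price
print(f"{flops_per_dollar / 1e3:.0f} Kflops/dollar")                 # ~39 Kflops/dollar
print(f"{peak / processors / 1e6:.0f} Mflops per processor")         # ~198 Mflops, within the P6 PAP range quoted above

# what the same price/performance buys at the 1989 price and the inflation-adjusted price
print(f"$30M buys ~{flops_per_dollar * 30e6 / 1e12:.1f} Tflops")     # ~1.2 Tflops
print(f"$40M buys ~{flops_per_dollar * 40e6 / 1e12:.1f} Tflops")     # ~1.6 Tflops
```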
Tera Computer, an ARPA-funded state computer company, went public with an initial public offering to raise money to complete its computer. In the same period, Thinking Machines, a state computer company, and Kendall Square Research, which offered massive parallelism with over 1,000 processors, filed for Chapter 11 but reemerged to offer software and systems based on interconnected workstations. In March, Cray Computer filed for Chapter 11, following the demise of ACRI, aka Stern Computer Company of Lyon, France. Convex, which uses Hewlett-Packard's PA-RISC chips, was bought by Hewlett-Packard. Other small companies making parallel computers are certain to fail, while other companies, such as Digital Equipment, are still entering the market.

These events call for a look at how technical computing is now likely to evolve. Five distinct computer structures are now vying for survival:

• Cray vector-style architecture supercomputers consisting of multiple vector processors that access a common memory and are built from the fastest ECL (emitter coupled logic) and GaAs (gallium arsenide) circuit technology (Cray Research and NEC); Fujitsu and Hitachi have switched to CMOS but remain on this path.

• A computer cluster, or multicomputer, formed from single, fast vector-processor computers connected via a fast, high-capacity switch (Fujitsu). The vector processor is implemented in CMOS technology. NEC has also announced a CMOS vector processor operating at 2Gflops per node that can scale to 512 processors.

• Headless workstation clusters, or multicomputers, formed from workstation "killer" CMOS microprocessor computers connected via SANs that are proprietary, high-bandwidth, low-latency switches; the IBM SP2 uses stacks of workstations. UC/Berkeley is building clusters using off-the-shelf Sun Microsystems workstations interconnected via Myrinet's high-bandwidth switch. Intel's Paragon is formed from specially packaged CMOS microprocessor computers connected via its high-bandwidth, low-latency switch. Tandem and Compaq have introduced clusters for the commercial market using Tandem's ServerNet to interconnect Compaq 4-processor computers.

• "Multis," or multiple CMOS microprocessors connected to large caches that access a common memory via a common bus (the Cray Superserver using Sun SPARC micros, the Silicon Graphics Power Challenge using MIPS micros), which I predicted to be computing's "mainline" structure [2] and which have limited scalability of about 10, although Cray's Superserver uses 64 SPARC processors.

• Distributed shared-memory multiprocessors formed from workstation CMOS microprocessor or multi-microprocessor computers that communicate with one another via a proprietary, high-bandwidth, low-latency switch. Processors can access both local and remote memories as a multiprocessor (Convex, Cray). Silicon Graphics is following this path for scalability. Other companies are using the IEEE Scalable Coherent Interface (SCI) to build scalable multis with a single memory to simplify the operating system and apps porting.

Figure 1. PAP* Gflops(t) for supers and MPPs for $30M (unless noted); peak and number of processors in parentheses. *Peak Advertised Performance. [Chart: PAP from 1990 to 2000 on a 1 to 1000 Gflops log scale, with points for Intel (Sandia), Fujitsu (512), Cray, NEC, CM5 (1K), Cray T3D, T90, the Bell Prize, Fujitsu (35Gf/16), IBM SP2 (17Gf/64), Cray Research supers, and SGI (4.8Gf/16), against a 60%/yr ($100M) extrapolation and DARPA's mid-1990 proposal to reach 1Tf.]

Figure 1 shows performance measured in PAP for a $30 million expenditure, or roughly the cost of a supercomputer in the mid-1990s. Technical computing has evolved. Since 1990, ARPA's High Performance Computing and Communication Initiative (HPCCI) has stimulated the market by developing, purchasing, and using highly parallel computers for scientific and technical computing. It is especially interesting to observe the effects of this effort as the teraflop quest continues.

From the details of the announcements and figure, I draw 13 major conclusions:

1. There is more diversity in computing alternatives than I predicted. While competition makes for lower hardware cost, it inhibits the attraction of apps software by independent software vendors. Cray (T90), Fujitsu, and NEC are continuing to evolve the supercomputer, utilizing existing apps. Fujitsu's multicomputer is a cost-effective hybrid of the traditional super that enables existing apps to run effectively and be evolved. Silicon Graphics is evolving the workstation and compatible multi with a wide range of apps. Convex, Cray, IBM, Intel, and nCUBE are all trying to establish massively parallel processing (MPP) as a viable computer structure. IBM is likely to be successful based on its ability to fund commercial apps. Intel's P6 microprocessor makes the PC the most likely candidate for the most cost-effective nodes in both the commercial and technical markets.

2. [...] developing parallel computers. More impressive is the fact that technical users have made progress in realizing the PAP for various apps, as shown by the Bell Prize. The growth in apps performance by this measure has roughly doubled yearly, with the 1995 winner operating at 0.5Tflops using a specialized computer. The winning MPP operated at 179Gflops.
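The two growth rates quoted here compound very differently: Figure 1's 60%-per-year trend for PAP at a fixed price versus the roughly yearly doubling of delivered Bell Prize performance. A small illustration (the factor-of-ten step is an assumed round number, chosen only to show the shape of the curves, not a value taken from the figure):

```python
from math import log

pap_growth = 1.60        # Figure 1's 60%/yr trend for PAP at a fixed price
bell_prize_growth = 2.0  # conclusion 2: delivered apps performance roughly doubles yearly

def years_to_multiply(factor, annual_rate):
    """How long a steady annual growth rate takes to improve by `factor`."""
    return log(factor) / log(annual_rate)

# closing a 10x gap, e.g. from ~0.1 Tflops toward the 1 Tflops goal
print(f"10x at 60%/yr       : {years_to_multiply(10, pap_growth):.1f} years")        # ~4.9
print(f"10x doubling yearly : {years_to_multiply(10, bell_prize_growth):.1f} years")  # ~3.3
```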
3. Price differences among the alternatives are often explained by differences in memory size and bandwidth. With computers, you get what you pay for. This rarely shows up in PAP, but appears downstream in RAP (real application performance) and occasionally on benchmarks. However, in 1995, most computers operated well on the Linpack benchmark, provided there was sufficient memory to scale the problem size and cover communication overhead.

4. CMOS has effectively replaced ECL and GaAs as the technology for building the highest-performance computers. Fujitsu's CMOS vector processor has a higher PAP than Cray Research's computers.

5. The Cray vector-style architecture is not dead, to be replaced by multiple, slow CMOS workstation-style processors. The common wisdom within the U.S. academic community, which is the dominant receptor of research funding and sets the research and funding agenda, appears to have been wrong. The MPP bandwagon ran over vectors, replacing them with many interconnected "killer" micros used for workstations. These workstation micros are low cost and may be tuned for the benchmark du jour to provide high hype. MPP machines often perform poorly for problems where high bandwidth between [...] fastest CMOS micros to equal a supercomputer vector processor in peak power. When used in parallel, power can be significantly reduced, depending on the computer (its memory and interconnectability) and problem granularity. Most vector apps are unlikely to run on multicomputers for a long time. Silicon Graphics' multi is more likely to provide parallelism for fine granularity even though its scalability and memory bandwidth are limited. Silicon Graphics has the largest market share for technical computing, even though it is not the fastest. Convex, Cray, Fujitsu, and NEC are supporting traditional supers and MPPs. Since it is unlikely that MPPs based on CMOS micros can take over supercomputer workloads, the transition, if it happens at all, is certain to be costly. It is more likely CMOS micros will approach the speed of supers because supers trade off vector speed for scalar speed.

6. The prediction by NEC and me [4, 5] that a 1Tflop, classical multiprocessor supercomputer would not be available until 2000 still seems possible, even though the T90 supercomputer isn't quite on this trajectory. The difficulty is building a high-bandwidth, low-latency switch to connect processors and memories, since latency increases with bandwidth. A 1Tflop multiprocessor would require a switch of at least 16Tbytes per second to feed the vector units using the Cray formula.

7. No teraflop before its time. I predicted that a $30 million, 1Tflop computer would be available in 1995 [3–5], or by mid-1996 at the latest. The price of computation, using Thinking Machines' CM5 PAP as a reference, is only increased by 50% with Cray's T3D MPP.
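The 16Tbytes-per-second figure in conclusion 6 follows if the "Cray formula" is read as roughly two 8-byte operands streamed per floating-point operation; that reading is an assumption, since the article does not spell the formula out:

```python
peak_flops = 1.0e12          # a 1 Tflop multiprocessor
bytes_per_flop = 2 * 8       # assumed: two 64-bit operands fetched per operation

switch_bandwidth = peak_flops * bytes_per_flop
print(f"{switch_bandwidth / 1e12:.0f} TB/s")   # 16 TB/s, matching the article's estimate
```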
Recommended publications
  • Evaluation of Architectural Support for Global Address-Based Communication in Large-Scale Parallel Machines
    Evaluation of Architectural Support for Global Address-Based Communication in Large-Scale Parallel Machines. Arvind Krishnamurthy, Klaus E. Schauser, Chris J. Scheiman, Randolph Y. Wang, David E. Culler, and Katherine Yelick. Abstract: Large-scale parallel machines are incorporating increasingly sophisticated architectural support for user-level messaging and global memory access. We provide a systematic evaluation of a broad spectrum of current design alternatives based on our implementations of a global address language on the Thinking Machines CM-5, Intel Paragon, Meiko CS-2, Cray T3D, and Berkeley NOW. This evaluation includes a range of compilation strategies that make varying use of the network processor; each is optimized for the target architecture and the particular strategy. We analyze a family of interacting issues that determine the performance tradeoffs [...] the specific target architecture. We have developed multiple highly optimized versions of this compiler, employing a range of code generation strategies for machines with dedicated network processors. In this study we use this spectrum of runtime techniques to evaluate the performance tradeoffs in architectural support for communication found in several of the current large-scale parallel machines. We consider five important large-scale parallel platforms that have varying degrees of architectural support for communication: the Thinking Machines CM-5, Intel Paragon, Meiko CS-2, Cray T3D, and Berkeley NOW. The CM-5 provides direct user-level access to the network; the Paragon provides a network processor [...]
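The excerpt describes compiling a global address language down to whatever messaging support each machine offers. As a rough, hypothetical sketch of the idea only (not the paper's compiler or runtime), a global address can be modeled as a (node, offset) pair, with a remote access turning into a request handled on the owning node:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GlobalAddr:
    node: int      # which node owns the data
    offset: int    # index into that node's local memory

class Node:
    """Toy node in a global-address-space machine (illustrative only)."""
    def __init__(self, node_id, size, network):
        self.id = node_id
        self.memory = [0] * size
        self.network = network    # maps node id -> Node; stands in for the interconnect

    def get(self, addr):
        if addr.node == self.id:                 # local access: plain load
            return self.memory[addr.offset]
        # remote access: on a real machine this becomes a message handled by the
        # NIC, a dedicated network processor, or an interrupt handler
        return self.network[addr.node].handle_get(addr.offset)

    def put(self, addr, value):
        if addr.node == self.id:
            self.memory[addr.offset] = value
        else:
            self.network[addr.node].handle_put(addr.offset, value)

    def handle_get(self, offset):
        return self.memory[offset]

    def handle_put(self, offset, value):
        self.memory[offset] = value

# usage: node 0 writes into node 1's memory and reads it back
network = {}
n0, n1 = Node(0, 8, network), Node(1, 8, network)
network.update({0: n0, 1: n1})
n0.put(GlobalAddr(node=1, offset=3), 42)
assert n0.get(GlobalAddr(node=1, offset=3)) == 42
```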
  • Parallel Computer Systems
    Parallel Computer Systems. Randal E. Bryant, CS 347, Lecture 27, April 29, 1997. Topics: • Parallel Applications • Shared vs. Distributed Model • Concurrency Models • Single Bus Systems • Network-Based Systems • Lessons Learned. Motivation. Limits to Sequential Processing: • Cannot push clock rates beyond technological limits • Instruction-level parallelism gets diminishing returns – 4-way superscalar machines only get an average of 1.5 instructions / cycle – Branch prediction, speculative execution, etc. yield diminishing returns. Applications have Insatiable Appetite for Computing: • Modeling of physical systems • Virtual reality, real-time graphics, video • Database search, data mining. Many Applications can Exploit Parallelism: • Work on multiple parts of problem simultaneously • Synchronize to coordinate efforts • Communicate to share information. Historical Perspective: The Graveyard • Lots of venture capital and DoD Research $$'s • Too big to enumerate, but some examples … ILLIAC IV • Early research machine with overambitious technology. Thinking Machines • CM-2: 64K single-bit processors with single controller (SIMD) • CM-5: Tightly coupled network of SPARC processors. Encore Computer • Shared memory machine using National microprocessors. Kendall Square Research KSR-1 • Shared memory machine using proprietary processor. NCUBE / Intel Hypercube / Intel Paragon • Connected network of small processors • Survive only in niche markets. Historical Perspective: Successes. Shared Memory Multiprocessors (SMP's) • E.g., SGI
  • Multiple Instruction Issue in the Nonstop Cyclone System
    Multiple Instruction Issue in the NonStop Cyclone System. Robert W. Horst, Richard L. Harris, Robert L. Jardine. Tandem Technical Report 90.6, June 1990, Part Number 48007. Tandem Computers Incorporated, 19333 Vallco Parkway, Cupertino, CA 95014. Abstract: This paper describes the architecture for issuing multiple instructions per clock in the NonStop Cyclone Processor. Pairs of instructions are fetched and decoded by a dual two-stage prefetch pipeline and passed to a dual six-stage pipeline for execution. Dynamic branch prediction is used to reduce branch penalties. A unique microcode routine for each pair is stored in the large duplexed control store. The microcode controls parallel data paths optimized for executing the most frequent instruction pairs. Other features of the architecture include cache support for unaligned double-precision accesses, a virtually-addressed main memory, and a novel precise exception mechanism. (A previous version of this paper was published in the proceedings of The 17th Annual International Symposium on Computer Architecture, May 28-31, 1990, Seattle, Washington.) Figure 1 [block diagram: CPUs 0-15 on dual Dynabus X and Y interprocessor buses with serial fibers, each CPU with memory and dual I/O processors connecting to dual-ported disk and tape controllers, sections 0-3].
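The abstract notes that Cyclone uses dynamic branch prediction to reduce branch penalties. The report excerpt does not give the scheme, but a common textbook form is a table of 2-bit saturating counters indexed by the branch address; a minimal sketch of that generic scheme (not necessarily Cyclone's), with all names hypothetical:

```python
class TwoBitPredictor:
    """Table of 2-bit saturating counters indexed by the low bits of the branch PC.
    Counter values 0-1 predict not-taken, 2-3 predict taken; each outcome nudges
    the counter one step, so one anomalous branch does not flip a steady pattern."""
    def __init__(self, entries=1024):
        self.entries = entries
        self.counters = [2] * entries           # start weakly taken

    def _index(self, pc):
        return pc % self.entries

    def predict(self, pc):
        return self.counters[self._index(pc)] >= 2   # True = predict taken

    def update(self, pc, taken):
        i = self._index(pc)
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

# usage: a loop branch taken 9 times then falling through is mispredicted only once
bp = TwoBitPredictor()
mispredicts = 0
for taken in [True] * 9 + [False]:
    if bp.predict(0x400) != taken:
        mispredicts += 1
    bp.update(0x400, taken)
print(mispredicts)   # 1: only the final not-taken branch is mispredicted
```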
  • Fault Tolerance in Tandem Computer Systems
    Fault Tolerance in Tandem Computer Systems. Joel Bartlett, Wendy Bartlett, Richard Carr, Dave Garcia, Jim Gray, Robert Horst, Robert Jardine, Dan Lenoski, Dix McGuire. Tandem Technical Report 90.5, May 1990, Part Number 40666. (Joel Bartlett's present address: Digital Equipment Corporation, Western Research Laboratory, Palo Alto, California.) Abstract: Tandem produces high-availability, general-purpose computers that provide fault tolerance through fail-fast hardware modules and fault-tolerant software. This chapter presents a historical perspective of the Tandem systems' evolution and provides a synopsis of the company's current approach to implementing these systems. The article does not cover products announced since January 1990. At the hardware level, a Tandem system is a loosely-coupled multiprocessor with fail-fast modules connected with dual paths. A system can include a range of processors, interconnected through a hierarchical fault-tolerant local network. A system can also include a variety of peripherals, attached with dual-ported controllers. A novel disk subsystem allows a choice between low cost-per-byte and low cost-per-access.
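The fault-tolerant software side of this report is built around process-pairs: a primary process checkpoints its state to a backup, and the backup takes over from the last checkpoint when the primary fail-fast stops. The following is only an illustrative sketch of that concept, not Tandem's actual mechanism or API:

```python
class Backup:
    """Holds the most recent checkpoint and resumes from it on takeover."""
    def __init__(self):
        self.checkpoint = None

    def receive_checkpoint(self, state):
        self.checkpoint = dict(state)      # copy, as if delivered over the interprocessor bus

    def take_over(self):
        return dict(self.checkpoint)       # the backup becomes the new primary's state

class Primary:
    """Processes requests, checkpointing to the backup before replying."""
    def __init__(self, backup):
        self.backup = backup
        self.state = {"balance": 0, "last_request": None}

    def handle(self, request_id, amount):
        if request_id == self.state["last_request"]:
            return self.state["balance"]   # duplicate request after takeover: no double-apply
        self.state["balance"] += amount
        self.state["last_request"] = request_id
        self.backup.receive_checkpoint(self.state)   # checkpoint *before* acknowledging
        return self.state["balance"]

# usage: the primary handles two requests, "fails fast", and the backup resumes consistently
backup = Backup()
primary = Primary(backup)
primary.handle(1, 100)
primary.handle(2, 50)
# primary crashes here; the backup resumes from the last checkpoint
state = backup.take_over()
assert state["balance"] == 150
```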
  • Computer Architectures an Overview
    Computer Architectures: An Overview. PDF generated using the open source mwlib toolkit (see http://code.pediapress.com/ for more information), 25 Feb 2012. Contents: Microarchitecture; x86; PowerPC; IBM POWER; MIPS architecture; SPARC; ARM architecture; DEC Alpha; AlphaStation; AlphaServer; Very long instruction word; Instruction-level parallelism; Explicitly parallel instruction computing; References (article sources, image sources, licenses and contributors). Microarchitecture: In computer engineering, microarchitecture (sometimes abbreviated to µarch or uarch), also called computer organization, is the way a given instruction set architecture (ISA) is implemented on a processor. A given ISA may be implemented with different microarchitectures.[1] Implementations might vary due to different goals of a given design or due to shifts in technology.[2] Computer architecture is the combination of microarchitecture and instruction set design. Relation to instruction set architecture: The ISA is roughly the same as the programming model of a processor as seen by an assembly language programmer or compiler writer. The ISA includes the execution model, processor registers, and address and data formats, among other things. The microarchitecture includes the constituent parts of the processor and how these interconnect and interoperate to implement the ISA. [Image: the Intel Core microarchitecture.] The microarchitecture of a machine is usually represented as (more or less detailed) diagrams that describe the interconnections of the various microarchitectural elements of the machine, which may be everything from single gates and registers to complete arithmetic logic units (ALUs) and even larger elements.
  • Scalability Study of KSR-1
    Scalability Study of the KSR-1. Appeared in Parallel Computing, Vol 22, 1996, 739-759. Umakishore Ramachandran, Gautam Shah, S. Ravikumar, Jeyakumar Muthukumarasamy. College of Computing, Georgia Institute of Technology, Atlanta, GA 30332. Phone: (404) 894-5136, e-mail: [email protected]. Abstract: Scalability of parallel architectures is an interesting area of current research. Shared memory parallel programming is attractive stemming from its relative ease in transitioning from sequential programming. However, there has been concern in the architectural community regarding the scalability of shared memory parallel architectures owing to the potential for large latencies for remote memory accesses. KSR-1 is a commercial shared memory parallel architecture, and the scalability of KSR-1 is the focus of this research. The study is conducted using a range of experiments spanning latency measurements, synchronization, and analysis of parallel algorithms for three computational kernels and an application. The key conclusions from this study are as follows: The communication network of KSR-1, a pipelined unidirectional ring, is fairly resilient in supporting simultaneous remote memory accesses from several processors. The multiple communication paths realized through this pipelining help in the efficient implementation of tournament-style barrier synchronization algorithms. Parallel algorithms that have fairly regular and contiguous data access patterns scale well on this architecture. The architectural features of KSR-1 such as the poststore and prefetch are useful for boosting the performance of parallel applications. The sizes of the caches available at each node may be too small for efficiently implementing large data structures. The network does saturate when there are simultaneous remote memory accesses from a fully populated (32 node) ring.
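Among the conclusions above is that the ring's pipelining helps implement tournament-style barriers efficiently. As a rough illustration of that algorithm family only (a generic one-shot version in Python threads, not the paper's KSR-1 implementation):

```python
import threading
from math import log2

class TournamentBarrier:
    """One-shot tournament barrier for a power-of-two number of threads.
    In round k, the thread whose id is a multiple of 2**(k+1) waits for its
    opponent (id + 2**k); the opponent reports its arrival and then blocks.
    Thread 0 wins every round, so when it finishes the last round every
    thread has arrived, and it releases all the losers."""

    def __init__(self, n):
        assert n > 1 and n & (n - 1) == 0, "n must be a power of two"
        self.n = n
        self.rounds = int(log2(n))
        self.arrived = [[threading.Event() for _ in range(n)] for _ in range(self.rounds)]
        self.released = [threading.Event() for _ in range(n)]

    def wait(self, tid):
        for k in range(self.rounds):
            if tid % (2 ** (k + 1)) == 0:
                self.arrived[k][tid + 2 ** k].wait()   # winner: wait for this round's loser
            else:
                self.arrived[k][tid].set()             # loser: report arrival, then block
                self.released[tid].wait()
                return
        for other in range(1, self.n):                 # champion: wake everyone up
            self.released[other].set()

# usage: four workers meet at the barrier before any of them proceeds
barrier = TournamentBarrier(4)
threads = [threading.Thread(target=barrier.wait, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```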
  • HP Nonstop Systems Deployments
    HP NonStop systems – as you haven't seen them before. Deployed in support of mission-critical applications in manufacturing and distribution, telecommunications, retail and wholesale banking, transportation and entertainment. Richard Buckle, Founder and CEO, Pyalla Technologies, LLC. About the Author: Richard Buckle is the founder and CEO of Pyalla Technologies, LLC. He has enjoyed a long association with the IT industry as a user, vendor, and more recently, as an industry commentator. Richard has over 25 years of research experience with HP's NonStop platform, including eight years working at Tandem Computers, followed by just as many years at InSession Inc. and ACI Worldwide, as well as four years at Golden Gate, now a part of Oracle. Well known to the user communities of HP and IBM, Richard served as a Director of ITUG (2000-2006), as its Chairman (2004-2005), and as the Director of Marketing of the IBM user group SHARE (2007-2008). Richard provides industry commentary and opinions through his community blog as well as through his industry association and vendor blogs, web publications and eNewsletters. You can follow him at www.itug-connection.blogspot.com and at ATMmarketplace.com, as well as read his editorial, Musings on NonStop, published monthly in Tandemworld.net. Introduction: The strength of NonStop systems has always been their support of real-time, mission-critical transaction processing; starting out with an application built on a fault-tolerant system continues to be the simplest way to assure its availability.
  • Atalla HSM & HPE Nonstop
    Data Security Overview, GTUG – May 2018. Darren Burkey, Senior PreSales Consultant, Atalla, [email protected]. The New Combined Company: built on stability, acquisition and innovation. [Slide: timeline noting decades of history behind Network Management, COBOL and Data Protector.] "Better Together": the portfolio has breadth and depth across Information Governance, Linux & Open Source, DevOps, IT Operations, Cloud and Security. [Slide: portfolio grid listing Digital Safe, Data Protector, Control Point, Structured Data Manager, Storage Optimizer; Operations Bridge, Data Center Automation, Network Management; Service Management, Cloud Service Automation, Hybrid Cloud Management; Enterprise Linux, OpenStack Private Cloud; Mainframe Solutions, COBOL Development, Host Connectivity, Software Delivery and Testing; Identity-based Access Governance; Software-defined Storage; Workload Migration; Big Data Analytics; IDOL.] Data security portfolio: Voltage & Atalla. Data privacy & security compliance & risk reduction; secure analytics, privacy and pseudonymization; hybrid cloud data protection & collaboration. Voltage SecureData: Enterprise, Big Data, Cloud, Mobile and Payments data security (tokenization, encryption, masking). Voltage SecureMail: easy, scalable enterprise email encryption; Voltage SecureMail Cloud: email encryption SaaS. Atalla HSM: payments crypto appliances; Enterprise Secure Key Manager: key storage and KMIP key management for storage and 3rd-party apps. Atalla Product Overview. History of Atalla: established in 1972; mission: protect financial transactions; Atalla introduced first [...]
  • A Nonstop* Kernel Joel F. Bartlett Tandem Computers Inc. Cupertino
    A NonStop Kernel. Joel F. Bartlett, Tandem Computers Inc., Cupertino, Ca. Abstract: The Tandem NonStop System is a fault-tolerant [1], expandable, and distributed computer system designed expressly for online transaction processing. This paper describes the key primitives of the kernel of the operating system. The first section describes the basic hardware building blocks and introduces their software analogs: processes and messages. Using these primitives, a mechanism that allows fault-tolerant resource access, the process-pair, is described. The paper concludes with some observations on this type of system structure and on actual use of the system. Introduction: Fault-tolerant computing systems have been built over the last two decades in a number of places to satisfy a variety of goals. These results and differing approaches have been summarized in [1,3,11]. [...] significantly expanded over its lifetime. The Tandem system is intended to fit these requirements. 1. Hardware Organization: A network consists of up to 255 nodes. Each node is composed of multiple processor and I/O controller modules interconnected by redundant buses [2,3], as shown in PMS [3] notation in Figure 1. A node consists of two to sixteen processors, where each processor (Pcentral) has its own power supply, memory, backup battery, and I/O channel (Sio). All processors are interconnected by redundant interprocessor buses (Sipb). Each I/O controller (Kdisc, Ksync, etc.) is connected to two I/O channels and is powered from two different power supplies using a diode ORing scheme. Finally, dual-ported I/O devices such as discs (Tdisc) may be connected to a second I/O controller. The contents of a disc may be "mirrored" on a second volume, but this [...]
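The excerpt ends as it introduces mirrored disc volumes reached through dual-ported controllers. A minimal, purely illustrative sketch of the idea (not Tandem's I/O system): every write goes to both volumes, and a read can be served by whichever volume is still up:

```python
class Volume:
    def __init__(self, name):
        self.name = name
        self.blocks = {}
        self.up = True

    def write(self, block_no, data):
        if self.up:
            self.blocks[block_no] = data

    def read(self, block_no):
        if not self.up:
            raise IOError(f"{self.name} is down")
        return self.blocks[block_no]

class MirroredDisc:
    """Writes go to both volumes; reads fall back to the mirror on failure."""
    def __init__(self, primary, mirror):
        self.volumes = (primary, mirror)

    def write(self, block_no, data):
        for v in self.volumes:
            v.write(block_no, data)

    def read(self, block_no):
        for v in self.volumes:
            try:
                return v.read(block_no)
            except IOError:
                continue
        raise IOError("both volumes down")

# usage: data survives the loss of one volume
disc = MirroredDisc(Volume("volume-P"), Volume("volume-M"))
disc.write(7, b"ledger page")
disc.volumes[0].up = False               # one volume fails
assert disc.read(7) == b"ledger page"    # the mirror still serves the block
```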
  • Why Do Computers Stop and What Can Be Done About It?
    "1,TANDEMCOMPUTERS Why Do Computers Stop and What Can Be Done About It? Jim Gray Technical Report 85.7 June 1985 PN87614 Why Do Computers Stop and What Can Be Done About It? Jim Gray June 1985 Tandem Technical report 85.7 Tandem TR 85.7 Why Do Computers Stop and What Can Be Done About It? Jim Gray June, 1985 Revised November, 1985 ABSTRACT An analysis of the failure statistics of a commercially available fault-tolerant system shows that administration and software are the major contributors to failure. Various approachs to software fault- tolerance are then discussed notably process-pairs, transactions and reliable storage. It is pointed out that faults in production software are often soft (transient) and that a transaction mechanism combined with persistent process-pairs provides fault-tolerant execution -- the key to software fault-tolerance. DISCLAIMER This paper is not an "official" Tandem statement on fault-tolerance. Rather, it expresses the author's research on the topic. An early version of this paper appeared in the proceedings of the German Association for Computing Machinery Conference on Office Automation, Erlangen, Oct. 2-4, 1985. TABLE OF CONTENTS Introduct ion 1 Hardware Availability by Modular Redundancy....•.•.....•..•..•• 3 Analysis of Failures of a Fault-tolerant System•.••......•••.•. 7 Implications of the Analysis of MTBF ...•••.•.•••••...•........ 12 Fault-tolerant Execution 15 Software Modularity Through Processes and Messages 16 Fault Containment Through Fail-Stop Software Modules 16 Software Faults Are Soft, the Bohrbug-Heisenbug Hypothesis.17 Process-pairs For Fault-tolerant Execution 20 Transactions for Data Integrity..•......................... 24 Transactions for Simple Fault-tolerant Execution 25 Fault-tolerant Communication .......•..•.....•.•.•.•.•.•......
  • The KSR1: Experimentation and Modeling of Poststore Amy Apon Clemson University, [email protected]
    Clemson University TigerPrints, Publications, School of Computing, 2-1993. The KSR1: Experimentation and Modeling of Poststore. Amy Apon (Clemson University, [email protected]), E. Rosti (Università degli Studi di Milano), E. Smirni (Vanderbilt University), T. D. Wagner (Vanderbilt University), M. Madhukar (Vanderbilt University), and L. W. Dowdy. This article is available at TigerPrints: https://tigerprints.clemson.edu/computing_pubs/9. Also issued as ORNL/TM-12287, Engineering Physics and Mathematics Division, Mathematical Sciences Section: The KSR1: Experimentation and Modeling of Poststore. E. Rosti, E. Smirni, T. D. Wagner, A. W. Apon, L. W. Dowdy. Dipartimento di Scienze dell'Informazione, Università degli Studi di Milano, Via Comelico 39, 20135 Milano, Italy; Computer Science Department, Vanderbilt University, Box 1679, Station B, Nashville, TN 37235. Date published: February 1993. This work was partially supported by sub-contract 19X-SL131V from the Oak Ridge National Laboratory, and by grant N. 92.01615.PF69 from the Italian CNR "Progetto Finalizzato Sistemi Informatici e Calcolo Parallelo - Sottoprogetto 3." Prepared by the Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, managed by Martin Marietta Energy Systems, Inc.
  • Tandem Computers Unplugged: a People's History
    Tandem Computers Unplugged: A People's History. [...] company's leadership in general and the charisma of Jimmy Treybig, the company's key founder, in specific? Again a resounding yes, but not entirely, as Jimmy hasn't been the only charismatic leader in Silicon Valley - a number come to mind. Was it because of the company's offbeat corporate culture, such as no private parking places, an on-campus swimming pool and of course the infamous beer busts? Well yes, but!!! How can it be that over 16 years since its merger with Compaq and nearly a decade since its reabsorption back into Hewlett Packard a vibrant and active online Tandem Computers alumni community still exists on Yahoo!Groups and LinkedIn? How can it be that in many countries in the world groups of Tandem alumni still get together in local pubs or other types of venues at least yearly? How can it be that many former employees maintain their collection of T-shirts, double-handled cups, pens, and trophies almost like shrines of some kind? How can it be that so many employees, when asked to look back on their working life, almost to a person claim that working at Tandem was one of the best places they have [...]? From an insider's point of view it was all of these things wrapped in a cocoon of an integrated corporate value system that permeated all corners of the company, worldwide, and deeply touched the souls and minds of all employees through the good and bad times. Tandem Computers Unplugged – A People's History is an attempt to capture not just the history of an important foundational contributor to what is today's Silicon Valley, but to share the experience through the eyes and hearts of the employees and through this process bring to light some of the important 'lessons learned' about how to manage and motivate talent.