The Marketplace of High Performance Computing

Erich Strohmaier (a,1), Jack J. Dongarra (a,b,2), Hans W. Meuer (c,3) and Horst D. Simon (d,4)

(a) Computer Science Department, University of Tennessee, Knoxville, TN 37996
(b) Mathematical Science Section, Oak Ridge National Lab., Oak Ridge, TN 37831
(c) Computing Center, University of Mannheim, D-68131 Mannheim, Germany
(d) NERSC, Lawrence Berkeley Laboratory, 50A, Berkeley, CA 94720

Abstract

In this paper we analyze the major trends and changes in the High Performance Computing (HPC) marketplace since the beginning of the journal 'Parallel Computing'. The initial success of vector computers in the seventies was driven by raw performance. The introduction of this type of computer system started the era of 'Supercomputing'. In the eighties the availability of standard development environments and of application software packages became more important. Next to performance, these criteria determined the success of MP vector systems, especially with industrial customers. MPPs became successful in the early nineties due to their better price/performance ratios, which were enabled by the attack of the 'killer micros'. In the lower and medium market segments the MPPs were replaced by microprocessor-based SMP systems in the middle of the nineties. This success was the basis for the emerging cluster concepts for the very high end systems. In the last few years only the companies which have entered the emerging markets for massively parallel database servers and financial applications attract enough business volume to be able to support the hardware development for the numerical high end computing market as well. Success in the traditional floating-point-intensive engineering applications seems to be no longer sufficient for survival in the market.

Key words: High Performance Computing, HPC market, Supercomputer market, HPC technology, Supercomputer technology

1 e-mail: [email protected]
2 e-mail: [email protected]
3 e-mail: [email protected]
4 e-mail: [email protected]

Preprint submitted to Elsevier Preprint, 26 July 1999

1 Introduction

"The Only Thing Constant Is Change." Looking back on the last decades, this certainly seems to be true for the market of High-Performance Computing (HPC) systems. This market has always been characterized by a rapid change of vendors, architectures, technologies and the usage of systems. Despite all these changes, the evolution of performance on a large scale seems to be a very steady and continuous process. Moore's Law is often cited in this context. If we plot the peak performance of the various computers of the last 5 decades in Figure 1, which could have been called the 'supercomputers' of their time [4,2], we indeed see how well this law holds for almost the complete lifespan of modern computing. On average we see an increase in performance of two orders of magnitude every decade.

Fig. 1. Performance of the fastest computer systems for the last 5 decades compared to Moore's Law.

In this paper we analyze the major trends and changes in the HPC market over the last three decades. For this we focus on systems which had at least some commercial relevance. Historical overviews with a different focus can be found in [8,9].
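To make this two-orders-of-magnitude-per-decade rate concrete, the short C sketch below extrapolates peak performance from an assumed starting point of roughly 1 KFlop/s around 1950. Both the starting value and the constant growth rate are rough approximations read off the trend in Figure 1, not measured data.

#include <stdio.h>
#include <math.h>

/* Back-of-the-envelope sketch of the trend in Fig. 1: peak performance
 * growing by two orders of magnitude (a factor of 100) per decade from
 * an assumed level of roughly 1 KFlop/s around 1950.  The starting value
 * and the constant rate are approximations read off the figure. */
int main(void)
{
    const double start_year  = 1950.0;
    const double start_flops = 1.0e3;   /* assumed ~1 KFlop/s in 1950 */

    for (int year = 1950; year <= 2000; year += 10) {
        double decades = (year - start_year) / 10.0;
        double flops   = start_flops * pow(100.0, decades);
        printf("%d: ~%.0e Flop/s\n", year, flops);
    }
    /* The extrapolation passes 1 GFlop/s in the eighties and reaches the
     * TFlop/s range in the late nineties, consistent with the trend line
     * and the systems named in Fig. 1. */
    return 0;
}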
In the second half of the seventies the introduction of vector computer systems marked the beginning of modern Supercomputing. These systems offered a performance advantage of at least one order of magnitude over the conventional systems of that time. Raw performance was the main, if not the only, selling argument. In the first half of the eighties the integration of vector systems into conventional computing environments became more important. Only those manufacturers which provided standard programming environments, operating systems and key applications were successful in getting industrial customers and survived. Performance was increased mainly by improved chip technologies and by producing shared memory multiprocessor systems.

Fostered by several government programs, massively parallel computing with scalable systems using distributed memory came into the focus of interest at the end of the eighties. Overcoming the hardware scalability limitations of shared memory systems was the main goal. The increase in performance of standard microprocessors after the RISC revolution, together with the cost advantage of large-scale production, formed the basis for the "Attack of the Killer Micros". The transition from ECL to CMOS chip technology and the usage of "off the shelf" microprocessors instead of custom-designed processors for MPPs was the consequence. The traditional design focus for MPP systems was the very high end of performance.

In the early nineties the SMP systems of various workstation manufacturers, as well as the IBM SP series, which targeted the lower and medium market segments, gained great popularity. Their price/performance ratios were better due to the missing overhead in the design for the support of very large configurations and due to the cost advantages of larger production numbers. Because of the vertical integration of performance it was no longer economically feasible to produce and focus on the highest end of computing power alone. The design focus for new systems shifted to the market of medium performance systems.

The acceptance of MPP systems not only for engineering applications but also for new commercial applications, especially database applications, emphasized different criteria for market success, such as the stability of the system, the continuity of the manufacturer and price/performance. Success in commercial environments is now an important new requirement for a successful supercomputer business. Due to these factors and the consolidation in the number of vendors in the market, hierarchical systems built with components designed for the broader commercial market are currently replacing homogeneous systems at the very high end of performance. Clusters built with off-the-shelf components also gain more and more attention.

2 1976-1985: The First Vector Computers

If one had to pick one person associated with Supercomputing, it would without doubt be Seymour Cray. Coming from Control Data Corporation (CDC), where he had designed the CDC 6600 series in the sixties, he started his own company, 'Cray Research Inc.', in 1972. The delivery of the first Cray 1 vector computer in 1976 to the Los Alamos Scientific Laboratory marked the beginning of the modern era of 'Supercomputing'. The Cray 1 was characterized by a new architecture which gave it a performance advantage of more than an order of magnitude over the scalar systems of that time. Beginning with this system, high-performance computers had a substantially different architecture from mainstream computers.
Before the Cray 1, systems which were sometimes called 'Supercomputers', like the CDC 7600, had still been scalar systems and did not differ in their architecture to this extent from competing mainstream systems. For more than a decade, supercomputer was a synonym for vector computer. Only at the beginning of the nineties would the MPPs be able to challenge or outperform their MP vector competitors.

2.1 Cray 1

The architecture of the vector units of the Cray 1 was the basis for the complete family of Cray vector systems into the nineties, including the Cray 2, Cray X-MP, Y-MP, C-90, J-90 and T-90. Their common feature was not only the usage of vector instructions and vector registers but especially the close coupling of the fast main memory with the CPU. The system did not have a separate scalar unit but integrated the scalar functions efficiently into the vector CPU, with the advantage of high scalar computing speed as well. A common remark about the Cray 1 was that it was not only the fastest vector system but also the fastest scalar system of its time. The Cray 1 was also a true load/store architecture, a concept which later entered mainstream computing with the RISC processors. In the X-MP and follow-on architectures, three simultaneous load/store operations per CPU were supported in parallel from main memory without using caches. This gave the systems an exceptionally high memory-to-register bandwidth and greatly facilitated their ease of use.

The Cray 1 was well accepted in the scientific community, and 65 systems were sold until the end of its production in 1984. In the US the initial acceptance was largely driven by government laboratories and classified sites, for which raw performance was essential. Due to its potential, the Cray 1 soon gained great popularity in general research laboratories and at universities.

2.2 Cyber 205

The main competitor for the Cray 1 was a vector computer from CDC, the Cyber 205. This system was based on the design of the Star 100, of which only 4 systems had been built after its first delivery in 1974.
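As a concrete illustration of the vector-register, load/store style of computation described in Section 2.1, the following short C routine shows a SAXPY-style loop of the kind that a vectorizing compiler on a Cray 1 or X-MP would map onto vector loads into vector registers, a vector multiply/add and a vector store. It is a hedged, portable sketch for illustration only, not Cray code; the function name and test values are invented.

#include <stdio.h>

/* SAXPY-style update y[i] = a*x[i] + y[i]: the classic loop that a
 * vectorizing compiler on a vector machine turns into a short sequence
 * of vector load, vector multiply/add and vector store instructions
 * operating on vector registers, instead of one scalar instruction per
 * element.  Plain portable C, shown only to illustrate the concept. */
static void saxpy(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    float x[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    float y[8] = { 8, 7, 6, 5, 4, 3, 2, 1 };

    saxpy(8, 2.0f, x, y);          /* y becomes 2*x + y element-wise */

    for (int i = 0; i < 8; i++)
        printf("%.1f ", y[i]);
    printf("\n");
    return 0;
}

The two input streams and one output stream of such a loop line up with the three simultaneous load/store operations per CPU mentioned above for the X-MP, which is one reason memory-intensive loops of this kind could run close to peak speed without caches.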
