Technical Computing Alternatives: Supercomputers to Ordinary Computers
ISSUES & OUTLOOK

By Gordon Bell, Corporate Vice President and Chief Scientist, Stardent Computer Inc.

According to Congress and the press, technical computing is in trouble. But in fact, the future looks brighter than ever. New classes of computers and new software are being created by new and existing companies. Only the growth rate for the traditional supercomputer might be slow.

A reason is straightforward. A user can often do the same computation on a powerful PC, on a workstation, on a minicomputer or microcomputer, on a superminicomputer, a graphics supercomputer or 3D workstation, a minisupercomputer, a special-purpose supercomputer, or even a traditional supercomputer. Users can trade execution time for cost because "Flops is Flops." The computer power necessary for technical computation can be substituted across this entire spectrum.

Technical computing is moving away from highly centralized, time-shared supercomputers. The same forces that operated in traditional computing will provide distributed, interactive computers to technical users. These computers offer adequate capacity and peak power for demanding jobs, are cheaper and easier to purchase and use, and offer superlative price/performance ratios.

A general-purpose supercomputer is a machine that, at the time of its announcement, costs more than other computers (perhaps $5 to $20 million), runs faster in general, has greater primary and secondary memory, and is suitable for all scientific, numeric problems.

Supercomputers have evolved along one architectural path. They all employ vectors and powerful multiple processors to gain speed. Fortran is by far the most important programming language; dusty deck programs are expected to port easily. Automatic compiler tools (vectorizers and such) help users exploit the new machines. The more adventurous eventually reprogram using explicitly parallel techniques. Within a decade, Fortran will undoubtedly have parallel constructs.

Super multiprocessors (developed by BBN and Evans & Sutherland) have 100 or more processors, a common memory and a single job pool controlled by one operating system. The sheer number of processors guarantees good throughput performance, but the relatively slow individual processors will find it difficult to run scalar programs at supercomputer speeds. Still, these machines might be the highest capacity, general-purpose, cost-effective systems that can be built today.

Applications must be reprogrammed to exploit these machines fully, and such programs can run faster than on a super. But the need for reprogramming and slow scalar speeds causes these machines to fail the workload test for serial job streams. They are not directly substitutable for current, general-purpose supercomputers.

Ordinary computers provide the most technical computing power today. This includes PCs, which are evolving toward the power of 1-D, 2-D and 2½-D workstations; 3-D high-performance workstations; microcomputers; and superminicomputers. Many of these have impressive scalar performance, but they have no way to hit performance peaks for those programs that make good use of the vector or parallel capabilities of supercomputers.

Table 1 summarizes the power of various technical computers in 1989.

Table 1: Power of 1989 Technical Computers in Megaflops/Second

Machine          #Proc. Max   LFK per Proc.   LFK per Machine   Linpack 100x100   Linpack 1000x1000   Peak
PC               1            0.1 to 0.5      0.1 to 0.5        0.1 to 1.0        -                   -
Workstation      1            0.2 to 1.5      0.5 to 3.0        6                 8                   -
Micro/Mini       1            0.1 to 0.5      0.1 to 0.5        0.1 to 0.5        2                   -
Supermini        6            1               4                 1                 6                   24
Graphics Super   4            1.5 to 5        10                6 to 12           80                  128
Minisuper        8            2 to 4.3        10                6 to 16           166                 200
Main/Vectors     6            7.2             43                13                518                 798
Supercomputer    8            19              150               84                2,144               2,667

The machines in the bottom half are capable of providing shared supercomputer power. The column labelled #Proc. Max describes the parallelism available. The columns headed LFK give performance on the Livermore Fortran Kernels, a good synthetic mixed workload for scientific machines. These numbers measure the throughput one might obtain in a typical scientific environment.

The columns headed Linpack measure performance on the Linpack linear equation solving test. The 100x100 test shows the rates that might be achieved by normal users of reasonable supercomputer programs, and the 1000x1000 test shows rates that can be achieved by real programs after considerable tuning. Notice that peak rates (sometimes listed at about 20 gigaflops) are not shown for the very large machines because no real programs come anywhere near the peaks. With significant tuning to run in parallel, programs operate at over one-half the peak.
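The LFK and Linpack figures in Table 1 are dominated by simple, dense vector loops. As a minimal illustrative sketch, not taken from the article, the DAXPY-style kernel below (the inner loop of the Linpack solver) is the kind of dusty-deck Fortran that the vectorizing compilers mentioned above can exploit; the routine and argument names are generic.

      SUBROUTINE DAXPY(N, A, X, Y)
C     Compute Y = Y + A*X for vectors of length N.  A vectorizing
C     compiler turns this loop into vector loads, a vector
C     multiply-add and vector stores; on a scalar machine the same
C     source simply runs more slowly.
      INTEGER N, I
      DOUBLE PRECISION A, X(N), Y(N)
      DO 10 I = 1, N
         Y(I) = Y(I) + A*X(I)
   10 CONTINUE
      RETURN
      END

The same source file runs on everything from a PC to a Y-MP, which is the concrete meaning of "Flops is Flops": the program is unchanged, only the rate and the price differ.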
WHICH COMPUTER IS THE BEST FOR THE APPLICATION?

Market substitution occurs across all computers. Users have a fixed budget to trade off across the complete range. Computer choice depends on many factors besides purchase price and peak performance: software availability; ease of purchase, installation and use; apparent lifetime; rate of technological change; past and future compatibility; control in the allocation and management of resources; programming knowledge needed; even the machine appearance or the prestige of owning a particular computer.

Table 2 shows the installed capacity for technical computing. The most important column is '89 LFK Capacity - the power available to run general-purpose technical workloads, measured in units of CRAY Y-MP/8s. Most of the power is provided by machines that are not super anythings; even much of the supercomputer power is provided by wimp machines at least one generation old.

Table 2: Installed Capacity for Technical Computing (Dataquest)

Machine          Installed Base   '89 Ships   '89 LFK Capacity   Cos. Selling   Cos. Building   Cos. Dead
PC               3.4M             1M          1341               100s           ?               ?
Workstation      0.4M             290K        580                7              ?               ~50
Micro/Mini       0.9M             51K         30                 ~20            ?               ~100
Supermini        0.3M             7.5K        100                7              ?               ~10
Graphics Super   10.5K            13.6K       182                2              2               2
Minisuper        1.6K             600         32                 5              >2              8
Parallel Proc.   365              250         4                  24             >9              8
Main/Vectors     8.3K             1600        46                 3              ?               3
Supercomputer    450              130         100                4              >3              3

Computation on the wrong system is costly and inefficient. There is a supercomputer at one national laboratory making a relatively trivial calculation for contractors throughout the United States. The supercomputer produces a picture, compresses the data, sends it over a slow but expensive network, and graphics workstations recompute the image for static display. The computation and display could be done on a powerful workstation in roughly the same elapsed time, without the network or supercomputer. The computer exists to support a super bureaucracy.

Let us compare an Ardent Titan III/2 (two processors) with a CRAY Y-MP processor. The Titan III was introduced one year after the CRAY Y-MP and delivers about 9 megaflops LFK throughput. The Titan's throughput is about 1/2 that of a Y-MP processor for LFK; its 100x100 Linpack rate is about 1/6 that of a Y-MP processor; and its peak performance is about 1/50 that of an eight-processor Y-MP. A Titan III/2 costs about 1/20 what a one-processor CRAY Y-MP does. Furthermore, the Titan III's speed doubled (and its price dropped) from its predecessor in only 18 months. The shortest conceivable gestation for a supercomputer is three years, and the gap between new supercomputers and the last generation will continue to widen. However, alternative computers that are fast enough will continue the trend to distributed computing even for applications that were previously super applications.
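To make the substitution argument concrete, the arithmetic implied by the figures just quoted works out as follows (this is only a restatement of the article's own round numbers, not new data):

    LFK throughput:     Titan III/2 is roughly 1/2 of one Y-MP processor
    Purchase price:     Titan III/2 is roughly 1/20 of one Y-MP processor
    Price/performance:  (1/2) / (1/20) = 10, about ten times better for the Titan

A user who can accept roughly twice the elapsed time pays about one-twentieth as much, which is "Flops is Flops" in miniature.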
Architecturally, the reasons are easy to see. Supercomputers require expensive, high-speed components; elaborate processor-memory connections; very fast, large disks; processing circuits that do relatively few operations per chip and per watt; and extensive installation and high operating costs. Worse, they have little architectural scalability.

Supercomputer buyers must have great needs, great dedication to the support of the machine and great budgets. For almost all users, economics inevitably dictate the purchase of smaller systems connected in networks.

In addition to the pure economics, current supercomputers lack the visualization capability and interactivity found in distributed computing. Networks coupled with workstations are not adequate to provide the same capability. For example, the use of spreadsheets, drawing programs and even word processing is qualitatively different using terminals connected to time-shared computers through LANs, in comparison with the use of personal computers.

BASIC SHIFT TO INTERACTIVE AND DISTRIBUTED COMPUTING

A significant change in computing styles is occurring in providing truly interactive design and analysis. Engineering, chemistry and biochemistry, fluid dynamics and medical image processing are being transformed by the newly available distributed power. A similar change occurred about a decade ago when workstations were introduced into the design of digital systems and chips.

All the supercomputers and alternates are highly parallel, using vector processors, parallel CPUs or multiple processing elements. Computer scientists and applications specialists must cooperate to understand and make use of parallel computing; institutions must respond and encourage research and teaching in these areas.

The arts and sciences of visualization must be propagated into the technical computing community. The cost in new software will be more than repaid by the value of new insights. If the technical computing community can meet these needs, our future is bright.
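As a brief closing illustration that is not part of the original article: the parallel constructs predicted above did arrive in Fortran itself. Fortran 2008's DO CONCURRENT is one such construct, and the explicitly parallel style looks roughly like this sketch (the routine and names are illustrative only):

      subroutine scale_vector(n, a, x)
        ! Iterations of a DO CONCURRENT loop are declared independent,
        ! so the compiler is free to run them on vector units or on
        ! multiple processors.
        implicit none
        integer, intent(in) :: n
        double precision, intent(in) :: a
        double precision, intent(inout) :: x(n)
        integer :: i
        do concurrent (i = 1:n)
          x(i) = a * x(i)
        end do
      end subroutine scale_vector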