CERN LIBRARIES, GENEVA

1992 - 1993 ACADEMIC TRAINING PROGRAMME LECTURE SERIES

SPEAKER: M. SMITH / Edinburgh Parallel Computing Centre

TITLE: Introduction to massively parallel computing in High Energy Physics

15, 16, 17, 18 & 19 March from 11.00 to 12.00

PLACE: Auditorium

ABSTRACT

Ever since computers were first used for scientific and numerical work, there has existed an "arms race" between the technical development of faster computing hardware and the desire of scientists to solve larger problems in shorter time scales. However, the vast leaps in processor performance achieved through advances in semiconductor science have reached a hiatus as the technology comes up against the physical limits of the speed of light and quantum effects. This has led all high performance computer manufacturers to turn towards parallel architectures for their new machines. In these lectures we will introduce the history and concepts behind parallel computing, and review the various parallel architectures and software environments currently available. We will then introduce programming methodologies that allow efficient exploitation of parallel machines, and present case studies of the parallelization of typical High Energy Physics codes for the two main classes of parallel computing architecture (SIMD and MIMD).
We will also review the expected future trends in the technology, with particular reference to the high performance computing initiatives in Europe and the United States, and the drive towards standardization of parallel computing software environments.

CERN ACADEMIC TRAINING PROGRAMME 1992-93

An Introduction to Massively Parallel Computing in High Energy Physics

Mark Smith
Edinburgh Parallel Computing Centre

Contents

1 An Introduction to Parallel Computing
  1.1 An Introduction to the EPCC
  1.2 Parallel Computer Architectures
    1.2.1 A Taxonomy for Computer Architectures
    1.2.2 SIMD: Massively Parallel Machines
    1.2.3 MIMD Architectures
    1.2.4 Shared Memory MIMD ......... 10
    1.2.5 Distributed Memory MIMD ....... 11
  1.3 Architectural Trends ............... 12
    1.3.1 Future Trends .............. 14
  1.4 General Purpose Parallelism ........... 15
    1.4.1 Proven Price/Performance ....... 15
    1.4.2 Proven Service Provision ........ 16
    1.4.3 Parallel Software ............ 16
    1.4.4 In Summary ............... 17
2 Decomposing the Potentially Parallel ........ 19
  2.1 Potential Parallelism ............... 19
    2.1.1 Granularity ............... 19
    2.1.2 Load Balancing ............. 20
  2.2 Decomposing the Potentially Parallel ...... 21
    2.2.1 Trivial Decomposition ......... 22
    2.2.2 Functional Decomposition ....... 22
  2.3 Data Decomposition ............... 23
    2.3.1 Geometric Data Decomposition ..... 24
    2.3.2 Dealing with Unbalanced Problems ... 25
  2.4 Summary .................... 27
3 An HEP Case Study for SIMD Architectures ..... 29
  3.1 SIMD Architecture and Programming Languages . 29
  3.2 Case Study: Lattice QCD Codes ......... 31
    3.2.1 Generation of Gauge Configurations ... 32
    3.2.2 The Connection Machine Implementation . 35
    3.2.3 Calculation of Quark Propagators ..... 36
4 An HEP Case Study for MIMD Architectures ..... 39
  4.1 A Brief History of MIMD Computing ....... 39
    4.1.1 The Importance of the Transputer .... 39
    4.1.2 The Need for Message Passing ...... 40
  4.2 Case Study: Experimental Monte Carlo Codes .. 41
    4.2.1 Physics Event Generation ........ 41
    4.2.2 Task Farming GEANT .......... 42
    4.2.3 Geometric Decomposition of the Detector
    4.2.4 Summary: A Hierarchy of Approaches ... 45
5 High Performance Computing: Initiatives and Standards 49
  5.1 High Performance Computing and Networking Initiatives 49
    5.1.2 United States .............. 49
    5.1.3 Europe ................. 52
  5.2 Emerging Key Technologies ........... 53
    5.2.1 CHIMP ................. 54
    5.2.2 PUL .................. 55
    5.2.3 NEVIS ................. 57
  5.3 Parallel Computing Standards Forums ...... 57
    5.3.1 High Performance Fortran Forum ..... 58
    5.3.2 Message Passing Interface Forum ..... 58

Acknowledgements

These lectures and their accompanying notes have been produced, in part, from the EPCC Parallel Systems training course material. I am therefore grateful to my EPCC colleagues Neil MacDonald, Arthur Trew, Mike Norman, Kevin Collins, David Wallace, Nick Radcliffe and Lyndon Clarke for the material they produced for that course.
In addition, I must thank Ken Peach, Steve Booth, David Henty and Mark Parsons of the University of Edinburgh Physics Department for their help in the HEP aspects of these notes, and CERN's Fabrizio Gagliardi for the original invitation to present this material.

1 An Introduction to Parallel Computing

1.1 An Introduction to the EPCC

The Edinburgh Parallel Computing Centre was established during 1990 as a focus for the University's various interests in high performance supercomputing. It is interdisciplinary, and combines a decade's experience of parallel computing applications with a strong research activity and a successful industrial affiliation scheme. The Centre provides a national service to around 300 registered users, on state-of-the-art commercial parallel machines worth many millions of pounds. As parallel computers become more common, the Centre's task is to accelerate the effective exploitation of high performance parallel computing systems throughout academia, industry and commerce.

The Centre, housed at the University's King's Buildings, has a staff of more than 50, in three divisions: Service, Consultancy & Development, and Applications Development. The team is headed by Professors David Wallace (Physics), Roland Ibbett (Computer Science) and Jeff Collins (formerly of Electrical Engineering).

Edinburgh's involvement with parallel computing began in 1980, when physicists in Edinburgh used the ICL Distributed Array Processor (DAP) at Queen Mary College in London to run molecular dynamics and high energy physics simulations.
Their pioneering results and wider University interest enabled Edinburgh to acquire two of these machines for its own use, and Edinburgh researchers soon achieved widespread recognition across a range of disciplines: high energy physics, molecular dynamics, phase transitions, neural network models, protein crystallography, stellar dynamics, image processing, meteorology and data processing.

With the imminent decommissioning of the DAPs, a replacement resource was needed, and a transputer machine was chosen in 1986. This machine was the new multi-user Meiko Computing Surface, consisting of domains of transputers, each with its own local memory, and interconnected by programmable switching chips. The Edinburgh Concurrent Supercomputer Project was then established in 1987 to create a national parallel computing facility, available to academics and industry throughout the UK. In 1990 this project evolved into the Edinburgh Parallel Computing Centre (EPCC) to bring together the many strands of parallel research and applications activity in Edinburgh University, and to provide the basis for a broadening of these activities.

The Centre now houses a range of machines including two third-generation DAPs, a number of Meiko machines (both transputer and i860-based), a Thinking Machines Corporation CM-200, and a large network of Sun and Silicon Graphics workstations. Plans are currently being made to purchase two further parallel supercomputers in 1993; these will act as an internal development resource, and will be state-of-the-art architectures.

The applications development work within the EPCC takes place within the Numerical Simulations Group and the Information Systems Group. These teams perform all of the contract work for our industrial and commercial collaborators.
Current projects include work on parallelising computational fluid dynamics code for the aerospace, oil and nuclear power industries; parallel geographical information systems; distributed memory implementations of oil reservoir simulators; spatial interaction modelling on data parallel machines; parallel network modelling and analysis; development of a production environment for seismic processing; and also parallel implementation of speech recognition algorithms. Non-contract software development effort is focused around