Modula-2* and Its Compilation

This paper appeared in: First International Conference of the Austrian Center for Parallel Computation, Salzburg, Austria, Springer Verlag, Lecture Notes in Computer Science.

Michael Philippsen and Walter F. Tichy
Universität Karlsruhe
email: philippsen@ira.uka.de

Abstract

Modula-2*, an extension of Modula-2, is a programming language for writing highly parallel programs in a machine-independent, problem-oriented way. The novel attributes of Modula-2* are that programs are independent of the number of processors, independent of whether memory is shared or distributed, and independent of the control modes (SIMD or MIMD) of a parallel machine. This article briefly describes Modula-2* and discusses its major advantages over the data-parallel programming model. We also present the principles of translating Modula-2* programs to MIMD and SIMD machines, and discuss the lessons learned from our first compiler, targeting the Connection Machine. We conclude with important architectural principles required of parallel computers to allow for efficient compiled programs.

1 Introduction

Highly parallel machines with thousands and tens of thousands of processors are now being manufactured and used commercially. These machines are of rapidly growing importance for high-speed computation. They have also initiated a major shift within Computer Science, from the sequential to the parallel computer. One of the major problems we face in the use of these new machines is programmability: how to write, with no more than ordinary effort, programs that bring the raw power of a parallel computer to bear on a problem?

Two major approaches to the programming problem can be distinguished. The first is to automatically parallelize sequential software. Although there is overwhelming economic justification for it, this approach will meet with only limited success in the short to medium term. The goal of automatically producing parallel programs can only, if ever, be achieved by program transformations that start with the problem specification and not with a sequential implementation. In a sequential program, too many opportunities for parallelism have been hidden or eliminated.

The second approach is to write programs that are explicitly parallel. We claim that only minor extensions of existing programming languages are required to express highly parallel programs. Thus, programmers will need only moderate additional training, mainly in the area of parallel algorithms and their analysis. This area, fortunately, is well developed and covered by several textbooks. In compiler technology, however, new techniques must be found to map machine-independent programs to existing architectures, while at the same time parallel machine architecture must evolve to efficiently support the features that are required for problem-oriented programming styles.

We take the approach of expressing parallelism explicitly, but in a machine-independent way. In section 2 we analyze the problems that plague most parallel programming languages today. Section 3 then presents Modula-2*, an extension of Modula-2 for the explicit formulation of highly parallel programs. The extension is small and easy to learn, but provides a programming model that is far more general and machine-independent than other proposals. Next we discuss compilation techniques for targeting MIMD and SIMD machines and report on experience with our first Modula-2* compiler for the Connection Machine. We conclude with properties of parallel machine architectures that would improve the efficiency of high-level parallel programs.

2 Related Work

Most current programming languages for parallel and highly parallel machines, including *LISP, C*, MPL, VAL, Sisal, Occam, Ada, FORTRAN, Blaze, Dino, and Kali, suffer from some or all of the following problems:

  • Whereas the number of processors of a parallel machine is fixed, the problem size is not. Because most of the known parallel languages do not support the virtual processor concept, the programmer has to write explicit mappings for adapting the process structure of each program to the available processors. This is not only a tedious and repetitive task, but also one that makes programs non-portable.

  • Co-locating data with the processors that operate upon the data is critical for the performance of distributed-memory machines. Poor co-location results in high communication costs and poor performance. Good co-location is highly dependent on the topology of the communication network and must at present be programmed by hand. It is a primary source of machine dependence.

  • All parallel machines provide facilities for interprocess communication, most of them by means of a message-passing system. Nearly all parallel languages support only low-level send and get communication commands. Programming communication with these primitives, especially if only nearest-neighbor communication is available, is a time-consuming and error-prone task.

  • There are several control modes for parallel machines, including MIMD, SIMD, dataflow, and systolic modes. Any extant parallel language targets exactly one of those control modes. Whatever the choice, it severely limits portability as well as the space of solutions.

Modula-2* provides solutions to the basic problems mentioned above. The language abstracts from the memory organization and from the number of physical processors. Mapping of data to processors is performed by the compiler, optionally supported by high-level directives provided by the programmer. Communication is not directly visible. Instead, reading and writing in a virtually shared address space subsumes communication. A shared memory, however, is not required. Parallelism is explicit, and the programmer can choose among synchronous and asynchronous execution mode at any level of granularity. Thus, programs can use SIMD mode where proper synchronization is difficult, or use MIMD mode where synchronization is simple or infrequent. The two modes can even be intermixed freely.

The data-parallel approach, exemplified in languages such as *LISP, C*, and MPL, is currently quite successful because it has reduced the machine dependence of parallel programs. Data-parallelism extends a synchronous SIMD model with a global name space, which obviates the need for explicit message passing between processing elements. It also makes the number of virtual processing elements a function of the problem size rather than a function of the target machine.

The data-parallel approach has three major advantages:

  • It is a natural extension of sequential programming. The only parallel instruction, a synchronous forall statement, is a simple extension of the well-known for statement and is easy to understand (see the sketch after this list).

  • Debugging data-parallel programs is not much more difficult than debugging sequential programs. The reason is that there is only a single locus of control, which dramatically simplifies the state space of a program compared to that of an MIMD program with thousands of independent loci of control.

  • There is a wide range of data-parallel algorithms. Most parallel algorithms in textbooks are data-parallel. According to Fox, more than [...] of the existing parallel applications he examined fall in the class of synchronous data-parallel programs. Furthermore, systolic algorithms as well as vector algorithms are special cases of data-parallel algorithms.
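
To make the first of these advantages concrete, the sketch below contrasts a sequential Modula-2 loop with a synchronous forall performing the same element-wise vector update. The forall notation used here (a declared index with a range type and an IN SYNC keyword) is an assumed Modula-2*-style syntax chosen for illustration; the excerpt describes the construct but does not quote its concrete form.

    MODULE AddVectors;
      (* Sketch only: shows that a synchronous forall is a small step from
         the familiar for statement.  The FORALL syntax is assumed, not
         quoted from the paper; the forall declares its own index k. *)
      CONST N = 1024;
      VAR a, b: ARRAY [0..N-1] OF REAL;
          i   : INTEGER;
    BEGIN
      (* Sequential Modula-2: a single locus of control performs the
         N updates one after another. *)
      FOR i := 0 TO N-1 DO
        a[i] := a[i] + b[i]
      END;

      (* Data-parallel style: one synchronous forall creates N virtual
         processes that all perform the same update in lock step. *)
      FORALL k : [0..N-1] IN SYNC
        a[k] := a[k] + b[k]
      END
    END AddVectors.

Note that the number of created processes follows the problem size N rather than the number of physical processors, in line with the virtual-processor view described above.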

But data-parallelism, at least as defined by current languages, has some drawbacks. It is a synchronous model. Even if the problem is not amenable to a synchronous solution, there is no escape. In particular, parallel programs that interact with stochastic events are awkward to write and run inefficiently.

There is no nested parallelism. This means that once a parallel activity has started, the involved processes cannot start up additional parallel activity. A parallel operation simply cannot expand itself and involve more processes. This property seriously limits parallel searches in irregular search spaces, for example. The effect is that data-parallel programs are strictly bimodal: they alternate between a sequential and a parallel mode, where the maximal degree of parallelism is fixed once the parallel mode is entered. To change the degree of parallelism, the program first has to stop all parallel activity and return to the sequential mode.

The use of procedures to structure a parallel program in a top-down fashion is severely limited. The problem here is that it is not possible to call a procedure in parallel mode when the procedure itself invokes parallel operations; this is a consequence of the bimodality described above. Procedures cannot allocate local data and spawn data-parallel operations on it unless they are called from a sequential program. Thus, procedures can only be used in about half of the cases where they would be desirable. They also force the use of global data structures on the programmer.

When designing Modula-2*, we wanted to preserve the main advantages of data-parallel languages while avoiding the above drawbacks. The following list contains the main advances of Modula-2* over data-parallel languages: [...]

Overview of the forall statement

The forall statement creates a set of processes that execute in parallel. In the asynchronous form, the individual processes operate concurrently and are joined at the end of the forall statement; the asynchronous forall simply terminates when the last of the created processes terminates. In the synchronous form, the processes created by the forall operate in unison until they reach a branch point, such as an if or case statement.
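
As a concrete illustration of the two forms, here is a short sketch in an assumed Modula-2*-style notation. The keywords IN PARALLEL (asynchronous) and IN SYNC (synchronous), the range syntax, and the helper procedure Work are illustrative assumptions; the excerpt specifies the semantics of the two forms but not their concrete syntax.

    MODULE ForallForms;
      CONST N = 1024;
      VAR a: ARRAY [0..N-1] OF REAL;

      PROCEDURE Work(k: INTEGER);
      (* Hypothetical helper: stands for the work done by one process and,
         following the paper's claims, could itself contain further
         forall statements. *)
      BEGIN
        a[k] := a[k] + 1.0
      END Work;

    BEGIN
      (* Asynchronous form: the created processes run concurrently and the
         forall terminates when the last of them terminates. *)
      FORALL k : [0..N-1] IN PARALLEL
        Work(k)
      END;

      (* Synchronous form: the created processes operate in unison until
         they reach a branch point such as an IF or CASE statement. *)
      FORALL k : [0..N-1] IN SYNC
        IF a[k] > 0.0 THEN
          a[k] := a[k] * 2.0
        ELSE
          a[k] := 0.0
        END
      END
    END ForallForms.

Because a forall body may call procedures that in turn contain foralls, and because the two forms can be intermixed, programs written this way need not follow the strictly bimodal structure criticized above for current data-parallel languages.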
