The Flow Programming Language: An Implicitly-Parallelizing, Programmer-Safe Language


"Write Once, Parallelize Anywhere"

The Flow Programming Language: an implicitly-parallelizing, programmer-safe language

Luke Hutchison

Moore's Law

● The number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years.

Moore's Law

● Computing is work; transistors do work; more transistors working in parallel should yield faster computers
● => Moore's Law has (so far) also applied to CPU speed
● BUT now we're hitting multiple walls:
  – Transistor size – tunneling; lithography feature size vs. wavelength – no more 2D density increases
  – Thermal envelope – a function of frequency and feature size => we have hit a clock-speed wall
  – Data width – 128-bit CPUs? Need parallel control flow instead

But it's worse than that

It's not just the end of Moore's Law...
...it's the beginning of an era of buggier software.

Humans will never be good at writing multi-threaded code – our brains simply don't work like that. (Abstractions like Futures only help a little bit; see the sketch below.)

Parallel Programming Toolset

● Traditional locks: mutex, semaphore, etc.
● Libraries: java.util.concurrent
● Message passing: MPI; the Actor model (Erlang)
● Futures, channels, STM
● MapReduce
● Array programming
● Java 7 ParallelArray
● Intel Concurrent Collections (CnC)
● Intel Array Building Blocks (ArBB)
● Data Parallel Haskell (DPH)
● ZPL

But all of these require programmer skill, extra effort, and shoehorning of the design.
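By way of illustration, here is a minimal Java sketch (mine, not from the talk; the class and method names are hypothetical) of what "Futures only help a little bit" means in practice: the abstraction tidies up the joins, but the programmer still hand-encodes which computations may overlap and where the results meet – exactly the data dependency graph that the language itself cannot see.

    import java.util.concurrent.CompletableFuture;

    public class FutureSketch {
        // Stand-in for some real, expensive computation.
        static int expensive(int x) { return x * x; }

        public static void main(String[] args) {
            // The programmer, not the language, decides that these two calls
            // are independent and may run in parallel...
            CompletableFuture<Integer> left  = CompletableFuture.supplyAsync(() -> expensive(3));
            CompletableFuture<Integer> right = CompletableFuture.supplyAsync(() -> expensive(4));
            // ...and where the join point is. Get either decision wrong and
            // the compiler will not complain.
            int sum = left.thenCombine(right, Integer::sum).join();
            System.out.println(sum); // 25
        }
    }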
How to shoot yourself in the foot

Classic, 1991 – http://bit.ly/hiwaG1

FORTRAN: You shoot yourself in each toe, iteratively, until you run out of toes, then you read in the next foot and repeat.

COBOL: USEing a COLT 45 HANDGUN, AIM gun at LEG.FOOT, THEN place ARM.HAND.FINGER on HANDGUN.TRIGGER and SQUEEZE. THEN return HANDGUN to HOLSTER. CHECK whether shoelace needs to be retied.

Lisp: You shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds...

BASIC: You shoot yourself in the foot with a water pistol.

Forth: Foot yourself in the shoot.

C++: You accidentally create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical assistance is impossible since you can't tell which are bitwise copies and which are just pointing at others and saying "That's me, over there."

C: You shoot yourself in the foot.

My addition: Explicit parallelization in any language: You shoot in the yourself <segfault>

The state of Moore's Law, 2010

"Finding: There is no known alternative for sustaining growth in computing performance; however, no compelling programming paradigms for general parallel systems have yet emerged."
—The Future of Computing Performance: Game Over or Next Level? Fuller & Millett (Eds.); Committee on Sustaining Growth in Computing Performance, National Research Council, The National Academies Press, 2010, p. 81

=> The Multicore Dilemma

The root of the problem

● Purely functional programming languages can be safely implicitly parallelized
● No side effects, no state
● => threads cannot interact
● e.g. several parallel versions of Haskell

The root of the problem

...but purely functional programming languages are extremely hard for mere mortals to use to solve real-world problems. On the other hand, getting explicit parallelization right – and writing multithreaded code quickly – may actually be harder.

Mere mortals

"Recommendation: Invest in research and development of programming methods that will enable efficient use of parallel systems not only by parallel systems experts but also by typical programmers."
—The Future of Computing Performance: Game Over or Next Level? Fuller & Millett (Eds.); Committee on Sustaining Growth in Computing Performance, National Research Council, The National Academies Press, 2010, p. 99

The root of the problem

● We like imperative programming languages: "do this, then this".
● But for imperative languages, it is impossible to tell the exact data dependency graph of a program by static analysis (by looking at the source).
● (The data dependency graph tells you the order in which you need to compute values.)
● Therefore you don't know what can be computed in parallel without:
  – (1) trusting the programmer (explicit parallelism) – VERY BAD
  – (2) guessing (static code analysis) – possibly/probably very bad
  – (3) trying, and failing/bailing if you were wrong (STM)
● The sketch below shows why option (1) is very bad.
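To make option (1) concrete, here is a minimal Java sketch (mine, not from the slides) of the classic lost-update race. Nothing in the program text tells a static analyzer whether the two threads' reads and writes of counter will interleave – that is decided at run time, differently on every run:

    public class LostUpdate {
        static int counter = 0; // shared mutable state

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter = counter + 1; // read "now", add, write back: not atomic
                }
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start(); b.start();
            a.join(); b.join();
            // Expected 200000; lost updates almost always make it less.
            System.out.println(counter);
        }
    }

Each lost update happens because one thread read the "current value" of counter while the other was mid-update – which is exactly the ability the next slide points to.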
Functional vs. imperative

● Functional: gather, pull, reduce
● Imperative: same as functional, but adds scatter, push, produce

Training wheels

● How do we minimally constrain imperative programming to allow for automatic implicit parallelization with guaranteed thread safety...
...without forcing the programmer to use training wheels?

The real root of the problem

● ...is the ability to read the current value of a variable. (Surprising. Foundational.)
● => Einstein's relativity of simultaneity: there is no such thing as "now" given physical separation.
● c.f. values vs. variables.
● Remove the concept of "now", and you restrict a language just enough that it is impossible to create race conditions or deadlocks.
● But you also don't have to program in a purely functional style: a subset of imperative, push-style programming can be supported.

Introducing Flow

● Flow is a new programming paradigm that enforces this restriction to solve the multicore dilemma, while providing strong and specific guarantees about program correctness and runtime safety.

Introducing Flow

● Flow is a compiler framework that can be targeted by a wide range of different languages
  – [will probably plug into LLVM]
● ...and (eventually), a reference language
● It doesn't actually exist yet – come help make it exist
● http://flowlang.net/

Goals of Flow

● Solve the multicore dilemma
● Ubiquitous implicit parallelization, zero programmer effort
● "Write once, parallelize anywhere"
  – Cluster: Hadoop / MapReduce, MPI or similar
  – JVM via Java threads
  – C via pthreads
  – CPU ↔ GPU via CUDA
● Optimal load balancing via big-O analysis of computation and communication cost
● Prevent programmers from shooting themselves in the foot
● Make race conditions, deadlocks, segfaults/NPEs, and memory leaks impossible

Back to the drawing board

Flow enables implicit parallelization by syntactically enforcing that every program is a lattice, or partial ordering, such that the program source code itself is the data dependency graph. This makes parallelization trivial.

Back to the drawing board

● Note that this does not eliminate "push" programming: G can be an unordered collection.
● It also does not mean you can't "change the value of a variable" – but you have to specify a specific timestamp, e.g. x@(t) = x@(t-1) + 1 (then each value of x is a specific node in the DAG).
● => Minimally-restricted imperative programming with maximum implicit parallelization

An example

● Histogram generation
● Multithreaded version? Two options: (1) locks (=> contention!), or (2) keep separate per-thread histograms and merge them afterwards – sketched below.
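A minimal Java sketch of option (2) – mine, not from the slides, with a 256-bucket byte histogram assumed as the concrete instance. Each thread fills a private histogram with no locks and no contention, and the merge at the end is a pure reduction. This is exactly the shape of dependency DAG – independent local computations feeding a combining node – that Flow aims to derive automatically from the source:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ParallelHistogram {
        public static void main(String[] args) throws Exception {
            byte[] data = new byte[10_000_000];
            new Random(42).nextBytes(data);

            int nThreads = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(nThreads);
            List<Future<long[]>> parts = new ArrayList<>();
            int chunk = data.length / nThreads;

            for (int t = 0; t < nThreads; t++) {
                final int lo = t * chunk;
                final int hi = (t == nThreads - 1) ? data.length : lo + chunk;
                parts.add(pool.submit(() -> {
                    long[] local = new long[256]; // private histogram: no locks, no contention
                    for (int i = lo; i < hi; i++) local[data[i] & 0xFF]++;
                    return local;
                }));
            }

            long[] histogram = new long[256];
            for (Future<long[]> f : parts) {      // merge step: a pure reduction
                long[] local = f.get();
                for (int b = 0; b < 256; b++) histogram[b] += local[b];
            }
            pool.shutdown();
            System.out.println("bucket 0 count = " + histogram[0]);
        }
    }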