
Reconfigurable Computing: What, Why, and Implications for Design Automation

Andre DeHon and John Wawrzynek
Berkeley Reconfigurable, Architectures, Software, and Systems
Computer Science Division, University of California at Berkeley
Berkeley, CA 94720-1776
contact: <[email protected]>

Abstract

Reconfigurable Computing is emerging as an important new organizational structure for implementing computations. It combines the post-fabrication programmability of processors with the spatial computational style most commonly employed in hardware designs. The result changes traditional "hardware" and "software" boundaries, providing an opportunity for greater computational capacity and density within a programmable media. Reconfigurable Computing must leverage traditional CAD technology for building spatial designs. Beyond that, however, reprogrammability introduces new challenges and opportunities for automation, including binding-time and specialization optimizations, regularity extraction and exploitation, and temporal partitioning and scheduling.

1 Introduction

Traditionally, we either implemented computations in hardware (e.g. custom VLSI, ASICs, gate-arrays) or we implemented them in software running on processors (e.g. DSPs, microcontrollers, embedded or general-purpose microprocessors). More recently, however, Field-Programmable Gate Arrays (FPGAs) introduced a new alternative which mixes and matches properties of the traditional hardware and software alternatives. Machines based on these FPGAs have achieved impressive performance [1] [11] [4], often achieving 100x the performance of processor alternatives and 10-100x the performance per unit of silicon area.

Using FPGAs for computing led the way to a general class of computer organizations which we now call reconfigurable computing architectures. The key characteristics distinguishing these machines are that they both:

- can be customized to solve any problem after device fabrication
- exploit a large degree of spatially customized computation in order to perform their computation

This class of architectures is important because it allows the computational capacity of the machine to be highly customized to the instantaneous needs of an application while also allowing the computational capacity to be reused in time at a variety of time scales. As single-chip silicon die capacity grows, this class of architectures becomes increasingly viable, since more tasks can be profitably implemented spatially, and increasingly important, since post-fabrication customization is necessary to differentiate products, adapt to standards, and provide broad applicability for monolithic IC designs.

In this tutorial we introduce the organizational aspects of reconfigurable computing architectures, and we relate these reconfigurable architectures to more traditional alternatives (Section 2). Section 3 distinguishes these different approaches in terms of instruction binding time. We emphasize an intuitive appreciation for the benefits and tradeoffs implied by reconfigurable design (Section 4), and comment on its relevance to the design of future computing systems (Section 6). We end with a roundup of CAD opportunities arising in the exploitation of reconfigurable systems (Section 7).

2 Hardware vs. Software, Spatial vs. Temporal

When implementing a computation, we have traditionally decided between custom hardware implementations and software implementations. In some systems, we make this decision on a subtask by subtask basis, placing some subtasks in custom hardware and some in software on more general-purpose processing engines. Hardware designs offer high performance because they are:

- customized to the problem: no extra overhead for interpretation or extra circuitry capable of solving a more general problem
- relatively fast: due to highly parallel, spatial execution

Software implementations exploit a "general-purpose" execution engine which interprets a designated data stream as instructions telling the engine what operations to perform. As a result, software is:

- flexible: the task can be changed simply by changing the instruction stream in rewriteable memory
- relatively slow: due to mostly temporal execution
- relatively inefficient: since operators can be poorly matched to the computational task

Figure 1 depicts the distinction between spatial and temporal computing. In spatial implementations, each operator exists at a different point in space, allowing the computation to exploit parallelism to achieve high throughput and low computational latencies. In temporal implementations, a small number of more general compute resources are reused in time, allowing the computation to be implemented compactly. Figure 3 shows that when we have only these two options, we implicitly connect spatial processing with hardware computation and temporal processing with software.

[Figure 1: Spatial versus Temporal Computation for the expression y = Ax² + Bx + C]

[Figure 2: Spatially Configurable Implementation of the expression y = Ax² + Bx + C]
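To make the spatial/temporal contrast concrete, the C sketch below evaluates the expression of Figures 1 and 2 both ways. It is a minimal model of the idea, not code from the paper: the register file, instruction encoding, and function names (eval_temporal, eval_spatial) are our own illustrative choices.

    #include <stdio.h>

    enum op { MUL, ADD };
    struct instr { enum op op; int src1, src2, dst; };

    static int alu(enum op op, int a, int b) {
        return op == MUL ? a * b : a + b;
    }

    /* Temporal: one general ALU is reused over five cycles, directed by
     * a stored instruction sequence, as in the right half of Figure 1. */
    static int eval_temporal(int A, int B, int C, int x) {
        int reg[8] = { A, B, C, x };               /* r0..r3 hold the inputs */
        static const struct instr prog[] = {
            { MUL, 3, 3, 4 },   /* t1: r4 = x*x          */
            { MUL, 0, 4, 5 },   /* t2: r5 = A*(x*x)      */
            { MUL, 1, 3, 6 },   /* t3: r6 = B*x          */
            { ADD, 5, 6, 7 },   /* t4: r7 = A*x*x + B*x  */
            { ADD, 7, 2, 7 },   /* t5: r7 = r7 + C       */
        };
        for (unsigned t = 0; t < sizeof prog / sizeof prog[0]; t++)
            reg[prog[t].dst] = alu(prog[t].op, reg[prog[t].src1], reg[prog[t].src2]);
        return reg[7];
    }

    /* Spatial: each operator is its own unit (O1..O5 as in Figure 2);
     * no instruction is fetched at run time, and independent units
     * (here O1 and O3) could operate in parallel in hardware. */
    static int eval_spatial(int A, int B, int C, int x) {
        int o1 = x * x;     /* mul unit */
        int o2 = A * o1;    /* mul unit */
        int o3 = B * x;     /* mul unit */
        int o4 = o2 + o3;   /* add unit */
        return o4 + C;      /* add unit: res = O5 */
    }

    int main(void) {
        printf("%d %d\n", eval_temporal(2, 3, 4, 5), eval_spatial(2, 3, 4, 5));
        return 0;  /* both print 69: 2*25 + 3*5 + 4 */
    }

The temporal version touches instruction memory once per cycle while the spatial version binds every operation statically; this is exactly the overhead difference Section 3 discusses.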
The key benefit of FPGAs, and more broadly reconfigurable devices, is that they introduce a class of post-fabrication configurable devices which support spatial computations, thus giving us a new organizational point in this space (Figure 3). Figure 2 shows a spatially configurable computation for comparison with Figure 1. Reconfigurable devices have the obvious benefit of spatial parallelism, allowing them to perform more operations per cycle. As we will see in Section 4, the organization has inherent density advantages over traditional processor designs. As a result, reconfigurables can often pack this greater parallelism into the same die area as a modern processor.

[Figure 3: Coarse Design Space for Computing Implementations. Axes: when the computation is defined (pre-fabrication, "hardware", versus post-fabrication, "software") and whether the computation is distributed in space or in time. Space/pre-fabrication: ASIC, gate-array; space/post-fabrication: reconfigurable; time/post-fabrication: processors.]

3 Binding Time

Instruction binding time is an important distinction amongst these three broad classes of computing media which helps us understand their relative merits. That is, in every case we must tell the computational media how to behave: what operation to perform and how to connect operators. In the pre-fabrication hardware case, we do this by patterning, or programming, the devices and interconnect during the fabrication process. In the "software" case, after fabrication we select the proper function from those supported by the silicon. This is done with a set of configuration bits, an instruction, which tells each operator how to perform and where to get its input. In purely spatial software architectures, the bits for each operator can be defined once and will then be used for a long processing epoch (Figure 2). This allows the operators to store only a single instruction local to the compute and interconnect operators. In temporal software architectures, the operator must change with each cycle amongst a large number of instructions in order to implement the computation as a sequence of operations on a small number of active compute operators. As a result, the temporal designs must have high bandwidth to a large amount of instruction memory for each operator (as shown in Figure 1). Figure 4 shows this continuum from pre-fabrication operation binding time to cycle-by-cycle operation binding.

[Figure 4: Binding Time Continuum, from "hardware" to "software": custom VLSI (first mask), gate array (metal masks), fuse-programmed devices (fuse program), FPGA (load configuration), processors (every cycle).]
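The same continuum can be mimicked in ordinary C, which may help build intuition; the analogy below is ours, not the paper's. One operation is bound at three different times: when the program is built (the hardware analogue), once at configuration-load time (the reconfigurable analogue, one locally stored instruction), and on every iteration of an interpreter loop (the processor analogue).

    #include <stdio.h>

    typedef int (*binop)(int, int);
    static int add(int a, int b) { return a + b; }
    static int mul(int a, int b) { return a * b; }

    /* Pre-fabrication analogue: the operation is fixed when the design
     * is built and cannot change afterwards. */
    static int fabricated_mul(int a, int b) { return a * b; }

    /* Reconfigurable analogue: the operation is bound once, at
     * "configuration load" time, then reused unchanged for a long epoch. */
    static binop configured_op = NULL;
    static void load_configuration(binop op) { configured_op = op; }

    /* Processor analogue: the operation is re-bound on every cycle by
     * fetching and decoding an instruction from memory. */
    static int run_program(const char *ops, const int *data, int n) {
        int acc = data[0];
        for (int i = 1; i < n; i++)        /* one fetch/decode per "cycle" */
            acc = (ops[i - 1] == '*') ? mul(acc, data[i]) : add(acc, data[i]);
        return acc;
    }

    int main(void) {
        int data[] = { 2, 3, 4 };
        load_configuration(mul);                     /* bind once */
        printf("%d\n", fabricated_mul(6, 7));        /* bound at build time */
        printf("%d\n", configured_op(6, 7));         /* bound at load time */
        printf("%d\n", run_program("*+", data, 3));  /* bound every cycle: (2*3)+4 */
        return 0;
    }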
Early operation binding time generally corresponds to less implementation overhead. A fully custom design can implement only the circuits and gates needed for the task; it requires no extra memory for instructions or circuitry to perform operations not needed by a particular task. A gate-array implementation must use only pre-patterned gates; it need only add the wire segments needed for a task, but must make do with the existing transistors and transistor arrangement regardless of task needs. In the spatial extreme, an FPGA or reconfigurable design needs to hold a single instruction; this adds overhead for that instruction and for the more general structures which handle all possible instructions. The processor needs to rebind its operation on every cycle, so it must pay a large price in instruction distribution mechanism, instruction storage, and limited instruction semantics in order to support this rebinding.

On the flip side, late operation binding implies an opportunity to more closely specialize the design to the instantaneous needs of a given application. That is, if part of the data set used by an operator is bound later than the operation is bound, the design may have to be much more general than the actual application requires. For ...

[Figure 5: Datapath Width (w) × Instruction Depth (c) Architectural Design Space; FPGAs, reconfigurable devices, processors, and SIMD/vector processors occupy distinct points along the two axes.]

In conventional word-wide datapaths, all bits in a word are controlled by the same instruction: they perform the same operation on each cycle and are routed in the same manner to and from memory or register files. For FPGAs, the datapath width is one (w = 1) since routing is controlled at the bit level and each FPGA operator, typically a single-output Lookup-Table (LUT), can be controlled independently.
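As a rough illustration of why w = 1 gives bit-level control, the sketch below models a single-output 4-input LUT next to a word-wide ALU operation. It is a simplification under our own assumptions (a 16-bit truth-table configuration and hypothetical helper names lut4 and alu_and); real FPGA LUT sizes and encodings vary.

    #include <stdint.h>
    #include <stdio.h>

    /* A single-output 4-input LUT: its 16-bit configuration is the one
     * instruction the operator stores locally. The four input bits form
     * an index into the configuration, producing one output bit. */
    static int lut4(uint16_t config, unsigned i3, unsigned i2, unsigned i1, unsigned i0) {
        unsigned index = (i3 << 3) | (i2 << 2) | (i1 << 1) | i0;
        return (config >> index) & 1;
    }

    /* w = 32 contrast: one ALU opcode drives all 32 bit lanes
     * identically; individual bits cannot be given different operations. */
    static uint32_t alu_and(uint32_t a, uint32_t b) { return a & b; }

    int main(void) {
        /* Configure the LUT as AND(i1, i0): the output is 1 exactly when
         * the low two index bits are both 1 (indices 3, 7, 11, 15 -> 0x8888). */
        uint16_t and_cfg = 0x8888;
        printf("%d\n", lut4(and_cfg, 0, 0, 1, 1));            /* prints 1 */
        printf("%d\n", lut4(and_cfg, 0, 0, 1, 0));            /* prints 0 */
        printf("%08x\n", alu_and(0xF0F0F0F0u, 0xFF00FF00u));  /* f000f000 */
        return 0;
    }

The contrast is the point: the 32-bit AND applies one instruction to every bit lane, while each LUT instance can hold a different configuration, that is, a different single instruction per bit.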