
Computer Architecture
Lecture 7: SIMD Processors and GPUs
Dr. Juan Gómez Luna
Prof. Onur Mutlu
ETH Zürich, Fall 2018
10 October 2018

Last Week
- Main Memory and DRAM Fundamentals (Lecture 5)
  - Wrap-up: Main Memory Challenges
  - Main Memory Fundamentals
  - DRAM Basics and Operation
  - Memory Controllers
  - Simulation
  - Memory Latency
- Research in DRAM
  - ChargeCache (Lecture 6a)
  - SoftMC (Lecture 6b)
  - REAPER: The Reach Profiler (Lecture 6c)
  - The DRAM Latency PUF (Lecture 6d)

Agenda for This Lecture
- SIMD Processing
  - Vector and Array Processors
- Graphics Processing Units (GPUs)

Exploiting Data Parallelism: SIMD Processors and GPUs

SIMD Processing: Exploiting Regular (Data) Parallelism

Flynn’s Taxonomy of Computers
- Mike Flynn, "Very High-Speed Computing Systems," Proc. of the IEEE, 1966
- SISD: Single instruction operates on a single data element
- SIMD: Single instruction operates on multiple data elements
  - Array processor
  - Vector processor
- MISD: Multiple instructions operate on a single data element
  - Closest form: systolic array processor, streaming processor
- MIMD: Multiple instructions operate on multiple data elements (multiple instruction streams)
  - Multiprocessor
  - Multithreaded processor

Data Parallelism
- Concurrency arises from performing the same operation on different pieces of data
  - Single instruction multiple data (SIMD)
  - E.g., dot product of two vectors
- Contrast with data flow
  - Concurrency arises from executing different operations in parallel (in a data-driven manner)
- Contrast with thread (control) parallelism
  - Concurrency arises from executing different threads of control in parallel
- SIMD exploits operation-level parallelism on different data
  - Same operation concurrently applied to different pieces of data
  - A form of ILP where the instruction happens to be the same across data

SIMD Processing
- Single instruction operates on multiple data elements
  - In time or in space
- Multiple processing elements
- Time-space duality
  - Array processor: Instruction operates on multiple data elements at the same time, using different spaces
  - Vector processor: Instruction operates on multiple data elements in consecutive time steps, using the same space

Array vs. Vector Processors
- Both execute the same instruction stream over A[3:0]:
  LD  VR ← A[3:0]
  ADD VR ← VR, 1
  MUL VR ← VR, 2
  ST  A[3:0] ← VR
- Array processor: same op at the same time, different ops in the same space
  - Step 1: LD0 LD1 LD2 LD3; step 2: AD0 AD1 AD2 AD3; step 3: MU0 MU1 MU2 MU3; step 4: ST0 ST1 ST2 ST3
- Vector processor: different ops at the same time, same op in the same space
  - Element operations are pipelined over time: LD0; then LD1 and AD0; then LD2, AD1, MU0; then LD3, AD2, MU1, ST0; and so on until ST3

SIMD Array Processing vs. VLIW
- VLIW: Multiple independent operations packed together by the compiler
- Array processor: Single operation on multiple (different) data elements
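Before moving to vector processors, here is a concrete instance of SIMD execution on a modern machine: the dot product mentioned on the Data Parallelism slide, written with x86 AVX intrinsics. This is a minimal sketch for illustration; the choice of AVX, the 8-wide loop, and the assumption that the vector length is a multiple of 8 are not tied to any machine discussed in the lecture.

// Dot product of two float vectors with AVX intrinsics.
// Minimal sketch: assumes an AVX-capable x86 machine and n a multiple of 8.
#include <immintrin.h>
#include <stdio.h>

float dot_product(const float *a, const float *b, int n) {
    __m256 acc = _mm256_setzero_ps();               // 8 partial sums
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);          // load 8 elements of A
        __m256 vb = _mm256_loadu_ps(b + i);          // load 8 elements of B
        acc = _mm256_add_ps(acc, _mm256_mul_ps(va, vb));  // one SIMD multiply and add per 8 elements
    }
    float tmp[8];
    _mm256_storeu_ps(tmp, acc);                      // reduce the 8 partial sums
    float sum = 0.0f;
    for (int i = 0; i < 8; i++) sum += tmp[i];
    return sum;
}

int main(void) {
    float a[16], b[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f; }
    printf("%f\n", dot_product(a, b, 16));           // expected: 2*(0+1+...+15) = 240
    return 0;
}

Each 256-bit instruction operates on eight 32-bit elements at once, so a single fetched instruction generates eight elements' worth of work, which is exactly the operation-level parallelism described above. Compile with, e.g., gcc -O2 -mavx.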
Vector Processors (I)
- A vector is a one-dimensional array of numbers
- Many scientific/commercial programs use vectors
  for (i = 0; i <= 49; i++)
      C[i] = (A[i] + B[i]) / 2
- A vector processor is one whose instructions operate on vectors rather than scalar (single data) values
- Basic requirements
  - Need to load/store vectors → vector registers (contain vectors)
  - Need to operate on vectors of different lengths → vector length register (VLEN)
  - Elements of a vector might be stored apart from each other in memory → vector stride register (VSTR)
    - Stride: distance in memory between two elements of a vector

Vector Processors (II)
- A vector instruction performs an operation on each element in consecutive cycles
  - Vector functional units are pipelined
  - Each pipeline stage operates on a different data element
- Vector instructions allow deeper pipelines
  - No intra-vector dependencies → no hardware interlocking needed within a vector
  - No control flow within a vector
  - Known stride allows easy address calculation for all vector elements
    - Enables prefetching of vectors into registers/cache/memory

Vector Processor Advantages
+ No dependencies within a vector
  - Pipelining and parallelization work really well
  - Can have very deep pipelines, no dependencies!
+ Each instruction generates a lot of work
  - Reduces instruction fetch bandwidth requirements
+ Highly regular memory access pattern
+ No need to explicitly code loops
  - Fewer branches in the instruction sequence

Vector Processor Disadvantages
-- Works (only) if parallelism is regular (data/SIMD parallelism)
   ++ Vector operations
-- Very inefficient if parallelism is irregular
   -- How about searching for a key in a linked list?
Fisher, "Very Long Instruction Word architectures and the ELI-512," ISCA 1983.

Vector Processor Limitations
-- Memory (bandwidth) can easily become a bottleneck, especially if
   1. compute/memory operation balance is not maintained
   2. data is not mapped appropriately to memory banks

Vector Processing in More Depth

Vector Registers
- Each vector data register holds N M-bit values
- Vector control registers: VLEN, VSTR, VMASK
- Maximum VLEN can be N
  - Maximum number of elements stored in a vector register
- Vector Mask Register (VMASK)
  - Indicates which elements of the vector to operate on
  - Set by vector test instructions
    - e.g., VMASK[i] = (Vk[i] == 0)
- (Figure: two M-bit-wide vector registers, V0 and V1, holding elements V0,0 ... V0,N-1 and V1,0 ... V1,N-1)

Vector Functional Units
- Use a deep pipeline to execute element operations → fast clock cycle
- Control of a deep pipeline is simple because elements in the vector are independent
- (Figure: six-stage multiply pipeline computing V3 ← V1 * V2)
- Slide credit: Krste Asanovic
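To make the roles of VLEN, VSTR, and VMASK concrete, here is a minimal scalar model of a single masked, strided vector add in plain C. The register struct, the function name, and the fixed 64-element register size are illustrative assumptions, not a real ISA; a real vector unit would execute this loop in a pipelined fashion, one element per cycle, as described on the Vector Processors (II) slide.

// Scalar model of one masked, strided vector add: Vdst <- Vsrc + M[base + i*VSTR]
// Illustrative only; VLEN, VSTR, and VMASK play the roles described on the slides above.
#include <stdint.h>
#include <stdio.h>

#define N 64                         // assumed maximum vector length (elements per register)

typedef struct { int64_t elem[N]; } vreg_t;   // one vector data register: N 64-bit values

void vadd_masked_strided(vreg_t *vdst, const vreg_t *vsrc,
                         const int64_t *mem, long base,
                         int vlen, int vstr, const uint8_t vmask[N]) {
    for (int i = 0; i < vlen; i++) {           // VLEN: number of elements to process
        if (!vmask[i]) continue;               // VMASK: which elements to operate on
        long addr = base + (long)i * vstr;     // VSTR: distance between consecutive elements
        vdst->elem[i] = vsrc->elem[i] + mem[addr];
    }
}

int main(void) {
    static int64_t mem[256];
    vreg_t v0 = {{0}}, v2 = {{0}};
    uint8_t mask[N];
    for (int i = 0; i < 256; i++) mem[i] = i;
    for (int i = 0; i < N; i++) { v0.elem[i] = 100; mask[i] = (i % 2 == 0); }
    vadd_masked_strided(&v2, &v0, mem, 0, 8, 4, mask);   // VLEN = 8, VSTR = 4, even elements only
    for (int i = 0; i < 8; i++) printf("%lld ", (long long)v2.elem[i]);
    printf("\n");
    return 0;
}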
Vector Machine Organization (CRAY-1)
- CRAY-1
- Russell, "The CRAY-1 computer system," CACM 1978.
- Scalar and vector modes
- 8 64-element vector registers
- 64 bits per element
- 16 memory banks
- 8 64-bit scalar registers
- 8 24-bit address registers

CRAY X-MP-28 @ ETH (CAB, E Floor)

CRAY X-MP System Organization
- (Figure: CRAY X-MP system organization)
- Cray Research Inc., "The CRAY X-MP Series of Computer Systems," 1985

CRAY X-MP Design Detail
(From Cray Research Inc., "The CRAY X-MP Series of Computer Systems," 1985)
- CRAY X-MP single- and multiprocessor systems are designed to offer users outstanding performance on large-scale, compute-intensive and I/O-bound jobs.
- CRAY X-MP mainframes consist of six (X-MP/1), eight (X-MP/2) or twelve (X-MP/4) vertical columns arranged in an arc. Power supplies and cooling are clustered around the base and extend outward.
- (Table: for each model, from the CRAY X-MP/11 up to the X-MP/416 and X-MP/48, the number of CPUs, central memory size in millions of 64-bit words, and number of memory banks.)
- For multiple-processor CRAY X-MP models, the interprocessor communications section coordinates processing between CPUs, and central memory is shared. A description of the major system components and their functions follows.
- CPU computation section: within the computation section of each CPU are operating registers, functional units and an instruction control network, hardware elements that cooperate in executing sequences of instructions. The instruction control network makes all decisions related to instruction issue as well as coordinating the three types of processing within each CPU: vector, scalar and address. Each of the processing modes has its associated registers and functional units.
- Registers: the basic set of programmable registers is composed of:
  - Eight 24-bit address (A) registers
  - Sixty-four 24-bit intermediate address (B) registers
  - Eight 64-bit scalar (S) registers
  - Sixty-four 64-bit scalar-save (T) registers
  - Eight 64-element (4096-bit) vector (V) registers with 64 bits per element
- The 24-bit A registers are generally used for addressing and counting. Associated with them are 64 B registers, also 24 bits wide. Since the transfer between an A and a B register takes only one clock period, the B registers assume the role of a data cache, storing information for fast access without tying up the A registers for relatively long periods.
- The block diagram of a CRAY X-MP/4 illustrates the relationship of the registers to the functional units, instruction buffers, I/O channel control registers, interprocessor communications section and memory.

CRAY X-MP CPU Functional Units
(From Cray Research Inc., "The CRAY X-MP Series of Computer Systems," 1985)
- Shared registers for interprocessor communication and synchronization: each cluster of shared registers consists of eight 24-bit shared address (SB) registers, eight 64-bit shared scalar (ST) registers and thirty-two one-bit synchronization (SM) registers.
- Under operating system control, a cluster may be allocated to zero, one, two, three or four processors, depending on system configuration. The cluster may be accessed by any processor to which it is allocated, in either user or system (monitor) mode. Additionally, each processor in a cluster can asynchronously perform scalar or vector operations dictated by user programs. The hardware also provides built-in detection of system deadlock within the cluster.
- Instruction buffers: each holds 128 16-bit instruction parcels, twice the capacity of the CRAY-1 instruction buffer. The instruction buffers of each CPU are loaded from memory at the burst rate of eight words per clock period.
- The contents of the exchange package are augmented to include cluster and processor numbers. Increased data protection is also made possible through a separate memory field for user programs and data. Exchange sequences occur at the rate of two words per clock period on the CRAY X-MP.
- Real-time clock: programs can be precisely timed with a 64-bit real-time clock shared by the processors that increments once each 9.5 nsec.
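The 16 memory banks of the CRAY-1 connect back to the Vector Processor Limitations slide: a strided vector access keeps all banks busy only if the stride and the number of banks are relatively prime. The sketch below assumes a simple word-interleaved mapping (bank = word address mod 16) to illustrate the point; the mapping function is an assumption for illustration, not the actual CRAY addressing scheme.

// Which interleaved memory bank does element i of a strided vector access hit?
// Illustrative model only: word-interleaved memory, bank = word address mod NUM_BANKS.
#include <stdio.h>

#define NUM_BANKS 16   // the CRAY-1 above has 16 memory banks

int main(void) {
    int strides[] = {1, 8, 16, 17};
    for (int s = 0; s < 4; s++) {
        int stride = strides[s];
        printf("stride %2d: banks", stride);
        for (int i = 0; i < 16; i++)               // first 16 elements of the vector
            printf(" %2d", (i * stride) % NUM_BANKS);
        printf("\n");
    }
    // Stride 1 and stride 17 touch all 16 banks (full memory bandwidth);
    // stride 16 hits bank 0 every time (worst-case bank conflicts);
    // stride 8 alternates between only 2 banks.
    return 0;
}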