Multicore and Multicore Programming with OpenMP

Why multicores? The three walls

What is the reason for the introduction of multicores? Uniprocessor performance is leveling off due to the "three walls":

- ILP wall: Instruction Level Parallelism is near its limits
- Memory wall: caches show diminishing returns
- Power wall: power per chip is getting painfully high

The ILP wall

There are two common approaches to exploiting ILP:

- vector instructions (SSE, AltiVec, etc.)
- out-of-order issue with in-order retirement, speculation, register renaming, branch prediction, etc.

Neither of these can generate much concurrency because of:

- irregular memory access patterns
- control-dependent computations
- data-dependent memory accesses

Multicore processors, on the other hand, exploit Thread Level Parallelism (TLP), which can achieve virtually any degree of concurrency.

The Memory wall

The gap between processor and memory speed has increased dramatically. Caches are used to improve memory performance, provided that data locality can be exploited. To deliver twice the performance with the same bandwidth, the cache miss rate must be cut in half; this means:

- for dense matrix-matrix multiply or dense LU, a 4x bigger cache
- for sorting or FFTs, a cache the square of its former size
- for sparse or dense matrix-vector multiply, forget it

What is the cost of complicated memory hierarchies? LATENCY. TLP (that is, multicores) can help overcome this inefficiency by means of multiple streams of execution in which memory access latency can be hidden.

The Power wall

ILP techniques are based on the exploitation of higher clock frequencies: processor performance can be improved by a factor k by increasing the frequency by the same factor. Is this a problem? Yes, it is:

    P ≈ P_dynamic = C · V² · f

where P_dynamic is the dynamic power, C the capacitance, V the voltage and f the frequency. Since f_max ∼ V, power consumption and heat dissipation grow as f³!

Is there any other way to increase performance without consuming too much power? Yes, with multicores: a k-way multicore is k times faster than a unicore and consumes only k times as much power, since P_dynamic ∝ C. Thus power consumption and heat dissipation grow only linearly with the number of cores (i.e., with chip complexity, or the number of transistors).

It is even possible to reduce power consumption while still increasing performance. Assume a single-core processor with frequency f and capacitance C: a quad-core running at frequency 0.6 × f will consume roughly 15% less power while delivering 2.4× higher performance, as the calculation below shows.
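
A quick sanity check of those figures, assuming (as above) that each core keeps capacitance C and that voltage scales linearly with frequency:

    P_quad    ≈ 4 · C · (0.6 V)² · (0.6 f) = 4 · 0.216 · C · V² · f ≈ 0.86 · P_single   →  about 15% less power
    Perf_quad ≈ 4 cores × 0.6 f = 2.4 × Perf_single                                     →  2.4× higher performance
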
The first multicore: Power4

IBM's Power4 (2001) was the first multicore processor, followed by the Power5 (2004) and the Power6 (2007).

AMD: dual-core Opteron (2005), Phenom (2007)

Intel: Clovertown (2007), Dunnington (2008)

Conventional multicores

What are the problems with all these designs?

- Core-to-core communication: although the cores lie on the same piece of silicon, there is no direct communication channel between them; the only option is to communicate through main memory.
- Shared memory bus: on modern systems, processors are much faster than memory. Take the Intel Woodcrest as an example:
  - at 3.0 GHz, each core can process 3 × 4 (SSE) × 2 (dual issue) = 24 single-precision floating-point values per nanosecond;
  - at 10.5 GB/s, the memory can provide 10.5 / 4 ≈ 2.6 single-precision values (4 bytes each) per nanosecond.

One core is about 9 times as fast as the memory! Attaching more cores to the same bus only makes the problem worse, unless heavy data reuse is possible.

The future of multicores

TILE64 is a multicore processor manufactured by Tilera. It consists of a mesh network of 64 "tiles", where each tile houses a general-purpose processor, a cache and a non-blocking router, which the tile uses to communicate with the other tiles on the chip.

- 4.5 TB/s on-chip mesh interconnect
- 25 GB/s towards main memory
- no floating point

Intel Polaris

The Intel Polaris 80-core prototype:

- 80 tiles arranged in an 8 × 10 grid
- on-chip mesh interconnect with 1.62 Tb/s bisection bandwidth
- 3D-stacked memory (in the future)
- consumes only 62 Watts on a 275 mm² die
- each tile has: a router, 3 KB of instruction memory, 2 KB of data memory, 2 single-precision FMAC units and 32 single-precision registers

That makes 4 (flops) × 80 (tiles) × 3.16 GHz ≈ 1 TFlop/s. The first TFlop machine was the ASCI Red, made up of about 10000 Pentium Pro processors, taking up 250 m² and 500 KW...

The IBM Cell

The Cell Broadband Engine was released in 2005 by the STI (Sony, Toshiba, IBM) consortium. It is a 9-way multicore processor:

- 1 control core + 8 working cores
- computational power is achieved by exploiting two levels of parallelism: vector units and multiple cores
- an on-chip interconnect bus handles core-to-core communication
- caches are replaced by explicitly managed local memories
- performance comes at a price: the Cell is very hard to program

The Cell: architecture

- one POWER Processing Element (PPE): almost like a PowerPC processor (it lacks some ILP features), meant almost exclusively for control work
- 8 Synergistic Processing Elements (SPEs) (only 6 are available in the PS3)
- one Element Interconnect Bus (EIB): an on-chip ring bus connecting all the SPEs and the PPE
- one Memory Interface Controller (MIC) that connects the EIB to main memory
- the PPE and the SPEs have different ISAs, so we have to write different code and use different compilers

The Cell: the SPE

- each SPE has an SPU which is, essentially, a SIMD unit
- 2 in-order, RISC-type (i.e., short) pipelines: one for loads/stores, shuffles, shifts, ...; one for fixed- and floating-point operations. The floating-point unit is a vector unit that can do 4 SP (2 DP) FMACs per clock cycle; at 3.2 GHz this amounts to 25.6 Gflop/s. DP computations are 2 × 7 = 14 times slower.
- each SPE has a 256 KB scratchpad memory; data and code have to be explicitly moved into this local memory through the MFC
- communication is managed by a Memory Flow Controller (MFC), which is essentially a DMA engine; the MFC can move data at 25.6 GB/s (see the sketch below for what such a transfer looks like)
- large register file: 128 entries of 128 bits each
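
To make "explicitly managed" concrete, the fragment below is a minimal sketch of an SPU-side DMA transfer built on the MFC intrinsics from the Cell SDK header spu_mfcio.h; the buffer name, block size and tag are chosen only for illustration, and error handling is omitted.

    #include <spu_mfcio.h>

    /* hypothetical 16 KB buffer in the SPE local store; DMA buffers must be
       at least 16-byte aligned (128-byte alignment performs best)           */
    static float ls_buf[4096] __attribute__((aligned(128)));

    /* fetch one 16 KB block from main memory (effective address ea)
       into the local store and wait for the transfer to complete            */
    void fetch_block(unsigned long long ea)
    {
        unsigned int tag = 1;                /* any tag id between 0 and 31  */

        /* enqueue a "get" DMA command on the MFC (max 16 KB per command)    */
        mfc_get(ls_buf, ea, sizeof(ls_buf), tag, 0, 0);

        /* block until all DMA commands with this tag have completed         */
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();
    }

In real codes the same mechanism is typically used with two buffers (double buffering), so that the next block is transferred while the current one is being processed.
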
The Cell: the EIB

The Element Interconnect Bus:

- is a bus made of four unidirectional rings, two moving data in one direction and two in the opposite one
- arbitration is token-based
- aggregate bandwidth is 102.4 GB/s
- each link to an SPE, the PPE or the MIC (main memory) provides 25.6 GB/s
- it runs at half the system clock (i.e., 1.6 GHz)

Hello world!

Example PPE code:

    #include <stdio.h>
    #include <libspe.h>
    #include <sys/wait.h>

    extern spe_program_handle_t hello_spu;

    int main(void)
    {
        speid_t speid[8];
        int status[8];
        int i;

        /* start one thread on each of the 8 SPEs */
        for (i = 0; i < 8; i++)
            speid[i] = spe_create_thread(0, &hello_spu, NULL, NULL, -1, 0);

        /* wait for all SPE threads to terminate */
        for (i = 0; i < 8; i++) {
            spe_wait(speid[i], &status[i], 0);
            printf("status = %d\n", WEXITSTATUS(status[i]));
        }
        return 0;
    }

SPE code:

    #include <stdio.h>

    int main(unsigned long long speid,
             unsigned long long argp,
             unsigned long long envp)
    {
        printf("Hello world (0x%llx)\n", speid);
        return 0;
    }

Programming the SPE

A simple matrix-vector multiply/add, c = c - A*b, on a 32 × 32 tile:

    void spu_code_tile(float *A, float *b, float *c)
    {
        int m, n;

        for (n = 0; n < 32; n++)
            for (m = 0; m < 32; m++)
                c[m] -= A[n*32+m] * b[n];
    }

- good news: standard C code will compile and run correctly
- bad news: no performance, i.e., 0.24 Gflop/s (less than 1% of peak)

Taking advantage of vector instructions:

- alias the scalar arrays into vector arrays
- use the SPU intrinsics

    void spu_code_tile(float *A, float *b, float *c)
    {
        vector float *Ap = (vector float *)A;
        vector float *cp = (vector float *)c;
        int m, n;

        for (n = 0; n < 32; n++) {
            vector float b_splat = spu_splats(b[n]);
            /* cp[m] = cp[m] - Ap[n*8+m] * b_splat */
            for (m = 0; m < 8; m++)
                cp[m] = spu_nmsub(Ap[n*8+m], b_splat, cp[m]);
        }
    }

4.5× faster: 1.08 Gflop/s, i.e., about 4% of peak.

Vectorization + unrolling:

- unroll the loops
- reuse registers between iterations
- eliminate the innermost loop completely

    void spu_code_tile(float *A, float *b, float *c)
    {
        vector float *Ap = (vector float *)A;
        vector float *cp = (vector float *)c;
        vector float c0, c1, c2, c3, c4, c5, c6, c7;
        int n;

        /* keep the 8 vectors of the result column in registers */
        c0 = cp[0]; c4 = cp[4];
        c1 = cp[1]; c5 = cp[5];
        c2 = cp[2]; c6 = cp[6];
        c3 = cp[3]; c7 = cp[7];

        for (n = 0; n < 32; n++) {
            vector float b_splat = spu_splats(b[n]);
            c0 = spu_nmsub(Ap[n*8+0], b_splat, c0);
            c1 = spu_nmsub(Ap[n*8+1], b_splat, c1);
            c2 = spu_nmsub(Ap[n*8+2], b_splat, c2);
            c3 = spu_nmsub(Ap[n*8+3], b_splat, c3);
            c4 = spu_nmsub(Ap[n*8+4], b_splat, c4);
            c5 = spu_nmsub(Ap[n*8+5], b_splat, c5);
            c6 = spu_nmsub(Ap[n*8+6], b_splat, c6);
            c7 = spu_nmsub(Ap[n*8+7], b_splat, c7);
        }

        /* write the result back to the local store */
        cp[0] = c0; cp[4] = c4;
        cp[1] = c1; cp[5] = c5;
        cp[2] = c2; cp[6] = c6;
        cp[3] = c3; cp[7] = c7;
    }

32× faster: 7.64 Gflop/s, i.e., about 30% of peak.
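
Finally, a minimal sketch of a test harness for the tile kernel above; the driver, data initialization and tolerance are hypothetical, and since the vectorized kernels use SPU intrinsics it would have to be compiled with the SPU toolchain (e.g., spu-gcc).

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* kernel from the slides (any of the three versions), linked in separately */
    void spu_code_tile(float *A, float *b, float *c);

    int main(void)
    {
        /* 16-byte alignment so the (vector float *) casts in the kernel are valid */
        static float A[32*32] __attribute__((aligned(16)));
        static float b[32]    __attribute__((aligned(16)));
        static float c[32]    __attribute__((aligned(16)));
        float ref[32];
        int m, n;

        /* fill the 32x32 tile and the vectors with random data */
        for (n = 0; n < 32; n++) {
            b[n] = (float)rand() / RAND_MAX;
            c[n] = ref[n] = (float)rand() / RAND_MAX;
            for (m = 0; m < 32; m++)
                A[n*32+m] = (float)rand() / RAND_MAX;
        }

        /* scalar reference: ref = ref - A*b */
        for (n = 0; n < 32; n++)
            for (m = 0; m < 32; m++)
                ref[m] -= A[n*32+m] * b[n];

        spu_code_tile(A, b, c);

        for (m = 0; m < 32; m++)
            if (fabsf(c[m] - ref[m]) > 1e-4f) {
                printf("mismatch at %d\n", m);
                return 1;
            }
        printf("OK\n");
        return 0;
    }
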
