TAMING THE HYDRA: MULTI-GPU PROGRAMMING WITH OPENACC
Jeff Larkin, GTC 2019, March 2019

WHY USE MULTIPLE DEVICES?
• Because 2 > 1: effectively using multiple devices gives a faster time to solution and/or lets you solve more complex problems.
• Because they're there: if you are running on a machine with multiple devices, you are wasting resources by not using them all.
• Because the code already runs on multiple nodes using MPI, so little change is necessary.

SUMMIT NODE
[Node diagram: (2) IBM POWER9 + (6) NVIDIA Volta V100. Each CPU has 256 GB DDR4 at 135 GB/s, its cores (4 hardware threads each) are grouped with three GPUs, and the two CPUs are linked at 64 GB/s. Each GPU has 16 GB HBM2 at 900 GB/s and is attached by NVLink2 at 50 GB/s per link.]

DGX-1 NODE
[Node topology diagram]

DGX-2 NODE
[Node topology diagram]

MULTI-GPU PROGRAMMING STRATEGIES

MULTI-GPU PROGRAMMING MODELS
• 1 CPU : Many GPUs. A single thread changes devices as needed to send data and kernels to different GPUs.
• Many Threads : 1 GPU (each). Using OpenMP, pthreads, or similar, each thread manages its own GPU.
• 1+ CPU Process : 1 GPU. Each rank acts as if there were just one GPU, but multiple ranks per node use all GPUs.
• Many Processes : Many GPUs. Each rank manages multiple GPUs, with multiple ranks per node. Gets complicated quickly!

MULTI-GPU PROGRAMMING MODELS
Trade-offs Between Approaches

1 CPU : Many GPUs
• Conceptually simple
• Requires additional loops
• CPU can become a bottleneck
• Remaining CPU cores often underutilized

Many Threads : 1 GPU (each)
• Conceptually very simple
• Set and forget the device numbers
• Relies on an external threading API
• Can see improved utilization
• Watch affinity

1+ CPU : 1 GPU
• Little to no code changes required
• Re-uses existing domain decomposition
• Probably already using MPI
• Watch affinity

Many Processes : Many GPUs
• Easily share data between peer devices
• Coordinating between GPUs is extremely tricky

MULTI-DEVICE OPENACC
• OpenACC presents devices numbered 0 to (N-1) for each device type available.
• The order of the devices comes from the runtime and will almost certainly match CUDA's ordering.
• By default, all data and work go to the current device, and each device has its own async queues.
• Developers must change the current device, and possibly the current device type, using an API.

MULTI-DEVICE CUDA
• CUDA by default exposes all devices, numbered 0 to (N-1); if the devices are not all the same, it reorders the "best" one to device 0.
• Each device has its own pool of streams.
• If you do nothing, all work goes to device 0.
• The developer must change the current device explicitly using an API.
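As an illustration (this sketch is not from the slides), querying and selecting devices through the OpenACC runtime API might look like the following; the use of acc_device_nvidia and the placeholder loop body are assumptions:

#include <openacc.h>
#include <stdio.h>

int main(void)
{
    /* Ask the OpenACC runtime how many NVIDIA devices are visible. */
    int ndevices = acc_get_num_devices(acc_device_nvidia);
    printf("OpenACC sees %d device(s)\n", ndevices);

    for (int dev = 0; dev < ndevices; dev++) {
        /* Make 'dev' the current device: subsequent data clauses, compute
           regions, and async queues target this device until it is changed. */
        acc_set_device_num(dev, acc_device_nvidia);
        /* ... device-specific data movement and kernels go here ... */
    }
    return 0;
}

The CUDA runtime equivalents are cudaGetDeviceCount() and cudaSetDevice(); as noted above, the OpenACC device numbering will almost always match CUDA's.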
SAMPLE CODE
Embarrassingly Parallel Image Filter

SAMPLE FILTER CODE

for (y = 0; y < h; y++) {
  for (x = 0; x < w; x++) {                        // For each pixel
    double blue = 0.0, green = 0.0, red = 0.0;
    for (int fy = 0; fy < filtersize; fy++) {
      long iy = y - (filtersize/2) + fy;
      for (int fx = 0; fx < filtersize; fx++) {    // Apply the filter
        long ix = x - (filtersize/2) + fx;
        blue  += filter[fy][fx] * (double)imgData[iy * step + ix * ch];
        green += filter[fy][fx] * (double)imgData[iy * step + ix * ch + 1];
        red   += filter[fy][fx] * (double)imgData[iy * step + ix * ch + 2];
      }
    }
    out[y * step + x * ch]     = 255 - (scale * blue);   // Store the values
    out[y * step + x * ch + 1] = 255 - (scale * green);
    out[y * step + x * ch + 2] = 255 - (scale * red);
  }
}

1:MANY

1:MANY – THE PLAN
1. Introduce a loop to initialize each device with the arrays it needs.
2. Send each device its block of the image.
3. Introduce a loop to copy the data back from, and tear down, each device.

SAMPLE FILTER CODE (1:MANY)

// 1. Initialize each device: create its slice of the arrays and copy in the filter.
for (int device = 0; device < ndevices; device++) {
  acc_set_device_num(device, acc_device_default);
  long lower = device * rows_per_device;
  long upper = MIN(lower + rows_per_device, h);
  long copyLower = MAX(lower - (filtersize/2), 0);
  long copyUpper = MIN(upper + (filtersize/2), h);
  #pragma acc enter data \
      create(imgData[copyLower*step:(copyUpper-copyLower)*step], \
             out[lower*step:(upper-lower)*step]) \
      copyin(filter[:5][:5])
}

// 2. Send each device its block of the image, launch the filter, and start
//    copying the result back, all asynchronously.
for (int device = 0; device < ndevices; device++) {
  acc_set_device_num(device, acc_device_default);
  printf("Launching device %d\n", device);
  long lower = device * rows_per_device;
  long upper = MIN(lower + rows_per_device, h);
  long copyLower = MAX(lower - (filtersize/2), 0);
  long copyUpper = MIN(upper + (filtersize/2), h);
  #pragma acc update device(imgData[copyLower*step:(copyUpper-copyLower)*step]) async
  #pragma acc parallel loop present(filter, \
      imgData[copyLower*step:(copyUpper-copyLower)*step], \
      out[lower*step:(upper-lower)*step]) async
  for (y = lower; y < upper; y++) {
    #pragma acc loop
    for (x = 0; x < w; x++) {
      double blue = 0.0, green = 0.0, red = 0.0;
      #pragma acc loop seq
      for (int fy = 0; fy < filtersize; fy++) {
        long iy = y - (filtersize/2) + fy;
        #pragma acc loop seq
        for (int fx = 0; fx < filtersize; fx++) {
          long ix = x - (filtersize/2) + fx;
          if ((iy < 0) || (ix < 0) || (iy >= h) || (ix >= w)) continue;
          blue  += filter[fy][fx] * (double)imgData[iy * step + ix * ch];
          green += filter[fy][fx] * (double)imgData[iy * step + ix * ch + 1];
          red   += filter[fy][fx] * (double)imgData[iy * step + ix * ch + 2];
        }
      }
      out[y * step + x * ch]     = 255 - (scale * blue);
      out[y * step + x * ch + 1] = 255 - (scale * green);
      out[y * step + x * ch + 2] = 255 - (scale * red);
    }
  }
  #pragma acc update self(out[lower*step:(upper-lower)*step]) async
}

// 3. Wait for each device to finish, then tear it down.
for (int device = 0; device < ndevices; device++) {
  acc_set_device_num(device, acc_device_default);
  #pragma acc wait
  long lower = device * rows_per_device;
  long upper = MIN(lower + rows_per_device, h);
  long copyLower = MAX(lower - (filtersize/2), 0);
  long copyUpper = MIN(upper + (filtersize/2), h);
  #pragma acc exit data delete(out[lower*step:(upper-lower)*step], \
      imgData[copyLower*step:(copyUpper-copyLower)*step], filter)
}
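The filter details can make the pattern harder to see. The following condensed sketch is not from the slides: it applies the same three phases (set up each device, launch work asynchronously on each device, then wait and tear down) to a hypothetical vector-scaling routine, assuming NVIDIA devices and the standard OpenACC runtime API:

#include <openacc.h>

/* Hypothetical example: scale a vector across every visible device using the
   1:MANY pattern. One host thread drives all devices; async queues let the
   devices run concurrently instead of serializing behind the host.
   Assumes n >= number of devices. */
void scale_on_all_devices(double *v, long n, double factor)
{
    int ndevices = acc_get_num_devices(acc_device_nvidia);
    if (ndevices == 0) {                     /* host fallback when no GPU is found */
        for (long i = 0; i < n; i++) v[i] *= factor;
        return;
    }
    long chunk = (n + ndevices - 1) / ndevices;

    /* 1. Give each device its block of the vector. */
    for (int dev = 0; dev < ndevices; dev++) {
        acc_set_device_num(dev, acc_device_nvidia);
        long lo = dev * chunk;
        long hi = (lo + chunk < n) ? lo + chunk : n;
        #pragma acc enter data copyin(v[lo:hi-lo]) async
    }

    /* 2. Launch the computation and the copy-back on every device. */
    for (int dev = 0; dev < ndevices; dev++) {
        acc_set_device_num(dev, acc_device_nvidia);
        long lo = dev * chunk;
        long hi = (lo + chunk < n) ? lo + chunk : n;
        #pragma acc parallel loop present(v[lo:hi-lo]) async
        for (long i = lo; i < hi; i++)
            v[i] *= factor;
        #pragma acc update self(v[lo:hi-lo]) async
    }

    /* 3. Wait for each device to drain its queue, then free its copy. */
    for (int dev = 0; dev < ndevices; dev++) {
        acc_set_device_num(dev, acc_device_nvidia);
        #pragma acc wait
        long lo = dev * chunk;
        long hi = (lo + chunk < n) ? lo + chunk : n;
        #pragma acc exit data delete(v[lo:hi-lo])
    }
}

Without the async clauses the host thread would block on each device's transfers and kernel in turn, so the devices would effectively run one after another; the final per-device wait is what re-synchronizes the host with all of the queues.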