Computation | Article

GPU Computing with Python: Performance, Energy Efficiency and Usability †

Håvard H. Holm 1,2,*, André R. Brodtkorb 3,4 and Martin L. Sætra 4,5

1 Mathematics and Cybernetics, SINTEF Digital, P.O. Box 124 Blindern, NO-0314 Oslo, Norway
2 Department of Mathematical Sciences, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway
3 Research and Development Department, Norwegian Meteorological Institute, P.O. Box 43 Blindern, NO-0313 Oslo, Norway; [email protected]
4 Department of Computer Science, Oslo Metropolitan University, P.O. Box 4 St. Olavs plass, NO-0130 Oslo, Norway; [email protected]
5 Information Technology Department, Norwegian Meteorological Institute, P.O. Box 43 Blindern, NO-0313 Oslo, Norway
* Correspondence: [email protected]
† This paper is an extended version of our paper published in the International Conference on Parallel Computing (ParCo 2019).

Received: 6 December 2019; Accepted: 6 January 2020; Published: 9 January 2020

Abstract: In this work, we examine the performance, energy efficiency, and usability of Python for developing high-performance computing codes that run on the graphics processing unit (GPU). We investigate the portability of performance and energy efficiency between the Compute Unified Device Architecture (CUDA) and the Open Computing Language (OpenCL); between GPU generations; and between low-end, mid-range, and high-end GPUs. Our findings show that the impact of using Python is negligible for our applications, and furthermore, that CUDA and OpenCL applications tuned to an equivalent level can in many cases obtain the same computational performance. Our experiments also show that, in general, performance varies more between different GPUs than between CUDA and OpenCL.