
Teaching HPC Systems and Parallel Programming with Small-Scale Clusters

Lluc Alvarez, Eduard Ayguade, Filippo Mantovani
Barcelona Supercomputing Center
Universitat Politècnica de Catalunya
fi[email protected], [email protected], [email protected]

Abstract—In the last decades, the continuous proliferation of High-Performance Computing (HPC) systems and data centers has increased the demand for expert HPC system designers, administrators, and programmers. For this reason, most universities have introduced courses on HPC systems and parallel programming in their degrees. However, the laboratory assignments of these courses generally use clusters that are owned, managed and administrated by the university. This methodology has been shown to be effective for teaching parallel programming, but using a remote cluster prevents the students from experimenting with the design, set-up and administration of such systems. This paper presents a methodology and framework to teach HPC systems and parallel programming using a small-scale cluster of single-board computers. These boards are very cheap, their processors are fundamentally very similar to the ones found in HPC, and they are ready to execute Linux out of the box, so they represent a perfect laboratory playground for students to experience how to assemble a cluster, set it up, and configure its system software. We also show that these small-scale clusters can be used as evaluation platforms for both introductory and advanced parallel programming assignments.

Index Terms—HPC systems, parallel programming, teaching

I. INTRODUCTION

The importance of High-Performance Computing (HPC) in our society has continuously increased over the years. In the early years, the very few existing HPC systems were based on vector processors specialized for scientific computations, and they were only used by a small number of experts; programmability and usability were not critical issues at that moment. The trend changed when supercomputers started to adopt "high-end" commodity technologies (e.g., general-purpose cores), which opened the door to a rich software ecosystem. As a consequence, programming productivity increased and HPC infrastructure became popular throughout many research and industrial sectors. In recent years, the proliferation of HPC systems and data centers has gone even further with the emergence of mobile devices and cloud services. In the current scenario, the demand for expert HPC system designers, administrators and programmers is higher than ever, and it will likely continue growing to keep improving the performance and efficiency of HPC systems in the future.

In recent years, many universities have introduced courses on HPC systems and parallel programming in their degrees. Given the cost of modern HPC infrastructures, the laboratory assignments of most of these courses use clusters that are owned, managed and administrated by the university. This methodology is convenient for teaching parallel programming, as the students only need to connect remotely to the cluster to do the programming work for the assignment. However, using a remote cluster prevents the students from experimenting with the design, set-up and administration of such systems.

With the advent of Single-Board Computers (SBCs) for the embedded and multimedia domains, building a small-scale cluster has recently become extremely affordable, both economically and technically. Modern commercial SBCs for the embedded domain are equipped with processors that are fundamentally very similar to the ones found in HPC systems and are ready to execute Linux out of the box. These devices therefore provide a great opportunity for students to gain experience assembling a cluster, setting it up, and configuring all the software required to have a fully operative small-scale HPC cluster.

This paper presents the methodology and framework that we propose for teaching HPC systems and parallel programming using a small-scale HPC cluster of SBCs. This methodology has been successfully used to support teaching activities in Parallel Programming and Architectures (PAP), a third-year elective subject in the Bachelor Degree in Informatics Engineering at the Barcelona School of Informatics (FIB) of the Universitat Politècnica de Catalunya (UPC) - BarcelonaTech. After presenting the PAP course description and environment, the paper gives an overview of the components of the small-scale cluster, which we name the Odroid cluster after the Odroid-XU4 boards [1] that form it. Then the paper describes the methodology that we use in two laboratory assignments of the course. The first laboratory assignment consists of setting up the Odroid cluster and evaluating its main characteristics. The cluster setup includes physically assembling the boards, configuring the network topology of the cluster, and installing the software ecosystem typically found in HPC platforms. In the evaluation part the students discover the main characteristics of the Odroid-XU4 boards, learn how the threads and processes of a parallel program are distributed among the processors and the nodes, and experiment with the effects of heterogeneity. The second laboratory assignment consists of parallelizing an application implementing the heat diffusion algorithm with MPI [2] and OpenMP [3] and evaluating it on the Odroid cluster. The complete framework presented in this paper greatly facilitates learning the design, the setup and the software ecosystem of HPC systems, and it is also a very appealing platform for the evaluation of parallel programming assignments.
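To give a concrete flavour of this second assignment (described in detail in Section VI), the following is a minimal sketch of a hybrid MPI+OpenMP Jacobi-style heat diffusion step, assuming a one-dimensional row decomposition across MPI ranks with halo exchanges between neighbours and an OpenMP-parallel stencil loop inside each rank. The grid size, boundary conditions and all identifiers are invented for the illustration and do not correspond to the actual assignment code.

/* Illustrative hybrid MPI+OpenMP heat diffusion sketch (not the assignment code).
 * Rows of the global N x N grid are distributed across MPI ranks; OpenMP threads
 * share the stencil loop inside each rank. Assumes N is divisible by the number
 * of ranks. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N     512      /* global grid dimension (placeholder) */
#define ITERS 100      /* number of Jacobi sweeps (placeholder) */

int main(int argc, char *argv[])
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int rows = N / nprocs;                              /* rows owned by this rank */
    int up   = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
    int down = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

    /* local grid with one halo row above and one below */
    double *u    = calloc((size_t)(rows + 2) * N, sizeof(double));
    double *unew = calloc((size_t)(rows + 2) * N, sizeof(double));

    /* simple boundary condition: hot top edge held at 100 degrees on rank 0 */
    if (rank == 0)
        for (int j = 0; j < N; j++)
            u[j] = unew[j] = 100.0;

    for (int it = 0; it < ITERS; it++) {
        /* exchange halo rows with the neighbouring ranks */
        MPI_Sendrecv(&u[1 * N],          N, MPI_DOUBLE, up,   0,
                     &u[(rows + 1) * N], N, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[rows * N],       N, MPI_DOUBLE, down, 1,
                     &u[0],              N, MPI_DOUBLE, up,   1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Jacobi update of the interior points, shared among the OpenMP threads */
        #pragma omp parallel for
        for (int i = 1; i <= rows; i++)
            for (int j = 1; j < N - 1; j++)
                unew[i * N + j] = 0.25 * (u[(i - 1) * N + j] + u[(i + 1) * N + j] +
                                          u[i * N + j - 1]   + u[i * N + j + 1]);

        double *tmp = u; u = unew; unew = tmp;          /* swap grids */
    }

    if (rank == 0)
        printf("done: %d MPI ranks x %d OpenMP threads\n",
               nprocs, omp_get_max_threads());

    free(u); free(unew);
    MPI_Finalize();
    return 0;
}

A program of this kind is typically compiled with an MPI wrapper and OpenMP enabled (e.g., mpicc -fopenmp) and launched with one MPI rank per board and several OpenMP threads per rank, matching the kind of thread and process distribution the students explore on the Odroid cluster.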
The rest of this paper is organized as follows: Section II explains the course and its methodology. Section III gives an overview of the Odroid cluster and its components. Section IV presents the step-by-step process that the students follow in the laboratory assignment to set up the cluster. Section V describes the work to be done by the students to evaluate the Odroid cluster and to understand its main characteristics. Section VI then shows how we use the Odroid cluster as a platform for a parallel programming assignment. Section VII reviews other proposals that use small-scale clusters in courses related to parallel and distributed computing. Finally, Section VIII summarizes the main conclusions of this work and presents some future directions to evolve both the Odroid cluster and the assignments.

II. CONTEXT, COURSE DESCRIPTION AND METHODOLOGY

Parallel Programming and Architectures (PAP) is a third-year (sixth term) optional subject in the Bachelor Degree in Informatics Engineering at the Barcelona School of Informatics (FIB) of the Universitat Politècnica de Catalunya (UPC) - BarcelonaTech. The subject comes after Parallelism (PAR), a core subject in the Bachelor Degree that covers the fundamental aspects of parallelism, parallel programming with OpenMP and shared-memory multiprocessor architectures [4]. PAP extends the concepts and methodologies introduced in PAR, focusing on the most relevant aspects of the implementation of runtime libraries for shared-memory programming models such as OpenMP using low-level threading libraries (Pthreads), and also explaining distributed-memory cluster architectures and how to program them with MPI. Another elec- [...]

[...] black box behind the compilation and execution of OpenMP programs [5], explaining the internals of runtime systems for shared-memory programming and the most relevant aspects of thread management, work generation and execution, and synchronization. In a very practical way, students explore different alternatives for implementing a minimal OpenMP-compliant runtime library using Pthreads, providing support for both the work-sharing and tasking execution models. This block takes four theory/problems sessions (mainly covering low-level Pthreads programming) and six laboratory sessions of individual work to program the OpenMP-compliant runtime library. At the end of the laboratory sessions for this block, a class is devoted to sharing experiences and learnings.
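As an illustration of the kind of mechanism this block deals with, the sketch below shows how a statically scheduled work-sharing loop (conceptually, a #pragma omp parallel for with static scheduling) can be built directly on top of Pthreads. All identifiers (my_parallel_for, chunk_t) and the fixed thread count are invented for the example; the library the students implement goes further, also supporting the tasking execution model.

/* Minimal illustration of a work-sharing "parallel for" built on Pthreads
 * (invented names; not the runtime library developed in the course). */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 4

typedef void (*loop_body_t)(int i, void *arg);

typedef struct {
    int begin, end;        /* iteration range assigned to this thread */
    loop_body_t body;      /* user-provided loop body                 */
    void *arg;             /* shared data passed to every iteration   */
} chunk_t;

static void *worker(void *p)
{
    chunk_t *c = p;
    for (int i = c->begin; i < c->end; i++)
        c->body(i, c->arg);
    return NULL;
}

/* Statically split [0, n) among NUM_THREADS threads, run them, and join;
 * the join plays the role of the implicit barrier at the end of the loop. */
static void my_parallel_for(int n, loop_body_t body, void *arg)
{
    pthread_t tid[NUM_THREADS];
    chunk_t   chunk[NUM_THREADS];
    int per = (n + NUM_THREADS - 1) / NUM_THREADS;   /* iterations per thread */

    for (int t = 0; t < NUM_THREADS; t++) {
        chunk[t].begin = t * per;
        chunk[t].end   = (t + 1) * per < n ? (t + 1) * per : n;
        chunk[t].body  = body;
        chunk[t].arg   = arg;
        pthread_create(&tid[t], NULL, worker, &chunk[t]);
    }
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(tid[t], NULL);
}

/* Example loop body: scale one element of a vector. */
static void scale(int i, void *arg)
{
    double *v = arg;
    v[i] *= 2.0;
}

int main(void)
{
    enum { N = 1000 };
    double *v = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++) v[i] = i;

    my_parallel_for(N, scale, v);               /* roughly: #pragma omp parallel for */

    printf("v[%d] = %.1f\n", N - 1, v[N - 1]);  /* expect 1998.0 */
    free(v);
    return 0;
}

A production OpenMP runtime would keep a persistent team of threads instead of creating and joining them for every loop, and would add scheduling policies, tasking and synchronization primitives; exploring such alternatives is the purpose of the laboratory sessions of this block.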
The second block has the objective of understanding the scaling of HPC systems to large numbers of processors, beyond the single-node shared-memory architectures the students are familiar with. This block introduces the most relevant aspects of multi-node HPC clusters and explains their main hardware components (processors, accelerators, memories, interconnection networks, etc.). This block also takes four theory/problems sessions, devoted to analyzing in detail how the FLOPs/Byte ratio (i.e., the number of potential floating-point operations per byte of data accessed from/to memory or the interconnect) evolves along the scaling path. The roofline model [6], which plots floating-point performance as a function of the peak performance of the compute units, the peak data access bandwidth and the arithmetic intensity, is used to understand this evolution and its implications on data sharing in parallel programs. Finally, the evolution of energy efficiency (FLOPs/Watt) is also covered in the theory classes.

The laboratory part of this block consists of three sessions in which the students physically assemble a small-scale cluster based on Odroid-XU4 boards. They set up the Ethernet network and the Network File System (NFS), and install and configure all the software required to execute MPI and OpenMP parallel programs. They eventually evaluate the cluster and its main characteristics [...]
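For reference, the roofline bound used in these theory sessions is the standard formulation (written here in LaTeX notation, not copied from the course material):

    P(I) = \min\bigl(P_{\mathrm{peak}},\; B \cdot I\bigr)

where P(I) is the attainable floating-point performance of a kernel with arithmetic intensity I (floating-point operations per byte of data moved), P_peak is the peak performance of the compute units, and B is the peak data access bandwidth. As a purely hypothetical example (not the actual figures of the Odroid cluster), a node with P_peak = 4 GFLOP/s and B = 2 GB/s is bandwidth-bound for every kernel whose arithmetic intensity lies below the ridge point P_peak / B = 2 FLOPs per byte.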