Fine-Grain Parallelism Using Multi-Core, Cell/BE, and GPU Systems: Accelerating the Phylogenetic Likelihood Function

Frederico Pratas (1), Pedro Trancoso (2)(*), Alexandros Stamatakis (3), Leonel Sousa (1)(*)

(1) SiPS group, INESC-ID/IST, Universidade Técnica de Lisboa, Lisbon, Portugal, {fcpp,las}@inesc-id.pt
(2) CASPER group, Department of Computer Science, University of Cyprus, Nicosia, Cyprus, [email protected]
(3) The Exelixis Lab, Bioinformatics Unit (I12), Department of Computer Science, Technische Universität München, München, Germany, [email protected]
(*) P. Trancoso and L. Sousa are members of HiPEAC (EU FP7 program).

Abstract

We are currently faced with a situation where applications have increasing computational demands and there is a wide selection of parallel processor systems. In this paper we focus on exploiting fine-grain parallelism for a demanding bioinformatics application, MrBayes, and its Phylogenetic Likelihood Functions (PLF), using different architectures. Our experiments compare side by side the scalability and performance achieved with general-purpose multi-core processors, the Cell/BE, and Graphics Processing Units (GPUs). The results indicate that all processors scale well for larger computations and data sets. In addition, the GPU and Cell/BE processors achieve the best improvement for the parallel code section. Nevertheless, data transfers and the execution of the serial portion of the code are the reasons for their poor overall performance. The general-purpose multi-core processors prove to be simpler to program and provide the best balance between efficient parallel and serial execution, resulting in the largest speedup.

1. Introduction

In order to address the constant demand for increased performance within given complexity and power budgets, current microprocessors are composed of multiple cores. Two major challenges need to be addressed in order to efficiently exploit the increasing on-chip parallel resources: the architecture of these large-scale many-core processors and the programmability of such systems. The multi-core models currently available in the market represent different attempts to address these issues.

The major contribution of our work is to analyze the different existing architectures in terms of performance, scalability, and programmability. To achieve this goal we focus on three different types of multi-core architectures and on how to exploit fine-grain parallelism for a demanding and relevant application from the area of bioinformatics.

The three architectures and models under study are: general-purpose homogeneous multi-cores (dual- and quad-core Intel and AMD processors), a heterogeneous multi-core (the IBM Cell/BE), and graphics processing units (NVIDIA GPUs). The general-purpose multi-core and Cell/BE processors support the Multiple-Program Multiple-Data (MPMD) model, while the GPUs support the Single-Program Multiple-Data (SPMD) model. The memory hierarchies of these architectures also have interesting, distinct characteristics. For the general-purpose multi-core processors the caches are handled entirely by hardware. In contrast, for the Cell/BE the local memory of the parallel processing elements is managed entirely by software, i.e., the programmer is responsible for mapping the data and explicitly loading it before use. The GPUs offer an intermediate solution, where the programmer is required to map the data onto the on-chip memory but the accesses are handled by hardware.

Regarding the target application, we assess how MrBayes [10], a program for Bayesian inference of evolutionary (phylogenetic) trees, can benefit from fine-grain parallelism. Phylogenetic inference deals with the reconstruction of the evolutionary history of a set of organisms based on a multiple sequence alignment of molecular sequence data. Due to the large number of potential alternative unrooted binary tree topologies, the problem of finding the best-scoring tree is NP-hard under the Maximum Likelihood model [2]. The scoring function used in MrBayes is also adopted by other phylogenetic inference programs [5, 11].

The Phylogenetic Likelihood Function (PLF) in MrBayes is parallelized using OpenMP for the general-purpose multi-core systems, POSIX Threads for the Cell/BE systems, and CUDA for the GPU systems. Experimental results show that the performance of all considered architectures scales well as the input data set grows. In the systems with hardware-managed caches, the sharing of an on-chip cache level by all cores is a determining factor for efficient synchronization and, therefore, for scaling with the number of calls to the parallel section. The user's effort in managing the software-managed caches is compensated by the efficient synchronization mechanisms of the Cell/BE. Moreover, since the PPE was designed only to coordinate the execution of the SPEs, it cannot match the single-thread performance of traditional CPUs; its overall performance is therefore penalized by the execution of the serial code. Overall, compared to the serial execution, the GPUs significantly reduce the execution time of the parallel section, but the data transfer overheads penalize the effective overall speedup. Consequently, the best overall speedup is achieved by the general-purpose multi-core systems, which combine efficient parallel and serial execution. While our results are based on MrBayes, the work presented here is of general interest, because it discusses programming techniques to efficiently exploit memory-intensive fine-grained loop-level parallelism and provides a performance comparison across different architectures.

The remainder of this paper is organized as follows. Section 2 presents the different architectures analyzed in this work. Section 3 describes MrBayes and the parallelization strategies proposed for the different architectures. Section 4 describes the experimental setup and results. Finally, Section 5 covers relevant related work, and conclusions are presented in Section 6.

2. Multi-core Architectures

2.1. General-Purpose Multi-Cores

General-purpose microprocessors support the MPMD model and include several levels of hardware-managed cache memory. While the number of cores is still relatively small, each core is able to exploit instruction-level parallelism (ILP) for efficient single-thread execution. Regarding the memory hierarchy, the last cache level on the chip is in most cases shared by all cores. The memory hierarchy is typically coherent, which allows the use of a shared-memory parallel programming model. In addition, the sharing of this internal cache by all cores enables efficient data transfer and synchronization between them. For these processors, parallel programming is relatively easy using POSIX Threads (pthreads) or OpenMP directives.
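As an illustration of this loop-level style of programming, the sketch below distributes a conditional-likelihood style loop over alignment sites with a single OpenMP directive, in the spirit of the PLF parallelization discussed in Section 3. It is only a minimal sketch: the function and variable names (update_cl, cl, clLeft, clRight, pLeft, pRight, numSites) are illustrative and are not taken from the MrBayes sources.

    #include <omp.h>

    /* Minimal sketch: update the conditional likelihood (cl) vector of an
     * inner tree node. Sites are independent, so the outer loop can be
     * distributed over the cores with a single work-sharing directive. */
    void update_cl(double *cl, const double *clLeft, const double *clRight,
                   const double pLeft[4][4], const double pRight[4][4],
                   int numSites)
    {
        #pragma omp parallel for schedule(static)
        for (int site = 0; site < numSites; site++) {
            const double *a = &clLeft[4 * site];   /* 4 entries per DNA site */
            const double *b = &clRight[4 * site];
            double *out = &cl[4 * site];
            for (int i = 0; i < 4; i++) {
                double left = 0.0, right = 0.0;
                for (int j = 0; j < 4; j++) {      /* two 4-element inner products */
                    left  += pLeft[i][j]  * a[j];
                    right += pRight[i][j] * b[j];
                }
                out[i] = left * right;             /* one element of the cl vector */
            }
        }
    }

Because the caches are hardware-managed and coherent, no explicit data movement is needed here; the threads simply share the arrays.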
2.2. Cell/BE

The Cell/BE is a heterogeneous multi-core architecture consisting of nine cores: one general-purpose core, the PowerPC Processor Element (PPE), and eight special-purpose cores, the Synergistic Processing Elements (SPEs). The PPE is a simple processor designed to coordinate the execution on the SPEs and to run the Operating System (OS). The SPEs are even simpler processors, as their purpose is the execution of the parallel code. Each SPE includes a small private unified memory of 256 KB, the Local Store (LS). A key component of this loosely coupled system is the Element Interconnect Bus (EIB). This high-bandwidth, memory-coherent bus allows the cores to communicate through DMA data transfers between local and remote memories. The Cell/BE does not support shared memory at the hardware level, leaving the user responsible for managing the memory space efficiently. Applications on the Cell/BE can be parallelized using pthreads but, unlike on the general-purpose multi-cores, the user is also responsible for the necessary data transfers and for allocating the data to the corresponding LS memories.
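To make the software-managed Local Store concrete, the fragment below is a minimal SPE-side sketch written against the Cell SDK DMA intrinsics from spu_mfcio.h. The chunk size, tag value, and the process_chunk routine are hypothetical; this is not the code used in the paper, only an illustration of the explicit load-compute-store cycle the programmer has to write.

    #include <spu_mfcio.h>

    #define CHUNK_BYTES 4096   /* illustrative transfer size; DMA sizes must be multiples of 16 bytes */

    static float buf[CHUNK_BYTES / sizeof(float)] __attribute__((aligned(128)));

    /* Placeholder for the real per-chunk computation. */
    static void process_chunk(float *data, unsigned int n)
    {
        for (unsigned int i = 0; i < n; i++)
            data[i] *= 2.0f;
    }

    /* Explicitly stage one chunk: DMA it from main memory into the Local
     * Store, compute on it, and DMA the result back. 'ea' is the 64-bit
     * effective address of the chunk, handed to the SPE by the PPE. */
    void work_on_chunk(unsigned long long ea)
    {
        const unsigned int tag = 1;

        mfc_get(buf, ea, CHUNK_BYTES, tag, 0, 0);   /* main memory -> LS */
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();                  /* wait for the transfer */

        process_chunk(buf, CHUNK_BYTES / sizeof(float));

        mfc_put(buf, ea, CHUNK_BYTES, tag, 0, 0);   /* LS -> main memory */
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();
    }

In practice the next mfc_get is usually issued while the current chunk is being processed (double buffering) to hide transfer latency; that is omitted here for brevity.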
2.3. Graphics Processing Unit (GPU)

GPUs include a large number of very basic cores and are typically used as accelerators for a host system. A GPU is usually connected to the CPU through a system bus (e.g., PCIe) and can be used for general-purpose applications (GPGPU) [7].

The Compute Unified Device Architecture (CUDA) is a compiler-supported programming model that offers an extended version of the C language for programming recent NVIDIA GPUs. Parallelism with CUDA is achieved by executing the same function, or kernel, in N different CUDA threads which, in turn, are organized in blocks. During execution, CUDA threads may access data at multiple levels of the memory hierarchy: private local memory, shared memory, and global memory. Data organization across this memory hierarchy is crucial to achieve the best efficiency.

3. Fine-Grain Parallelism in MrBayes

3.1. MrBayes Overview

MrBayes is a popular program for Bayesian phylogenetic inference. The program is based on the Maximum Likelihood (ML) model [3], which represents a broadly accepted criterion to score phylogenetic trees. To compute the Phylogenetic Likelihood Function (PLF) on a fixed tree, one needs to estimate the branch lengths and the parameters of the statistical model of nucleotide substitution. For DNA data sequences, a model of nucleotide substitution is provided by a 4x4 matrix (denoted as Q and shown in Figure 2) that contains the instantaneous transition probabilities for a certain DNA nucleotide (A - Adenine, C - Cytosine, G - Guanine, or T - Thymine) to mutate into a nucleotide A, C, G, or T, according to the substitution model in Figure 1.

[Figure 1: A DNA substitution model]
[Figure 2: Nucleotide substitution matrix Q]
[Figure 3: Conditional likelihood (cl) vector (detail of one element)]
[Figure 4: Inner product dependencies graph]

... floating-point numbers as depicted in Figure 4.
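Putting the CUDA execution model of Section 2.3 together with the per-site structure of the conditional likelihood vector (Figures 3 and 4), a natural fine-grain mapping is one CUDA thread per alignment site. The kernel below is only an illustrative sketch under that assumption; the identifiers and the flat layout of the 4x4 probability matrices are ours and are not taken from the implementation evaluated in the paper.

    /* One thread computes the four conditional-likelihood entries of one
     * alignment site: for each state, the product of two 4-element inner
     * products with the left and right transition-probability matrices. */
    __global__ void cl_kernel(float *cl, const float *clLeft, const float *clRight,
                              const float *pLeft, const float *pRight, int numSites)
    {
        int site = blockIdx.x * blockDim.x + threadIdx.x;
        if (site >= numSites)
            return;

        const float *a = &clLeft[4 * site];
        const float *b = &clRight[4 * site];
        float *out = &cl[4 * site];

        for (int i = 0; i < 4; i++) {
            float left = 0.0f, right = 0.0f;
            for (int j = 0; j < 4; j++) {
                left  += pLeft[4 * i + j]  * a[j];
                right += pRight[4 * i + j] * b[j];
            }
            out[i] = left * right;
        }
    }

    /* Host-side launch: one thread per site, grouped into blocks of 256
     * threads. The input and output arrays are assumed to have been copied
     * to device (global) memory beforehand; those PCIe transfers are the
     * overhead referred to in the introduction. */
    void launch_cl(float *d_cl, const float *d_clLeft, const float *d_clRight,
                   const float *d_pLeft, const float *d_pRight, int numSites)
    {
        int threads = 256;
        int blocks  = (numSites + threads - 1) / threads;
        cl_kernel<<<blocks, threads>>>(d_cl, d_clLeft, d_clRight, d_pLeft, d_pRight, numSites);
    }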
