Comparison of Threading Programming Models

Solmaz Salehian, Jiawen Liu and Yonghong Yan
Department of Computer Science and Engineering, Oakland University, Rochester, MI, USA
{ssalehian,jliu,yan}@oakland.edu

2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). DOI: 10.1109/IPDPSW.2017.141

Abstract—In this paper, we provide a comparison of the language features and runtime systems of commonly used threading parallel programming models for high performance computing, including OpenMP, Intel Cilk Plus, Intel TBB, OpenACC, Nvidia CUDA, OpenCL, C++11 and PThreads. We then report our performance comparison of OpenMP, Cilk Plus and C++11 for data and task parallelism on CPU using benchmarks. The results show that performance varies with respect to factors such as runtime scheduling strategies, the overhead of enabling parallelism and synchronization, load balancing, and the uniformity of task workloads among threads in applications. Our study summarizes and categorizes the latest developments of threading programming APIs for supporting existing and emerging computer architectures, and provides tables that compare the features of the different APIs. It can be used as a guide for users to choose an API for their applications according to its features, interface and reported performance.

Keywords—threading; parallel programming; data parallelism; task parallelism; memory abstraction; synchronization; mutual exclusion

I. INTRODUCTION

The High Performance Computing (HPC) community has developed a rich variety of parallel programming models to facilitate the expression of the required levels of concurrency to exploit hardware capabilities. Programming APIs for node-level parallelism, such as OpenMP, Cilk Plus, C++11, POSIX threads (PThreads), Intel Threading Building Blocks (TBB), OpenCL, and Microsoft Parallel Patterns Library (PPL), to name a few, each have their own set of capabilities and advantages. They also share certain functionalities realized through different interfaces; e.g., most of them support both data parallelism and task parallelism patterns on CPU. They are all evolving to become more complex and comprehensive in order to support new computer architectures and emerging applications, and it is becoming harder for users to choose among these APIs for their applications with regard to their features and interfaces.

The same parallelism pattern can be realized using different interfaces and implemented by different runtime systems. The runtime systems that support those features vary in their scheduling algorithms and implementation strategies, which may cause dramatically different performance for the same application written with different programming APIs. Thus, a performance-wise selection of the right API requires effort in studying and benchmarking for the user's application.

In this paper, we provide an extensive comparison of the language features and runtime systems of commonly used threading parallel programming models for HPC, including OpenMP, Intel Cilk Plus, Intel TBB, OpenACC, Nvidia CUDA, OpenCL, C++11 and PThreads. We then report our performance comparisons of OpenMP, Cilk Plus and C++11 for data and task parallelism on CPU, showing the impact of runtime systems on performance. The paper makes the following contributions: 1) a list of features for threading programming APIs to support existing and emerging computer architectures; 2) a comparison of threading models in terms of feature support and runtime scheduling strategies; and 3) performance comparisons of OpenMP, Cilk Plus and C++11 for data and task parallelism on CPU using benchmark kernels and Rodinia [7].
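To illustrate the point that the same parallelism pattern can be realized through different interfaces and runtime systems, the following sketch (our illustration, not code from the paper; function names such as add_openmp are ours) writes the same vector addition with two of the compared APIs: an OpenMP worksharing loop, where the runtime partitions and schedules the iterations, and C++11 std::thread, where the programmer partitions the work and joins the threads manually. Cilk Plus would express the same loop with its cilk_for keyword and leave scheduling to its work-stealing runtime.

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // OpenMP: the runtime partitions the iterations among a team of
    // threads and schedules them (e.g., statically or dynamically).
    void add_openmp(const std::vector<double>& a,
                    const std::vector<double>& b,
                    std::vector<double>& c) {
        #pragma omp parallel for
        for (long i = 0; i < (long)c.size(); ++i)
            c[i] = a[i] + b[i];
    }

    // C++11 threads: the programmer partitions the work, creates the
    // threads, and joins them; there is no built-in load balancing.
    void add_cxx11(const std::vector<double>& a,
                   const std::vector<double>& b,
                   std::vector<double>& c, unsigned nthreads) {
        std::vector<std::thread> workers;
        std::size_t chunk = (c.size() + nthreads - 1) / nthreads;
        for (unsigned t = 0; t < nthreads; ++t) {
            std::size_t lo = t * chunk;
            std::size_t hi = std::min(c.size(), lo + chunk);
            workers.emplace_back([&, lo, hi] {
                for (std::size_t i = lo; i < hi; ++i)
                    c[i] = a[i] + b[i];
            });
        }
        for (auto& w : workers) w.join();  // join synchronization
    }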
The rest of the paper is organized as follows. In Section II, the features of threading programming APIs are summarized. In Section III, comparisons of their interfaces and runtime systems are presented. Section IV presents performance comparisons, Section V discusses related work, and Section VI contains our conclusion.

II. FEATURES FOR THREADING PROGRAMMING APIS

The evolution of programming models has mostly been driven by advances in computer system architectures and by new application requirements. Computing nodes of existing and emerging computer systems may be comprised of many identical computing cores in multiple coherency domains, or they may be heterogeneous and contain specialized cores that perform a restricted set of operations with high efficiency. Deeper memory hierarchies and more challenging NUMA effects for performance optimization are seen in emerging computer systems. Further, explicit data movement is necessary for nodes that present distinct memory address spaces to different computing elements, as demonstrated in today's accelerator architectures.

To facilitate programming such a diversified range of computer systems, an ideal API must be expressive for the required levels of concurrency and for the many other unique features of the hardware, while permitting an efficient implementation by system software. In this section, we categorize the features of threading model APIs; the detailed comparison of the threading programming APIs based on these categories is discussed in the next section.

Parallelism: A programming model provides APIs for specifying different kinds of parallelism that either map to parallel architectures or facilitate the expression of parallel algorithms. We consider four commonly used parallelism patterns in HPC: 1) data parallelism, which maps well to manycore accelerator and vector architectures, depending on the granularity of the data-parallel unit; 2) asynchronous task parallelism, which can effectively express certain parallel algorithms, e.g., irregular and recursive parallelism; 3) data/event-driven computation, which captures computations characterized as data flow; and 4) offloading parallelism between host and device, which is used for accelerator-based systems.

Abstraction of memory hierarchy and programming for data locality: Portable optimization of parallel applications on shared memory NUMA machines has been known to be challenging. Recent architectures that exhibit deeper memory hierarchies and possibly distinct memory/address spaces make portable memory optimization even harder. A programming model helps in this respect by providing: 1) API abstractions of the memory hierarchy and core architecture, e.g., an explicit notion of NUMA memory regions or high bandwidth memory; 2) language constructs to support the binding of computation and data, so as to influence runtime execution under the principle of locality; 3) means to specify explicit data mapping and movement for sharing data between different memory and address spaces; and 4) interfaces for specifying the memory consistency model.

Synchronizations: A programming model often provides constructs for coordinating parallel work units. Commonly used constructs include barrier, reduction and join operations for synchronizing a group of threads or tasks; point-to-point signal/wait operations to create pipeline or workflow executions of parallel tasks; and phase-based synchronization for streaming computations.

Mutual exclusion: Interfaces such as locks are still widely used for protecting data accesses. A model provides language constructs for creating the exclusive data access mechanisms needed for parallel programming, and may also define semantics for mutual exclusion that reduce the opportunities for introducing deadlocks.

Error handling, tools support, and language binding: Error handling provides support for dealing with faults from user programs or from the system, to improve system and application resilience. Support for tools, e.g., performance profiling and debugging, and bindings for widely used base languages also affect the usability and adoption of a programming model.
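To make the four parallelism categories and the synchronization constructs concrete, the sketch below (our illustration, not taken from the paper; the function and variable names are ours) expresses each pattern with OpenMP constructs; it assumes an OpenMP 4.5 compiler, e.g., g++ -fopenmp.

    #include <cstdio>

    void patterns(int n, float* x, float* y) {
        // 1) Data parallelism: iterations spread across a thread team.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i) y[i] += x[i];

        // 2) Asynchronous task parallelism and 3) data/event-driven
        // computation: depend clauses order the producer and consumer.
        #pragma omp parallel
        #pragma omp single
        {
            float r;
            #pragma omp task shared(r) depend(out: r)
            r = x[0] + y[0];                       // producer task
            #pragma omp task shared(r) depend(in: r)
            std::printf("consumer sees %f\n", r);  // runs after producer
            #pragma omp taskwait                   // join the child tasks
        }

        // 4) Offloading: map data to a device and run the loop there
        // (falls back to the host when no accelerator is present).
        #pragma omp target teams distribute parallel for \
            map(to: x[0:n]) map(tofrom: y[0:n])
        for (int i = 0; i < n; ++i) y[i] *= 2.0f;

        // Group synchronization: a reduction combines per-thread sums.
        double sum = 0.0;
        #pragma omp parallel for reduction(+: sum)
        for (int i = 0; i < n; ++i) sum += y[i];
        std::printf("sum = %f\n", sum);
    }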
III. COMPARISON OF INTERFACES AND RUNTIME SYSTEMS

CUDA and OpenCL are low-level programming models mainly used for manycore accelerators. Intel TBB and Cilk Plus are task-based parallel programming models used on multi-core and shared memory systems. OpenMP is a more comprehensive standard that supports a wide variety of the features we listed.

A. Language Features and Interfaces

The full comparisons of language features and interfaces are summarized in Tables I, II and III. For the parallelism support listed in Table I, asynchronous tasking or threading can be viewed as the foundational parallel mechanism supported by all of the models. Overall, OpenMP provides the most comprehensive set of features, supporting all four parallelism patterns. For accelerators (NVIDIA GPUs and Intel Xeon Phis), both OpenACC and OpenMP provide high-level offloading constructs and implementations, though OpenACC supports mainly offloading. Only OpenMP and Cilk Plus provide constructs for vectorization (OpenMP's simd directives and Cilk Plus' array notations and elemental functions). For data/event-driven parallelism, C++'s std::future, OpenMP's depend clause, and OpenACC's wait directive all let the user specify asynchronous task dependencies to achieve this kind of parallelism. Other approaches, including CUDA's streams, OpenCL's pipes, and TBB's pipeline, provide pipelining mechanisms for asynchronous execution with dependencies between CPU tasks.

For supporting the abstraction of memory systems and data locality programming, the comparison is listed in Table II. Only OpenMP provides constructs for programmers to specify the memory hierarchy (as places) and the binding of computation with data (the proc_bind clause). Programming models that support manycore architectures provide interfaces for organizing a large number of threads (on the order of thousands) into a two-level thread hierarchy, e.g., OpenMP's teams of threads, OpenACC's gang/worker/vector clauses, and CUDA's blocks/threads.
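Two of the interface-level features compared in Table I can be sketched briefly (our example, not from the paper): OpenMP's simd directive for vectorization and C++11's std::future for expressing a dependency between asynchronous tasks.

    #include <cstdio>
    #include <future>

    // Vectorization: the simd directive asks the compiler to vectorize
    // the loop; Cilk Plus would use array notation, e.g. y[:] += a*x[:].
    void saxpy(int n, float a, const float* x, float* y) {
        #pragma omp simd
        for (int i = 0; i < n; ++i) y[i] += a * x[i];
    }

    // Data/event-driven dependency: get() blocks until the asynchronous
    // producer has finished, ordering the consumer after it.
    int main() {
        std::future<int> f =
            std::async(std::launch::async, [] { return 42; });
        std::printf("consumer received %d\n", f.get());
        return 0;
    }

In both cases the dependency or vectorization is expressed at the interface level, while the scheduling of the ready work is left to the respective compiler and runtime.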
