Supercomputer Using Cluster Computing
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
2.5 Classification of Parallel Computers
2.5.1 Granularity

In parallel computing, granularity means the amount of computation in relation to communication or synchronisation. Periods of computation are typically separated from periods of communication by synchronization events.

• fine level (same operations with different data)
  ◦ vector processors
  ◦ instruction level parallelism
  ◦ fine-grain parallelism:
    – relatively small amounts of computational work are done between communication events
    – low computation to communication ratio
    – facilitates load balancing
    – implies high communication overhead and less opportunity for performance enhancement
    – if granularity is too fine, it is possible that the overhead required for communication and synchronization between tasks takes longer than the computation itself
• operation level (different operations simultaneously)
• problem level (independent subtasks)
  ◦ coarse-grain parallelism:
    – relatively large amounts of computational work are done between communication/synchronization events
    – high computation to communication ratio
    – implies more opportunity for performance increase
    – harder to load balance efficiently

2.5.2 Hardware: Pipelining (was used in supercomputers, e.g. Cray-1)

If there are N elements to process and each element needs L clock cycles, the pipelined calculation takes L + N cycles; without pipelining it takes L * N cycles.

Example of code that pipelines well:

    do i = 1, k
      z(i) = x(i) + y(i)
    end do

Vector processors provide fast vector operations (operations on arrays). The example above is also well suited to a vector processor (vector addition), but, e.g., recursion is hard to optimise for vector processors. Example: Intel MMX – a simple vector processor.
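As an aside on the last remark, here is a small C sketch (an illustration added to this excerpt, not code from the original slides; all function and variable names are made up). It contrasts a loop whose iterations are independent, and therefore pipelines and vectorises well, with a loop-carried recurrence, where each iteration depends on the previous result.

    /* Independent iterations: each z[i] depends only on x[i] and y[i],
       so a pipelined or vector (SIMD) unit can keep many elements in flight. */
    void vector_add(int k, const double *x, const double *y, double *z) {
        for (int i = 0; i < k; i++)
            z[i] = x[i] + y[i];
    }

    /* Loop-carried recurrence: z[i] depends on z[i-1], so iterations cannot
       be overlapped and the loop is hard to pipeline or vectorise directly. */
    void running_sum(int k, const double *x, double *z) {
        z[0] = x[0];
        for (int i = 1; i < k; i++)
            z[i] = z[i - 1] + x[i];
    }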
Distributed & Cloud Computing: Lecture 1
Distributed & cloud computing: Lecture 1 Chokchai “Box” Leangsuksun SWECO Endowed Professor, Computer Science Louisiana Tech University [email protected] CTO, PB Tech International Inc. [email protected] Class Info ■" Class Hours: noon-1:50 pm T-Th ! ■" Dr. Box’s & Contact Info: ●"www.latech.edu/~box ●"Phone 318-257-3291 ●"Email: [email protected] Class Info ■" Main Text: ! ●" 1) Distributed Systems: Concepts and Design By Coulouris, Dollimore, Kindberg and Blair Edition 5, © Addison-Wesley 2012 ●" 2) related publications & online literatures Class Info ■" Objective: ! ●"the theory, design and implementation of distributed systems ●"Discussion on abstractions, Concepts and current systems/applications ●"Best practice on Research on DS & cloud computing Course Contents ! •" The Characterization of distributed computing and cloud computing. •" System Models. •" Networking and inter-process communication. •" OS supports and Virtualization. •" RAS, Performance & Reliability Modeling. Security. !! Course Contents •" Introduction to Cloud Computing ! •" Various models and applications. •" Deployment models •" Service models (SaaS, PaaS, IaaS, Xaas) •" Public Cloud: Amazon AWS •" Private Cloud: openStack. •" Cost models ( between cloud vs host your own) ! •" ! !!! Course Contents case studies! ! •" Introduction to HPC! •" Multicore & openMP! •" Manycore, GPGPU & CUDA! •" Cloud-based EKG system! •" Distributed Object & Web Services (if time allows)! •" Distributed File system (if time allows)! •" Replication & Disaster Recovery Preparedness! !!!!!! Important Dates ! ■" Apr 10: Mid Term exam ■" Apr 22: term paper & presentation due ■" May 15: Final exam Evaluations ! ■" 20% Lab Exercises/Quizzes ■" 20% Programming Assignments ■" 10% term paper ■" 20% Midterm Exam ■" 30% Final Exam Intro to Distributed Computing •" Distributed System Definitions. •" Distributed Systems Examples: –" The Internet. –" Intranets. –" Mobile Computing –" Cloud Computing •" Resource Sharing. •" The World Wide Web. •" Distributed Systems Challenges. Based on Mostafa’s lecture 10 Distributed Systems 0. -
Distributed Algorithms with Theoretic Scalability Analysis of Radial and Looped Load flows for Power Distribution Systems
Electric Power Systems Research 65 (2003) 169–177, www.elsevier.com/locate/epsr
Fangxing Li*, Robert P. Broadwater, ECE Department, Virginia Tech, Blacksburg, VA 24060, USA
Received 15 April 2002; received in revised form 14 December 2002; accepted 16 December 2002

Abstract
This paper presents distributed algorithms for both radial and looped load flows for unbalanced, multi-phase power distribution systems. The distributed algorithms are developed from tree-based sequential algorithms. Formulas of scalability for the distributed algorithms are presented. It is shown that computation time dominates communication time in the distributed computing model. This provides benefits to real-time load flow calculations, network reconfigurations, and optimization studies that rely on load flow calculations. Also, test results match the predictions of the derived formulas. This shows the formulas can be used to predict the computation time when additional processors are involved. © 2003 Elsevier Science B.V. All rights reserved.

Keywords: Distributed computing; Scalability analysis; Radial load flow; Looped load flow; Power distribution systems

1. Introduction
Parallel and distributed computing has been applied to many scientific and engineering computations such as weather forecasting and nuclear simulations [1,2]. It has also been applied to power system analysis calculations [3–14]. Also, the method presented in Ref. [10] was tested in radial distribution systems with no more than 528 buses. More recent works [11–14] presented distributed implementations for power flows or power-flow-based algorithms like optimizations and contingency analysis. These works also targeted power transmission systems.
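The excerpt does not reproduce the paper's scalability formulas themselves. As a rough illustration of the kind of model the abstract describes (a hedged sketch, not the authors' derivation; the symbols below are generic), the run time on p processors and the resulting speedup can be written as:

\[
T(p) \approx \frac{T_{\mathrm{comp}}}{p} + T_{\mathrm{comm}}(p),
\qquad
S(p) = \frac{T(1)}{T(p)} .
\]

When the computation term dominates the communication term over the processor counts of interest, S(p) stays close to p, which is why a formula fitted to measured computation time can be used to predict run time as processors are added.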
SU(3) Gluodynamics on Graphics Processing Units
Proceedings of the International School-seminar "New Physics and Quantum Chromodynamics at External Conditions", pp. 29–33, 3-6 May 2011, Dnipropetrovsk, Ukraine
V. I. Demchik, Dnipropetrovsk National University, Dnipropetrovsk, Ukraine

SU(3) gluodynamics is implemented on graphics processing units (GPU) with the open cross-platform computational language OpenCL. The architecture of the first open computer cluster for high performance computing on graphics processing units (HGPU), with peak performance over 12 TFlops, is described.

1 Introduction
Most of the modern achievements in science and technology are closely related to the use of high performance computing. The ever-increasing performance requirements of computing systems cause high rates of their development. Every year, computing power that completely covers the overall performance of all previously existing high-end systems is introduced, according to the popular TOP-500 list of the most powerful (non-distributed) computer systems in the world [1]. The obvious desire to reduce the cost of acquiring and maintaining computer systems, together with the growing demands on their performance, shifts supercomputing architecture in the direction of heterogeneous systems. This kind of architecture contains special computational accelerators in addition to traditional central processing units (CPU). One such accelerator is the graphics processing unit (GPU), which is traditionally designed and used primarily for a narrow task: visualization of 2D and 3D scenes in games and applications. The peak performance of modern high-end GPUs (AMD Radeon HD 6970, AMD Radeon HD 5870, nVidia GeForce GTX 580) is about 20 times higher than the peak performance of comparable CPUs (see Fig.
Grid Computing: What Is It, and Why Do I Care?*
Ken MacInnis <[email protected]>
* Or, "Mi caja es su caja!" (roughly, "my box is your box")

Outline: Introduction and Motivation; Examples; Architecture, Components, Tools; Lessons Learned and The Future; Questions?

What is "grid computing"? Many different definitions:
• Utility computing: cycles for sale
• Distributed computing: distributed.net RC5, SETI@Home
• High-performance resource sharing: clusters, storage, visualization, networking
"We will probably see the spread of 'computer utilities', which, like present electric and telephone utilities, will service individual homes and offices across the country." Len Kleinrock (1969)
The word "grid" doesn't equal Grid Computing: Sun Grid Engine is a mere scheduler!

Better definitions:
• Common protocols allowing large problems to be solved in a distributed multi-resource multi-user environment.
• "A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities." Kesselman & Foster (1998)
• "…coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations." Kesselman, Foster, Tuecke (2000)

New Challenges for Computing
• Grid computing evolved out of a need to share resources
• Flexible, ever-changing "virtual organizations": high-energy physics, astronomy, more
• Differing site policies with common needs
• Disparate computing needs

(c) Ken MacInnis 2004
Improving Tasks Throughput on Accelerators Using OpenCL Command Concurrency∗
A.J. Lázaro-Muñoz (1), J.M. González-Linares (1), J. Gómez-Luna (2), and N. Guil (1)
(1) Dep. of Computer Architecture, University of Málaga, Spain, [email protected]
(2) Dep. of Computer Architecture and Electronics, University of Córdoba, Spain

Abstract
A heterogeneous architecture composed of a host and an accelerator must frequently deal with situations where several independent tasks are available to be offloaded onto the accelerator. These tasks can be generated by concurrent applications executing on the host or, in case the host is a node of a computer cluster, by applications running on other cluster nodes that are willing to offload tasks to the accelerator connected to the host. In this work we show that a runtime scheduler that selects the best execution order of a group of tasks on the accelerator can significantly reduce the total execution time of the tasks and, consequently, increase the accelerator's utilization. Our solution is based on a temporal execution model that is able to predict with high accuracy the execution time of a set of concurrent tasks launched on the accelerator. The execution model has been validated on AMD, NVIDIA, and Xeon Phi devices using synthetic benchmarks. Moreover, employing the temporal execution model, a heuristic is proposed which is able to establish a near-optimal task execution ordering that significantly reduces the total execution time, including data transfers. The heuristic has been evaluated with five different benchmarks composed of dominant-kernel and dominant-transfer real tasks. Experiments indicate the heuristic is always able to find an ordering with a better execution time than the average over every possible execution order and, most times, it achieves a near-optimal ordering (very close to the execution time of the best execution order) with a negligible overhead.
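The paper's scheduler itself is not reproduced in this excerpt, but the mechanism it builds on, submitting independent tasks through separate OpenCL command queues so that one task's transfers can overlap another task's kernel execution, can be sketched roughly as follows. The helper signature, kernel argument layout, and the assumption that buffers, kernels and queues are created elsewhere are illustrative, not taken from the paper; error checking is omitted.

    #include <CL/cl.h>
    #include <stddef.h>

    /* Sketch: enqueue two independent tasks on two in-order command queues of
     * the same device. Commands within a queue run in order, but commands in
     * different queues may run concurrently, so task B's host-to-device
     * transfer can overlap task A's kernel (subject to driver/hardware support). */
    void launch_two_tasks(cl_command_queue qA, cl_command_queue qB,
                          cl_kernel kernA, cl_kernel kernB,
                          cl_mem inA, cl_mem outA, cl_mem inB, cl_mem outB,
                          const void *hA, void *rA, size_t bytesA, size_t workA,
                          const void *hB, void *rB, size_t bytesB, size_t workB)
    {
        /* Task A: transfer in, run kernel, transfer out (all non-blocking). */
        clEnqueueWriteBuffer(qA, inA, CL_FALSE, 0, bytesA, hA, 0, NULL, NULL);
        clSetKernelArg(kernA, 0, sizeof(cl_mem), &inA);
        clSetKernelArg(kernA, 1, sizeof(cl_mem), &outA);
        clEnqueueNDRangeKernel(qA, kernA, 1, NULL, &workA, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(qA, outA, CL_FALSE, 0, bytesA, rA, 0, NULL, NULL);
        clFlush(qA);                   /* start submitting A's commands */

        /* Task B: same pattern on its own queue; may overlap with task A. */
        clEnqueueWriteBuffer(qB, inB, CL_FALSE, 0, bytesB, hB, 0, NULL, NULL);
        clSetKernelArg(kernB, 0, sizeof(cl_mem), &inB);
        clSetKernelArg(kernB, 1, sizeof(cl_mem), &outB);
        clEnqueueNDRangeKernel(qB, kernB, 1, NULL, &workB, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(qB, outB, CL_FALSE, 0, bytesB, rB, 0, NULL, NULL);
        clFlush(qB);

        clFinish(qA);                  /* wait for both tasks to complete */
        clFinish(qB);
    }

The ordering heuristic described in the abstract then decides which task to submit first, using predicted transfer and kernel times, so that as much transfer time as possible is hidden behind kernel execution.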
Computing at Massive Scale: Scalability and Dependability Challenges
This is a repository copy of "Computing at massive scale: Scalability and dependability challenges".
White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/105671/
Version: Accepted Version

Proceedings paper: Yang, R and Xu, J (orcid.org/0000-0002-4598-167X) (2016) Computing at massive scale: Scalability and dependability challenges. In: Proceedings - 2016 IEEE Symposium on Service-Oriented System Engineering (SOSE 2016), 29 Mar - 02 Apr 2016, Oxford, United Kingdom. IEEE, pp. 386-397. ISBN 9781509022533. https://doi.org/10.1109/SOSE.2016.73

(c) 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Lecture 18: Parallel and Distributed Systems
• Classification of parallel/distributed architectures
• SMPs
• Distributed systems
• Clusters

What is a distributed system?
• From various textbooks:
  ◦ "A distributed system is a collection of independent computers that appear to the users of the system as a single computer."
  ◦ "A distributed system consists of a collection of autonomous computers linked to a computer network and equipped with distributed system software."
  ◦ "A distributed system is a collection of processors that do not share memory or a clock."
  ◦ "Distributed systems is a term used to define a wide range of computer systems, from weakly-coupled systems such as wide area networks to very strongly coupled systems such as multiprocessor systems."

What is a distributed system? (cont.)
• A distributed system is a set of physically separate processors connected by one or more communication links
  (figure: processors P1-P5 connected by a network)
• Is every system with >2 computers a distributed system??
  ◦ Email, ftp, telnet, world-wide-web
  ◦ Network printer access, network file access, network file backup
  ◦ We don't usually consider these to be distributed systems…

What is a distributed system? (cont.)
• Michael Flynn (1966)
  ◦ SISD – single instruction, single data
  ◦ SIMD – single instruction, multiple data
  ◦ MISD – multiple instruction, single data
  ◦ MIMD – multiple instruction, multiple data
• More recent (Stallings, 1993)

MIMD Classification
• Classification of the MIMD architectures: parallel and distributed; tightly coupled vs. loosely coupled
• Classification of the OS of MIMD computers
State-Of-The-Art in Parallel Computing with R
Technical Report Number 47, 2009, Department of Statistics, University of Munich, http://www.stat.uni-muenchen.de

Markus Schmidberger (Ludwig-Maximilians-Universität München, Germany), Martin Morgan (Fred Hutchinson Cancer Research Center, WA, USA), Dirk Eddelbuettel (Debian Project, Chicago, IL, USA), Hao Yu (University of Western Ontario, ON, Canada), Luke Tierney (University of Iowa, IA, USA), Ulrich Mansmann (Ludwig-Maximilians-Universität München, Germany)

Abstract
R is a mature open-source programming language for statistical computing and graphics. Many areas of statistical research are experiencing rapid growth in the size of data sets. Methodological advances drive increased use of simulations. A common approach is to use parallel computing. This paper presents an overview of techniques for parallel computing with R on computer clusters, on multi-core systems, and in grid computing. It reviews sixteen different packages, comparing them on their state of development, the parallel technology used, as well as on usability, acceptance, and performance. Two packages (snow, Rmpi) stand out as particularly useful for general use on computer clusters. Packages for grid computing are still in development, with only one package currently available to the end user. For multi-core systems four different packages exist, but a number of issues pose challenges to early adopters. The paper concludes with ideas for further developments in high performance computing with R. Example code is available in the appendix.

Keywords: R, high performance computing, parallel computing, computer cluster, multi-core systems, grid computing, benchmark.
Hybrid Computing – Co-Processors/Accelerators Power-Aware Computing – Performance of Applications Kernels
C-DAC Four Days Technology Workshop on Hybrid Computing – Co-Processors/Accelerators, Power-aware Computing – Performance of Application Kernels, hyPACK-2013 (Mode-1: Multi-Core)
Lecture topic: Multi-Core Processors – Algorithms & Applications Overview, Part II
Venue: CMSD, UoHYD; Date: October 15-18, 2013

Killer Apps of Tomorrow
Application-driven architectures: analyzing tera-scale workloads
• Workload convergence: the basic algorithms shared by these high-end workloads
• Platform implications: how workload analysis guides future architectures
• Programmer productivity: optimized architectures will ease the development of software
• Call to action: benchmark suites in critical need of redress

Emerging "Killer Apps of Tomorrow": Parallel Algorithms Design
• Static and dynamic load balancing
• Mapping for load balancing
• Minimizing interaction overheads in parallel algorithm design
• Data sharing overheads

General techniques for choosing the right parallel algorithm:
• Maximize data locality
• Minimize volume of data exchanged
• Minimize frequency of interactions
• Overlap computations with interactions (see the C/MPI sketch below)
• Data replication
• Minimize contention and hot spots
• Use highly optimized collective interaction operations
• Collective data transfers and computations
• Maximize concurrency
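As a small illustration of the "overlap computations with interactions" technique named above (a sketch added here, not code from the hyPACK material; the update rule and neighbour handling are illustrative assumptions, and edge ranks would pass MPI_PROC_NULL), the following C/MPI fragment posts non-blocking halo exchanges, updates the interior points that do not need the halo while the messages are in flight, and only then completes the boundary points.

    #include <mpi.h>

    /* Exchange one halo value with each neighbour while updating the interior. */
    void exchange_and_update(double *u, double *unew, int n,
                             int left, int right, MPI_Comm comm)
    {
        MPI_Request reqs[4];
        double halo_left, halo_right;

        /* Post non-blocking halo exchange with neighbouring ranks. */
        MPI_Irecv(&halo_left,  1, MPI_DOUBLE, left,  0, comm, &reqs[0]);
        MPI_Irecv(&halo_right, 1, MPI_DOUBLE, right, 1, comm, &reqs[1]);
        MPI_Isend(&u[0],       1, MPI_DOUBLE, left,  1, comm, &reqs[2]);
        MPI_Isend(&u[n - 1],   1, MPI_DOUBLE, right, 0, comm, &reqs[3]);

        /* Interior update proceeds while the messages are in flight. */
        for (int i = 1; i < n - 1; i++)
            unew[i] = 0.5 * (u[i - 1] + u[i + 1]);

        /* Wait for the halo values, then update the two boundary points. */
        MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
        unew[0]     = 0.5 * (halo_left + u[1]);
        unew[n - 1] = 0.5 * (u[n - 2] + halo_right);
    }

The design point is simply that the communication latency is hidden behind useful work: only the two boundary updates actually wait on the network.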
Cost Model: Work, Span and Parallelism
CSE 539, 01/15/2015, Lecture 2. Scribe: Angelina Lee

Outline of this lecture:
1. Overview of Cilk
2. The dag computation model
3. Performance measures
4. A simple greedy scheduler

1 Overview of Cilk

A brief history of Cilk technology
In this class, we will overview various parallel programming technologies that have been developed. One of the technologies we will examine closely is Cilk, a C/C++ based concurrency platform. Before we overview the development and implementation of Cilk, we shall first review a brief history of Cilk technology to account for where the major concepts originate. Cilk technology has developed and evolved over more than 15 years since its origin at MIT. The text under this subheading is partially abstracted from the "Cilk" entry in the Encyclopedia of Distributed Computing [14] with the author's consent. I invite interested readers to go through the original entry for a more complete review of the history.

Cilk (pronounced "silk") is a linguistic and runtime technology for algorithmic multithreaded programming originally developed at MIT. The philosophy behind Cilk is that a programmer should concentrate on structuring her or his program to expose parallelism and exploit locality, leaving Cilk's runtime system with the responsibility of scheduling the computation to run efficiently on a given platform. The Cilk runtime system takes care of details like load balancing, synchronization, and communication protocols. Cilk is algorithmic in that the runtime system guarantees efficient and predictable performance. Important milestones in Cilk technology include the original Cilk-1 [2, 1, 11], Cilk-5 [7, 18, 5, 17], and the commercial Cilk++ [15] by Cilk Arts.
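The excerpt ends before the performance measures are defined. For reference, the standard definitions and the greedy-scheduler bound that the outline refers to are usually stated as follows (standard results, not quoted from these lecture notes):

\[
\text{work} = T_1, \qquad \text{span} = T_{\infty}, \qquad \text{parallelism} = \frac{T_1}{T_{\infty}},
\]
\[
T_P \ge \max\!\left(\frac{T_1}{P},\; T_{\infty}\right),
\qquad
T_P \le \frac{T_1}{P} + T_{\infty} \quad \text{for any greedy scheduler}.
\]

For example, a computation with T_1 = 10^6 and T_infinity = 10^3 has parallelism 1000; on P = 100 processors a greedy scheduler finishes in at most 10^4 + 10^3 = 11,000 steps, a speedup of roughly 91 out of an ideal 100.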
Difference Between Grid Computing Vs. Distributed Computing
Definition of Distributed Computing
Distributed computing is an environment in which a group of independent and geographically dispersed computer systems take part in solving a complex problem, each by solving a part of the solution and then combining the results from all computers. These systems are loosely coupled systems coordinately working toward a common goal. It can be defined as:
1. A computing system in which services are provided by a pool of computers collaborating over a network.
2. A computing environment that may involve computers of differing architectures and data representation formats that share data and system resources.
Distributed computing, or the use of a computational cluster, is defined as the application of resources from multiple computers, networked in a single environment, to a single problem at the same time, usually a scientific or technical problem that requires a great number of computer processing cycles or access to large amounts of data.

Picture: The concept of distributed computing is simple -- pull together and employ all available resources to speed up computing.

The key distinction between distributed computing and grid computing is mainly the way resources are managed. Distributed computing uses a centralized resource manager and all nodes cooperatively work together as a single unified resource or system. Grid computing utilizes a structure where each node has its own resource manager and the system does not act as a single unit.

Definition of Grid Computing
The basic idea behind grid computing is to utilize the idle CPU cycles and storage of millions of computer systems across a worldwide network so that they function as a flexible, pervasive, and inexpensively accessible pool that can be harnessed by anyone who needs it, similar to the way power companies and their users share the electrical grid.