A Practice of Cloud Computing for HPC & Other Applications
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Univa Grid Engine 8.0 Superior Workload Management for an Optimized Data Center
Overview: Grid Engine is an industry-leading distributed resource management (DRM) system used by hundreds of companies worldwide to build large compute cluster infrastructures for processing massive volumes of workload. A highly scalable and reliable DRM system, Grid Engine enables companies to produce higher-quality products, reduce time to market, and streamline and simplify the computing environment.

Why Univa Grid Engine? Univa Grid Engine 8.0 is our initial release of Grid Engine and is quality controlled and fully supported by Univa. This release enhances the Grid Engine open source core platform with advanced features and integrations into enterprise-ready systems, as well as cloud management solutions that allow organizations to implement the most scalable and highest-performing distributed computing solution on the market. Univa Grid Engine 8.0 marks the start of a product evolution delivering critical functionality such as improved third-party application integration and advanced capabilities for license and policy management.

Protect Your Investment: Univa Grid Engine 8.0 makes it easy to create a cluster of thousands of machines by harnessing the combined computing power of desktops, servers and clouds in a simple, easy-to-administer environment.
A Simulated Grid Engine Environment for Large-Scale Supercomputers
Ito et al. BMC Bioinformatics 2019, 20(Suppl 16):591. https://doi.org/10.1186/s12859-019-3085-x

Virtual Grid Engine: a simulated grid engine environment for large-scale supercomputers
Satoshi Ito1*, Masaaki Yadome1, Tatsuo Nishiki2, Shigeru Ishiduki2, Hikaru Inoue2, Rui Yamaguchi1 and Satoru Miyano1
From the IEEE International Conference on Bioinformatics and Biomedicine (2018), Madrid, Spain, 3-6 December 2018

Abstract

Background: Supercomputers have become indispensable infrastructures in science and industry. In particular, most state-of-the-art scientific results rely on massively parallel supercomputers ranked in the TOP500. However, their use is still limited in the bioinformatics field because the asynchronous parallel processing service of Grid Engine is not provided on them. To encourage the use of massively parallel supercomputers in bioinformatics, we developed middleware called Virtual Grid Engine (VGE), which enables software pipelines to automatically perform their tasks as MPI programs.

Result: We conducted basic tests to measure the time VGE requires to assign jobs to workers. The results showed that the overhead of the employed algorithm was 246 microseconds and that our software can manage thousands of jobs smoothly on the K computer. We also ran a practical test in the bioinformatics field, consisting of two tasks: splitting input FASTQ data and aligning it with BWA. 25,055 nodes (2,000,440 cores) were used for this calculation, which completed in three hours.

Conclusion: We identified four important requirements for this kind of software: a non-privileged server program, multiple job handling, dependency control, and usability. We carefully designed and checked all requirements.
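The middleware described here revolves around a master rank that hands pipeline tasks to idle worker ranks over MPI and collects their results. The sketch below is a minimal, hypothetical version of that master-worker pattern using mpi4py; it is not the published VGE code, and the task list, tags, and commands are invented for illustration.

```python
# Minimal master-worker task dispatcher over MPI (illustrative sketch only,
# not the actual Virtual Grid Engine implementation). Requires mpi4py.
from mpi4py import MPI
import subprocess

TAG_TASK, TAG_DONE, TAG_STOP = 1, 2, 3

def master(comm, tasks):
    """Hand shell-command tasks to idle workers until all tasks are done."""
    status = MPI.Status()
    n_workers = comm.Get_size() - 1
    pending = list(tasks)
    active = 0
    # Seed every worker with one task (or stop it if nothing is left).
    for rank in range(1, n_workers + 1):
        if pending:
            comm.send(pending.pop(0), dest=rank, tag=TAG_TASK)
            active += 1
        else:
            comm.send(None, dest=rank, tag=TAG_STOP)
    # Refill workers as results come back.
    while active:
        result = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status)
        worker_rank = status.Get_source()
        print(f"worker {worker_rank} finished: {result}")
        if pending:
            comm.send(pending.pop(0), dest=worker_rank, tag=TAG_TASK)
        else:
            comm.send(None, dest=worker_rank, tag=TAG_STOP)
            active -= 1

def worker(comm):
    """Run received shell commands until told to stop."""
    status = MPI.Status()
    while True:
        cmd = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        rc = subprocess.run(cmd, shell=True).returncode
        comm.send((cmd, rc), dest=0, tag=TAG_DONE)

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:
        # Hypothetical task list standing in for pipeline stages such as
        # splitting FASTQ files or aligning each chunk with BWA.
        master(comm, [f"echo processing chunk {i}" for i in range(100)])
    else:
        worker(comm)
```

Launched with something like `mpiexec -n 1024 python dispatcher.py`, rank 0 plays the master and every other rank executes the shell commands it receives, which is the general shape of running an asynchronous job queue inside a single MPI allocation.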
Eliminate Testing Bottlenecks - Develop and Test Code at Scale on Virtualized Production Systems in the Archanan Development Cloud
Archanan emulates complex high-performance and distributed clusters down to the network component level, with accurate run-time estimates of developers' codes.

Supercomputers and other forms of large-scale, distributed computing systems are complex and expensive systems that enable discovery and insight only when they are running as production machines, executing proven applications at scale. Yet researchers and developers targeting their codes for particular systems, such as MareNostrum4 at Barcelona Supercomputing Center (BSC), must be able to develop or port, optimize, and test their software, eventually at scale, for the desired architecture before consuming production computing time.

Supercomputing center directors, heads of research departments, and IT managers hosting these clusters have to balance the needs of the users' production runs against the developers' requests for time on the machine. The result is that developers wait in queues for the supercomputer. Some coding issues only appear on the full complement of cores for which the code is intended, so wait times grow as core-count needs increase. Delaying important development extends time to insight and discovery, impacting the competitive position of researchers, supercomputing centers, research departments, and companies.

What if we could use the cloud to virtualize and emulate an organization's production system, giving every developer on the team their own personal integrated development environment? The Archanan Development Cloud uses the power of emulation to recreate an organization's production system in the cloud, eliminating test queues and enabling programmers to develop their code in real time, at scale.
HEPiX Fall 2013 Workshop, Grid Engine: One Roadmap
Cameron Brunner, Director of Engineering, brunner@univa.com

Agenda: Grid Engine History • Univa Acquisition of Grid Engine Assets • What Does Univa Offer Our Customers • Grid Engine 8.0 and 8.1 • Grid Engine 8.2 Features • Case Studies

20 Years of History:
• 1992: Initial developments at Genias in Regensburg
• 1993: First customer shipment (as CODINE)
• 1996: Addition of the GRD policy module; collaboration with Raytheon & Instrumental Inc.
• 1999: Merger with Chord Systems into Gridware
• 2000: Acquisition by Sun; re-launch as Sun Grid Engine
• 2001: Open sourcing
• Until 2010: Massive growth in adoption (>10,000 sites)
• 2010: Acquisition by Oracle; the open source project gets orphaned
• 2011: Key engineering team joins Univa
• 2013: Grid Engine assets join the team at Univa

Univa Acquisition of GE Assets:
• Univa acquires the Grid Engine software IP (copyright, trademarks)
• Univa assumes support for Oracle customers under support for Grid Engine software (version 6.2u6, u7 or u8 binaries)
• Customers can elect to easily and economically upgrade to Univa Grid Engine; services are available to simplify the upgrade process

What does this mean?
• Univa is the home of Grid Engine
• Re-unites the intellectual property with the original developers (Fritz Ferstl and his original team)
• Consolidation eliminates confusion: there is one Grid Engine software and one roadmap
Grid Engine User's Guide
Univa Corporation, Grid Engine Documentation. Grid Engine User's Guide. Author: Univa Engineering. Version 8.5.3, July 19, 2017. Copyright ©2012–2017 Univa Corporation. All rights reserved.

Contents
1 Overview of Basic User Tasks
2 A Simple Workflow Example
3 Displaying Univa Grid Engine Status Information
  3.1 Cluster Overview
  3.2 Hosts and Queues
  3.3 Requestable Resources
  3.4 User Access Permissions and Affiliations
4 Submitting Batch Jobs
  4.1 What is a Batch Job?
  4.2 How to Submit a Batch Job
    4.2.1 Example 1: A Simple Batch Job
    4.2.2 Example 2: An Advanced Batch Job
    4.2.3 Example 3: Another Advanced Batch Job
    4.2.4 Example 4: A Simple Binary Job
  4.3 Specifying Requirements
    4.3.1 Request Files
    4.3.2 Requests in the Job Script
5 Using Job Classes to Prepare Templates for Jobs
  5.1 Examples Motivating the Use of Job Classes
  5.2 Defining Job Classes
    5.2.1 Attributes describing a Job Class
    5.2.2 Example 1: Job Classes - Identity, Ownership, Access
    5.2.3 Attributes to Form a Job Template
    5.2.4 Example 2: Job Classes - Job Template
    5.2.5 Access Specifiers to Allow Deviation
    5.2.6 Example 3: Job Classes - Access Specifiers
    5.2.7 Different Variants of the same Job Class
    5.2.8 Example 4: Job Classes - Multiple Variants
    5.2.9 Enforcing Cluster Wide Requests with the Template Job Class
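The batch-job examples listed under section 4.2 boil down to handing a job script to qsub. The snippet below sketches one hedged way to do that from Python; the script name, job name, and resource values are placeholders, and resource names such as h_vmem depend on how a given cluster is configured, so treat it as a minimal illustration rather than the guide's own example.

```python
# Illustrative sketch: submit a simple Grid Engine batch job via qsub.
# Script path, job name, and resource values below are hypothetical.
import subprocess

def submit_batch_job(script="myjob.sh", name="demo", runtime="0:10:0", mem="1G"):
    """Submit a job script with qsub and return qsub's confirmation line."""
    cmd = [
        "qsub",
        "-N", name,                              # job name shown by qstat
        "-cwd",                                  # run in the current working directory
        "-o", "job.out",                         # stdout file
        "-e", "job.err",                         # stderr file
        "-l", f"h_rt={runtime},h_vmem={mem}",    # requested run time and memory
        script,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    # Typically prints something like: Your job 12345 ("demo") has been submitted
    print(submit_batch_job())
```

The line qsub prints back contains the job ID, which can then be passed to qstat or qdel for monitoring or cancellation.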
Integrating Grid Computing Technology into Multimedia Application
International Journal of Computer Trends and Technology (IJCTT), Volume 23, Number 3, May 2015

Integrating Grid Computing Technology into Multimedia Application
Rafid Al-Khannak1, Abdulkareem A. Kadhim2, Nada S. Ibrahim3
1Computing Department, Buckingham New University, UK
2Networks Engineering Department, Al-Nahrain University, Iraq
3Information and Communications Engineering Department, Al-Nahrain University, Iraq

Abstract: Systems nowadays require huge databases and massive computational power, which cannot be obtained with the available computation technology or computers. Execution time and large-scale data processing are problems usually encountered in highly demanding applications such as multimedia processing. Grid computing technology offers a possible solution for computationally intensive applications. The Canny edge detection algorithm is known as the optimum algorithm for edge detection; it performs edge detection in five distinct and computationally expensive stages, which consume significant processing time. In this work, grid computing is used to perform Canny edge detection as an application of grid computing in multimedia processing.

Grid computing creates the illusion of a simple yet robust, self-managing computer that is virtual and built out of a large collection of connected heterogeneous systems sharing dissimilar sets of resources [5,6]. The grid portal is the means through which end users access these resources; it represents a method for accessing grid resources in a regular manner and supports users with facilities such as job submission, job management, and job monitoring [7,8]. Grid computing systems cover two main types, computational grids and data grids. The computational grid focuses on the optimization of the...
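As a concrete illustration of the idea, the sketch below splits an image into horizontal strips and runs Canny edge detection on each strip in parallel. Local processes stand in for grid workers here, and the file name, strip count, and thresholds are placeholder values rather than the paper's actual setup.

```python
# Illustrative sketch: Canny edge detection on image strips in parallel.
# Local multiprocessing stands in for distribution across grid nodes.
import cv2
import numpy as np
from multiprocessing import Pool

def detect_edges(strip):
    """Smooth a strip, then run Canny with hypothetical thresholds."""
    blurred = cv2.GaussianBlur(strip, (5, 5), 1.4)
    return cv2.Canny(blurred, 50, 150)

def parallel_canny(image, n_strips=4):
    # Split into horizontal strips with a small overlap so edges near the
    # cut lines are not lost; trim the overlap again after detection.
    overlap = 8
    height = image.shape[0]
    bounds = np.linspace(0, height, n_strips + 1, dtype=int)
    strips = [image[max(a - overlap, 0):min(b + overlap, height)]
              for a, b in zip(bounds[:-1], bounds[1:])]
    with Pool(n_strips) as pool:
        edges = pool.map(detect_edges, strips)
    trimmed = []
    for i, (a, b) in enumerate(zip(bounds[:-1], bounds[1:])):
        top = a - max(a - overlap, 0)          # rows added above this strip
        trimmed.append(edges[i][top:top + (b - a)])
    return np.vstack(trimmed)

if __name__ == "__main__":
    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    cv2.imwrite("edges.png", parallel_canny(img))
```

In a real grid deployment each strip (or each Canny stage) would be shipped to a different node through the portal's job submission facility instead of a local process pool.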
Smart Orchestration Speeds HPC Workflows in the Cloud
Technology Spotlight, sponsored by Amazon Web Services. By Steve Conway, Alex Norton, Earl Joseph and Bob Sorensen, September 2019.

Hyperion Research Opinion: Extrapolated findings from Hyperion Research's most recent worldwide study indicate that HPC sites have rapidly increased the portion of their workloads they perform on third-party clouds. On average, they now send about 20% of all their HPC workloads to third-party clouds, double the 10% average reported in our studies less than two years ago. Cloud services providers (CSPs), and to a lesser extent HPC system vendors offering cloud services, have helped to propel this growth by adding features, functions and partners designed to make their platforms run a broader spectrum of HPC-related workflows cost- and time-effectively.

For years, an important attraction of third-party cloud platforms has been their ability to offer wide selections of hardware, software and storage resources, especially resources that users lack on premises (or lack in sufficient quantity). These selections have expanded more recently to address the growing heterogeneity of HPC workflows, especially with the rise of high performance data analysis (HPDA) and HPC-enabled artificial intelligence (AI). The trend toward increased heterogeneity is nicely described in the U.S. Department of Energy's January 2018 report, Productive Computational Science in the Era of Extreme Heterogeneity.

The growing heterogeneity and scale of major CSP offerings has increased the need for mechanisms that help HPC users deploy and manage their workloads in third-party clouds. Automating frequent and repetitive tasks is an important help but does little to manage workflows that might need to pass through a dozen or more stages from start to finish, with each stage requiring a different mix of hardware, software and storage resources.
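To make the orchestration problem concrete, the toy sketch below walks a workflow whose stages each declare a different resource mix and picks a machine shape for each one; every stage name, resource figure, and instance label is invented for the example rather than taken from the report.

```python
# Toy orchestration sketch: pick a machine profile per workflow stage.
# All stage names, resource numbers, and instance labels are hypothetical.
stages = [
    {"name": "ingest",    "cores": 4,  "mem_gb": 16,  "gpus": 0},
    {"name": "simulate",  "cores": 96, "mem_gb": 384, "gpus": 0},
    {"name": "train",     "cores": 16, "mem_gb": 128, "gpus": 4},
    {"name": "visualize", "cores": 8,  "mem_gb": 64,  "gpus": 1},
]

catalog = [  # smallest-first list of made-up instance shapes
    {"type": "small-cpu", "cores": 8,  "mem_gb": 32,  "gpus": 0},
    {"type": "large-cpu", "cores": 96, "mem_gb": 384, "gpus": 0},
    {"type": "gpu-node",  "cores": 32, "mem_gb": 256, "gpus": 4},
]

def pick_instance(stage):
    """Return the first catalog entry that satisfies the stage's requirements."""
    for inst in catalog:
        if (inst["cores"] >= stage["cores"] and inst["mem_gb"] >= stage["mem_gb"]
                and inst["gpus"] >= stage["gpus"]):
            return inst["type"]
    raise RuntimeError(f"no instance shape fits stage {stage['name']}")

for stage in stages:
    # A real orchestrator would provision, run, and tear down each stage.
    print(f"stage {stage['name']}: provision {pick_instance(stage)}, run, tear down")
```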
High Performance Computing For Dummies
Making Everything Easier! Sun and AMD Special Edition. High Performance Computing For Dummies, Sun and AMD Special Edition, by Douglas Eadline, PhD. Learn to: pick out hardware and software; find the best vendor to work with; get your people up to speed on HPC. Published by Wiley Publishing, Inc., 111 River Street, Hoboken, NJ 07030-5774. Copyright © 2009 by Wiley Publishing, Inc., Indianapolis, Indiana.
Cloud Computing Session 3 – IaaS and PaaS
Dr. Jean-Claude Franchitti, New York University, Computer Science Department, Courant Institute of Mathematical Sciences

Session Agenda
▪ Session Overview
▪ Infrastructure as a Service (Continued)
▪ Platform as a Service (PaaS)
▪ Summary and Conclusion

What is the class about?
▪ Course description and syllabus: http://www.nyu.edu/classes/jcf/CSCI-GA.3033-010/ and http://www.cs.nyu.edu/courses/spring20/CSCI-GA.3033-010/
▪ Session 3 reference material: web sites for various IaaS and PaaS providers as noted in the presentation

Icons / Metaphors: Information; Common Realization; Knowledge/Competency Pattern; Governance Alignment; Solution Approach

IaaS & Amazon IaaS Cloud
▪ IaaS Cloud and Amazon EC2 » Amazon EC2 Programming » Deconstructing Provisioning (Create a Machine) in an IaaS Cloud
▪ Understanding and Leveraging On-Demand Infrastructure » How to Preserve State Using Amazon EBS: persistent storage for data (EBS for now); persisting software/config changes by creating your own AMI » Virtualization: a key enabler for on-demand resource availability
▪ Supporting Elasticity » Elasticity Basics » How Elasticity is Supported in Amazon » Project Ideas
▪ Object-Based Cloud Storage
▪ Large File Systems Concepts
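The provisioning and EBS items above map onto a handful of API calls. Below is a minimal boto3 sketch, assuming credentials and a default region are already configured; the AMI ID, instance type, volume size, device name, and image name are placeholders, not values from the course materials.

```python
# Minimal IaaS provisioning sketch with boto3: launch an EC2 instance and
# attach an EBS volume for persistent state. AMI ID, sizes, and device name
# below are placeholders; credentials and region must already be configured.
import boto3

ec2 = boto3.client("ec2")

# 1. Create a machine from an image (the "provisioning" step).
run = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = run["Instances"][0]["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# 2. Create an EBS volume in the same availability zone and attach it,
#    so data survives instance termination (state preservation).
az = run["Instances"][0]["Placement"]["AvailabilityZone"]
vol = ec2.create_volume(AvailabilityZone=az, Size=10, VolumeType="gp3")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=instance_id,
                  Device="/dev/sdf")

# 3. Later, persist software/config changes by baking a new AMI.
image = ec2.create_image(InstanceId=instance_id, Name="my-configured-image")
print("instance:", instance_id, "volume:", vol["VolumeId"],
      "image:", image["ImageId"])
```

Elasticity builds on the same primitives: the provisioning call is simply repeated (or terminated) as load changes, usually driven by an auto scaling policy rather than by hand.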