Managing Applications and Data in Distributed Computing Infrastructures

Dedicated to my Family

List of papers

This thesis is based on the following papers, which are referred to in the text by their Roman numerals.

Project 1: Papers I and II describe work on application execution environments. Paper I presents tools for general-purpose solutions using portal technology, while Paper II addresses access to grid resources within an application-specific problem solving environment.

I Erik Elmroth, Sverker Holmgren, Jonas Lindemann, Salman Toor, and Per-Olov Östberg. Empowering a Flexible Application Portal with a SOA-based Grid Job Management Framework. In Proc. 9th Workshop on State-of-the-art in Scientific and Parallel Computing (PARA 2008), Springer Lecture Notes in Computer Science (LNCS), vols. 6126–6127.

II Mahen Jayawardena, Carl Nettelblad, Salman Toor, Per-Olov Östberg, Erik Elmroth, and Sverker Holmgren. A Grid-Enabled Problem Solving Environment for QTL Analysis in R. In Proc. 2nd International Conference on Bioinformatics and Computational Biology (BiCoB 2010), 2010. ISBN 978-1-880843-76-5.

Contributions: In this project I participated in the architecture design, the implementation of the integration components, and the design of the QTL-specific interface in LAP. I also participated in deploying the system, running the experiments, and writing the articles.

Project 2: Papers III, IV, and V describe file-oriented distributed storage solutions. Paper III focuses on the architectural design of the Chelonia system, whereas Papers IV and V address its stability, performance, and identified issues.

III Jon Kerr Nilsen, Salman Toor, Zsombor Nagy, and Bjarte Mohn. Chelonia – A Self-healing Storage Cloud. In M. Bubak, M. Turala, and K. Wiatr, editors, CGW'09 Proceedings, Krakow, 2010. ACC CYFRONET AGH. ISBN 978-83-61433-01-9.

IV Jon Kerr Nilsen, Salman Toor, Zsombor Nagy, and Alex Read. Chelonia: A Self-healing, Replicated Storage System. Journal of Physics: Conference Series, 331(6):062019, 2011.

V Jon Kerr Nilsen, Salman Toor, Zsombor Nagy, Bjarte Mohn, and Alex Read. Performance and Stability of the Chelonia Storage System. Accepted at the International Symposium on Grids and Clouds (ISGC) 2012.

Contributions: I did part of the system design and implementation, and I designed, implemented, and executed the test scenarios presented in all three articles. I was also heavily involved in the technical discussions and in writing the papers.

Project 3: Papers VI and VII discuss a database-driven approach to managing the data and analysis requirements of scientific applications. Paper VI focuses on data management, whereas Paper VII presents a solution for data analysis.

VI Salman Toor, Manivasakan Sabesan, Sverker Holmgren, and Tore Risch. A Scalable Architecture for e-Science Data Management. In Proc. 7th IEEE International Conference on e-Science, 2011. ISBN 978-1-4577-2163-2.

VII Salman Toor, Andrej Andrejev, Andreas Hellander, Sverker Holmgren, and Tore Risch. Scientific Analysis by Queries in Extended SPARQL Over a Distributed e-Science Data Store. Submitted to the International Conference for High Performance Computing, Networking, Storage and Analysis (SC 2012).

Contributions: I did the architecture design, the interface implementation, and the static partitioning of complex datatypes in Chelonia. I also participated in designing the use cases that demonstrate the system and in writing the article.

Project 4: Paper VIII also addresses a distributed storage solution.
In this paper we explore a cloud-based storage solution for scientific applications.

VIII Salman Toor, Rainer Többicke, Maitane Zotes Resines, and Sverker Holmgren. Investigating an Open Source Cloud Infrastructure for CERN-Specific Data Analysis. Accepted at the 7th IEEE International Conference on Networking, Architecture, and Storage (NAS 2012).

Contributions: I participated in enabling access from the ROOT framework to SWIFT and in deploying the prototype system. I worked on the design, implementation, and execution of the test cases presented, contributed to the technical discussions, and participated in writing the paper.

Reproduced with the permission of the publishers, presented here in a different format than in the original publications.

Contents

Part I: Introduction
1 Introduction
  1.1 Overview of Distributed Computing
    1.1.1 Communication Protocols
    1.1.2 Architectural Designs
    1.1.3 Frameworks for Distributed Computing
  1.2 Models for Scalable Distributed Computing Infrastructures
    1.2.1 Grid Computing
    1.2.2 Cloud Computing
    1.2.3 Grids vs Clouds
    1.2.4 Other Relevant Models
  1.3 Technologies for Large Scale Distributed Computing Infrastructures

Part II: Application Execution Environments
2 Application Environments for Grids
  2.1 Grid Portals
  2.2 Application Workflows
  2.3 The Job Management Component
  2.4 Thesis Contribution
    2.4.1 System Architecture

Part III: Distributed Storage Solution
3 Distributed Storage Systems
  3.1 Characteristics of Distributed Storage
  3.2 Challenges of Distributed Storage
  3.3 Thesis Contribution
    3.3.1 Chelonia Storage System
    3.3.2 Database Enabled Chelonia
    3.3.3 Cloud Based Storage Solution

Part IV: Resource Allocation in Distributed Computing Infrastructures
4 Resource Allocation in Distributed Computing Infrastructures
  4.1 Models for Resource Allocation
  4.2 Thesis Contribution

Part V: Article Summary
5 Summary of Papers in the Thesis
  5.1 Paper I
  5.2 Paper II
  5.3 Paper III
  5.4 Paper IV
  5.5 Paper V
  5.6 Paper VI
  5.7 Paper VII
  5.8 Paper VIII
6 Svensk sammanfattning (Summary in Swedish)
7 Acknowledgments
References

List of Other Publications

These publications were written during my PhD studies but are not part of the thesis. However, some of the material in publications I and II below is included in other papers in the thesis, and some of the conclusions of publication III are presented in Section 4.2 of the thesis summary.

I. Mahen Jayawardena, Salman Toor, and Sverker Holmgren. A Grid Portal for Genetic Analysis of Complex Traits. In Proc. 32nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO 2009), Volume I, Rijeka, Croatia, 2009, pp. 281–284.

II. Mahen Jayawardena, Salman Toor, and Sverker Holmgren. Computational and Visualization Tools for Genetic Analysis of Complex Traits. Technical Report 2010-001, Department of Information Technology, Uppsala University, 2010.

III. Salman Toor, Bjarte Mohn, David Cameron, and Sverker Holmgren. Case-Study for Different Models of Resource Brokering in Grid Systems. Technical Report 2010-009, Department of Information Technology, Uppsala University, 2010.

List of Presentations

The material presented in this thesis has