Clusterable Task Scheduler

Masaryk University
Faculty of Informatics

Clusterable Task Scheduler

Bachelor’s Thesis

Ján Michalov

Brno, Fall 2019

This is where a copy of the official signed thesis assignment and a copy of the Statement of an Author is located in the printed version of the document.

Declaration

Hereby I declare that this paper is my original authorial work, which I have worked out on my own. All sources, references, and literature used or excerpted during elaboration of this work are properly cited and listed in complete reference to the due source.

Ján Michalov

Advisor: RNDr. Adam Rambousek, Ph.D.

Acknowledgements

I would like to sincerely thank my advisor RNDr. Adam Rambousek, Ph.D. for his guidance, patience and precious advice. I am also grateful to my consultant Bc. Matej Lazar, who directed me and helped with the design across countless hours of meetings. I wish to thank my friends and family for their support during stressful days.

Abstract

The purpose of this thesis is to create a microservice that schedules tasks executed on other devices. The tasks can have other tasks declared as dependencies, and the microservice must execute them in the correct order. Additionally, the microservice must be deployable in a clustered environment, which means ensuring data consistency and preventing duplicate execution of a task. The chosen platform for the microservice is Java.

Keywords

microservice, dependency resolution, scheduling, Java, data consistency, cluster

Contents

Introduction
1 Theoretical background
  1.1 Scheduling
  1.2 Application clustering
  1.3 Microservice architecture
2 JBoss MSC
  2.1 Architecture
    2.1.1 Service
    2.1.2 ServiceController
    2.1.3 ServiceRegistrationImpl
    2.1.4 Dependency and Dependent
    2.1.5 ServiceRegistry
    2.1.6 ServiceTarget
    2.1.7 ServiceContainer
  2.2 Transitional period of a ServiceController
  2.3 Concurrency and synchronization
  2.4 Pros and cons
  2.5 Conclusion
3 Infinispan
  3.1 Client-server mode
    3.1.1 Network protocols
    3.1.2 Server
    3.1.3 Hot Rod Java client
4 Application platform
  4.1 JBoss EAP
  4.2 Thorntail
  4.3 Quarkus
  4.4 Decision
5 Design
  5.1 Requirements
    5.1.1 Must have
    5.1.2 Should have
    5.1.3 Could have
    5.1.4 Remote entity requirements
  5.2 Differences and similarities to JBoss MSC
    5.2.1 Naming changes
    5.2.2 What stayed
    5.2.3 What changed
  5.3 States, Transitions, Modes and Jobs
    5.3.1 Modes
    5.3.2 Jobs
    5.3.3 StageGroups
    5.3.4 States
  5.4 Modules
6 Implementation
  6.1 Context dependency injection
    6.1.1 Maven module problem
  6.2 Transactions
    6.2.1 Partial updates
    6.2.2 Prevention of duplicate execution
  6.3 Mapping
  6.4 REST
  6.5 Installation
    6.5.1 Prerequisites
    6.5.2 Setting up an Infinispan server
    6.5.3 Compilation and execution
7 Testing
  7.1 Local integration testing
  7.2 Clustered testing
8 Conclusion
  8.1 Future improvements
Bibliography
A Attached files

List of Figures

2.1 State-diagram of JBoss MSC. Source: [6]
5.1 State-machine diagram of a Task
5.2 The diagram of package dependencies in the scheduler

Introduction

This thesis was created as an effort by Red Hat to improve the scalability of a product called Project Newcastle. Nowadays, scalability is a common concern across products. There are two ways to scale a product: vertically, by adding resources in the form of memory and CPU cores, or horizontally, by clustering. However, some products suffer from a complicated monolithic design, and this problem also affects Project Newcastle. One of the techniques that address this dilemma is microservice architecture, which aims to dissect such monoliths into smaller parts that communicate with each other, each with its own function. These parts are simple, therefore easier to maintain, and should be designed to scale in a cluster.

One of the functions of Project Newcastle is to schedule tasks, which are executed remotely. Additionally, these tasks have defined dependencies and therefore have to be scheduled in the correct order. The goal of this thesis is to create an open-source remote scheduler designed with the ability to scale in a cluster and with microservice architecture in mind.

The thesis consists of seven chapters, excluding the conclusion. The first chapter focuses on the theoretical aspect of the thesis and introduces the reader to the complexities of scheduling, clustering and microservice architecture. The following chapter presents an analysis of the JBoss MSC library, which implements a scheduling solution with a different use-case but a flexible implementation whose concepts are used in the design. Chapter three concentrates on Infinispan, a datastore solution developed by Red Hat and intended to enable clustering for a variety of applications. The next chapter briefly introduces the available Red Hat application platforms, their strong and weak aspects, and which of them is the most suitable for the scheduler. Chapter five presents the design of the application: it defines and explains the requirements, points out the major distinctions from JBoss MSC, and defines the states of a task and other essential models. Chapter six delves into the implementation part of the thesis; it points out some flaws of the libraries and frameworks used, describes how data consistency is guaranteed, and concludes with a guide for compiling the scheduler from source and executing it. The last chapter before the conclusion focuses on testing, covering the local integration tests and clustered tests in detail.

1 Theoretical background

1.1 Scheduling

In a scheduling problem, there is a set of tasks and a set of constraints. These constraints state that executing a specific task could depend on other tasks being completed beforehand. These sets can be mapped onto a directed graph, with the tasks as the nodes and the direct prerequisite constraints as the edges. If this graph has a cycle, it implies that there exists a task which is transitively dependent on itself. Such a task would have to complete before it can start, which does not make sense. Hence, the graph cannot have cycles and can be expressed as a directed acyclic graph (DAG).

To schedule a set of tasks, an ordering that respects the dependency constraints is needed. For a DAG, this ordering is called a topological sort. Every finite DAG has a topological sort; however, there can be more than one possible topological sort [1].

A topological sort can be found by iteratively marking a task that either has no dependencies or has all of its dependencies already marked. The order of marking yields a topological sort. This algorithm can produce a different order whenever more than one task is available to mark. For parallel task scheduling, the algorithm can be modified: instead of marking one task each iteration, it marks all tasks that are ready for marking. All tasks marked in one iteration are independent of each other and can execute concurrently. A further modification is to allow only a certain number of marks in one iteration, which simulates a situation where resources are limited.
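The layered marking scheme just described maps directly to Java, the platform chosen for the scheduler. The following is a minimal sketch of the idea, not code from the thesis: representing tasks as plain strings and dependencies as a map are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public final class LayeredTopologicalSort {

    /**
     * Computes a layered topological sort. Every task must appear as a key
     * in the map; the value is the set of tasks it depends on.
     *
     * @return a list of layers; tasks within one layer are mutually
     *         independent and may execute concurrently
     * @throws IllegalStateException if the dependency graph contains a cycle
     */
    public static List<List<String>> layers(Map<String, Set<String>> dependencies) {
        Set<String> marked = new HashSet<>();
        Set<String> remaining = new HashSet<>(dependencies.keySet());
        List<List<String>> result = new ArrayList<>();

        while (!remaining.isEmpty()) {
            // Mark every task whose dependencies have all been marked already.
            List<String> layer = new ArrayList<>();
            for (String task : remaining) {
                if (marked.containsAll(dependencies.get(task))) {
                    layer.add(task);
                }
            }
            // Nothing could be marked although tasks remain: a cycle exists.
            if (layer.isEmpty()) {
                throw new IllegalStateException("Dependency cycle among: " + remaining);
            }
            marked.addAll(layer);
            remaining.removeAll(layer);
            result.add(layer);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = new HashMap<>();
        deps.put("compile", Set.of());
        deps.put("test", Set.of("compile"));
        deps.put("package", Set.of("compile"));
        deps.put("deploy", Set.of("test", "package"));
        // Prints [[compile], [test, package], [deploy]];
        // the order inside a layer may vary between runs.
        System.out.println(layers(deps));
    }
}
```

Capping the size of each layer before dispatching it would model the limited-resources variant mentioned above.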
1.2 Application clustering

Application clustering typically refers to a method of grouping multiple computer servers into an entity that behaves as a single system. A server in a cluster is referred to as a node. Typically, each node runs the same copy of an application, usually deployed on an application server that provides clustering features. For instance, Wildfly application servers can discover each other on a network and replicate the state of a deployed application [2].

Benefits of clustering include [3]:

1. Load Balancing (Scalability): Processing requests are distributed across the cluster nodes. The main objective of load balancing is to prevent nodes from getting overloaded and possibly shutting down. Adding more nodes to a cluster increases the cluster’s overall computing capability.

2. Fail-over (High Availability): Clusters enable services to keep running for longer periods. A singular server is a single point of failure; it can fail unexpectedly due to unforeseen causes such as infrastructure issues, networking problems or software crashes. A cluster, on the other hand, is more resilient: if one node crashes, other nodes can still handle incoming requests.

A direct method of developing a clusterable application is to keep no state. A stateless application does not retain data for later use; instead, the state can be stored in a database to which every instance of the application connects. Stateless applications are easily scalable [4]. A minimal sketch of this approach is given at the end of this chapter.

1.3 Microservice architecture

Microservice architecture is an architectural style motivated by service-oriented architecture (SOA). It appeared due to the need for flexible and conveniently scalable applications, as opposed to the monolithic style, which is challenging to use in distributed systems. Microservices handle the gradually increasing complexity of large systems by decomposing them into a set of independent services. These services are loosely coupled, and each should provide a specific functionality.
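The stateless approach from Section 1.2 is what the later chapters build on: Chapter 3 introduces Infinispan, the remote datastore in which shared data can be kept. As a concrete illustration of keeping state outside the application, here is a minimal sketch using the Infinispan Hot Rod Java client; it assumes a server reachable on localhost:11222 with authentication disabled and a cache named "tasks" already configured, none of which is prescribed by this chapter.

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class StatelessStateStore {
    public static void main(String[] args) {
        // Connect to a remote Infinispan server instead of holding state locally.
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222);
        RemoteCacheManager manager = new RemoteCacheManager(builder.build());

        // Any node of a clustered, stateless application can run this code:
        // the task state lives in the shared cache, not in the node's memory.
        RemoteCache<String, String> tasks = manager.getCache("tasks");
        tasks.put("task-1", "RUNNING");
        System.out.println("task-1 is " + tasks.get("task-1"));

        manager.stop();
    }
}
```

Because every node reads and writes the same remote cache, any node can serve any request, which is what makes the horizontal scaling described in Section 1.2 possible.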