Web-Scale Data Management for the Cloud


Wolfgang Lehner • Kai-Uwe Sattler

Wolfgang Lehner, Dresden University of Technology, Dresden, Germany
Kai-Uwe Sattler, Ilmenau University of Technology, Ilmenau, Germany

ISBN 978-1-4614-6855-4
ISBN 978-1-4614-6856-1 (eBook)
DOI 10.1007/978-1-4614-6856-1
Springer New York Heidelberg Dordrecht London
Library of Congress Control Number: 2013932864

© Springer Science+Business Media New York 2013

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis, or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper. Springer is part of Springer Science+Business Media (www.springer.com)

This book is dedicated to our families – this book would not have been possible without their constant and comprehensive support.

Preface

The efficient management of data is a core asset of almost every organization. Data is collected, stored, manipulated, and analysed to drive the business and to support the decision-making process. Establishing and running the data management platform within a larger organization is a complex, time- and budget-intensive task. Not only do data management systems have to be installed, deployed, and populated with different sets of data; the data management platform of an organization also has to be constantly maintained, on a technical as well as on a content level. Changing applications and business requirements have to be reflected in the technical setup, new software versions have to be installed, hardware has to be exchanged, etc.
Although the benefits of an efficient data management platform can be enormous, not only in terms of directly steering the business or gaining a competitive advantage but also in strategic terms, the technical challenges of keeping such a data management machinery running are equally substantial. In the age of "Everything-as-a-Service", with cloud-based solutions widely available, it is fair to question on-premise data management installations and to consider "Software-as-a-Service" an alternative or complementary concept for system platforms that run (parts of) a larger data management solution.

Within this book, we tackle this question and give answers to some of the many facets that have to be considered in this context. We will therefore review the general concept of the "Everything-as-a-Service" paradigm and discuss situations where this concept fits perfectly and situations where it is well-suited only for specific scenarios. The book will also review the general relationship between cloud computing on the one hand and data management on the other. We will further outline the impact of being able to perform data management on a web scale as one of the key challenges, and requirements, for future commercial data management platforms.

In order to provide a comprehensive understanding of data management services in the large, we will introduce the notion of virtualization as one of the key techniques for establishing the "Everything-as-a-Service" paradigm. Virtualization can be exploited on a physical and a logical layer to abstract from specific hardware and software environments, providing an additional indirection layer that can be used to differentiate between services or components running remotely in cloud environments or locally within a fully controlled system and administration setup. Chapter 2 will therefore discuss different techniques, ranging from low-level machine virtualization to application-aware virtualization techniques on the schema and application level.

In the following two chapters, we dive into detail and discuss different methods and technologies for running web-scale data management services, with special emphasis on operational and analytical workloads. Operational workloads on the one hand usually consist of highly concurrent accesses to the same database with conflict potential, resulting in a requirement for transactional behavior of the data management service. Analytical workloads on the other hand usually touch hundreds of millions of rows within a single query, feeding complex statistical models or producing the raw material for sophisticated cockpit applications within larger Business Intelligence environments. Because these two ways of using a data management system are so different in nature, we take a detailed look at each scenario individually. The first part (Chap. 3), addressing challenges and solutions for transactional workload patterns, focuses on consistency models, data fragmentation, and data replication schemes for large web-scale data management scenarios. The second part (Chap. 4) outlines the major techniques and challenges of efficiently handling analytics in big data scenarios by discussing the pros and cons of the MapReduce data processing paradigm and presenting different query languages to directly interact with the data management platform.
Cloud-based services not only offer an efficient data processing framework with a specific degree of consistency but also comprise non-functional services, such as honoring quality-of-service agreements or adhering to certain security and privacy constraints. In Chap. 5, we will outline the core requirements and provide insights into some already existing approaches. In a final step, the book gives the reader a state-of-the-art overview of currently available "Everything-as-a-Service" implementations and commercial services, with respect to infrastructure services, platform services, and the context of Software-as-a-Service.

The book closes with an outlook on challenges and open research questions in the context of providing a data management solution following the "as-a-Service" paradigm. Within this discussion, we revisit the different discussion points and outline future directions for a wide and open field of research potential and commercial opportunities.

The overall structure of the book is reflected in the accompanying figure. Since the individual sections are, with the exception of the general introduction, independent of each other, readers are encouraged to jump into the middle of the book or to take a different path through the presented material. After having read the book, readers should have a deep understanding of the requirements and the existing solutions for establishing and operating an efficient data management platform following the "as-a-Service" paradigm.

Dresden, Germany    Wolfgang Lehner
Ilmenau, Germany    Kai-Uwe Sattler

Acknowledgements

Like every book written alongside the regular life of a university professor, this manuscript would not have been possible without the help of quite a few members of the active research community. We highly appreciate their effort and would like to thank all of them!

• Volker Markl and Felix Naumann helped us start the adventure of writing such a book. We highly appreciate their comments on the general outline and their continuing support.
• Florian Kerschbaum authored the section on security in outsourced data management environments (Sect. 5.3). Thanks for tackling this topic, which is highly relevant especially for commercial applications. We are very grateful.
• Maik Thiele is thanked for authoring significant parts of the introductory chapter of this book. Maik did a wonderful job in spanning the global context and providing a comprehensive and well-structured overview. He also authored the section on Open Data as an example of "Data-as-a-Service" in Sect. 6.3.
• Tim Kiefer and Hannes Voigt covered major parts of the virtualization section. While Tim was responsible for the virtualization stack (Sect. 2.2) and partially for the section on hardware virtualization (Sect. 2.3), Hannes was in charge of the material describing the core concepts of logical virtualization (Sect. 2.4).
• Daniel Warneke contributed the sections on "Infrastructure-as-a-Service" and "Platform-as-a-Service"
Recommended publications
  • Hadoop Tutorial: Part 1 - What is Hadoop? (An Overview)
    Hadoop is an open source software framework that supports data-intensive distributed applications and is licensed under the Apache v2 license. At least, this is what you will find as the first line of the definition of Hadoop on Wikipedia. So what are data-intensive distributed applications? Well, data-intensive refers to BigData (data that has outgrown in size), and distributed applications are applications that work across a network by communicating and coordinating with each other through message passing (say, using RPC interprocess communication or a message queue). Hence Hadoop works in a distributed environment and is built to store, handle, and process large amounts of data (petabytes, exabytes, and more). Now, just because Hadoop stores petabytes of data, this doesn't mean that Hadoop is a database. Again, remember it is a framework that handles large amounts of data for processing. You will get to know the difference between Hadoop and databases (or NoSQL databases, which is what we call BigData's databases) as you go through the coming tutorials. Hadoop was derived from the research papers published by Google on the Google File System (GFS) and Google's MapReduce. So there are two integral parts of Hadoop: the Hadoop Distributed File System (HDFS) and Hadoop MapReduce. HDFS is a filesystem designed for storing very large files with streaming data access patterns, running on clusters of commodity hardware.
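    The two halves the tutorial names, HDFS holding the input files and MapReduce processing them, come together in the canonical word-count job. The following is a minimal sketch against the standard org.apache.hadoop.mapreduce API, not code from the tutorial itself; class names and paths are illustrative only.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Mapper: emits (word, 1) for every token in an input line read from HDFS.
      public static class TokenizerMapper
          extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
              word.set(token);
              context.write(word, ONE);
            }
          }
        }
      }

      // Reducer: sums the per-word counts produced by all mappers.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();
          }
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each mapper
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist yet
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

    Packaged into a jar, it would be launched with "hadoop jar wordcount.jar WordCount <input> <output>", where both paths live in HDFS; the framework handles splitting the input, shuffling the (word, 1) pairs, and scheduling the work across the cluster.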
  • Máster en Ingeniería Web: Proyecto Fin de Máster (Master's Thesis)
    UNIVERSIDAD POLITÉCNICA DE MADRID, Escuela Técnica Superior de Ingeniería de Sistemas Informáticos, Máster en Ingeniería Web. Proyecto Fin de Máster: "Estudio Conceptual de Big Data utilizando Spring" (A Conceptual Study of Big Data Using Spring). Author: Gabriel David Muñumel Mesa. Advisor: Jesús Bernal Bermúdez. July 1, 2018. Acknowledgements: Thanks to my parents Julian and Miriam for all their support and their insistence that I keep studying. Thanks to my aunt Gloria for her advice and ideas. Thanks to my brother José Daniel and my sister-in-law Yule for always reminding me that goals can be reached through work and dedication. Abstract: Big Data is the term coined for the mass of data that cannot be processed by traditional methods. Its main functions include data capture, storage, analysis, search, transfer, visualization, monitoring, and modification. Companies have seen in Big Data a powerful tool to improve their business in a world economy firmly based on knowledge. Data is the fuel of modern companies, and making sense of this data therefore makes it possible to truly understand the invisible connections within its origin. Indeed, with more information, better decisions are made, enabling the creation of comprehensive and innovative strategies that guarantee successful results. The growing relevance of Big Data in the modern professional environment served as the motivation for this project. Using Java as the development language and Spring as the web framework, the aim is to analyze and verify which tools these technologies offer for implementing processes focused on Big Data.
  • Security Log Analysis Using Hadoop (Harikrishna Annangi)
    St. Cloud State University, theRepository at St. Cloud State, Culminating Projects in Information Assurance, Department of Information Systems, 3-2017. Recommended citation: Annangi, Harikrishna, "Security Log Analysis Using Hadoop" (2017). Culminating Projects in Information Assurance. 19. https://repository.stcloudstate.edu/msia_etds/19. A starred paper submitted to the graduate faculty of St. Cloud State University in partial fulfillment of the requirements for the degree of Master of Science in Information Assurance. Starred paper committee: Dr. Dennis Guster (chairperson), Dr. Susantha Herath, Dr. Sneh Kalia. Abstract: Hadoop is used by industry as a general-purpose storage and analysis platform for big data. Commercial Hadoop support is available from large enterprises, like EMC, IBM, Microsoft, and Oracle, and from Hadoop companies like Cloudera, Hortonworks, and MapR. Hadoop is a framework written in Java that allows distributed processing of large data sets across clusters of computers using simple programming models. A Hadoop framework application works in an environment that provides storage and computation across clusters of computers.
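    As a sketch of what security log analysis on Hadoop can look like in practice, the mapper below counts failed SSH login attempts per source IP. The class name and the syslog-style line format are assumptions for illustration, not taken from Annangi's paper.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Hypothetical example: extract the source IP from lines such as
    //   "Mar  3 10:15:01 host sshd[212]: Failed password for root from 10.0.0.5 port 22 ssh2"
    public class FailedLoginMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
      private static final IntWritable ONE = new IntWritable(1);
      private final Text sourceIp = new Text();

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        String line = value.toString();
        int marker = line.indexOf("Failed password");
        if (marker >= 0) {
          int from = line.indexOf(" from ", marker);
          int port = line.indexOf(" port ", from);
          if (from >= 0 && port > from) {
            sourceIp.set(line.substring(from + 6, port)); // the IP between "from" and "port"
            context.write(sourceIp, ONE);                 // the reducer sums per IP
          }
        }
      }
    }

    Paired with a summing reducer like the one in the word-count sketch above, the job yields a per-IP count of failed logins, a typical first step in spotting brute-force attempts.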
  • Orchestrating Big Data Analysis Workflows in the Cloud: Research Challenges, Survey, and Future Directions
    Mutaz Barika, University of Tasmania; Saurabh Garg, University of Tasmania; Albert Y. Zomaya, University of Sydney; Lizhe Wang, China University of Geosciences (Wuhan); Aad van Moorsel, Newcastle University; Rajiv Ranjan, China University of Geosciences and Newcastle University. Interest in processing big data has increased rapidly to gain insights that can transform businesses, government policies, and research outcomes. This has led to advances in communication, programming, and processing technologies, including Cloud computing services and technologies such as Hadoop, Spark, and Storm. This trend also affects the needs of analytical applications, which are no longer monolithic but composed of several individual analytical steps running in the form of a workflow. These Big Data Workflows are vastly different in nature from traditional workflows. Researchers are currently facing the challenge of how to orchestrate and manage the execution of such workflows. In this paper, we discuss in detail the orchestration requirements of these workflows as well as the challenges in achieving them. We also survey current trends and research that support orchestration of big data workflows and identify open research challenges to guide future developments in this area. CCS Concepts: • General and reference → Surveys and overviews; • Information systems → Data analytics; • Computer systems organization → Cloud computing. Additional Key Words and Phrases: Big Data, Cloud Computing, Workflow Orchestration, Requirements, Approaches. ACM Reference format: Mutaz Barika, Saurabh Garg, Albert Y. Zomaya, Lizhe Wang, Aad van Moorsel, and Rajiv Ranjan. 2018. Orchestrating Big Data Analysis Workflows in the Cloud: Research Challenges, Survey, and Future Directions.
  • Apache Oozie: The Workflow Scheduler for Hadoop
    Oozie is a workflow scheduler system to manage Apache Hadoop jobs. Oozie workflow jobs are directed acyclical graphs (DAGs) of actions. Oozie operates by running as a service in a Hadoop cluster, with clients submitting workflow definitions for immediate or delayed processing. An action contains the description of one or more workflows to be executed. Oozie is lightweight, as it uses the existing Hadoop MapReduce framework for execution, and clients talk to the Oozie server using REST. Apache Oozie provides you the power to easily handle these kinds of scheduling scenarios.
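    A minimal sketch of the client side of that interaction, using the Java client API that wraps Oozie's REST interface; the server URL, HDFS application path, and queue name are placeholders, not values from the text.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;

    public class SubmitWorkflow {
      public static void main(String[] args) throws Exception {
        // Connect to the Oozie server (11000 is the usual REST port; the host is a placeholder).
        OozieClient oozie = new OozieClient("http://oozie-host:11000/oozie");

        // Job properties: point at a workflow.xml (the DAG definition) stored in HDFS.
        Properties conf = oozie.createConfiguration();
        conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/user/demo/app");
        conf.setProperty("queueName", "default");

        // Submit and start the workflow, then ask the server for its status.
        String jobId = oozie.run(conf);
        System.out.println("Submitted workflow " + jobId
            + ", status: " + oozie.getJobInfo(jobId).getStatus());
      }
    }

    The workflow.xml at the application path defines the DAG of actions; the client only hands the server a pointer to it plus job properties, which is what keeps Oozie itself lightweight.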
  • Persisting Big-Data: The NoSQL Landscape
    Information Systems 63 (2017) 1–23. Contents lists available at ScienceDirect. Information Systems, journal homepage: www.elsevier.com/locate/infosys. Persisting big-data: The NoSQL landscape. Alejandro Corbellini, Cristian Mateos, Alejandro Zunino, Daniela Godoy, Silvia Schiaffino. ISISTAN (CONICET-UNCPBA) Research Institute, UNICEN University, Campus Universitario, Tandil B7001BBO, Argentina. Article history: Received 11 March 2014; Accepted 21 July 2016; Recommended by: G. Vossen; Available online 30 July 2016. Keywords: NoSQL databases; Relational databases; Distributed systems; Database persistence; Database distribution; Big data. Abstract: The growing popularity of massively accessed Web applications that store and analyze large amounts of data, with Facebook, Twitter, and Google Search being some prominent examples of such applications, has posed new requirements that greatly challenge traditional RDBMS. In response to this reality, a new way of creating and manipulating data stores, known as NoSQL databases, has arisen. This paper reviews implementations of NoSQL databases in order to provide an understanding of current tools and their uses. First, NoSQL databases are compared with traditional RDBMS and important concepts are explained. Only databases that allow persisting data and distributing it across different computing nodes are within the scope of this review. Moreover, NoSQL databases are divided into different types: Key-Value, Wide-Column, Document-oriented, and Graph-oriented. In each case, a comparison of available databases
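    As a concrete instance of the wide-column category the survey names, the sketch below writes and reads one row with the Apache HBase client API; the table, column family, and row key are invented for illustration and assume a reachable cluster configured via hbase-site.xml.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WideColumnExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table users = connection.getTable(TableName.valueOf("users"))) {

          // Rows are keyed byte arrays; columns live in families and can differ per row.
          Put put = new Put(Bytes.toBytes("user#42"));
          put.addColumn(Bytes.toBytes("profile"), Bytes.toBytes("name"), Bytes.toBytes("Ada"));
          put.addColumn(Bytes.toBytes("profile"), Bytes.toBytes("city"), Bytes.toBytes("Tandil"));
          users.put(put);

          // Point lookup by row key: the access pattern these stores are built around.
          Result row = users.get(new Get(Bytes.toBytes("user#42")));
          System.out.println(Bytes.toString(
              row.getValue(Bytes.toBytes("profile"), Bytes.toBytes("name"))));
        }
      }
    }

    Key-value stores reduce this interface further to get/put on an opaque value, while document and graph stores add richer query capabilities, which is exactly the axis along which the survey organizes the systems it reviews.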
  • Analysis of Web Log Data Using Apache Pig in Hadoop
    [Volume 5, Issue 2, April–June 2018] e-ISSN 2348-1269, Print ISSN 2349-5138, http://ijrar.com/, Cosmos Impact Factor 4.236. A. C. Priya Ranjani* & Dr. M. Sridhar**. *Research Scholar, Department of Computer Science, Acharya Nagarjuna University, Guntur, Andhra Pradesh, India. **Associate Professor, Department of Computer Applications, R.V.R & J.C College of Engineering, Guntur, India. Received: April 09, 2018. Accepted: May 22, 2018. Abstract: The widespread use of the internet and the increase in web applications accelerate the rampant growth of web content. Every organization produces huge amounts of data in different forms, like text, audio, video, etc., from multiple sources. The log data stored in web servers is a great source of knowledge. The real challenge for any organization is to understand the behavior of their customers, and analyzing such web log data helps organizations understand the navigational patterns and interests of their users. As the logs grow in size day by day, existing database technologies face a bottleneck in processing such massive unstructured data. Hadoop provides a good solution to this problem: the Hadoop framework comes with the Hadoop Distributed File System, a reliable distributed store for data, and MapReduce, a distributed parallel processing model for executing large volumes of complex data. The Hadoop ecosystem comprises several other tools, like Pig, Hive, Flume, Sqoop, etc., for effective analysis of web log data. To write scripts in MapReduce, one should have good programming knowledge of Java; however, Pig, a simple dataflow language, can easily be used to analyze such data.
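    To illustrate the kind of dataflow the paper describes, the snippet below runs a small Pig Latin script in-process through Pig's Java API, counting hits per client IP in an access log. The file names and the assumption that the IP is the first whitespace-delimited field are illustrative, not taken from the paper.

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;

    public class WebLogAnalysis {
      public static void main(String[] args) throws Exception {
        // Local mode for a quick test; ExecType.MAPREDUCE would run the same script on a cluster.
        PigServer pig = new PigServer(ExecType.LOCAL);

        // Assumed layout: the client IP is the first space-delimited field of each log line.
        pig.registerQuery("logs = LOAD 'access.log' USING PigStorage(' ') AS (ip:chararray, rest:chararray);");
        pig.registerQuery("by_ip = GROUP logs BY ip;");
        pig.registerQuery("hits = FOREACH by_ip GENERATE group AS ip, COUNT(logs) AS cnt;");
        pig.registerQuery("top = ORDER hits BY cnt DESC;");

        // STORE triggers execution of the whole dataflow and writes one (ip, count) pair per line.
        pig.store("top", "hits_per_ip");
      }
    }

    The same four Pig Latin statements saved to a .pig file run unchanged under the pig command-line tool; PigServer merely embeds the execution in a JVM process, which is convenient for testing.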
  • Apache Pig's Optimizer
    Apache Pig's Optimizer. Alan F. Gates, Jianyong Dai, Thejas Nair, Hortonworks. Abstract: Apache Pig allows users to describe dataflows to be executed in Apache Hadoop. The distributed nature of Hadoop, as well as its execution paradigms, provides many execution opportunities and imposes constraints on the system. Given these opportunities and constraints, Pig must make decisions about how to optimize the execution of user scripts. This paper covers some of those optimization choices, focusing on ones that are specific to the Hadoop ecosystem and Pig's common use cases. It also discusses optimizations that the Pig community has considered adding in the future. 1 Introduction. Apache Pig [10] provides an SQL-like dataflow language on top of Apache Hadoop [11] [7]. With Pig, users write dataflow scripts in a language called Pig Latin. Pig then executes these dataflow scripts in Hadoop using MapReduce. Providing users with a scripting language, rather than requiring them to write MapReduce programs in Java, drastically decreases their development time and enables non-Java developers to use Hadoop. Pig also provides operators for most common data processing operations, such as join, sort, and aggregation; it would otherwise require huge amounts of effort for a handcrafted Java MapReduce program to implement these operators. Many different types of data processing are done on Hadoop, and Pig does not seek to be a general-purpose solution for all of them. Pig focusses on use cases where users have a DAG of transformations to be done on their data, involving some combination of standard relational operations (join, aggregation, etc.) and custom processing which can be included in Pig Latin via User Defined Functions, or UDFs, which can be written in Java or a scripting language; a minimal UDF is sketched below. Pig also focusses on situations where data may not yet be cleansed and normalized.
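    A minimal sketch of the UDF extension point the introduction mentions: a Java EvalFunc that lower-cases a string field. The class name, jar name, and invocation in the comment are invented for illustration.

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // Hypothetical UDF: normalizes a chararray to lower case.
    // Used from Pig Latin roughly as:
    //   REGISTER myudfs.jar;
    //   clean = FOREACH raw GENERATE myudfs.ToLower(name);
    public class ToLower extends EvalFunc<String> {
      @Override
      public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
          return null; // Pig convention: pass nulls through rather than failing
        }
        return ((String) input.get(0)).toLowerCase();
      }
    }

    Because a UDF is opaque to Pig, its presence constrains the optimizer: Pig cannot, for example, freely reorder a filter containing a UDF it knows nothing about, which is one of the trade-offs the paper's optimization discussion turns on.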
  • HDP 3.1.4 Release Notes (Date of Publish: 2019-08-26)
    HDP 3.1.4 Release Notes. Date of Publish: 2019-08-26. https://docs.hortonworks.com
    Contents:
    HDP 3.1.4 Release Notes
    Component Versions
    Descriptions of New Features
    Deprecation Notices
      Terminology
      Removed Components and Product Capabilities
    Testing Unsupported Features
      Descriptions of the Latest Technical Preview Features
    Upgrading to HDP 3.1.4
    Behavioral Changes
    Apache Patch Information
      Accumulo
  • Hot Technologies” Within the O*NET® System
    Identification of "Hot Technologies" within the O*NET® System. Phil Lewis, National Center for O*NET Development; Jennifer Norton, North Carolina State University. Prepared for the U.S. Department of Labor, Employment and Training Administration, Office of Workforce Investment, Division of National Programs, Tools, & Technical Assistance, Washington, DC. April 4, 2016. www.onetcenter.org. National Center for O*NET Development, Post Office Box 27625, Raleigh, NC 27611.
    Table of Contents:
    Background
    Hot Technologies Identification Procedure
      Mine data to collect the top technology related terms
      Convert the data-mined technology terms into O*NET technologies
      Organize the hot technologies within the O*NET Tools & Technology Taxonomy
      Link the hot technologies to O*NET-SOC occupations
      Determine the display of occupations linked to a hot technology
    Summary
    Figure 1: O*NET Hot Technology Icon
    Appendix A: Hot Technologies Identified During the Initial Implementation
  • Pentaho EMR46 SHIM 7.1.0.0 Open Source Software Packages
    Pentaho EMR46 SHIM 7.1.0.0 Open Source Software Packages. Contact Information: Project Manager, Pentaho EMR46 SHIM, Hitachi Vantara Corporation, 2535 Augustine Drive, Santa Clara, California 95054.
    Name of Product/Product Component | Version | License
    An open source Java toolkit for Amazon S3 | 0.9.0 | Apache License Version 2.0
    AOP Alliance (Java/J2EE AOP standard) | 1.0 | Public Domain
    Apache Commons BeanUtils | 1.9.3 | Apache License Version 2.0
    Apache Commons CLI | 1.2 | Apache License Version 2.0
    Apache Commons Daemon | 1.0.13 | Apache License Version 2.0
    Apache Commons Exec | 1.2 | Apache License Version 2.0
    Apache Commons Lang | 2.6 | Apache License Version 2.0
    Apache Directory API ASN.1 API | 1.0.0-M20 | Apache License Version 2.0
    Apache Directory LDAP API Utilities | 1.0.0-M20 | Apache License Version 2.0
    Apache Hadoop Amazon Web Services support | 2.7.2 | Apache License Version 2.0
    Apache Hadoop Annotations | 2.7.2 | Apache License Version 2.0
    Apache Hadoop Auth | 2.7.2 | Apache License Version 2.0
    Apache Hadoop Common - org.apache.hadoop:hadoop-common | 2.7.2 | Apache License Version 2.0
    Apache Hadoop HDFS | 2.7.2 | Apache License Version 2.0
    Apache HBase - Client | 1.2.0 | Apache License Version 2.0
    Apache HBase - Common | 1.2.0 | Apache License Version 2.0
    Apache HBase - Hadoop Compatibility | 1.2.0 | Apache License Version 2.0
    Apache HBase - Protocol | 1.2.0 | Apache License Version 2.0
    Apache HBase - Server | 1.2.0 | Apache License Version 2.0
    Apache HBase - Thrift - org.apache.hbase:hbase-thrift | 1.2.0 | Apache License Version 2.0
    Apache HttpComponents Core
  • Rapid Adoption of Cloud Data Warehouse Technology Using Datometry Hyper-Q
    Technical White Paper. Copyright (C) 2019 Datometry Inc. L. Antova, D. Bryant, T. Cao, M. Duller, M. A. Soliman, F. M. Waas. Datometry Inc., 300 Brannan St #610, San Francisco, CA 94107, U.S.A. www.datometry.com. Abstract: The database industry is about to undergo a fundamental transformation of unprecedented magnitude as enterprises start trading their well-established database stacks for cloud-native database technology in order to take advantage of the economics cloud service providers promise. Industry experts and analysts expect 2017 to become the watershed moment in this transformation as cloud-native databases finally reached critical mass and maturity. Enterprises eager to move to the cloud face a significant dilemma: moving the content of their databases to the cloud is relatively easy; however, making existing applications work with new database platforms is an enormously costly undertaking. ... query or manipulate data freely, yet shield them from the burden and headaches of having to maintain and operate their own database. The prospect of this new paradigm of data management is extremely powerful and well-received by IT departments across all industries [2]. However, it comes with significant adoption challenges. Through applications that depend on the specific databases they were originally written for, database technology has over time established some of the strongest vendor lock-in in all of IT. Moving to a cloud-native database requires adjusting or even rewriting of existing applications, and migrations may take up to several years, cost millions, and are heavily fraught with risk.