XORing Elephants: Novel Erasure Codes for Big Data


Maheswaran Sathiamoorthy (University of Southern California, [email protected])
Megasthenis Asteris (University of Southern California, [email protected])
Dimitris Papailiopoulos (University of Southern California, [email protected])
Alexandros G. Dimakis (University of Southern California, [email protected])
Ramkumar Vadali (Facebook, [email protected])
Scott Chen (Facebook, [email protected])
Dhruba Borthakur (Facebook, [email protected])

ABSTRACT

Distributed storage systems for large clusters typically use replication to provide reliability. Recently, erasure codes have been used to reduce the large storage overhead of three-replicated systems. Reed-Solomon codes are the standard design choice, and their high repair cost is often considered an unavoidable price to pay for high storage efficiency and high reliability.

This paper shows how to overcome this limitation. We present a novel family of erasure codes that are efficiently repairable and offer higher reliability compared to Reed-Solomon codes. We show analytically that our codes are optimal on a recently identified tradeoff between locality and minimum distance.

We implement our new codes in Hadoop HDFS and compare to a currently deployed HDFS module that uses Reed-Solomon codes. Our modified HDFS implementation shows a reduction of approximately 2× in repair disk I/O and repair network traffic. The disadvantage of the new coding scheme is that it requires 14% more storage compared to Reed-Solomon codes, an overhead shown to be information theoretically optimal for obtaining locality. Because the new codes repair failures faster, they also provide reliability that is orders of magnitude higher compared to replication.

1. INTRODUCTION

MapReduce architectures are becoming increasingly popular for big data management due to their high scalability properties. At Facebook, large analytics clusters store petabytes of information and handle multiple analytics jobs using Hadoop MapReduce. Standard implementations rely on a distributed file system that provides reliability by exploiting triple block replication. The major disadvantage of replication is the very large storage overhead of 200%, which reflects on the cluster costs. This overhead is becoming a major bottleneck as the amount of managed data grows faster than data center infrastructure.

For this reason, Facebook and many others are transitioning to erasure coding techniques (typically, classical Reed-Solomon codes) to introduce redundancy while saving storage [4, 19], especially for data that is more archival in nature. In this paper we show that classical codes are highly suboptimal for distributed MapReduce architectures. We introduce new erasure codes that address the main challenges of distributed data reliability, and information theoretic bounds that show the optimality of our construction. We rely on measurements from a large Facebook production cluster (more than 3000 nodes, 30 PB of logical data storage) that uses Hadoop MapReduce for data analytics. Facebook recently started deploying an open source HDFS module called HDFS RAID ([2, 8]) that relies on Reed-Solomon (RS) codes. In HDFS RAID, the replication factor of "cold" (i.e., rarely accessed) files is lowered to 1 and a new parity file is created, consisting of parity blocks.

Using the parameters of Facebook clusters, the data blocks of each large file are grouped in stripes of 10, and for each such set, 4 parity blocks are created. This system (called RS(10,4)) can tolerate any 4 block failures and has a storage overhead of only 40%. RS codes are therefore significantly more robust and storage efficient compared to replication. In fact, this storage overhead is the minimal possible for this level of reliability [7]. Codes that achieve this optimal storage-reliability tradeoff are called Maximum Distance Separable (MDS) [32], and Reed-Solomon codes [27] form the most widely used MDS family.
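To make these numbers concrete, here is a back-of-envelope sketch using the stripe parameters quoted above (the helper function is ours, purely illustrative, and not part of HDFS RAID):

```python
# Toy comparison of the storage overheads discussed above.

def storage_overhead(stored_blocks: int, data_blocks: int) -> float:
    """Extra storage beyond the raw data, as a fraction of the data size."""
    return (stored_blocks - data_blocks) / data_blocks

# 3-replication: every data block is stored three times,
# and up to 2 of the 3 copies of any block may be lost.
replication = storage_overhead(stored_blocks=3 * 10, data_blocks=10)

# RS(10,4): each stripe stores 10 data blocks plus 4 parity blocks,
# and any 4 simultaneous block losses in a stripe are tolerable.
reed_solomon = storage_overhead(stored_blocks=10 + 4, data_blocks=10)

print(f"3-replication overhead: {replication:.0%}")  # -> 200%
print(f"RS(10,4) overhead:      {reed_solomon:.0%}")  # -> 40%
```

No code that stores 14 blocks for every 10 data blocks can survive more than 4 losses per stripe; RS meets that bound exactly, which is the MDS property referenced above.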
Classical erasure codes are suboptimal for distributed environments because of the so-called Repair problem: when a single node fails, typically one block is lost from each stripe that is stored in that node. RS codes are usually repaired with the simple method that requires transferring 10 blocks and recreating the original 10 data blocks even if a single block is lost [28], hence creating a 10× overhead in repair bandwidth and disk I/O.

Recently, information theoretic results established that it is possible to repair erasure codes with much less network bandwidth compared to this naive method [6]. There has been a significant amount of very recent work on designing such efficiently repairable codes; see Section 6 for an overview of this literature.

Our Contributions: We introduce a new family of erasure codes called Locally Repairable Codes (LRCs) that are efficiently repairable both in terms of network bandwidth and disk I/O. We analytically show that our codes are information theoretically optimal in terms of their locality, i.e., the number of other blocks needed to repair single block failures. We present both randomized and explicit LRC constructions starting from generalized Reed-Solomon parities.

We also design and implement HDFS-Xorbas, a module that replaces Reed-Solomon codes with LRCs in HDFS-RAID. We evaluate HDFS-Xorbas using experiments on Amazon EC2 and a cluster in Facebook. Note that while LRCs are defined for any stripe and parity size, our experimental evaluation is based on an RS(10,4) code and its extension to a (10,6,5) LRC, to compare with the current production cluster.

Our experiments show that Xorbas enables approximately a 2× reduction in disk I/O and repair network traffic compared to the Reed-Solomon code currently used in production. The disadvantage of the new code is that it requires 14% more storage compared to RS, an overhead shown to be information theoretically optimal for the obtained locality.

One interesting side benefit is that because Xorbas repairs failures faster, it provides higher availability, due to more efficient degraded reading performance. Under a simple Markov model evaluation, Xorbas has 2 more zeros in Mean Time to Data Loss (MTTDL) compared to RS(10,4) and 5 more zeros compared to 3-replication.
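The locality idea can be illustrated with a toy pure-XOR sketch. This is not the paper's construction (Xorbas derives its local parities from generalized Reed-Solomon coefficients, and keeps the 4 RS parities for global fault tolerance); the point is only to show why single-block repair fan-in drops from 10 blocks to 5:

```python
import os

def xor_blocks(blocks):
    """Bitwise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

BLOCK_SIZE = 1024                                    # toy size; HDFS blocks are 256 MB
data = [os.urandom(BLOCK_SIZE) for _ in range(10)]   # one stripe of 10 data blocks

# Toy locality-5 layout: two local groups of 5 data blocks, each protected
# by one XOR local parity. (The 4 RS parities are omitted here.)
groups = [data[0:5], data[5:10]]
local_parities = [xor_blocks(group) for group in groups]

# Losing data[7] requires reading only its local group: 4 surviving
# siblings plus 1 local parity, not the 10 blocks a naive RS repair reads.
lost = 7
survivors = [blk for i, blk in enumerate(groups[1]) if i != lost - 5]
repaired = xor_blocks(survivors + [local_parities[1]])
assert repaired == data[lost]
print("blocks read for repair:", len(survivors) + 1)  # -> 5, not 10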
1.1 Importance of Repair

At Facebook, large analytics clusters store petabytes of information and handle multiple MapReduce analytics jobs. In a 3000 node production cluster storing approximately 230 million blocks (each of size 256 MB), only 8% of the data is currently RS encoded ("RAIDed"). Fig. 1 shows a recent trace of node failures in this production cluster. It is quite typical to have 20 or more node failures per day that trigger repair jobs, even when most repairs are delayed to avoid transient failures. A typical data node stores approximately 15 TB, and the repair traffic with the current configuration is estimated at around 10-20% of the total average of 2 PB/day cluster network traffic. As discussed, (10,4) RS encoded blocks require approximately 10× more network repair overhead per bit compared to replicated blocks. We estimate that if 50% of the cluster were RS encoded, the repair network traffic would completely saturate the cluster network links. Our goal is to design more efficient coding schemes that would allow a large fraction of the data to be coded without facing this repair bottleneck.

[Figure 1: Number of failed nodes over a single month period in a 3000 node production cluster of Facebook.]

There are four additional reasons why efficiently repairable codes are becoming increasingly important in coded storage systems. The first is the issue of degraded reads: transient errors with no permanent data loss make up the vast majority of data center failure events [9, 19]. During the period of a transient failure event, block reads of a coded stripe will be degraded if the corresponding data blocks are unavailable. In this case, the missing data block can be reconstructed by a repair process, which is not aimed at fault tolerance but at higher data availability. The only difference from standard repair is that the reconstructed block does not have to be written to disk. For this reason, efficient and fast repair can significantly improve data availability.

The second is the problem of efficient node decommissioning. Hadoop offers the decommission feature to retire a faulty data node. Functional data has to be copied out of the node before decommissioning, a process that is complicated and time consuming. Fast repairs make it possible to treat node decommissioning as a scheduled repair and start a MapReduce job to recreate the blocks without creating very large network traffic.

The third reason is that repair influences the performance of other concurrent MapReduce jobs. Several researchers have observed that the main bottleneck in MapReduce is the network [5]. As mentioned, repair network traffic is currently consuming a non-negligible fraction of the cluster network bandwidth. This issue is becoming more significant as the storage used is increasing disproportionately fast compared to network bandwidth in data centers. This increasing storage density trend emphasizes the importance of local repairs when coding is used.

Finally, local repair would be key in facilitating geographically distributed file systems across data centers. Geo-diversity has been identified as one of the key future directions for improving latency and reliability [13]. Traditionally, sites used to distribute data across data centers via replication. This, however, dramatically increases the total storage cost. Reed-Solomon codes across geographic locations at this scale would be completely impractical due to the high bandwidth requirements across wide area networks. Our work makes local repairs possible at a marginally higher storage overhead cost.

Replication is obviously the winner in optimizing the four issues discussed, but requires a very large storage overhead.
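As a crude sanity check of the traffic estimates in Section 1.1, consider the following simplified cost model (the model is ours, not the paper's measurement methodology: it assumes every lost replicated block moves roughly its own size during re-replication and every lost RS-coded block moves roughly 10×, and it ignores repairs that are delayed or skipped for transient failures):

```python
# Rough model of daily repair traffic using the Section 1.1 figures.
NODE_FAILURES_PER_DAY = 20          # "20 or more node failures per day"
TB_PER_NODE = 15                    # a typical data node stores ~15 TB
RS_READ_AMPLIFICATION = 10          # naive RS(10,4) repair reads 10 blocks per lost block
CLUSTER_TRAFFIC_TB_PER_DAY = 2000   # total average of 2 PB/day

def repair_traffic_tb(rs_fraction: float) -> float:
    lost_tb = NODE_FAILURES_PER_DAY * TB_PER_NODE
    # Replicated blocks cost ~1x their size to re-copy; RS blocks ~10x.
    return lost_tb * ((1 - rs_fraction) * 1 + rs_fraction * RS_READ_AMPLIFICATION)

for rs_fraction in (0.08, 0.50):
    tb = repair_traffic_tb(rs_fraction)
    share = tb / CLUSTER_TRAFFIC_TB_PER_DAY
    print(f"{rs_fraction:.0%} RS-encoded: ~{tb:.0f} TB/day of repair traffic "
          f"(~{share:.0%} of all cluster traffic)")
```

At the current 8% encoded fraction the model gives roughly 500 TB/day, somewhat above the measured 10-20% share (plausibly because many failures are transient and never repaired), but it lands in the right ballpark; at 50% it predicts repair traffic approaching the cluster's entire daily network budget, consistent with the saturation concern above.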