An Evaluation of Cassandra for Hadoop

Total Pages: 16

File Type: PDF, Size: 1020 KB

Elif Dede, Bedri Sendir, Pinar Kuzlu, Jessica Hartog, Madhusudhan Govindaraju
Grid and Cloud Computing Research Laboratory, SUNY Binghamton, New York, USA
Email: {edede1,bsendir1,pkuzlu1,jhartog1,mgovinda}@binghamton.edu

Abstract—In the last decade, the increased use and growth of social media, unconventional web technologies, and mobile applications have all encouraged the development of a new breed of database models. NoSQL data stores target unstructured data, which by nature is dynamic and a key focus area for “Big Data” research. New generation data can prove costly and impractical to administer with SQL databases due to its lack of structure and its high scalability and elasticity needs. NoSQL data stores such as MongoDB and Cassandra provide a desirable platform for fast and efficient data queries. This leads to increased importance in areas such as cloud applications, e-commerce, social media, bio-informatics, and materials science. In an effort to combine the querying capabilities of conventional database systems and the processing power of the MapReduce model, this paper presents a thorough evaluation of the Cassandra NoSQL database when used in conjunction with the Hadoop MapReduce engine. We characterize the performance for a wide range of representative use cases, and then compare, contrast, and evaluate so that application developers can make informed decisions based upon data size, cluster size, replication factor, and partitioning strategy to meet their performance needs. (Supported in part by NSF grant CNS-0958501.)

I. INTRODUCTION

With the advent of the “Big Data” era, the size and structure of data have become highly dynamic. As application developers deal with a deluge of data from various sources, they face challenges caused by the data’s lack of structure and schema. As such data grows and is constantly modified via social media, news feeds, and scientific sensor input, the requirements placed on storage models have also changed. As the unstructured nature of the data limits the applicability of the traditional SQL model, NoSQL has emerged as an alternative paradigm for this new non-relational data schema. NoSQL frameworks, such as DynamoDB [1], MongoDB [6], BigTable [10], and Cassandra [23], address this “Big Data” challenge by providing horizontal scalability. This, unlike the vertical scalability scheme of traditional databases, results in lower maintenance costs.

While the NoSQL model provides an easy and intuitive way to store unstructured data, its performance under operations common in cloud applications is not well understood. The MapReduce model has evolved as the paradigm of choice for “Big Data” processing. However, studies with performance insights on the applicability of the MapReduce model to NoSQL offshoots, such as MongoDB and Cassandra, have been lacking. As “modern” data is increasingly produced from various sources, it becomes increasingly unstructured while continually growing in size with user interaction; it is important to evaluate the NoSQL model when used with the MapReduce processing paradigm. In this paper, we analyze various considerations when using Cassandra as the data store and Apache Hadoop for processing. Cassandra is an open source non-relational, column-oriented distributed database used for storing large amounts of unstructured data. Apache Hadoop is a well-known platform for data-intensive processing.

We first identify and analyze various aspects of Cassandra, and by extension NoSQL object stores, such as locality, scalability, data distribution, load balancing, and I/O performance. We provide insights on the strengths and pitfalls of using Cassandra as the underlying storage model with Apache Hadoop for typical application loads in a cloud environment. We also present and analyze the performance data of running the same experiments using Hadoop native, which uses the Hadoop Distributed File System (HDFS) for storage. The contributions of this paper are as follows:

• Identify and describe the key NoSQL features required for efficient performance with Hadoop.
• Discuss how the various features of Cassandra, such as replication and data partitioning, affect Apache Hadoop’s performance.
• Analyze the performance implications of running Hadoop with Cassandra as the underlying data store. Verify performance gains and losses by processing application data typical in cloud environments, as well as classical I/O- and memory-intensive workloads.

II. BACKGROUND

A. MapReduce and Hadoop

The MapReduce model proposes splitting a data set to enable its processing in parallel over a cluster of commodity machines, called workers. Input distribution, scheduling, parallelization, and machine failures are all handled by the framework itself and monitored by a node called the master. The idea is to split parallel execution into two phases: map and reduce. Map processes a key and produces a set of intermediate key/value pairs. Reduce uses the intermediate results to construct the final output.

Apache Hadoop [2] is the most popular open source implementation of the model. Hadoop consists of two core components: the Hadoop MapReduce framework and the Hadoop Distributed File System (HDFS) [27]. Hadoop MapReduce consists of a JobTracker that runs on the master node and TaskTrackers running on each of the workers. The JobTracker is responsible for determining job specifications (i.e., number of mappers, etc.), submitting the user job to the cluster, and monitoring the workers and the job status. The TaskTrackers execute the user-specified map or reduce tasks. Hadoop relies on HDFS for data distribution and input management. It automatically breaks data into chunks and spreads them over the cluster. Nodes hosting the input splits and replicas are called DataNodes. Each TaskTracker processes the input chunk hosted by the local DataNode. This is done to leverage data locality. The input splits are replicated among the DataNodes based on a user-set replication-factor. This design prevents data loss and helps with fault tolerance in case of node failures.

B. YCSB Benchmarking Suite

The Yahoo! Cloud Serving Benchmark (YCSB) [12] was developed by Yahoo! engineers to analyze different data stores under several workloads. YCSB is an open source project that features benchmarks for many NoSQL technologies like HBase [3], MongoDB [6], PNUTS [11], and Cassandra [23]. The YCSB Core Package features five basic workloads, and each can be extended to produce new ones. In Section III-A, we present performance results for running the YCSB benchmark Workload C on Cassandra under various scenarios. When executed in load mode, this workload inserts a user-specified number of randomly generated records into the Cassandra database. Each record contains a randomly generated key and 10 fields, each of 100 bytes. In order to compare Cassandra performance under different configurations and loads, we use the YCSB default “insertorder”, which inserts the records in hashed order of keys. In run mode, a user-specified number of records are read. Each record is read as a whole without specifying any columns.

C. Cassandra

Cassandra [23], developed by Facebook, is an open source, non-relational, column-oriented, distributed database developed for storing large amounts of unstructured data over commodity servers. Cassandra uses a peer-to-peer model, which makes it tolerant against single points of failure and provides horizontal scalability. A Cassandra cluster can be expanded on demand by simply starting new servers. These servers only

1) Data Model: Figure 1 shows the column-oriented data model of Cassandra. A column is the smallest component of data, and it is a tuple of name, value, and timestamp. Timestamps are used for conflict resolution, as multiple versions of the same record may be present. Columns associated with a certain key can be depicted as a row; rows do not have a pre-determined structure, as each of them may contain several columns. A column family is a collection of rows, like a table in a relational database. Column families are stored in separate files, which are sorted in row key order. The placement of rows on the nodes of a Cassandra cluster depends on the row key and the partitioning strategy. Keyspaces are containers for column families, just as databases contain tables in RDBMSs.

2) Replication and Consistency: Cassandra automatically replicates records throughout the cluster according to a user-set replication-factor. This ensures that failing nodes do not result in data loss. Cassandra offers configurable consistency, which provides the flexibility to consciously make trade-offs between latency and consistency. For each read and write request, users choose one of the pre-defined consistency levels: ZERO, ONE, QUORUM, ALL, or ANY [23]. In the experiments of Section III-A we use the consistency level ONE. For writes, this means that a request is only considered done when at least one server returns success in writing the entry to its commit log. For reads, level ONE means that consulting only one replica node is sufficient to answer the client request.

3) Data Partitioning: Cassandra offers two main partitioning strategies: RandomPartitioner and ByteOrderedPartitioner [19]. The former is the default strategy and is recommended for use in most cases. A hashing algorithm is used to create an MD5 hash value of the row key. Each Cassandra node has a token value that specifies the range of keys for which it is responsible. A row is placed in the cluster based on its hash and the range tokens. Distributing the records evenly throughout the cluster balances the load by spreading out client requests. The ByteOrderedPartitioner simply orders rows lexically by key, so it may not distribute data evenly.

4) Read and Write: A client can contact any Cassandra node for any operation. The node being connected to serves as a coordinator. The coordinator forwards the client request to the replica node(s) owning the data being claimed. For each write request, first a commit log entry is created.
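The RandomPartitioner placement described above (hash the row key with MD5, then assign the row to the node whose token range covers the hash) can be sketched in a few lines of Python. This is a simplified illustration, not Cassandra's actual implementation; the function names (`md5_token`, `place_row`) and the four-node token ring are hypothetical.

```python
import hashlib

def md5_token(row_key: str) -> int:
    """Hash a row key to a 128-bit integer, as the RandomPartitioner
    does with MD5 (simplified: Cassandra maps this onto its own
    token range)."""
    return int(hashlib.md5(row_key.encode("utf-8")).hexdigest(), 16)

def place_row(row_key: str, node_tokens: list) -> int:
    """Return the token of the node owning row_key: each node owns
    the key range up to and including its token, and keys past the
    largest token wrap around to the smallest one."""
    token = md5_token(row_key)
    for node_token in sorted(node_tokens):
        if token <= node_token:
            return node_token
    return min(node_tokens)  # wrap around the ring

# Four hypothetical nodes with evenly spaced tokens covering the
# full 128-bit MD5 output range.
ring = [(i + 1) * 2**126 for i in range(4)]
owner = place_row("user42", ring)
```

Because MD5 output is effectively uniform, evenly spaced tokens give each node roughly an equal share of the rows, which is the load-balancing property the paper attributes to the RandomPartitioner; a ByteOrderedPartitioner, by contrast, would place lexically adjacent keys on the same node.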
Recommended publications
  • Introduction to Hbase Schema Design
    Introduction to HBase Schema Design. Amandeep Khurana. Amandeep Khurana is a Solutions Architect at Cloudera and works on building solutions using the Hadoop stack. He is also a co-author of HBase in Action. Prior to Cloudera, Amandeep worked at Amazon Web Services, where he was part of the Elastic MapReduce team and built the initial versions of their hosted HBase product. [email protected] The number of applications that are being developed to work with large amounts of data has been growing rapidly in the recent past. To support this new breed of applications, as well as scaling up old applications, several new data management systems have been developed. Some call this the big data revolution. A lot of these new systems that are being developed are open source and community driven, deployed at several large companies. Apache HBase [2] is one such system. It is an open source distributed database, modeled around Google Bigtable [5], and is becoming an increasingly popular database choice for applications that need fast random access to large amounts of data. It is built atop Apache Hadoop [1] and is tightly integrated with it. HBase is very different from traditional relational databases like MySQL, PostgreSQL, Oracle, etc. in how it’s architected and the features that it provides to the applications using it. HBase trades off some of these features for scalability and a flexible schema. This also translates into HBase having a very different data model. Designing HBase tables is a different ballgame as compared to relational database systems. I will introduce you to the basics of HBase table design by explaining the data model and build on that by going into the various concepts at play in designing HBase tables through an example.
  • Apache Hbase, the Scaling Machine Jean-Daniel Cryans Software Engineer at Cloudera @Jdcryans
    Apache HBase, the Scaling Machine. Jean-Daniel Cryans, Software Engineer at Cloudera, @jdcryans. Tuesday, June 18, 13. Agenda: • Introduction to Apache HBase • HBase at StumbleUpon • Overview of other use cases. About le Moi: • At Cloudera since October 2012. • At StumbleUpon for 3 years before that. • Committer and PMC member for Apache HBase since 2008. • Living in San Francisco. • From Québec, Canada. What is Apache HBase? Apache HBase is an open source, distributed, scalable, consistent, low-latency, random-access, non-relational database built on Apache Hadoop. Inspiration: Google BigTable (2006). • Goal: Low latency, consistent, random read/write access to massive amounts of structured data. • It was the data store for Google’s crawler web table, gmail, analytics, earth, blogger, … HBase is in Production: • Inbox • Storage • Web • Search • Analytics • Monitoring. HBase is Open Source: • Apache 2.0 License • A community project with committers and contributors from diverse organizations: Facebook, Cloudera, Salesforce.com, Huawei, eBay, HortonWorks, Intel, Twitter … • Code license means anyone can modify and use the code. So why use HBase? Old School Scaling => • Find a scaling problem.
  • Final Report CS 5604: Information Storage and Retrieval
    Final Report CS 5604: Information Storage and Retrieval. Solr Team: Abhinav Kumar, Anand Bangad, Jeff Robertson, Mohit Garg, Shreyas Ramesh, Siyu Mi, Xinyue Wang, Yu Wang. January 16, 2018. Instructed by Professor Edward A. Fox, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061. Abstract: The Digital Library Research Laboratory (DLRL) has collected over 1.5 billion tweets and millions of webpages for the Integrated Digital Event Archiving and Library (IDEAL) and Global Event Trend Archive Research (GETAR) projects [6]. We are using a 21 node Cloudera Hadoop cluster to store and retrieve this information. One goal of this project is to expand the data collection to include more web archives and geospatial data beyond what previously had been collected. Another important part in this project is optimizing the current system to analyze and allow access to the new data. To accomplish these goals, this project is separated into 6 parts with corresponding teams: Classification (CLA), Collection Management Tweets (CMT), Collection Management Webpages (CMW), Clustering and Topic Analysis (CTA), Front-end (FE), and SOLR. This report describes the work completed by the SOLR team which improves the current searching and storage system. We include the general architecture and an overview of the current system. We present the part that Solr plays within the whole system with more detail. We talk about our goals, procedures, and conclusions on the improvements we made to the current Solr system. This report also describes how we coordinate with other teams to accomplish the project at a higher level. Additionally, we provide manuals for future readers who might need to replicate our experiments.
  • Apache Hbase: the Hadoop Database Yuanru Qian, Andrew Sharp, Jiuling Wang
    Apache HBase: the Hadoop Database. Yuanru Qian, Andrew Sharp, Jiuling Wang. Agenda: • Motivation • Data Model • The HBase Distributed System • Data Operations • Access APIs • Architecture. Motivation: • Hadoop is a framework that supports operations on a large amount of data. • Hadoop includes the Hadoop Distributed File System (HDFS). • HDFS does a good job of storing large amounts of data, but lacks quick random read/write capability. • That’s where Apache HBase comes in. Introduction: • HBase is an open source, sparse, consistent, distributed, sorted map modeled after Google’s BigTable. • Began as a project by Powerset to process massive amounts of data for natural language processing. • Developed as part of Apache’s Hadoop project and runs on top of the Hadoop Distributed File System. Big Picture: MapReduce, Thrift/REST Gateway, Your Java Application, Hive/Pig, Java Client, ZooKeeper, HBase, HDFS. An Example Operation. The Job: a MapReduce job needs to operate on a series of webpages matching *.cnn.com. The Table: row key, column 1, column 2; “com.cnn.world”, 13, 4.5; “com.cnn.tech”, 46, 7.8; “com.cnn.money”, 44, 1.2. The HBase Data Model. Data Model, Groups of Tables: an RDBMS database corresponds to an Apache HBase namespace, and an RDBMS table to an HBase table. Data Model, Single Table: an RDBMS table has rows (row1, row2, row3) and flat columns (col1..col4); in Apache HBase, columns are instead grouped into Column Families (fam1:col1, fam1:col2, fam2:col1, fam2:col2). Sparse example: row key “com.cnn.www” with families fam1:contents and fam1:anchor; contents:html = “<html>...”
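The sorted, sparse map structure sketched in the excerpt above can be mimicked with a nested dictionary. This is a toy model for intuition only, not the HBase client API; the `make_table` helper is hypothetical, and the rows mirror the slide's *.cnn.com example.

```python
from collections import defaultdict

def make_table():
    """Toy model of an HBase table: a map of
    row key -> {column family: {qualifier: cell value}}."""
    return defaultdict(lambda: defaultdict(dict))

# Rows from the slide's example table, kept in one column family;
# a second family appears only where it actually has data.
webtable = make_table()
webtable["com.cnn.world"]["fam1"]["col1"] = 13
webtable["com.cnn.tech"]["fam1"]["col1"] = 46
webtable["com.cnn.money"]["fam1"]["col1"] = 44
webtable["com.cnn.money"]["fam2"]["col2"] = 1.2  # sparse: other rows omit fam2

# HBase keeps rows in lexicographic row-key order, so reversed
# domains like "com.cnn.*" stay adjacent in a scan.
scan_order = sorted(webtable)
```

In a real cluster that row-key ordering is what lets a MapReduce job over *.cnn.com pages read one contiguous key range instead of scattering reads across the table.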
  • HDP 3.1.4 Release Notes Date of Publish: 2019-08-26
    HDP 3.1.4 Release Notes. Date of Publish: 2019-08-26. https://docs.hortonworks.com Contents: HDP 3.1.4 Release Notes; Component Versions; Descriptions of New Features; Deprecation Notices (Terminology; Removed Components and Product Capabilities); Testing Unsupported Features (Descriptions of the Latest Technical Preview Features); Upgrading to HDP 3.1.4; Behavioral Changes; Apache Patch Information; Accumulo…
  • Apache Hadoop Goes Realtime at Facebook
    Apache Hadoop Goes Realtime at Facebook. Dhruba Borthakur, Joydeep Sen Sarma, Jonathan Gray, Kannan Muthukkaruppan, Nicolas Spiegelberg, Hairong Kuang, Karthik Ranganathan, Dmytro Molkov, Aravind Menon, Samuel Rash, Rodrigo Schmidt, Amitanand Aiyer. Facebook. {dhruba,jssarma,jgray,kannan,nicolas,hairong,kranganathan,dms,aravind.menon,rash,rodrigo,amitanand.s}@fb.com ABSTRACT: Facebook recently deployed Facebook Messages, its first ever user-facing application built on the Apache Hadoop platform. Apache HBase is a database-like layer built on Hadoop designed to support billions of messages per day. This paper describes the reasons why Facebook chose Hadoop and HBase over other systems such as Apache Cassandra and Voldemort and discusses the application’s requirements for consistency, availability, partition tolerance, data model and scalability. We explore the enhancements made to Hadoop to make it a more effective realtime system, the tradeoffs we made while configuring the system, and how this solution has significant advantages over the sharded MySQL database scheme used in other applications at Facebook and many other web-scale companies. 1. INTRODUCTION: Apache Hadoop [1] is a top-level Apache project that includes open source implementations of a distributed file system [2] and MapReduce that were inspired by Google’s GFS [5] and MapReduce [6] projects. The Hadoop ecosystem also includes projects like Apache HBase [4], which is inspired by Google’s BigTable, Apache Hive [3], a data warehouse built on top of Hadoop, and Apache ZooKeeper [8], a coordination service for distributed systems. At Facebook, Hadoop has traditionally been used in conjunction with Hive for storage and analysis of large data sets. Most of this analysis occurs in offline batch jobs and the emphasis has been on
  • Hbase: the Definitive Guide
    HBase: The Definitive Guide. Lars George. Beijing • Cambridge • Farnham • Köln • Sebastopol • Tokyo. Copyright © 2011 Lars George. All rights reserved. Printed in the United States of America. Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472. O’Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://my.safaribooksonline.com). For more information, contact our corporate/institutional sales department: (800) 998-9938 or [email protected]. Editors: Mike Loukides and Julie Steele. Indexer: Angela Howard. Production Editor: Jasmine Perez. Cover Designer: Karen Montgomery. Copyeditor: Audrey Doyle. Interior Designer: David Futato. Proofreader: Jasmine Perez. Illustrator: Robert Romano. Printing History: September 2011: First Edition. Nutshell Handbook, the Nutshell Handbook logo, and the O’Reilly logo are registered trademarks of O’Reilly Media, Inc. HBase: The Definitive Guide, the image of a Clydesdale horse, and related trade dress are trademarks of O’Reilly Media, Inc. Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and O’Reilly Media, Inc., was aware of a trademark claim, the designations have been printed in caps or initial caps. While every precaution has been taken in the preparation of this book, the publisher and author assume no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein. ISBN: 978-1-449-39610-7 [LSI] 1314323116 For my wife Katja, my daughter Laura, and son Leon.
  • TR-4744: Secure Hadoop Using Apache Ranger with Netapp In
    Technical Report: Secure Hadoop using Apache Ranger with NetApp In-Place Analytics Module. Deployment Guide. Karthikeyan Nagalingam, NetApp. February 2019 | TR-4744. Abstract: This document introduces the NetApp® In-Place Analytics Module for Apache Hadoop and Spark with Ranger. The topics covered in this report include the Ranger configuration, underlying architecture, integration with Hadoop, and benefits of Ranger with NetApp In-Place Analytics Module using Hadoop with NetApp ONTAP® data management software. Table of Contents: 1 Introduction (1.1 Overview; 1.2 Deployment Options; 1.3 NetApp In-Place Analytics Module 3.0.1 Features); 2 Ranger (2.1 Components Validated with Ranger); 3 NetApp In-Place Analytics Module Design with Ranger…
  • Beyond Macrobenchmarks: Microbenchmark-Based Graph Database Evaluation
    Beyond Macrobenchmarks: Microbenchmark-based Graph Database Evaluation. Matteo Lissandrini (Aalborg University, [email protected]), Martin Brugnara (University of Trento, [email protected]), Yannis Velegrakis (University of Trento, [email protected]). ABSTRACT: Despite the increasing interest in graph databases, their requirements and specifications are not yet fully understood by everyone, leading to a great deal of variation in the supported functionalities and the achieved performances. In this work, we provide a comprehensive study of the existing graph database systems. We introduce a novel micro-benchmarking framework that provides insights on their performance that go beyond what macro-benchmarks can offer. The framework includes the largest set of queries and operators so far considered. The graph database systems are evaluated on synthetic and real data, from different domains, and at scales much larger than any previous work. The framework is materialized as an open-source suite and There are two categories of graph management systems that address two complementary yet distinct sets of functionalities. The first is that of graph processing systems [27, 42, 44], which analyze graphs to discover characteristic properties, e.g., average connectivity degree, density, and modularity. They also perform batch analytics at large scale, implementing computationally expensive graph algorithms, such as PageRank [54], SVD [23], strongly connected components identification [63], and core identification [12, 20]. Those are systems like GraphLab, Giraph, Graph Engine, and GraphX [67]. The second category is that of graph databases (GDB for short) [5]. Their focus is on storage and querying tasks where the priority is on high-throughput and transactional operations.
  • Pentaho EMR46 SHIM 7.1.0.0 Open Source Software Packages
    Pentaho EMR46 SHIM 7.1.0.0 Open Source Software Packages. Contact Information: Project Manager, Pentaho EMR46 SHIM, Hitachi Vantara Corporation, 2535 Augustine Drive, Santa Clara, California 95054. Components (Name / Version / License): An open source Java toolkit for Amazon S3, 0.9.0, Apache License Version 2.0; AOP Alliance (Java/J2EE AOP standard), 1.0, Public Domain; Apache Commons BeanUtils, 1.9.3, Apache License Version 2.0; Apache Commons CLI, 1.2, Apache License Version 2.0; Apache Commons Daemon, 1.0.13, Apache License Version 2.0; Apache Commons Exec, 1.2, Apache License Version 2.0; Apache Commons Lang, 2.6, Apache License Version 2.0; Apache Directory API ASN.1 API, 1.0.0-M20, Apache License Version 2.0; Apache Directory LDAP API Utilities, 1.0.0-M20, Apache License Version 2.0; Apache Hadoop Amazon Web Services support, 2.7.2, Apache License Version 2.0; Apache Hadoop Annotations, 2.7.2, Apache License Version 2.0; Apache Hadoop Auth, 2.7.2, Apache License Version 2.0; Apache Hadoop Common (org.apache.hadoop:hadoop-common), 2.7.2, Apache License Version 2.0; Apache Hadoop HDFS, 2.7.2, Apache License Version 2.0; Apache HBase - Client, 1.2.0, Apache License Version 2.0; Apache HBase - Common, 1.2.0, Apache License Version 2.0; Apache HBase - Hadoop Compatibility, 1.2.0, Apache License Version 2.0; Apache HBase - Protocol, 1.2.0, Apache License Version 2.0; Apache HBase - Server, 1.2.0, Apache License Version 2.0; Apache HBase - Thrift (org.apache.hbase:hbase-thrift), 1.2.0, Apache License Version 2.0; Apache HttpComponents Core
  • A Comprehensive Study of Hbase Storage Architecture—A Systematic Literature Review
    Symmetry (Review): A Comprehensive Study of HBase Storage Architecture—A Systematic Literature Review. Muhammad Umair Hassan 1,*,†, Irfan Yaqoob 2,†, Sidra Zulfiqar 3,† and Ibrahim A. Hameed 1,*. 1 Department of ICT and Natural Sciences, Norwegian University of Science and Technology (NTNU), Larsgårdsvegen 2, 6009 Ålesund, Norway; 2 School of Information Science and Engineering, University of Jinan, Jinan 250022, China; [email protected]; 3 Department of Computer Science, University of Okara, Okara 56300, Pakistan; [email protected]. * Correspondence: [email protected] (M.U.H.); [email protected] (I.A.H.). † Authors contributed equally. Abstract: According to research, generally, 2.5 quintillion bytes of data are produced every day. About 90% of the world’s data has been produced in the last two years alone. The amount of data is increasing immensely. There is a fight to use and store this tremendous information effectively. HBase is the top option for storing huge data. HBase has been selected for several purposes, including its scalability, efficiency, strong consistency support, and the capacity to support a broad range of data models. This paper seeks to define, taxonomically classify, and systematically compare existing research on a broad range of storage technologies, methods, and data models based on HBase storage architecture’s symmetry. We perform a systematic literature review on a number of published works proposed for HBase storage architecture. This research synthesis results in a knowledge base that helps understand which big data storage method is an effective one. Keywords: HBase; big data; storage architecture; internet of things.
  • Hortonworks Data Platform Data Access (August 29, 2016)
    Hortonworks Data Platform: Data Access (August 29, 2016). docs.cloudera.com Copyright © 2012-2016 Hortonworks, Inc. Some rights reserved. The Hortonworks Data Platform, powered by Apache Hadoop, is a massively scalable and 100% open source platform for storing, processing and analyzing large volumes of data. It is designed to deal with data from many sources and formats in a very quick, easy and cost-effective manner. The Hortonworks Data Platform consists of the essential set of Apache Hadoop projects including YARN, Hadoop Distributed File System (HDFS), HCatalog, Pig, Hive, HBase, ZooKeeper and Ambari. Hortonworks is the major contributor of code and patches to many of these projects. These projects have been integrated and tested as part of the Hortonworks Data Platform release process, and installation and configuration tools have also been included. Unlike other providers of platforms built using Apache Hadoop, Hortonworks contributes 100% of our code back to the Apache Software Foundation. The Hortonworks Data Platform is Apache-licensed and completely open source. We sell only expert technical support, training and partner-enablement services. All of our technology is, and will remain, free and open source. Please visit the Hortonworks Data Platform page for more information on Hortonworks technology. For more information on Hortonworks services, please visit either the Support or Training page. Feel free to Contact Us directly to discuss your specific needs. Except where otherwise noted, this document is licensed under the Creative Commons Attribution ShareAlike 4.0 License. http://creativecommons.org/licenses/by-sa/4.0/legalcode