Understanding Query Performance in Accumulo


Scott M. Sawyer, B. David O'Gwynn, An Tran, and Tamara Yu
MIT Lincoln Laboratory
Emails: scott.sawyer, dogwynn, atran, [email protected]

Abstract—Open-source, BigTable-like distributed databases provide a scalable storage solution for data-intensive applications. The simple key–value storage schema provides fast record ingest and retrieval, nearly independent of the quantity of data stored. However, real applications must support non-trivial queries that require careful key design and value indexing. We study an Apache Accumulo–based big data system designed for a network situational awareness application. The application's storage schema and data retrieval requirements are analyzed. We then characterize the corresponding Accumulo performance bottlenecks. Queries are shown to be communication-bound and server-bound in different situations. Inefficiencies in the open-source communication stack and filesystem limit network and I/O performance, respectively. Additionally, in some situations, parallel clients can contend for server-side resources. Maximizing data retrieval rates for practical queries requires effective key design, indexing, and client parallelization.

I. INTRODUCTION

An increasing number of applications now operate on datasets too large to store on a single database server. These Big Data applications face challenges related to the volume, velocity, and variety of data. A new class of distributed databases offers scalable storage and retrieval for many of these applications. One such solution, Apache Accumulo [1], is an open-source database based on Google's BigTable design [2]. BigTable is a tabular key–value store, in which each key is a pair of strings corresponding to a row and column identifier, such that records have the format:

(row, column) → value

Records are lexicographically sorted by row key, and rows are distributed across multiple database servers. The sorted records ensure fast, efficient reads of a row, or a small range of rows, regardless of the total system size. Compared to HBase [3], another open-source BigTable implementation, Accumulo provides cell-level access control and features an architecture that leads to higher performance for parallel clients [4].
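To make this storage model concrete, the following minimal Java sketch writes one entry and reads back a contiguous row range with the standard Accumulo client API. The instance name, ZooKeeper host, credentials, and the "events" table are illustrative assumptions, not details from the paper.

```java
import java.util.Map;

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

public class EntryRoundTrip {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection parameters.
        Connector conn = new ZooKeeperInstance("instance", "zk1:2181")
                .getConnector("user", new PasswordToken("secret"));

        // Write one entry: (row, column) -> value.
        BatchWriter writer = conn.createBatchWriter("events", new BatchWriterConfig());
        Mutation m = new Mutation("13-01-01_a8c8");          // row key: datestamp + hash
        m.put("net", "src", new Value("Alice".getBytes()));  // column family:qualifier -> value
        writer.addMutation(m);
        writer.close();

        // Rows are kept sorted, so a contiguous range (e.g., one day) reads efficiently.
        Scanner scan = conn.createScanner("events", Authorizations.EMPTY);
        scan.setRange(new Range("13-01-01", "13-01-02"));
        for (Map.Entry<Key, Value> e : scan) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
```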
The BigTable design eschews many features of traditional database systems, making trade-offs between performance, scalability, and data consistency. A traditional Relational Database Management System (RDBMS) based on Structured Query Language (SQL) provides atomic transactions and data consistency, an essential requirement for many applications. On the other hand, BigTable uses a "NoSQL" model that relaxes these transaction requirements, guaranteeing only eventual consistency while tolerating stale or approximate data in the interim. Also unlike RDBMS tables, BigTables do not require a pre-defined schema, allowing columns to be added to any row at will for greater flexibility. However, new NoSQL databases lack the mature code base and rich feature set of established RDBMS solutions. Where needed, query planning and optimization must be implemented by the application developer.

As a result, optimizing data retrieval in a NoSQL system can be challenging. Developers must design a row key scheme, decide which optimizations to employ, and write queries that efficiently scan tables. Because distributed systems are complex, bottlenecks can be difficult to predict and identify. Many open-source NoSQL databases are implemented in Java and use heavy communication middleware, causing inefficiencies that are difficult to characterize. Thus, designing a Big Data system that meets challenging data ingest, query, and scalability requirements represents a very large tradespace.

In this paper, we study a practical Big Data application using a multi-node Accumulo instance. The Lincoln Laboratory Cyber Situational Awareness (LLCySA) system monitors traffic and events on an enterprise network and uses Big Data analytics to detect cybersecurity threats. We present the storage schema, analyze the retrieval requirements, and experimentally determine the bottlenecks for different query types. Although we focus on this particular system, the results are applicable to a wide range of applications that use BigTable-like databases to manage semi-structured event data. Advanced applications must perform ad hoc, multi-step queries spanning multiple database servers and clients. Efficient queries must balance server, client, and network resources.

Related work has investigated key design schemes and performance issues in BigTable implementations. Although distributed key–value stores have been widely used by web companies for several years, studying these systems is a relatively new area of research. Kepner et al. propose the Dynamic Distributed Dimensional Data Model (D4M), a general-purpose schema suitable for use in Accumulo or other BigTable implementations [5]. Wasi-ur-Rahman et al. study the performance of HBase and identify the communication stack as the principal bottleneck [6]. The primary contribution of our work is the analysis of Accumulo performance bottlenecks in the context of an advanced Big Data application.

This paper is organized as follows. Section II discusses the data retrieval and server-side processing capabilities of Accumulo. Section III contains a case study of a challenging application using Accumulo. Then, Section IV describes our performance evaluation methodology and presents the results. Finally, Section V concludes with specific recommendations for optimizing data retrieval performance in Accumulo.

[Footnote: This work is sponsored by the Assistant Secretary of Defense for Research and Engineering under Air Force contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by the United States Government.]

[Fig. 1: In a parallel client query, M tablet servers send entries to N Accumulo clients, where results are further processed before being aggregated at a single query client.]

[Fig. 2: In this example, network events (source, destination, bytes, port) are stored in a primary table using a datestamp and hash as a unique row identifier (e.g., 13-01-01_a8c8). The index table maps each attribute value (e.g., Alice|Src.) to the primary-table rows containing it, enabling efficient look-up by attribute value without the need to scan the entire primary table.]

II. DATA RETRIEVAL IN ACCUMULO

A single key–value pair in Accumulo is called an entry. Per the BigTable design, Accumulo persists entries to a distributed file system, generally running on commodity spinning-disk hard drives. A contiguous range of sorted rows is called a tablet, and each server in the database instance is a tablet server. Accumulo splits tablets into files and stores them on the Hadoop Distributed File System (HDFS) [7], which provides redundancy and high-speed I/O. Tablet servers can perform simple operations on entries, enabling easy parallelization of tasks such as regular expression filtering. Tablet servers will also utilize available memory for caching entries to accelerate ingest and query.
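As an illustration of how tablets partition the row space, the sketch below pre-splits a hypothetical "events" table on datestamp boundaries using Accumulo's table operations API, so that each day's rows can be hosted as a separate tablet. The table name and split points are assumptions for illustration, not details from the paper.

```java
import java.util.TreeSet;

import org.apache.accumulo.core.client.Connector;
import org.apache.hadoop.io.Text;

public class PreSplit {
    // Add split points so rows for different days fall into different tablets,
    // letting Accumulo distribute them across tablet servers.
    static void addDailySplits(Connector conn) throws Exception {
        TreeSet<Text> splits = new TreeSet<>();
        splits.add(new Text("13-01-02"));
        splits.add(new Text("13-01-03"));
        conn.tableOperations().addSplits("events", splits);
    }
}
```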
At query time, Accumulo tablet servers process entries using an iterator framework. Iterators can be cascaded together to achieve simple aggregation and filtering. The server-side iterator framework enables entries to be processed in parallel at the tablet servers and filtered before sending entries across the network to clients [8]. While these iterators make it easy to implement a parallel processing chain for entries, the iterator programming model has limitations. In particular, iterators must operate on a single row or entry, be stateless between iterations, and not communicate with other iterators. Operations that cannot be implemented in the iterator framework must be performed at the client.

An Accumulo client initiates data retrieval by requesting scans on one or more row ranges. Entries are processed by any configured iterators, and the resulting entries are sent from server to client using Apache Thrift object serialization middleware [9], [10]. Clients deserialize one entry at a time using client-side iterators and process the entries according to …

An indexed look-up requires a two-step query. Suppose each row in the primary table represents a persisted object, and each column and value pair in that row represents an object attribute. In the index table, each row key stores a unique attribute value from the primary table, and each column contains a key pointing to a row in the primary table. Figure 2 illustrates a notional example of this indexing technique. Given these tables, an indexed value look-up requires a first scan of the index table to retrieve a set of row keys, and a second scan to retrieve those rows from the primary table. In an RDBMS, this type of query optimization is handled automatically.
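As a minimal sketch of this two-step look-up, the Java below first scans a hypothetical "events_index" table for one attribute value and then fetches the matching rows from the primary table with a BatchScanner, which queries the responsible tablet servers in parallel. It assumes the Figure 2 layout with the primary-table row key stored in the column qualifier; the table names and thread count are illustrative assumptions, not details from the paper.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.accumulo.core.client.BatchScanner;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

public class IndexedLookup {
    static void lookup(Connector conn) throws Exception {
        // Step 1: scan the index table for one attribute value; each matching
        // entry's column qualifier is assumed to hold a primary-table row key.
        List<Range> rows = new ArrayList<>();
        Scanner index = conn.createScanner("events_index", Authorizations.EMPTY);
        index.setRange(Range.exact("Alice|Src."));
        for (Map.Entry<Key, Value> e : index) {
            rows.add(Range.exact(e.getKey().getColumnQualifier().toString()));
        }

        // Step 2: fetch those rows from the primary table in parallel.
        BatchScanner primary = conn.createBatchScanner("events", Authorizations.EMPTY, 4);
        primary.setRanges(rows);
        for (Map.Entry<Key, Value> e : primary) {
            System.out.println(e.getKey().getRow() + " -> " + e.getValue());
        }
        primary.close();
    }
}
```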
III. CASE STUDY: NETWORK SITUATIONAL AWARENESS

Network Situational Awareness (SA) is an emerging application area that addresses aspects of the cybersecurity problem with data-intensive analytics. The goal of network SA is to provide detailed, current information about the state of a network and how it got that way. The primary information sources are the logs of various network services. Enterprise networks are large and complex, as an organization may have thousands of users and devices, dozens of servers, multiple physical locations, and …

Recommended publications
  • Assessment of Multiple Ingest Strategies for Accumulo Key-Value Store
    Assessment of Multiple Ingest Strategies for Accumulo Key-Value Store, by Hai Pham. A thesis submitted to the Graduate Faculty of Auburn University in partial fulfillment of the requirements for the Degree of Master of Science. Auburn, Alabama, May 7, 2016. Keywords: Accumulo, noSQL, ingest. Copyright 2016 by Hai Pham. Approved by: Weikuan Yu (Co-Chair, Associate Professor of Computer Science, Florida State University), Saad Biaz (Co-Chair, Professor of Computer Science and Software Engineering, Auburn University), and Sanjeev Baskiyar (Associate Professor of Computer Science and Software Engineering, Auburn University).

    Abstract: In recent years, the emergence of heterogeneous data, especially of the unstructured type, has been extremely rapid. Data growth happens concurrently in three dimensions: volume (size), velocity (growth rate), and variety (many types). This emerging trend has opened a broad new area of research, widely known as Big Data, which focuses on how to acquire, organize, and manage huge amounts of data effectively and efficiently. When coping with such Big Data, the traditional RDBMS approach has proven inefficient, which motivated the creation of more efficient noSQL systems. This thesis gives an overview of noSQL systems and then delves into a specific instance of them, the Accumulo key–value store. Furthermore, since Accumulo is not designed with an ingest interface for users, this thesis focuses on investigating various methods for ingesting data, improving ingest performance, and dealing with the numerous parameters affecting that process.

    Acknowledgments: First and foremost, I would like to express my profound gratitude to Professor Yu, who with great kindness and patience has guided me through not only every aspect of computer science research but also many matters in my personal life.
  • HDP 3.1.4 Release Notes Date of Publish: 2019-08-26
    HDP 3.1.4 Release Notes. Date of Publish: 2019-08-26. https://docs.hortonworks.com

    Contents: HDP 3.1.4 Release Notes; Component Versions; Descriptions of New Features; Deprecation Notices (Terminology; Removed Components and Product Capabilities); Testing Unsupported Features; Descriptions of the Latest Technical Preview Features; Upgrading to HDP 3.1.4; Behavioral Changes; Apache Patch Information (Accumulo; …)
  • Licensing Information User Manual Release 4 (4.9) E87604-01
    Oracle® Big Data Appliance Licensing Information User Manual, Release 4 (4.9), E87604-01, May 2017. Describes licensing and support for Oracle Big Data Appliance. Copyright © 2011, 2017, Oracle and/or its affiliates. All rights reserved. Primary Author: Frederick Kush. This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable: U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs.
  • Creating an Environment for the Analysis of Geographic Data Using Big Data Techniques, GeoMesa, and Apache Spark
    Universidad Politécnica de Madrid, Escuela Técnica Superior de Ingenieros de Telecomunicación, Máster Universitario en Ingeniería de Redes y Servicios Telemáticos. Master's thesis (Trabajo Fin de Máster): Creating an Environment for the Analysis of Geographic Data Using Big Data Techniques, GeoMesa, and Apache Spark. Author: Alejandro Rodríguez Calzado. Director: Borja Bordel Sánchez. Ponente: Diego Martín de Andrés. Departamento de Ingeniería de Sistemas Telemáticos, 2017.

    Abstract: New technologies such as IoT and smartphones have driven up the amounts of data we generate and store. This, together with the development of geolocation techniques and technologies, has led to a significant increase in data associated with geographic positions. The utility of geographic data is enormous and very diverse, but like all data, it must be analyzed to be turned into useful information. For a long time this analysis was carried out with traditional Geographic Information Systems (GIS), which, given the large growth in the amount of geographic data, have had to evolve to adapt to Big Data techniques and technologies. The goal of this Master's thesis is the design and implementation of an environment for data processing in which flows of geographic information can be analyzed. To this end, Big Data techniques will be used, employing the technological solutions Apache Spark and GeoMesa.
  • Hortonworks Data Platform Date of Publish: 2018-09-21
    Hortonworks Data Platform Release Notes. Date of Publish: 2018-09-21. http://docs.hortonworks.com

    Contents: HDP 3.0.1 Release Notes; Component Versions; New Features; Deprecation Notices (Terminology; Removed Components and Product Capabilities); Unsupported Features; Technical Preview Features; Upgrading to HDP 3.0.1; Before you begin; …
  • Big Data Security Analysis and Secure Hadoop Server
    Big Data Security Analysis and Secure Hadoop Server. Kripa Shanker. Master's thesis, 62 pages, appendices 4 pages. January 2018. Master's Degree in Information Technology, Tampereen Ammattikorkeakoulu (Tampere University of Applied Sciences).

    ABSTRACT: Hadoop is such an influential technology that it lets us do incredible things, but securing the data and its environment is a major challenge, since many bad actors (crackers, hackers) seek to harm society using this data. Hadoop is now used in retail, banking, and healthcare applications, so it has attracted the attention of thieves as well. When storing sensitive data at huge scale, security plays an important role in keeping it safe. Security was not much of a consideration when Hadoop was initially designed, yet it is an important topic in a Hadoop cluster. Plenty of examples of data breaches are available in the open media; a recent one was ransomware, which gains access at the server level and is especially dangerous for organizations. Now is the time to focus on security at any cost and to invest the time needed to secure the data and the platform. Hadoop is designed to run code on a distributed cluster of machines, so without proper authentication anyone could submit code and it would be executed. Different projects have started to improve the security of Hadoop. In this thesis, the security of the system in Hadoop version 1, Hadoop version 2, and Hadoop version 3 is evaluated and different security enhancements are proposed, considering the security improvements made by the mentioned projects (Apache Knox Gateway, Apache Ranger, and Apache Sentry) in terms of encryption, authentication, and authorization.
  • HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack
    HPC-ABDS: High Performance Computing Enhanced Apache Big Data Stack. Geoffrey C. Fox, Judy Qiu, Supun Kamburugamuve (School of Informatics and Computing, Indiana University, Bloomington, IN 47408, USA; gcf, xqiu, [email protected]) and Shantenu Jha, Andre Luckow (RADICAL, Rutgers University, Piscataway, NJ 08854, USA; [email protected], [email protected]).

    Abstract—We review the High Performance Computing Enhanced Apache Big Data Stack (HPC-ABDS) and summarize the capabilities in 21 identified architecture layers. These cover Message and Data Protocols, Distributed Coordination, Security & Privacy, Monitoring, Infrastructure Management, DevOps, Interoperability, File Systems, Cluster & Resource Management, Data Transport, File Management, NoSQL, SQL (NewSQL), Extraction Tools, Object-Relational Mapping, In-Memory Caching and Databases, Inter-Process Communication, Batch Programming Model and Runtime, Stream Processing, High-Level Programming, Application Hosting and PaaS, Libraries and Applications, and Workflow and Orchestration. We summarize the status of these layers, focusing on issues of importance for data analytics. We highlight areas where HPC and ABDS have good opportunities for integration. […] systems, as they illustrate key capabilities and often motivate open source equivalents. The software is broken up into layers so that one can discuss software systems in smaller groups. The layers where there is especial opportunity to integrate HPC are colored green in the figure. We note that data systems that we construct from this software can run interoperably on virtualized or non-virtualized environments aimed at key scientific data analysis problems. Most of ABDS emphasizes scalability but not performance, and one of our goals is to produce high performance environments. Here there is a clear need for better node performance and support of accelerators like …
  • Code Smell Prediction Employing Machine Learning Meets Emerging Java Language Constructs
    Appendix to the paper "Code smell prediction employing machine learning meets emerging Java language constructs". Hanna Grodzicka, Michał Kawa, Zofia Łakomiak, Arkadiusz Ziobrowski, Lech Madeyski (B).

    The Appendix includes two tables containing the dataset used in the paper "Code smell prediction employing machine learning meets emerging Java language constructs". The first table contains information about 792 projects selected for the R package reproducer [Madeyski and Kitchenham (2019)]. These projects were the base dataset for creating the dataset used in the study (Table I). The second table contains information about 281 projects filtered by Java version from the Maven build tool (Table II), which were directly used in the paper.

    TABLE I: Base projects used to create the new dataset

    # | Organisation | Project name | GitHub link | Commit hash | Build tool | Java version
    1 | adobe | aem-core-wcm-components | www.github.com/adobe/aem-core-wcm-components | 1d1f1d70844c9e07cd694f028e87f85d926aba94 | other or lack of | unknown
    2 | adobe | S3Mock | www.github.com/adobe/S3Mock | 5aa299c2b6d0f0fd00f8d03fda560502270afb82 | MAVEN | 8
    3 | alexa | alexa-skills-kit-sdk-for-java | www.github.com/alexa/alexa-skills-kit-sdk-for-java | bf1e9ccc50d1f3f8408f887f70197ee288fd4bd9 | MAVEN | 8
    4 | alibaba | ARouter | www.github.com/alibaba/ARouter | 93b328569bbdbf75e4aa87f0ecf48c69600591b2 | GRADLE | unknown
    5 | alibaba | atlas | www.github.com/alibaba/atlas | e8c7b3f1ff14b2a1df64321c6992b796cae7d732 | GRADLE | unknown
    6 | alibaba | canal | www.github.com/alibaba/canal | 08167c95c767fd3c9879584c0230820a8476a7a7 | MAVEN | 7
    7 | alibaba | cobar | www.github.com/alibaba/cobar | …
  • Identifying and Evaluating Software Architecture Erosion
    Doctoral School (Scuola di Dottorato), Università degli Studi di Milano-Bicocca, Department of Informatics, Systems and Communication. PhD program: Computer Science, Cycle XXX. IDENTIFYING AND EVALUATING SOFTWARE ARCHITECTURE EROSION. Candidate: Riccardo Roveda (registration number 723299). Tutor: Prof. Alberto Ottavio Leporati. Supervisor: Prof. Francesca Arcelli Fontana. Coordinator: Prof. Stefania Bandini. Academic Year 2016/2017.

    Contents: 1 Introduction (1.1 Main contributions of the thesis; 1.2 Research Questions; 1.3 Publications: published papers, submitted papers, to-be-submitted papers, published papers not strictly related to the thesis). 2 Related work (2.1 Architectural smells definitions; 2.2 Architectural smells detection; 2.3 Code smells and architectural smells correlations; 2.4 Studies on software quality prediction and evolution; 2.5 Technical Debt Indexes: CAST, inFusion, Sonargraph, SonarQube, Structure101; 2.6 Architectural smell refactoring). 3 Experience reports on the detection of architectural issues through different tools (3.1 Tools for evaluating code and architectural issues; 3.2 Tool support for evaluating architectural debt: evaluating the results inspection of the tools; evaluating the extracted data by the tools; …)
  • A New Initiative for Tiling, Stitching and Processing Geospatial Big Data in Distributed Computing Environments
    ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume III-4, 2016. XXIII ISPRS Congress, 12–19 July 2016, Prague, Czech Republic. A NEW INITIATIVE FOR TILING, STITCHING AND PROCESSING GEOSPATIAL BIG DATA IN DISTRIBUTED COMPUTING ENVIRONMENTS. A. Olasz a*, B. Nguyen Thai b, D. Kristóf a. (a: Department of Geoinformation, Institute of Geodesy, Cartography and Remote Sensing (FÖMI), 5. Bosnyák sqr., Budapest, 1149, (olasz.angela, kristof.daniel)@fomi.hu; b: Department of Cartography and Geoinformatics, Eötvös Loránd University (ELTE), 1/A Pázmány Péter sétány, Budapest, 1117 Hungary, [email protected].) Commission ICWG IV/II.

    KEYWORDS: Distributed computing, GIS processing, raster data tiling, data assimilation, remote sensing data analysis, geospatial big data, spatial big data.

    ABSTRACT: Within recent years, several new approaches and solutions for Big Data processing have been developed. The geospatial world is still facing a lack of well-established distributed processing solutions tailored to the amount and heterogeneity of geodata, especially when fast data processing is a must. The goal of such systems is to improve processing time by distributing data transparently across processing (and/or storage) nodes. These methodologies are based on the concept of divide and conquer. Nevertheless, in the context of geospatial processing, most distributed computing frameworks have important limitations regarding both data distribution and data partitioning methods. Moreover, flexibility and expandability for handling various data types (often in binary formats) are also strongly required. This paper presents a concept for tiling, stitching and processing of big geospatial data. The system is based on the IQLib concept (https://github.com/posseidon/IQLib/) developed in the frame of the IQmulus EU FP7 research and development project (http://www.iqmulus.eu).
  • Bright Cluster Manager 8.0 Big Data Deployment Manual
    Bright Cluster Manager 8.0 Big Data Deployment Manual Revision: 97e18b7 Date: Wed Sep 29 2021 ©2017 Bright Computing, Inc. All Rights Reserved. This manual or parts thereof may not be reproduced in any form unless permitted by contract or by written permission of Bright Computing, Inc. Trademarks Linux is a registered trademark of Linus Torvalds. PathScale is a registered trademark of Cray, Inc. Red Hat and all Red Hat-based trademarks are trademarks or registered trademarks of Red Hat, Inc. SUSE is a registered trademark of Novell, Inc. PGI is a registered trademark of NVIDIA Corporation. FLEXlm is a registered trademark of Flexera Software, Inc. ScaleMP is a registered trademark of ScaleMP, Inc. All other trademarks are the property of their respective owners. Rights and Restrictions All statements, specifications, recommendations, and technical information contained herein are current or planned as of the date of publication of this document. They are reliable as of the time of this writing and are presented without warranty of any kind, expressed or implied. Bright Computing, Inc. shall not be liable for technical or editorial errors or omissions which may occur in this document. Bright Computing, Inc. shall not be liable for any damages resulting from the use of this document. Limitation of Liability and Damages Pertaining to Bright Computing, Inc. The Bright Cluster Manager product principally consists of free software that is licensed by the Linux authors free of charge. Bright Computing, Inc. shall have no liability nor will Bright Computing, Inc. provide any warranty for the Bright Cluster Manager to the extent that is permitted by law.
  • Koverse Release 2.7
    Koverse Release 2.7. Sep 12, 2018.

    Contents: 1 Quick Start Guide; 2 Usage Guide; 3 Administration Guide; 4 Developer Documentation; 5 Access Control.

    CHAPTER 1: Quick Start Guide. This Quick Start guide for Koverse is intended for users who want to get up and running quickly with Koverse. It steps through the installation of Koverse, ingesting data, and executing queries. Check out the Koverse User Guide for complete documentation of all features and installation instructions.

    1.1 Recommendations. The recommended operating system is RHEL 6.x or CentOS 6.x. The recommended Hadoop release is Cloudera Manager 5.5 with the Accumulo 1.7 Parcel and Service installed. See http://www.cloudera.com/documentation/other/accumulo/latest/PDF/Apache-Accumulo-Installation-Guide.pdf for more details. The recommended Koverse release can be found at http://repo.koverse.com/latest/csd

    1.1.1 Infrastructure and Software. Koverse and the open source software it leverages must be run on a system with no less than 10 GB of memory. For workloads beyond simple examples and testing, we recommend a properly provisioned Hadoop cluster with five or more nodes. Using the Cloudera QuickStart VM is not recommended. See http://www.koverse.com/question/using-the-cloudera-quick-start-vim-and-the-koverse-parcel for more information.

    1.2 Installation. 1.2.1 Amazon Web Services Installation. Using Koverse with AWS Marketplace: the paid AMI available in the AWS marketplace is an easy way to get a Koverse instance up and running if you do not need to install on existing infrastructure.