Evaluating Accumulo Performance for a Scalable Cyber Data Processing Pipeline

Scott M. Sawyer and B. David O'Gwynn
MIT Lincoln Laboratory
Emails: {scott.sawyer, [email protected]}

arXiv:1407.5661v1 [cs.DB] 21 Jul 2014

Abstract—Streaming, big data applications face challenges in creating scalable data flow pipelines, in which multiple data streams must be collected, stored, queried, and analyzed. These data sources are characterized by their volume (in terms of dataset size), velocity (in terms of data rates), and variety (in terms of fields and types). For many applications, distributed NoSQL databases are effective alternatives to traditional relational database management systems. This paper considers a cyber situational awareness system that uses the Apache Accumulo database to provide scalable data warehousing, real-time data ingest, and responsive querying for human users and analytic algorithms. We evaluate Accumulo's ingestion scalability as a function of number of client processes and servers. We also describe a flexible data model with effective techniques for query planning and query batching to deliver responsive results. Query performance is evaluated in terms of latency of the client receiving initial result sets. Accumulo performance is measured on a database of up to 8 nodes using real cyber data.

I. INTRODUCTION

An increasingly common class of big data systems collects, organizes, and analyzes data sets in order to gain insights or solve problems in a variety of domains. Interesting data sets are continuously growing, and they must be captured and stored in a scalable manner. A big data pipeline is a distributed system that manages the collection, parsing, enrichment, storage, and retrieval of one or more data sources. The fundamental challenge of a big data pipeline is keeping up with data rates while supporting the queries necessary for analytics.

To solve this problem, many big data systems use NoSQL distributed databases, such as Apache Accumulo [1], which is designed to scale to thousands of servers and petabytes of data, while retaining the ability to efficiently retrieve entries. Accumulo differs from a traditional relational database management system (RDBMS) due to its flexible schema, relaxed transaction constraints, range-partitioned distribution model, and lack of Structured Query Language (SQL) support [2]. This design trades consistency and query performance for capacity and ingest performance. Compared to other tabular datastores, Accumulo offers proven scalability and unique security features, making it attractive for certain applications [3].

This paper considers the Lincoln Laboratory Cyber Situational Awareness (LLCySA) system as a big data pipeline case study [4]. Situational awareness in the cyber domain addresses the problem of providing network operators and analysts a complete picture of activity on their networks in order to detect advanced threats and to protect data from loss or leak. The LLCySA system is a platform for operators and analytics developers that focuses on big data capability and cluster computing. This approach poses several challenges involving the sheer amount of data, the computational requirements of processing the data, and the interactivity of the system.

There is currently considerable related work in big data pipeline implementation and evaluation, as well as related work in the field of cyber situational awareness. Accumulo has been demonstrated to scale to large cluster sizes [5] and to achieve over 100 million inserts per second [6]. Query processing is a noted problem in NoSQL databases, with research focusing on query planning [7] and providing SQL support [8]. In the area of pipeline frameworks, open source projects for distributed stream processing have emerged, such as Apache Storm, which can serve the use case of continuous message processing and database inserts [9]. This paper builds on foundational work in the area of data models for schemaless databases [10]. Additionally, other network monitoring systems have investigated approaches using NoSQL databases [11].

In this paper, we describe our application of Accumulo for a cyber big data pipeline, including our data model, storage schema, parallel ingest process, and query processor design. We describe scalability of the ingest process and quantify the impact of our query processing approach on query responsiveness (i.e., the latency to receiving initial query results). Specifically, we present the following contributions:

  • Characterization of ingest performance in Accumulo as a function of client and server parallelism,
  • Adaptive query batching algorithm that improves responsiveness of large data retrievals, and
  • An approach to query planning in Accumulo with simple, effective optimization guidelines.

This paper is organized as follows. Sec. II describes the data model and ingestion process. Next, Sec. III describes our approach for query planning and batching. Sec. IV presents quantitative results for ingest and query experiments, and Sec. V discusses conclusions from these results.

This work is sponsored by the Assistant Secretary of Defense for Research and Engineering under Air Force contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by the United States Government.

II. DATA MODEL AND INGEST

Accumulo is a key–value store in which records comprise a key (actually a tuple of strings, including a row ID and column qualifier) and a corresponding value. Since keys contain row and column components, the schema is often thought of as a table.
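To build intuition for why this key–value layout supports fast retrieval, the following toy sketch (plain Python with the standard bisect module, not the Accumulo implementation) keeps entries in sorted key order so that point and range lookups cost O(log n), nearly independent of table size:

```python
import bisect

# Minimal sketch of a sorted key-value "tablet": entries are kept in
# key order, so locating any row costs O(log n) regardless of how many
# entries are stored. This only illustrates the idea; Accumulo persists
# sorted entries in indexed files split into tablets across servers.
class SortedStore:
    def __init__(self):
        self._keys = []    # row IDs, maintained in sorted order
        self._values = []

    def insert(self, key, value):
        i = bisect.bisect_left(self._keys, key)
        self._keys.insert(i, key)
        self._values.insert(i, value)

    def scan(self, start, end):
        """Return all (key, value) pairs with start <= key < end."""
        lo = bisect.bisect_left(self._keys, start)
        hi = bisect.bisect_left(self._keys, end)
        return list(zip(self._keys[lo:hi], self._values[lo:hi]))

store = SortedStore()
for k, v in [("b", 2), ("a", 1), ("c", 3), ("ab", 9)]:
    store.insert(k, v)
print(store.scan("a", "b"))  # -> [('a', 1), ('ab', 9)]
```

Because a range scan is just two binary searches plus a contiguous slice, retrieval stays cheap even as the store grows, which is the property the row ID designs below exploit.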
Keys are stored in lexicographically sorted order and distributed across multiple database servers. A master server maintains an index of the sorted key splits, which provides a way to identify the location of a given record. Each key–value pair is called an entry, and entries are persisted to a distributed filesystem in an indexed sequential access map (ISAM) file, employing a B-tree index, relative key encoding, and block-level compression. A set of contiguous entries comprises a tablet, which is stored on one of the database tablet servers. Because the entries are sorted by key at ingest time, retrieving a value given a key can be accomplished very quickly, nearly independently of the total number of records stored in the system.

LLCySA ingests data sources containing information about network events (e.g., authentication, web requests, etc.). The system stores more than 50 event types derived from sources such as web proxy, email, DHCP, intrusion detection systems, and NetFlow record metadata. The raw data is often text (e.g., JSON, XML or human-readable log formats) and must be parsed into a set of fields and values before being inserted into the database.

Designing an effective key scheme is vital to achieving efficient ingest and querying in Accumulo. Our storage schema is adapted from the Dynamic Distributed Dimensional Data Model (D4M) 2.0 schema [12]. The LLCySA schema was originally described in [13]. This section provides a brief summary and includes some extensions to the original design.

Entries in Accumulo are sorted by key, and data retrieval is performed through server-side scans of entries, coordinated by the Accumulo Java API. When a scan is initiated on a table, it is given a starting and ending row ID range (e.g., "a" to "b") and will only return those entries whose row IDs fall within that range. Column qualifiers can also be provided through the API to perform projections on returned data. Because entries are sorted first by row ID, queries based primarily on row IDs are extremely fast. Therefore, row IDs must be designed such that the most common queries can be performed by simply selecting a range of rows.

At ingest time, performance can be impacted by hot spots in data distribution. Thus we employ a database sharding technique that uniformly and randomly distributes entries across all tablet servers. Sharding is achieved by prepending the row ID with a random zero-padded shard number between 0 and N − 1, for a total of N shards. The event table row ID also contains a reversed timestamp, providing support for filtering entries by time range, and a short hash to prevent collisions. The index table encodes field names and values in the row ID to allow fast look-ups by column value. The row ID stored in the index table's column qualifier matches the corresponding row in the event table. Finally, the aggregate table maintains a count of particular value occurrences by time interval. These counts are helpful in query planning. Figure 1 shows the key scheme for each of the three table types.

Fig. 1. The LLCySA storage schema uses three tables per data source with carefully designed keys to support highly parallel ingest and efficient retrieval. (Key schemes shown in the figure: event table row ID = shard, reversed timestamp, hash, with the field name in the column qualifier and the field value as the value; index table row ID = shard, field name, field value, reversed timestamp, with the event table row ID in the column qualifier and "1" as the value; aggregate table row ID = shard, field name, field value, with the time interval in the column qualifier and a count as the value.)

Earlier stages in the big data pipeline collect and pre-process the various data sources, placing files to be ingested on a central filesystem. A master ingest process monitors new data and appends these files to a partitioned queue. Multiple ingest worker processes monitor a queue partition for work. Upon receiving a filename and metadata (i.e., data source and table name), the ingest worker reads lines from the file, parsing the data into entries to be stored in the event, index, and aggregate tables. Counts for the aggregate table are summed locally by the ingest worker to reduce the number of records that must be aggregated on the server side using Accumulo's combiner framework. The master and worker processes are implemented in Python and use the Java Native Interface (JNI) to the Accumulo API. Entries are sent to Accumulo using the BatchWriter API class, which automatically batches and sends bulk updates to the database instance for efficiency.
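The event-table row ID construction can be sketched as follows. The shard count, field widths, separator, hash length, and timestamp-reversal constant here are illustrative assumptions, not the exact LLCySA encoding:

```python
import hashlib
import random

N_SHARDS = 64          # total shards N (illustrative choice)
MAX_TS = 10**10        # timestamp ceiling used for reversal (assumption)

def event_row_id(timestamp, raw_event):
    """Build an event-table row ID: shard | reversed timestamp | hash.

    A random zero-padded shard prefix spreads writes uniformly across
    tablet servers (avoiding ingest hot spots); subtracting the
    timestamp from a fixed ceiling makes recent events sort earlier,
    keeping time-range scans contiguous; a short hash of the raw event
    prevents key collisions between events in the same instant.
    """
    shard = random.randrange(N_SHARDS)
    reversed_ts = MAX_TS - int(timestamp)
    short_hash = hashlib.sha1(raw_event.encode()).hexdigest()[:8]
    return "%02d_%010d_%s" % (shard, reversed_ts, short_hash)

rid = event_row_id(1405900000, "src=10.1.2.3 dst=10.4.5.6 action=GET")
# e.g. '17_8594100000_...'; a later timestamp yields a smaller reversed
# value, so newer events sort lexicographically earlier in the table
```

Note that zero-padding both the shard number and the reversed timestamp is what makes lexicographic order agree with numeric order, so a scan over one shard's time range maps to a single contiguous row range.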

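The ingest worker's local pre-summing of aggregate counts can be sketched as below. The hourly interval and the event/field representation are hypothetical stand-ins for whatever the worker parses; only the collapse-then-send pattern reflects the design described above:

```python
from collections import Counter

INTERVAL = 3600  # aggregate counts per hour (illustrative choice)

def local_aggregate(events):
    """Pre-sum (field, value, interval) counts inside an ingest worker.

    Collapsing duplicate occurrences client-side means far fewer
    aggregate-table entries are sent to the database, where the
    server-side combiner then sums the partial counts arriving
    from all workers.
    """
    counts = Counter()
    for ts, fields in events:
        interval = int(ts) // INTERVAL * INTERVAL
        for name, value in fields.items():
            counts[(name, value, interval)] += 1
    return counts

events = [
    (1000, {"src_ip": "10.0.0.1", "action": "GET"}),
    (1500, {"src_ip": "10.0.0.1", "action": "POST"}),
    (4000, {"src_ip": "10.0.0.1", "action": "GET"}),
]
agg = local_aggregate(events)
# The two first-hour events share src_ip 10.0.0.1, so the worker emits
# one partial count of 2 for that (field, value, interval) triple
# instead of two separate entries.
```

This mirrors the combiner pattern generally: partial sums are associative, so they can be computed anywhere in the pipeline without changing the final totals.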