Managing Variant Calling Files the Big Data Way Using HDFS and Apache Parquet

Aikaterini Boufea
Centre for Genomic & Experimental Medicine, MRC IGMM, University of Edinburgh, Edinburgh, UK
[email protected]

Richard Finkers
Plant Breeding Group, Wageningen University & Research, Wageningen, The Netherlands

Martijn van Kaauwen
Plant Breeding Group, Wageningen University & Research, Wageningen, The Netherlands

Mark Kramer
Information Technology Group, Wageningen University & Research, Wageningen, The Netherlands

Ioannis N. Athanasiadis
Information Technology Group, Wageningen University & Research, Wageningen, The Netherlands
[email protected]

ABSTRACT

Big Data has been seen as a remedy for the efficient management of the ever-increasing genomic data. In this paper, we investigate the use of Apache Spark to store and process Variant Calling Files (VCF) on a Hadoop cluster. We demonstrate Tomatula, a software tool for converting VCF files to the Apache Parquet storage format, and an application to query variant calling datasets. We evaluate how the wall time (i.e. the time until the query answer is returned to the user) scales out on a Hadoop cluster storing VCF files, either in the original flat-file format or in the Apache Parquet columnar storage format. Apache Parquet can compress the VCF data by around a factor of 10, and supports easier querying of VCF files as it exposes the field structure. We discuss advantages and disadvantages in terms of storage capacity and querying performance with both flat VCF files and Apache Parquet using an open plant breeding dataset. We conclude that Apache Parquet offers benefits for reducing storage size and wall time, and scales out with larger datasets.

CCS CONCEPTS

• Applied computing → Bioinformatics; Agriculture; • Information systems → Data management systems;

KEYWORDS

Big Data, bioinformatics, variant calling, Hadoop, HDFS, Apache Spark, Apache Parquet, Tomatula

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
BDCAT'17, December 5–8, 2017, Austin, TX, USA
© 2017 Copyright held by the owner/author(s). Publication rights licensed to Association for Computing Machinery.
ACM ISBN 978-1-4503-5549-0/17/12...$15.00
https://doi.org/10.1145/3148055.3148060

1 INTRODUCTION

The introduction of Next-Generation Sequencing enabled the fast and cheap sequencing of whole genomes [2, 18]. Today, even small laboratories are able to work on whole-genome sequencing projects and generate vast volumes of genomic data. DNA sequences provide information on transcription, translation, subcellular localization, phenotypic and genotypic variation, replication, recombination, protein–DNA interactions and many more. They may even reveal novel regions of the genome that express previously unknown genes. This new information can have a great impact on the knowledge scientists have about specific organisms, and explain some of their observable and hidden traits, such as susceptibility to a disease or other cell functions. Thus, research is focused on exploring the genome variation among individuals of the same or different species. Because of the large data volume, but also the variety of sources of origin and the velocity at which new data are generated, we can characterize genomic data as Big Data. However, only a few genomic frameworks are scalable [14, 16].

Big Data platforms such as Hadoop [3] and Apache Spark [5, 24] have been seen as a remedy, enabling efficient storage, fast retrieval and processing of genomic data. Big Data methods have been used for the redesign of bioinformatics algorithms and tools. In the CloudBurst [20] and Crossbow [13] projects, Hadoop was used to parallelize the mapping of next-generation sequencing data to a reference genome for genotyping and single-nucleotide polymorphism (SNP) discovery. CloudBurst is a parallel version of RMAP, a seed-and-extend read-mapping algorithm, that employs Hadoop, and was demonstrated to execute RMAP up to 30 times faster than on a single computing machine [20]. Crossbow deployed a similar-purpose tool using cloud computing services [13]. Hadoop was also used in Myrna, a cloud-computing pipeline for calculating differential gene expression from large RNA-seq datasets [12]. SparkSeq is a general-purpose library for genomic cloud computing that provides methods and tools to build genomic analysis pipelines and query data interactively [23]. VariantSpark makes use of Spark's machine learning library, MLlib, to develop a tool for clustering variant calling data [19]. A tool for processing genomic reads and converting them to variant callings is ADAM [17]. ADAM makes use of Apache Parquet to develop a new columnar data storage, and of Spark to parallelize the processing pipeline of transforming genomic reads into variant-calling-ready reads, accelerating the variant calling procedure. Similarly, Halvade is a framework for executing sequencing pipelines in parallel, and has implemented a DNA sequencing analysis pipeline for variant calling using traditional algorithms such as the Burrows-Wheeler Aligner (BWA) and the Genome Analysis Toolkit (GATK) [9]. One of the latest Hadoop libraries is FASTdoop [10], designed to efficiently distribute collections of long sequences (FASTA files) instead of short reads (SAM/BAM) without losing the information of each piece's origin.

Figure 1: Example of a VCF format file. Header lines start with the '#' character and contain the file metadata. VariantCall records follow, with each line corresponding to a variant location on the reference genome. They include both general information columns and one column for each sequenced sample.

Previous work has also investigated the potential benefits of adopting a more generalized storage format than the standard flat-text files, one that enables structured querying. VCF-miner converts a VCF file to a JSON dictionary as a pre-processing step [11], whereas BioPig [18], SeqPig [21], SBtab [15] and GMQL [16] suggest a tabular storage format that can be queried with an SQL-like language. The above efforts focus on applying Big Data techniques for handling sequence data files like FASTA and BAM, and on parallelizing processing pipelines, such as variant calling. Sequence files are usually big and require heavy processing. However, once these files are processed, there is no need to access their content. Studies are then based on the results of processing, which include VCF files that contain combined information on genome variability across several samples. Large cohort studies can have thousands of samples. Enabling fast access and retrieval of information from VCF files can give researchers the opportunity to ask questions of the data without being limited by time cost. This can be additionally supported by the interactive environment of Spark. Thus, in this work, we focus on using Big Data tools for storing and querying variant calling data. Specifically, we stored VCF datasets on a Hadoop Distributed File System (HDFS) and investigated the effects of storage parameters, such as the replication factor, on the performance of querying applications. We used Apache Spark's Python API for the applications and tested the ability of the system to scale out by making more computing nodes available. Finally, we investigated potential gains from storing variant calling data in columnar format using Apache Parquet. For this purpose, we developed Tomatula¹, an open-source software tool for converting VCF files to the Apache Parquet format.

To test the different parameters of the distributed filesystem and evaluate the Apache Parquet columnar storage format, we developed a demonstration application that returns the allele frequencies of variant sites within a requested region on the chromosome. The software tool, the demonstration scripts and an example dataset (intentionally very small, for educational purposes) are available as an open software repository on GitHub, and as supplementary material.

¹ https://www.github.com/bigdatawur/tomatula/

2 MATERIALS AND METHODS

2.1 Big data tools

Apache Spark is a fast and general-purpose cluster computing system for in-memory computations, overcoming the Hadoop MapReduce bottleneck of communicating with hard disks [24]. By running in-memory computations, Spark can perform up to 100x faster than Hadoop MapReduce, while also allowing for interactive querying of the data [19]. The master/slave architecture of Apache Spark consists of three layers:

(a) the Cluster resource manager for managing the filesystem, i.e. Hadoop YARN, Apache Mesos or the Standalone Scheduler;
(b) the Driver program, responsible for coordinating the application tasks and running the main method of the script; and
(c) the Application layer, providing APIs that create and manage the Resilient Distributed Datasets (RDDs) – the main programming abstraction of Apache Spark.

Spark applications run as multiple independent RDD operations that can be parallelized on several worker nodes.

|-- CHROM: long
|-- POS: long
