Gobblin: Unifying Data Ingestion for Hadoop


Lin Qiao, Yinan Li, Sahil Takiar, Ziyang Liu, Narasimha Veeramreddy, Min Tu, Ying Dai, Issac Buenrostro, Kapil Surlaker, Shirshanka Das, Chavdar Botev
LinkedIn Inc., Mountain View, CA, USA
{lqiao,ynli,stakiar,ziliu,nveeramreddy,mitu,ydai,ibuenros,ksurlaker,sdas,[email protected]

ABSTRACT

Data ingestion is an essential part of companies and organizations that collect and analyze large volumes of data. This paper describes Gobblin, a generic data ingestion framework for Hadoop and one of LinkedIn's latest open source products. At LinkedIn we need to ingest data from various sources such as relational stores, NoSQL stores, streaming systems, REST endpoints, filesystems, etc. into our Hadoop clusters. Maintaining independent pipelines for each source can lead to various operational problems. Gobblin aims to solve this issue by providing a centralized data ingestion framework that makes it easy to support ingesting data from a variety of sources.

Gobblin distinguishes itself from similar frameworks by focusing on three core principles: generality, extensibility, and operability. Gobblin supports a mixture of data sources out-of-the-box and can be easily extended for more. This enables an organization to use a single framework to handle different data ingestion needs, making it easy and inexpensive to operate. Moreover, with an end-to-end metrics collection and reporting module, Gobblin makes it simple and efficient to identify issues in production.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/. Obtain permission prior to any use beyond those covered by the license. Contact the copyright holder by emailing [email protected]. Articles from this volume were invited to present their results at the 41st International Conference on Very Large Data Bases, August 31st - September 4th 2015, Kohala Coast, Hawaii. Proceedings of the VLDB Endowment, Vol. 8, No. 12. Copyright 2015 VLDB Endowment 2150-8097/15/08.

1. INTRODUCTION

A big data system is often referred to for the sheer volume of datasets it handles as well as the large processing power and new processing paradigms it is associated with. These big data challenges have fostered significant innovations in large-scale computation platforms [2, 4, 5, 8, 14, 15, 19, 22, 24]. However, another important aspect of the complexity of a big data system comes from the coexistence of heterogeneous data sources and sinks. In reality, data integration often starts to cause major pain points before developers or data engineers are able to solve traditional ETL or data processing at scale. It becomes increasingly critical for a big data system to address the challenges of large-scale data ingestion and integration with high velocity and quality.

With first-hand experience of big data ingestion and integration pain points, we built Gobblin, a unified data ingestion framework. The development of Gobblin was mainly driven by the fact that LinkedIn's data sources have become increasingly heterogeneous. Data is constantly obtained and written into online data storage systems or streaming systems, including Espresso [23], Kafka [6], Voldemort [25], Oracle, MySQL and RocksDB [17], as well as external sites such as S3, Salesforce and Google Analytics. Such data include member profiles, connections, posts and many other external and internal activities. These data sources produce terabytes of data daily, most of which needs to be loaded into our Hadoop clusters to feed business- or consumer-oriented analysis. We used to develop a separate data ingestion pipeline for each data source, and at one point we were running over a dozen different pipelines.

Having this many different data ingestion pipelines is like re-implementing HashMap every time we need a HashMap with a different type argument. Moreover, these pipelines were developed by several different teams, and it is not hard to imagine how poorly this approach scales or the issues it brought in terms of maintenance, operability and data quality. Similar pains have been shared with us by engineers at other companies. Gobblin aims to eventually replace most or all of these ingestion pipelines with a generic data ingestion framework that is easily configurable to ingest data from several different types of sources (covering a large number of real use cases) and easily extensible for new data sources and use cases.

The challenges Gobblin addresses are five-fold:

• Source integration: The framework provides out-of-the-box adaptors for all of our commonly accessed data sources, such as MySQL, Kafka, Google Analytics, Salesforce and S3.

• Processing paradigm: Gobblin supports both standalone and scalable platforms, including Hadoop and Yarn. Integration with Yarn provides the ability to run continuous ingestion in addition to scheduled batches.

• Extensibility: Data pipeline developers can integrate their own adaptors with the framework and make them available for other developers in the community to leverage.

• Self-service: Data pipeline developers can compose a data ingestion and transformation flow in a self-service manner, test it locally using standalone mode, and deploy the flow in production using scale-out mode without code changes.

• Data quality assurance: The framework exposes data metrics collectors and data quality checkers as first-class citizens, which can be used to power continuous data validation.

Since we started the project in 2014, Gobblin has been launched in production and has gradually been replacing a number of ad-hoc data ingestion pipelines at LinkedIn. Gobblin was open sourced on GitHub in February 2015. We aim to provide the community with a solid framework that meets the needs of many real-world use cases, with desirable features that make the ingestion process as pleasant as eating a tasty data buffet.
2. ARCHITECTURE AND OVERVIEW

2.1 System Architecture and Components

The architecture of Gobblin is presented in Figure 1. A Gobblin job ingests data from a data source (e.g., Espresso) into a sink (e.g., HDFS). A job may consist of multiple workunits, or tasks, each of which represents a unit of work to be done, e.g., extracting data from an Espresso table partition or a Kafka topic partition.

Figure 1: Gobblin Architecture. (The original figure shows the job constructs: source, extractor, converter, quality checker, writer, publisher; the Gobblin runtime: workunits/tasks, task executor, task state tracker, job launcher, job scheduler, state store, metrics and monitoring, compaction; and the deployment modes: single node, Hadoop MapReduce and Yarn.)
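To make the workunit notion concrete, the following is a minimal sketch of how a job over a Kafka topic could be divided into one workunit per topic partition. The WorkUnitSpec type and the offset arrays are illustrative assumptions rather than Gobblin's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of splitting a job into workunits, one per
// Kafka topic partition. Gobblin's real WorkUnit class is richer than this.
public class WorkUnitSketch {

    // A workunit captures everything a task needs: where to pull data from
    // and which slice of that data to pull.
    record WorkUnitSpec(String topic, int partition, long startOffset, long endOffset) {}

    static List<WorkUnitSpec> planJob(String topic, long[] committedOffsets, long[] latestOffsets) {
        List<WorkUnitSpec> workUnits = new ArrayList<>();
        for (int p = 0; p < committedOffsets.length; p++) {
            // One unit of work per partition: pull from the last committed
            // offset up to the latest available offset.
            workUnits.add(new WorkUnitSpec(topic, p, committedOffsets[p], latestOffsets[p]));
        }
        return workUnits;
    }

    public static void main(String[] args) {
        List<WorkUnitSpec> plan = planJob("PageViewEvent",
                new long[]{0, 10, 20, 30}, new long[]{100, 110, 120, 130});
        plan.forEach(System.out::println);
    }
}
```

Because each workunit is self-contained, the corresponding tasks can be executed independently, whether on a single node or scaled out across a cluster.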
2.1.1 Job Constructs

A job is formulated around a set of constructs (shown as ellipses in Figure 1). Gobblin provides an interface for each of these constructs, so a job can be customized by implementing the interfaces or by extending Gobblin's out-of-the-box implementations. Simplified sketches of several of the constructs follow the list below.

• A Source partitions the data ingestion work into workunits and constructs an Extractor per workunit. This resembles Hadoop's InputFormat, which partitions the input into Splits and specifies a RecordReader for each split. The algorithm for generating workunits should generally divide the work as evenly as possible among the workunits; in our implementations we mainly use hash-based or bin-packing-based approaches.

• An Extractor does the work specified in a workunit, i.e., extracting certain data records. The workunit should contain the information of where to pull data from (e.g., which Kafka partition or which DB table) as well as what portion of the data should be pulled. Gobblin uses a watermark object to tell an extractor what the start record (low watermark) and end record (high watermark) are. For example, for Kafka jobs the watermarks can be offsets of a partition, and for DB jobs the watermarks can be values of a column. A watermark can be of any type, not necessarily numeric, as long as there is a way for the Extractor to know where to start and where to finish.

• A Converter does data transformation on the fly while data is being pulled. There are often use cases that require such conversions, e.g., some fields in the schema need to be projected out for privacy reasons, or data pulled from the source are byte arrays or JSON objects and need to be converted to the desired output formats. A converter can convert one input record into zero or more records. Converters are pluggable and chainable, and it is straightforward to implement one: all it takes is implementing two methods that convert the schema and the data records, respectively.

• A Quality Checker determines whether the extracted records are in good shape and can be published. There are two types of quality checking policies in terms of scope: record-level policies and task-level policies, which check the integrity of a single record and of the entire output of a task, respectively. There are also two types of quality checking policies in terms of necessity: mandatory policies and optional policies. Violation of a mandatory policy results in the record or task output being discarded or written to an outlier folder, while violation of an optional policy results in warnings.

• A Writer writes records extracted by a task to the appropriate sinks. It first writes records that pass mandatory record-level policies to a staging directory. After the task successfully completes and all records of the task have been written to the staging directory, the writer moves them to a writer output directory, where they are pending audit against task-level quality checking policies and are to be published by the publisher. Gobblin's data writer can be extended to publish data to different sinks such as HDFS, Kafka, S3, etc., to publish data in different formats such as Avro, Parquet, CSV, etc., or even to publish data in different ...
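As a concrete illustration of the extractor and watermark concepts above, the following sketch shows an extractor that pulls records between a low and a high watermark. The Extractor interface here is a simplified stand-in and not Gobblin's actual API.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Simplified stand-ins for Gobblin's Extractor and watermark abstractions;
// the real interfaces differ, this only illustrates the low/high watermark idea.
public class ExtractorSketch {

    interface Extractor<D> extends Iterator<D> {
        String getSchema();
    }

    // Hypothetical extractor over a Kafka-like partition, bounded by offsets.
    static class OffsetRangeExtractor implements Extractor<String> {
        private long next;                // current position, starts at the low watermark
        private final long highWatermark; // exclusive end of the pull

        OffsetRangeExtractor(long lowWatermark, long highWatermark) {
            this.next = lowWatermark;
            this.highWatermark = highWatermark;
        }

        @Override public String getSchema() { return "{\"type\":\"string\"}"; }
        @Override public boolean hasNext() { return next < highWatermark; }
        @Override public String next() { return "record@" + (next++); } // pull one record
    }

    public static void main(String[] args) {
        Extractor<String> extractor = new OffsetRangeExtractor(5, 8);
        List<String> pulled = new ArrayList<>();
        while (extractor.hasNext()) {
            pulled.add(extractor.next());
        }
        System.out.println(pulled); // [record@5, record@6, record@7]
    }
}
```

A watermark here is just a pair of longs, but as the text notes, any token the extractor can interpret as a start and an end position would work equally well.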
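The converter description above notes that implementing a converter amounts to two methods, one converting the schema and one converting each data record, and that a record may map to zero or more output records. The following minimal sketch uses simplified signatures that are not Gobblin's actual Converter API.

```java
import java.util.Collections;
import java.util.List;
import java.util.Locale;

// Simplified converter sketch: one method converts the schema, the other
// converts each record into zero or more output records (here: drop blank
// records, upper-case the rest). Gobblin's real Converter also carries
// job state and supports chaining several converters in sequence.
public class ConverterSketch {

    interface Converter<SI, SO, DI, DO> {
        SO convertSchema(SI inputSchema);
        List<DO> convertRecord(SO outputSchema, DI inputRecord);
    }

    static class UpperCaseConverter implements Converter<String, String, String, String> {
        @Override
        public String convertSchema(String inputSchema) {
            return inputSchema; // the schema is unchanged in this example
        }

        @Override
        public List<String> convertRecord(String outputSchema, String inputRecord) {
            if (inputRecord.isBlank()) {
                return Collections.emptyList(); // filtered out: zero output records
            }
            return List.of(inputRecord.toUpperCase(Locale.ROOT)); // one output record
        }
    }

    public static void main(String[] args) {
        UpperCaseConverter converter = new UpperCaseConverter();
        System.out.println(converter.convertRecord("schema", "hello")); // [HELLO]
        System.out.println(converter.convertRecord("schema", "   "));   // []
    }
}
```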
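To illustrate the policy taxonomy of the quality checker (record-level versus task-level scope, mandatory versus optional necessity), here is a hedged sketch of record-level policies. The class and method names are made up for illustration and are not Gobblin's policy API.

```java
import java.util.List;

// Illustrative record-level quality policies; task-level policies (not shown)
// would check the entire output of a task instead of a single record.
public class QualityPolicySketch {

    enum Necessity { MANDATORY, OPTIONAL }

    interface RecordLevelPolicy<D> {
        Necessity necessity();
        boolean check(D record);
    }

    // Mandatory: records failing this are discarded or routed to an outlier folder.
    static class NonNullIdPolicy implements RecordLevelPolicy<String[]> {
        @Override public Necessity necessity() { return Necessity.MANDATORY; }
        @Override public boolean check(String[] record) { return record.length > 0 && record[0] != null; }
    }

    // Optional: violations only produce warnings.
    static class FieldCountPolicy implements RecordLevelPolicy<String[]> {
        @Override public Necessity necessity() { return Necessity.OPTIONAL; }
        @Override public boolean check(String[] record) { return record.length <= 100; }
    }

    static boolean passesMandatoryPolicies(String[] record, List<RecordLevelPolicy<String[]>> policies) {
        for (RecordLevelPolicy<String[]> policy : policies) {
            if (!policy.check(record)) {
                if (policy.necessity() == Necessity.MANDATORY) {
                    return false; // discard or write to an outlier folder
                }
                System.err.println("warning: optional policy violated"); // warn only
            }
        }
        return true; // safe to hand over to the writer
    }

    public static void main(String[] args) {
        List<RecordLevelPolicy<String[]>> policies =
                List.of(new NonNullIdPolicy(), new FieldCountPolicy());
        System.out.println(passesMandatoryPolicies(new String[]{"member-123"}, policies)); // true
        System.out.println(passesMandatoryPolicies(new String[]{null}, policies));         // false
    }
}
```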
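Finally, the staging-then-publish flow described for the writer can be sketched as a two-phase move across directories. The directory layout and method names below are illustrative assumptions, not Gobblin's actual Writer or Publisher API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

// Illustrative two-phase write: records land in a staging directory first,
// and only after the task finishes are they moved to the writer output
// directory, from which a publisher would later promote them to the sink.
public class WriterSketch {

    private final Path stagingDir;
    private final Path outputDir;

    WriterSketch(Path stagingDir, Path outputDir) {
        this.stagingDir = stagingDir;
        this.outputDir = outputDir;
    }

    // Append a record that has already passed mandatory record-level policies.
    void write(String fileName, String record) throws IOException {
        Files.createDirectories(stagingDir);
        Files.writeString(stagingDir.resolve(fileName), record + System.lineSeparator(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Called once the task completes successfully: promote staged files to the
    // writer output directory, where they await task-level quality checks
    // before the publisher makes them visible in the final location.
    void commit(String fileName) throws IOException {
        Files.createDirectories(outputDir);
        Files.move(stagingDir.resolve(fileName), outputDir.resolve(fileName),
                StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("writer-sketch");
        WriterSketch writer = new WriterSketch(base.resolve("staging"), base.resolve("output"));
        writer.write("part-0.txt", "record-1");
        writer.write("part-0.txt", "record-2");
        writer.commit("part-0.txt");
        System.out.println(Files.readString(base.resolve("output").resolve("part-0.txt")));
    }
}
```

Keeping a failed task's partial output confined to the staging directory is what lets the publisher treat everything in the writer output directory as publishable once task-level checks pass.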