Apache Storm: A Framework for Parallel Data Stream Processing

Storm
• Storm is a distributed real-time computation platform
• Provides abstractions for implementing event-based computations on a cluster of physical nodes
• Performs parallel computations on data streams
• Manages high-throughput data streams
• It can be used to design complex event-driven applications on intense streams of data

Introduction
• Began as a project of BackType, a marketing intelligence company bought by Twitter in 2011
• Twitter open-sourced the project, and it became an Apache project in 2014
• Storm = the Hadoop for real-time processing: "Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing."
• Designed for massive scalability, supports fault tolerance with a "fail fast, auto restart" approach to processes, and provides the guarantee that every tuple of the stream will be processed
• Its default is "at least once" processing semantics, but it also offers the ability to implement "exactly once" (transactional) processing semantics

Design Goals
• Guaranteed data processing – no data is lost
• Imperative description of a streaming workflow (through stream manipulation classes)
• Horizontal scalability
• Fault tolerance
• Programmable in different languages

Main Concepts: Spouts and Bolts
• Any Storm processing is defined as a Directed Acyclic Graph (DAG) of Spouts and Bolts, which is called a topology
• In the topology, Spouts and Bolts produce and consume streams of tuples
• Tuple:: a generic object without any schema, but which can have named fields
• Spouts:: the tuple input modules
– can be "unreliable" (fire-and-forget) or "reliable" (replay failed tuples)
• Bolts:: the tuple processing or output modules
– consume streams and potentially produce new streams
• Stream:: a potentially infinite sequence of Tuple objects that Storm serializes and passes to the next bolts in the topology
• Complex stream transformations often require multiple steps (a chain of multiple bolts)
• Storm topologies run on clusters, and the Storm scheduler distributes work to nodes around the cluster based on the topology configuration

Application represented as a topology
[Figure: an application represented as a DAG of spouts and bolts. Source: Heinze, Aniello, Querzoni, Jerzak, Cloud-based Data Stream Processing, DEBS 2014]
• Unlike MapReduce jobs, topologies run forever or until manually terminated
• Spouts:
– bring data into the system and hand the data off to bolts (which may in turn hand data to subsequent bolts)
• Bolts:
– do the processing on the stream
– may write data out to a database or file system,
– send a message to another external system, or
– make the results of the computation available to the users

Typical Bolts
• Functions – tuple transformations
• Filters
• Aggregation
• Joins
• Storage/retrieval from persistent stores

Application represented as a topology
• The Storm developer may set "parallelism hints" at elements of the topology
[Figure: the same topology annotated with parallelism hints. Source: Heinze, Aniello, Querzoni, Jerzak, Cloud-based Data Stream Processing, DEBS 2014]

Storm strengths
• A rich array of available spouts specialized for receiving data from all types of sources (from the Twitter streaming API to Apache Kafka to JMS brokers, etc.)
• It is straightforward to integrate with HDFS file systems, meaning Storm can easily interoperate with Hadoop, if needed
• Storm has support for multi-language programming, and spouts and bolts can be written in almost any language
• Storm is a very scalable, fast, fault-tolerant open-source system for distributed computation, with a special focus on calculating rolling metrics in real time over streams of data

Data Partitioning Schemes
• When a tuple is emitted, to which task does it go?
• Storm offers some flexibility to define the data partitioning/shuffling method
• Stream groupings define the data flow in the topology
• A grouping is set for every spout and bolt subscription through the corresponding …Grouping method when defining the topology
[Figure: topology view vs. task view of the same application]

Types of Stream Grouping
• Shuffle grouping – random distribution of tuples to the next downstream bolt tasks
• Fields grouping – uses one or more named fields of the tuples to determine the destination task (by mod hashing)
• All grouping – sends all tuples to all tasks (broadcast)
• Global grouping – all tuples go to the bolt task with the lowest id
• Direct grouping – the producer of the tuple explicitly decides which target task receives it
• Custom grouping – define a custom grouping method by implementing the CustomStreamGrouping interface
• LocalOrShuffle grouping – if the target bolt has more than one task in the same worker process, tuples are shuffled to just those in-process tasks; otherwise, it behaves like a normal shuffle grouping
A declaration sketch follows the figure below.

Topology with grouping options
[Figure: a spout and four bolts connected by shuffle, fields (["id1", "id2"] and ["url"]), global, and all groupings]
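To make the grouping options concrete, here is a minimal sketch of how groupings are declared while wiring the word-count topology used later in this deck. LineReaderSpout, WordSplitterBolt, and WordCounterBolt are the classes sketched further below; the "reporter" bolt, the parallelism hints, and the topology name are invented for this example:

    import org.apache.storm.Config;
    import org.apache.storm.LocalCluster;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.tuple.Fields;

    public class GroupingExample {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // Spout emitting one tuple per line of text, with two parallel tasks
            builder.setSpout("line-reader", new LineReaderSpout(), 2);
            // Shuffle grouping: lines are spread randomly over four splitter tasks
            builder.setBolt("word-splitter", new WordSplitterBolt(), 4)
                   .shuffleGrouping("line-reader");
            // Fields grouping on "word": all occurrences of the same word are
            // hashed to the same counter task, so per-word counts stay consistent
            builder.setBolt("word-counter", new WordCounterBolt(), 4)
                   .fieldsGrouping("word-splitter", new Fields("word"));
            // Global grouping: every partial count goes to the single reporter task
            builder.setBolt("reporter", new ReportBolt())
                   .globalGrouping("word-counter");
            // Run the topology in-process for testing
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("word-count", new Config(), builder.createTopology());
        }
    }

Each call to setBolt returns an InputDeclarer, so the grouping choice is attached to the subscription, not to the bolt itself; the same bolt could consume two streams with two different groupings.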
A Practical Example: Word Count
• Word count: the "Hello World" of stream processing
• Input: stream of text (e.g. from documents)
• Output: number of appearances of each word

A Practical Example: Hello Storm
[Figure: a simple word count and its Storm topology]

Topology description
• Using the TopologyBuilder class and its methods setSpout() and setBolt(), the spouts and bolts are declared and instantiated.
• setBolt() returns an InputDeclarer object that is used to define the inputs to the bolt. With it, a bolt explicitly subscribes to a specific stream of another component (spout or bolt), and chooses the data shuffling/partitioning option.
• The parallelization hint for spouts and bolts is optional.
• The cluster class (its submitTopology method) is then used to map the topology to a cluster.

HelloStorm: contains the topology definition

IRichSpout
IRichSpout is the interface that any spout must implement.
• open method:: allows the spout to configure any connections to the outside world (e.g. connections to queue servers) and to receive the SpoutOutputCollector
• nextTuple method:: emits (sends) the next tuple downstream into the topology; it is called repeatedly by the Storm infrastructure
• declareOutputFields method:: defines the fields of the tuples of the output streams
• Methods ack and fail are called when Storm detects that a tuple emitted from the Spout either successfully completed the topology, or failed to be completed

LineReaderSpout: reads docs and creates tuples

BaseRichBolt
Extend the abstract class BaseRichBolt or implement the IRichBolt interface.
• prepare method:: passes information about the topology to the bolt. The OutputCollector object manages the interaction between the bolt and the topology (e.g. transmitting and acknowledging tuples)
• execute method:: does the processing of incoming tuples
• The collector.emit() method is used to send the transformed/new tuple to the next bolt
• Through collector.ack() and collector.fail() the bolt can notify Storm whether the processing of the tuple succeeded or failed, and for which reason (collector.reportError())
• declareOutputFields method:: is used to declare the fields of the output tuples or to define new named output streams
A reconstruction of such a spout and bolt is sketched below.
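The LineReaderSpout and word-count bolt appear on the original slides only as code screenshots. The following is a hedged reconstruction of what they typically look like; the "inputFile" configuration key and the field names are assumptions (in practice, each public class lives in its own file):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.storm.spout.SpoutOutputCollector;
    import org.apache.storm.task.OutputCollector;
    import org.apache.storm.task.TopologyContext;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseRichBolt;
    import org.apache.storm.topology.base.BaseRichSpout;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    // Spout: reads a document line by line and emits one tuple per line
    public class LineReaderSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private BufferedReader reader;

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
            try {
                // Assumption: the file name is passed via the topology configuration
                reader = new BufferedReader(new FileReader((String) conf.get("inputFile")));
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        @Override
        public void nextTuple() {   // called repeatedly by the Storm infrastructure
            try {
                String line = reader.readLine();
                if (line != null) {
                    collector.emit(new Values(line));
                }
            } catch (Exception e) {
                collector.reportError(e);
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("line"));
        }
    }

    // Bolt: keeps a running count per word; it is terminal, so no output is declared
    public class WordCounterBolt extends BaseRichBolt {
        private OutputCollector collector;
        private Map<String, Long> counts;

        @Override
        public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
            this.collector = collector;
            this.counts = new HashMap<>();
        }

        @Override
        public void execute(Tuple input) {
            String word = input.getStringByField("word");
            counts.merge(word, 1L, Long::sum);   // update the running count
            collector.ack(input);                // tell Storm this tuple is done here
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // no downstream bolts, so no output fields are declared
        }
    }

The WordSplitterBolt that sits between them follows the same BaseRichBolt pattern: its execute() tokenizes each "line" tuple and emits one "word" tuple per token.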
BaseRichBolt
• Bolts can emit more than one stream. To make use of this, declare multiple named streams using the declareStream method of the OutputFieldsDeclarer interface:

    public void declareOutputFields(OutputFieldsDeclarer d) {
        d.declare(new Fields("first", "second", "third"));      // names of the fields
        d.declareStream("car", new Fields("first"));            // named stream "car"
        d.declareStream("cdr", new Fields("second", "third"));  // named stream "cdr"
    }

• And then specify the named output stream as the first argument of the emit method on the collector:

    public void execute(Tuple input) {
        // Access the tuple fields by name
        List<Object> objs = input.select(new Fields("first", "second", "third"));
        collector.emit(objs);                                        // default stream
        collector.emit("car", new Values(objs.get(0)));              // stream "car"
        collector.emit("cdr", new Values(objs.get(1), objs.get(2))); // stream "cdr"
        collector.ack(input);
    }

WordSplitterBolt: cuts lines into words
WordCounterBolt: counts word occurrences

Topology Execution
• A topology processes tuples forever (until you kill it). It consists of many worker processes spread across many machines (each managed by a supervisor)
• A machine in a cluster may run one or more worker processes. A worker process is either idle or used by a single topology, and it may run one or more tasks of the same component
• Storm's default scheduler applies a simple round-robin strategy to assign tasks to worker processes

Architecture of a Storm Cluster
• Nimbus:
– distributes code around the cluster
– assigns tasks to machines/supervisors (i.e. allocates the execution of components – spouts and bolts – to the worker processes)
– failure monitoring
– is fail-fast and stateless
• Zookeeper:
– keeps the information of which supervisor machines are executing (for discovery and coordination purposes) and whether the Nimbus machine is up
• Supervisor:
– listens for work assigned to its machine
– starts and stops worker processes based on Nimbus commands
– is fail-fast and stateless

Tuple Tree
Storm considers a tuple coming off a spout "fully processed" when the tuple tree has been exhausted and every message in the tree has been processed. A tuple is considered failed when its tree of messages fails to be fully processed within a specified timeout. This timeout can be configured (default is 30 seconds).
[Figure: a tuple emitted by a spout, and the tuple tree generated by the processing of a sentence]

Anchoring
• A tuple tree is defined by specifying the input tuple as the first argument of emit (see the sketch below)
• If the new tuple fails to be processed downstream, the root tuple can be identified

At-least-once processing guarantee
• With anchoring, Storm can guarantee at-least-once semantics (in the presence of failures reported by bolts) without using intermediate queues.
• Instead of retrying from the point where a failure was reported, retries happen from the root of the tuple tree – spouts simply re-emit the root tuple.
• Intermediate stages of bolt processing that had completed successfully will be re-done.
• This is a waste of processing, …
• but it has the advantage that there is no need to synchronize the processing of the tuples by the parallel tasks.
• And if the operation of the bolts is idempotent (no side effects), the re-processing actually yields an exactly-once processing guarantee.

Transactional Exactly-once processing guarantee
But bolts may not do idempotent processing, and the application may require exactly-once semantics (e.g. …)
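Tying together the tuple tree, anchoring, and ack/fail mechanics above, here is a minimal sketch of how the WordSplitterBolt from the earlier reconstruction might anchor its emits; the field names are the same assumptions as before:

    // Inside WordSplitterBolt (extends BaseRichBolt)
    public void execute(Tuple input) {
        try {
            for (String word : input.getStringByField("line").split("\\s+")) {
                // Anchored emit: 'input' is the first argument, so the new tuple
                // joins the tree rooted at the spout tuple that produced this line
                collector.emit(input, new Values(word));
            }
            collector.ack(input);   // this node of the tuple tree completed
        } catch (Exception e) {
            collector.fail(input);  // Storm will replay the root tuple at the spout
        }
    }

The tuple-tree timeout mentioned above (default 30 seconds) can be raised per topology before submission, assuming the standard org.apache.storm.Config API:

    Config conf = new Config();
    conf.setMessageTimeoutSecs(60);   // fail and replay tuples not completed in 60 s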