Hadoop Big Data


How to Learn Hadoop
*This is meant as a study guide, created from lecture videos, to help you gain an understanding of how Hadoop works.

How to Learn Hadoop for Data Science
Hadoop is open source and has been around for over 10 years. Current innovation centers on Hadoop in the cloud, and Hadoop is being implemented for IoT devices and genomics data.

Hadoop is made up of three components: storage (HDFS), compute (MapReduce, Spark), and management (YARN, Mesos). Apache Spark performs computation in the memory of the worker nodes, which allows for faster processing of Hadoop jobs. HDFS, which is included in the Hadoop core distribution, is used for batch extract, transform, and load (ETL) jobs.

Data lakes such as Amazon S3 or Google Cloud Storage are cloud-based file stores. These options are typically cheaper than running HDFS locally or in the cloud.

Cloudera and Databricks are the current leading commercial distributions of Hadoop. Public cloud distributions are Google Cloud Dataproc, AWS EMR, and Microsoft Azure HDInsight. As you add Hadoop libraries to your production environment, ensure you are matching the proper versions so your code runs.

Hadoop on GCP
Sign up for GCP -> log in to the GCP Console -> in the left-side menu, choose Big Data: Dataproc -> Create Cluster to create a managed Hadoop cluster -> name the cluster and fill out additional information -> click Create -> you can then create a job (the Dataproc job types are Hadoop, Spark, PySpark, Hive, SparkSQL, and Pig) -> Submit Job.

Preemptible worker nodes are available in Google Cloud Dataproc production instances and are used to speed up batch jobs, as these worker nodes are much cheaper.
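The same Dataproc workflow can also be scripted from the Cloud Shell with the gcloud CLI. A minimal sketch; the cluster name, region, and bucket path are placeholders:

    # Create a managed Hadoop/Spark cluster
    gcloud dataproc clusters create my-cluster --region=us-central1

    # Submit a PySpark job to the cluster (the script path is a placeholder)
    gcloud dataproc jobs submit pyspark gs://my-bucket/my_job.py \
        --cluster=my-cluster --region=us-central1

    # Delete the cluster when finished to stop billing
    gcloud dataproc clusters delete my-cluster --region=us-central1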
Apache Spark came out of the team that founded Databricks, which offers a commercial Spark distribution. Databricks with Apache Spark comes with Jupyter-style notebooks for data scientists.

Setup IDE in Databricks Apache Spark
Download your preferred IDE for Python, Scala, or Java (the programming languages used to program against the Spark API) -> go to Databricks and sign up for the Community Edition (Google Chrome works best for Databricks).

Adding Hadoop libraries to Databricks (example library: Apache Avro)
Databricks workspace -> Shared -> Create: Library -> Source: Maven Coordinate -> Coordinate: avro -> click Search Spark Packages -> switch to Maven Central once in Search Packages -> type in avro_tools__1.8.1 -> Create Library.

Setting up clusters in Databricks
Databricks UI -> Clusters -> Create Cluster -> select the Spark version and go with the default settings -> Create Cluster.

Loading data into tables in Databricks
Databricks UI -> Tables -> Create Table -> select a data source, such as a CSV file or a cloud bucket -> name the table -> Create Table.

Hadoop Batch Processing
Use MapReduce for batch processing. YARN is used for scaling when moving from development to production; it is scalable and fault tolerant, and it handles memory scheduling in a way familiar from other Hadoop schedulers. Apache Mesos is a newer alternative to YARN that can run Hadoop workloads. Mesos was designed as a cluster operating-system framework and handles both job scheduling and container group management.

YARN vs Spark Standalone

Hadoop Use Cases: Stream Data Processing Use Cases
Streaming ETL takes data as it comes in, processes it, and responds to it. Data enrichment combines streaming data with other data sources. Trigger event detection is used for services such as credit card fraud alerts.

Machine Learning Use Cases
Sentiment analysis, which is most easily explained through the Twitter sentiment example.

Fog Computing Use Cases
IoT device messaging is used to get information from devices, with the edge layer responding quickly.

Types of Big Data Streaming

Streaming Options

How to Learn Hadoop with Apache Spark
Apache Spark is an open-source cluster computing framework that uses in-memory objects for distributed computing. Spark can be over 100 times faster than MapReduce. The objects that Spark uses are called Resilient Distributed Datasets (RDDs). Spark has many utility libraries.

Apache Spark Libraries
Learn more about Spark by visiting Apache.

Spark Data Interfaces
Resilient Distributed Datasets are a low-level interface to a sequence of data objects in memory. DataFrames are collections of distributed row types, similar to the data frames you find in Python. Datasets are new in Spark 2.0 and are distributed collections that combine DataFrame and RDD functionality.

Spark Programming Languages
Python is the easiest, Scala is functional and scalable, Java suits the enterprise, and R is geared towards machine learning.

SparkSession Objects
In Spark 1.x, the entry-point objects are called sqlContext and sc. In Spark 2.0, they are unified in a SparkSession object called spark.

Spark Shell
Connect Spark with your created cluster in Databricks: Databricks UI -> create a new cluster -> create a new notebook -> in the notebook, type sc, spark, or sqlContext and execute.

Databricks UI
Visit Databricks to set up your account. Workspace is the container for stored information. Tables shows the tables that have been created. Clusters shows the clusters you have created. Within Clusters: Spark UI, you can see Jobs, Storage, Environment, and Executors. Executors shows the RDD blocks, which are where your data is stored for Spark.

Working with Spark notebooks in Databricks
Databricks UI -> Workspace -> Users -> select a notebook. To add markdown, use %md (for example: %md This is a markdown note). You can add notebooks to dashboards, view revision history, and link to GitHub.

Import and export Spark notebooks in Databricks
File types to export: DBC Archive (a Databricks archive), Source File (the file exported in the language used, e.g. .py for Python), iPython Notebook, and HTML (a static web page). Exporting as a Source File and importing into a code editor is a good option. To import a file, in the Databricks UI select Workspace: Import and drag in the file to import.

Example wordcount in Databricks Spark using Scala
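A minimal version of the word count, sketched here in PySpark for consistency with the other examples in this guide (the Scala version is structurally identical); the sample path is a placeholder:

    from pyspark.sql import SparkSession

    # In a Databricks notebook the spark session already exists;
    # this line makes the sketch standalone
    spark = SparkSession.builder.appName("wordcount").getOrCreate()

    # Read a text file into an RDD of lines (path is a placeholder)
    lines = spark.sparkContext.textFile("/databricks-datasets/README.md")

    # Transformations: split lines into words, pair each word with 1,
    # then sum the counts per word
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))

    # Action: nothing executes until take/collect is called
    for word, count in counts.take(10):
        print(word, count)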
Databricks Spark Data Sources

Transformations and Actions
Spark transformation operations do not show results without a Spark action operation. Examples of Spark transformations are select, distinct, groupBy, sum, orderBy, filter, and limit. Examples of Spark action operations are show, count, collect, and save. Take is a Spark action that selects a certain number of elements from the beginning of a DataFrame. Pipelining in Spark follows from this separation of transformations and actions: transformations are queued up and only executed once an action needs results.

DAG
A DAG (directed acyclic graph) gives you a visualization of the detailed process of what is occurring: the overhead, how long it is taking, how many executors there are, and so on. View the specific Spark job to see its DAG.

Spark SQL
Using Spark SQL in Databricks: Workspace -> Import Notebook -> read the CSV data into Spark using spark.read -> display the data with display -> use createOrReplaceTempView to register the imported CSV data as a Spark SQL table -> you can then use SQL with %sql.

SparkR
The easiest way to use SparkR is to make a local R data frame and turn it into a SparkR DataFrame. Use the createDataFrame method to do this.

Spark ML with SparkR in Databricks
Workspace -> Import Notebook -> create a cluster -> load (CSV) data with spark.read -> prepare the data for the model by cleaning it with the appropriate Spark commands.

Example of PySpark Spark ML Model (Linear Regression) in Databricks (a sketch appears at the end of this guide)

Spark with GraphX
GraphX is a graph-processing library and API that sits on top of Spark. It is used for graph-parallel computation. Spark with ADAM is used for genomics.

Spark Streaming
Streaming handles data coming in through either batch ingest or stream ingest. Amazon Kinesis and Google Cloud Pub/Sub are examples of proprietary solutions. Real-time processing of incoming data can be done in batch with MapReduce or on streams with Apache Spark or Apache Storm.

Using PySpark for batch processing in Databricks: Databricks UI -> create a notebook -> verify you have Spark by typing spark in the notebook -> load data -> *example code (the word count above is a batch job of this kind).

Using PySpark for stream processing in Databricks: Databricks UI -> create a notebook -> verify you have Spark by typing spark in the notebook -> load data -> *example code (a streaming sketch appears at the end of this guide).

How to Learn Hadoop with Streaming Services
Apache Kafka, Amazon Kinesis, Google Cloud Pub/Sub, and Microsoft Azure Service Bus.

Streaming with Google Cloud Pub/Sub
Go to Google Cloud to set up an account and get started. Log in to the GCP Console -> create a new project -> launch Cloud Shell -> create a topic (a named resource to which you send messages) in the gcloud shell -> create a subscription, which delivers the topic's messages and determines how long they are retained.

Apache Kafka
Apache Kafka is used for massive streaming ingest of data. It is a set of servers: the server cluster stores the record streams in topics, and each record has a key, a value, and a timestamp. The four APIs used with Kafka are the Producer API (used to publish to topics), the Consumer API (used to subscribe to topics), the Streams API (used to control the input and output of streams), and the Connector API (used to connect to your systems, such as a database). Kafka features include pub/sub, in-order messages, streaming, and fault tolerance.

Apache Storm
An alternative streaming processor to Apache Spark; Apache Heron was later developed as Storm's successor. Storm is a real-time stream processor with a record-at-a-time ingest pipeline. Apache Storm's core concepts are Topologies (real-time application logic), Streams (unbounded sequences of tuples), Spouts (sources of a stream in a topology), Bolts (which perform processing on stream data), and Stream grouping (the sectioning/partitioning of a topology).
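A minimal sketch of the PySpark Spark ML model (linear regression) referenced above; the data and column names are illustrative placeholders standing in for a cleaned CSV loaded with spark.read:

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.regression import LinearRegression

    spark = SparkSession.builder.appName("lr-example").getOrCreate()

    # Toy data standing in for a cleaned CSV load via spark.read
    df = spark.createDataFrame(
        [(1.0, 2.0, 5.0), (2.0, 3.0, 8.0), (3.0, 5.0, 13.0), (4.0, 6.0, 16.0)],
        ["x1", "x2", "label"])

    # Spark ML models expect the features packed into a single vector column
    assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
    train = assembler.transform(df)

    # Fit the model and inspect the learned coefficients
    lr = LinearRegression(featuresCol="features", labelCol="label")
    model = lr.fit(train)
    print(model.coefficients, model.intercept)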
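A minimal PySpark Structured Streaming sketch for the stream-processing steps above, using Spark's built-in rate source as a stand-in for a real stream such as Kafka or Kinesis:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream-example").getOrCreate()

    # The rate source generates rows continuously; it stands in for a real stream
    stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

    # A transformation on the stream: keep only even-valued rows
    evens = stream.filter(stream.value % 2 == 0)

    # Write results to the console; the query runs until stopped
    query = (evens.writeStream
                  .outputMode("append")
                  .format("console")
                  .start())
    query.awaitTermination(30)  # run for ~30 seconds in this sketch
    query.stop()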
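The Google Cloud Pub/Sub steps above map onto a few gcloud commands. A minimal sketch; the topic and subscription names are placeholders:

    # Create a topic (a named resource to which you send messages)
    gcloud pubsub topics create my-topic

    # Create a subscription attached to the topic
    gcloud pubsub subscriptions create my-sub --topic=my-topic

    # Publish a test message, then pull it from the subscription
    gcloud pubsub topics publish my-topic --message="hello"
    gcloud pubsub subscriptions pull my-sub --auto-ack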
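A minimal illustration of Kafka's Producer and Consumer APIs described above, sketched with the third-party kafka-python package (an assumption; the broker address and topic name are placeholders):

    from kafka import KafkaProducer, KafkaConsumer

    # Producer API: publish a record to a topic
    # (records carry a key, a value, and a timestamp)
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("my-topic", key=b"sensor-1", value=b"temperature=21.5")
    producer.flush()

    # Consumer API: subscribe to the topic and read records from the beginning
    consumer = KafkaConsumer("my-topic",
                             bootstrap_servers="localhost:9092",
                             auto_offset_reset="earliest",
                             consumer_timeout_ms=5000)
    for record in consumer:
        print(record.key, record.value, record.timestamp)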