Performance Modelling of Distributed Stream Processing Topologies


Thomas Cooper

Submitted for the degree of Doctor of Philosophy in the School of Computing, Newcastle University, August 2020

© 2020, Thomas Cooper

For Amanda and Gillian

Abstract

Distributed stream processing systems (like Apache Storm, Heron and Flink) allow the processing of massive amounts of data with low latency and high throughput. Part of the power of these systems is their ability to scale the separate sections (operators) of a stream processing query to adapt to changes in the incoming workload and maintain end-to-end latency or throughput requirements.

Whilst many of the popular stream processing systems provide the functionality to scale their queries, none of them suggests to the user how best to do this to ensure better performance. The user is required to deploy the query, wait for it to stabilise, assess its performance, alter its configuration and repeat this loop until the required performance is achieved. This scaling decision loop is highly time consuming and, for large deployments, the process can take days to complete. The time and effort involved also discourage users from changing the configuration once one is found to satisfy peak load. This leads to the over-provisioning of resources.

To solve these issues a performance modelling system for stream processing queries is required. This would allow the performance effect of changes to a query's configuration to be assessed before they are deployed, reducing the time spent within the scaling decision loop. Previous research on auto-scaling systems using performance models has focused either on queueing-theory-based approaches, which require small amounts of historical performance data but suffer from poor accuracy, or on machine-learning-based approaches, which have better accuracy but require large amounts of historical performance data to produce their predictions.

The research detailed in this thesis is focused on showing that an approach based on queueing theory and discrete event simulation can be used, with only a relatively small amount of performance data, to provide accurate performance modelling results for stream processing queries. We have analysed the many aspects involved in the operation of modern stream processing systems, in particular Apache Storm. From this we have created processes and models to predict end-to-end latencies for proposed configuration changes to running streaming queries. These predictions are validated against a diverse range of example streaming queries, under various workload environments and configurations. The evaluations show that, for most query configurations, our approach can produce performance predictions with similar or better accuracy than the best performing machine-learning-based approaches, over a more diverse range of query types, using only a fraction of the performance data those approaches require.
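The kind of "what if" prediction described in the abstract can be pictured with a toy example. The sketch below is not the model developed in the thesis: it treats every executor as a single FIFO server with exponential service times, shuffle-groups tuples between two operators, ignores network transfer, and uses invented arrival and service rates. It simply shows how a small simulation can compare candidate parallelism configurations before either is deployed.

```python
"""Toy discrete-event simulation of a linear stream processing topology.

A sketch of the 'what if' prediction described in the abstract, not the
thesis's model: each executor is a single FIFO server with exponential
service times, tuples are shuffle-grouped between operators, network
transfer is ignored, and all rates and parallelism values are hypothetical.
"""
import random
import statistics


def predicted_mean_latency(arrival_rate, service_rates, parallelism,
                           n_tuples=50_000, seed=42):
    """Mean end-to-end latency (seconds) for a pipeline of operators.

    service_rates[i]: tuples/s one executor of operator i can process.
    parallelism[i]:   number of executors assigned to operator i.
    """
    rng = random.Random(seed)
    # Poisson source: cumulative emission times of n_tuples tuples.
    t, source_times = 0.0, []
    for _ in range(n_tuples):
        t += rng.expovariate(arrival_rate)
        source_times.append(t)

    arrivals = source_times
    for mu, n_exec in zip(service_rates, parallelism):
        free_at = [0.0] * n_exec                  # time each executor next becomes idle
        departures = []
        for t_in in sorted(arrivals):             # tuples reach this operator in time order
            ex = rng.randrange(n_exec)            # shuffle grouping: uniform random executor
            start = max(t_in, free_at[ex])        # queue behind earlier tuples on that executor
            finish = start + rng.expovariate(mu)  # exponential service time
            free_at[ex] = finish
            departures.append(finish)
        arrivals = departures                     # this operator's output feeds the next one

    # Mean latency = mean departure time from the last operator minus mean emission time.
    return statistics.mean(arrivals) - statistics.mean(source_times)


if __name__ == "__main__":
    # Compare two candidate configurations before deploying either of them.
    for parallelism in ([2, 2], [2, 4]):
        latency = predicted_mean_latency(
            arrival_rate=100.0,           # tuples/s entering the topology
            service_rates=[120.0, 60.0],  # per-executor service rates of the two operators
            parallelism=parallelism)
        print(f"parallelism {parallelism}: predicted mean latency {latency * 1000:.1f} ms")
```

In this toy model, adding two executors to the heavily loaded second operator substantially reduces the predicted queueing delay; answering that kind of question without redeploying and re-measuring the real topology is exactly what the performance modelling system proposed in the thesis is for.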
Declaration

I declare that this thesis is my own work unless otherwise stated. No part of this thesis has previously been submitted for a degree or any other qualification at Newcastle University or any other institution.

Thomas Cooper, August 2020

Acknowledgements

I would like to start by thanking my supervisor, Dr. Paul Ezhilchelvan, for his advice, patience, good humor and honesty (even if sometimes I didn't want to hear it). It has been a journey.

Thanks to my fellow MSc Computer Science students for making my introduction to the field so much fun and for their continued friendship and support. Special thanks to Soo Darcy for her excellent proofreading skills.

My thanks to the staff of the Newcastle University Centre for Doctoral Training (CDT) in Cloud Computing for Big Data: to Drs. Paul Watson and Darren Wilkinson for leading the CDT and taking a chance on a masters student with some funny ideas; to Drs. Matt Foreshaw and Sarah Heaps for giving me a great start and support along the way; and thanks especially to Oonagh McGee and Jen Wood, without whom chaos would have reigned.

Thanks to Karthik Ramasamy, Ning Wang and all the Twitter Real Time Compute team for giving me a chance to test my ideas and learn all I could from them.

Thanks to my fellow CDT PhD students for sharing their knowledge, experience and friendship. Special mentions go to Naomi Hannaford and Lauren Roberts for being far more patient with the loud guy sitting next to them than they needed to be, and to Dr. Hugo Firth, who is a good friend and fellow shaver of yaks.

Thank you to my family, who have supported me throughout my (multiple) academic endeavours: to my father and grandfathers, for giving me my love of technology; and to my mother Amanda, who gave me the means to start my journey toward this PhD and who never wavered in her belief in me.

Finally, love and thanks to Gillian, without whom none of this would have been possible and who already knows everything I want to say.

Contents

Glossary
1 Introduction
  1.1 Distributed Stream Processing
  1.2 Topology Scaling Decisions
  1.3 Model-Based Scaling Decisions
    1.3.1 Faster convergence on a valid plan
    1.3.2 Feedback for scheduling decisions
    1.3.3 Pre-emptive scaling
    1.3.4 N-version scaling
  1.4 Summary
    1.4.1 Research aims
  1.5 Thesis Structure
  1.6 Related Publications
2 Distributed Stream Processing Architecture
  2.1 Choosing an Example System
  2.2 Apache Storm Overview
    2.2.1 Background
    2.2.2 Overview
  2.3 Storm Elements
    2.3.1 Storm cluster
    2.3.2 Topology
  2.4 Parallelism in Storm
    2.4.1 Executors
    2.4.2 Tasks
    2.4.3 Worker processes
  2.5 Stream Groupings
  2.6 Topology Plans
    2.6.1 Query plan
    2.6.2 Configuration
    2.6.3 Logical plan
    2.6.4 Physical plan
  2.7 Internal Queues
    2.7.1 Overview
    2.7.2 Queue input
    2.7.3 Arrival into the queue
    2.7.4 Timer flush interval completes
    2.7.5 Service completes
  2.8 Tuple Flow
    2.8.1 Executor tuple flow
    2.8.2 Worker process tuple flow
  2.9 Guaranteed Message Processing
    2.9.1 Acker
    2.9.2 Tuple tree
  2.10 Topology Scheduling
  2.11 Rebalancing
    2.11.1 Worker processes
    2.11.2 Executors
    2.11.3 Tasks
  2.12 Windowing
    2.12.1 Tumbling windows
    2.12.2 Sliding windows
    2.12.3 Windowing in Apache Storm
  2.13 Storm Metrics
    2.13.1 Accessing metrics
    2.13.2 Component metrics
    2.13.3 Queue metrics
    2.13.4 Custom metrics
    2.13.5 Metrics sample rates
  2.14 Summary
3 Related Work
  3.1 Threshold Based Auto-scaling
  3.2 Performance Model Based Auto-scaling
    3.2.1 Queueing theory
    3.2.2 Machine learning
    3.2.3 Other approaches
  3.3 Summary
4 Topology Performance Modelling
  4.1 Performance Modelling Procedure
    4.1.1 Modelling the topology
    4.1.2 Tuple flow plan
    4.1.3 Elements to be modelled
  4.2 Executor Latency Modelling
    4.2.1 Queue simulation
    4.2.2 Executor simulator
  4.3 Incoming Workload
  4.4 Routing Probabilities
    4.4.1 Stream routing probability
    4.4.2 Global routing probabilities
  4.5 Predicting Routing Probabilities
    4.5.1 Predicting stream routing probabilities
    4.5.2 Predicting global routing probabilities
  4.6 Input to Output Ratios
    4.6.1 Calculating input to output coefficients for source physical plans
  4.7 Predicting Input to Output Ratios
    4.7.1 Input streams containing only shuffle groupings
    4.7.2 Input streams containing at least one fields grouping
  4.8 Arrival Rates
    4.8.1 Predicting executor arrival rates
    4.8.2 Predicting worker process arrival rates
  4.9 Service Times
    4.9.1 Executor co-location effects on service time
  4.10 Predicting Service Times …
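The modelling chain that Chapter 4 lists (incoming workload, routing probabilities, input-to-output ratios, arrival rates, service times) can be illustrated with a small, self-contained calculation. The sketch below is not the thesis's implementation: the topology (a spout feeding a splitter and a counter), the rates, the input-to-output coefficient and the routing probabilities are all hypothetical. It only shows how per-executor arrival rates could be composed from those quantities.

```python
"""Illustrative arrival-rate composition for a hypothetical spout -> splitter
-> counter topology. The rates, input-to-output ratio and routing
probabilities below are invented for the example, not taken from the thesis."""

spout_rate = 200.0        # tuples per second emitted by the spout
splitter_io_ratio = 5.0   # tuples emitted by the splitter per tuple it receives

# Probability that a tuple on a stream is routed to a given downstream executor.
routing = {
    "splitter": {"splitter-0": 0.5, "splitter-1": 0.5},                  # shuffle grouping
    "counter": {"counter-0": 0.7, "counter-1": 0.2, "counter-2": 0.1},   # fields grouping (skewed keys)
}

# Arrival rate into each splitter executor: the spout's output thinned by the
# shuffle grouping's routing probabilities.
splitter_arrivals = {ex: spout_rate * p for ex, p in routing["splitter"].items()}

# The splitter's total output rate scales its total input by its input-to-output
# ratio; the fields grouping then splits that output unevenly across counter executors.
splitter_output_rate = sum(splitter_arrivals.values()) * splitter_io_ratio
counter_arrivals = {ex: splitter_output_rate * p for ex, p in routing["counter"].items()}

for executor, rate in {**splitter_arrivals, **counter_arrivals}.items():
    print(f"{executor}: predicted arrival rate {rate:.0f} tuples/s")
```

In the thesis, quantities of this kind are measured from (or predicted for) a running topology's metrics rather than hard-coded, and the resulting per-executor arrival rates feed the queue simulations used to predict end-to-end latency.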