Streaming Temporal Graphs


by Eric L. Goodman
B.S. Computer Science and Math, Brigham Young University, 2003
M.S. Computer Science, Brigham Young University, 2005

A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Computer Science, 2019.

This thesis entitled "Streaming Temporal Graphs," written by Eric L. Goodman, has been approved for the Department of Computer Science by Dr. Dirk Grunwald, Dr. Daniel Massey, Dr. Eric Keller, and Dr. Qin Lv. The final copy of this thesis has been examined by the signatories, and we find that both the content and the form meet acceptable presentation standards of scholarly work in the above mentioned discipline.

Goodman, Eric L. (Ph.D., Computer Science)
Streaming Temporal Graphs
Thesis directed by Dr. Dirk Grunwald

Abstract

We provide a domain specific language called the Streaming Analytics Language (SAL) to write concise but expressive analyses of streaming temporal graphs. We target problems where the data comes as an infinite stream and where the volume is prohibitive, requiring a single pass over the data and tight spatial and temporal complexity constraints. Each item in the stream can be thought of as an edge in a graph, and each edge has an associated timestamp and duration.

A real-world example of a streaming temporal graph is cyber security data. Machines communicate with each other within a network, forming a streaming sequence of edges with temporal information. As such, we elucidate the value of SAL by applying it to a large range of cyber-related problems. With a combination of vertex-centric computations that create features per vertex, and subgraph matching to find communication patterns of interest, we cover a wide spectrum of important cyber use cases. As an example, we discuss Verizon's Data Breach Investigations Report and show how SAL can be used to capture most of the nine different categories of cyber breaches. We also apply SAL to discovering botnet activity within network traffic in 13 different scenarios, achieving an average area under the curve (AUC) of the receiver operating characteristic (ROC) of 0.87.

Besides SAL as a language, as another contribution we present an implementation called the Streaming Analytics Machine (SAM). With SAM, we can run SAL programs in parallel on a cluster, achieving rates of a million netflows per second and scaling to 128 nodes or 2560 cores. We compare SAM to another streaming framework, Apache Flink, and find that Flink cannot scale past 32 nodes for the problem of finding triangles (a subgraph of three interconnected nodes) within the streaming graph. Also, SAM excels when the subgraphs are frequent, continuing to find the expected number of subgraphs, while Flink's performance degrades and it under-reports. Together, SAL and SAM provide an expressive and scalable infrastructure for performing analyses on streaming temporal graphs.
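To make the triangle query mentioned in the abstract concrete, here is a minimal Python sketch of single-pass temporal triangle detection over a sliding window of timestamped edges. It is an illustration only, not SAL or the SAM implementation; the edge format (source, destination, timestamp), the window length, and the assumption of time-ordered input are choices made for the example.

```python
# Illustrative sketch (not the SAM implementation) of finding temporal
# triangles a -> b -> c -> a in a single pass over a stream of timestamped
# edges, keeping only edges that fall inside a sliding time window.
from collections import defaultdict, deque

def stream_triangles(edges, window=10.0):
    """edges: iterable of (src, dst, time), assumed time-ordered.
    Yields (a, b, c) when edges a->b, b->c, c->a all occur within
    `window` seconds, in increasing time order."""
    recent = deque()         # edges still inside the window, oldest first
    out = defaultdict(list)  # src -> [(dst, time), ...] of live edges
    for src, dst, t in edges:
        # Expire edges that fell out of the window.
        while recent and recent[0][2] < t - window:
            s, d, ts = recent.popleft()
            out[s].remove((d, ts))
        # The new edge src->dst closes a triangle if earlier edges
        # dst->x and x->src exist inside the window, in time order.
        for x, t1 in out[dst]:
            for s2, t2 in out[x]:
                if s2 == src and t1 <= t2 <= t:
                    yield (dst, x, src)   # cycle dst -> x -> src -> dst
        recent.append((src, dst, t))
        out[src].append((dst, t))

# Example: (A,B), (B,C), (C,A) form a temporal triangle; (D,A) arrives
# too late for its window and closes nothing.
if __name__ == "__main__":
    stream = [("A", "B", 0.0), ("B", "C", 1.0), ("C", "A", 2.0),
              ("C", "D", 3.0), ("D", "A", 50.0)]
    print(list(stream_triangles(stream)))   # [('A', 'B', 'C')]
```

A SAL program would express this query declaratively and SAM would execute it in parallel across a cluster; the sketch only conveys the single-pass, windowed nature of the problem.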
Acknowledgements

A big thanks goes to my advisor, Dirk Grunwald, who helped me hone this dissertation over a lengthy period of time. The path I took was shaped by him and allowed me to create a much better product and a more relevant research contribution. I also thank the committee members for their time and input. This includes not only the current committee of Daniel Massey, Eric Keller, Sangtae Ha, and Qin Lv, but also former members John Black and Sriram Sankaranarayana.

Sandia National Laboratories funded my doctoral degree through their university programs, for which I am grateful. I especially appreciate the unwavering support and patience from my managers, Judy Spomer and Michael Haass. My parents were instrumental in fostering my curiosity and desire to learn, which led me down this path of educational advancement. I still haven't invented the transmat beam, but that will be my next project.

Contents

1 Introduction
2 The Streaming Analytics Language
  2.1 Case Study
  2.2 Static Vertex Attributes
  2.3 Temporal Subgraph Matching
3 SAL as a Query Language for Cyber Problems
  3.1 Crimeware
  3.2 Cyber-Espionage
  3.3 Denial of Service
  3.4 Point of Sale
  3.5 Privilege Misuse
  3.6 Web Applications
  3.7 Summary
4 Related Work
  4.1 Vertex-centric Computing
  4.2 Graph Domain Specific Languages
    4.2.1 Green-Marl
    4.2.2 Ligra
    4.2.3 Galois
    4.2.4 Gemini
    4.2.5 Grazelle
    4.2.6 GraphIt
  4.3 Linear Algebra-based Graph Systems
    4.3.1 Combinatorial BLAS
    4.3.2 Knowledge Discovery Toolbox
    4.3.3 Pegasus
    4.3.4 GraphMAT
  4.4 Graph APIs
  4.5 GPU Graph Systems
  4.6 SPARQL
  4.7 Machine Learning
  4.8 Data Stream Management Systems
  4.9 Network Telemetry
5 Implementation
  5.1 Partitioning the Data
  5.2 Vertex-centric Computations
  5.3 Subgraph Matching
    5.3.1 Data Structures
    5.3.2 Algorithm
  5.4 Summary
6 Vertex Centric Computations
  6.1 Classifier Results
  6.2 Scaling
  6.3 Summary
7 Temporal Subgraph Matching
  7.1 SAM Parameter Exploration
  7.2 Apache Flink
  7.3 Comparison Results: SAM vs Flink
  7.4 Simulating other Constraints
  7.5 Summary
8 Conclusions

Bibliography

Appendix
A SAL and Flink Programs
  A.1 Machine Learning Example
  A.2 Apache Flink Triangle Implementation

Tables

4.1 A table of vertex-centric frameworks and various attributes.
5.1 This table shows the conciseness of the SAL language. For the Disclosure pipeline (discussed in Section 2.1), the machine learning pipeline (discussed in Section 6.1), and the temporal triangle query (discussed in Chapter 7), the amount of code needed is reduced by a factor of 10 to 25.
5.2 A listing of the major classes in SAM and how they map to language features in SAL. We also distinguish between consumers, producers, and feature creators. Producers all share a parallelFeed method that sends data to registered consumers, which is how parallelization is achieved. Each consumer receives data from producers. If a consumer is also a feature creator, it performs a computation on the received data, generating a feature, which is then added to a thread-safe feature map.
6.1 A more detailed look at the AUC results. The second column is the AUC of the ROC after training on the first half of the data and testing on the second half. The third column is the reverse: trained on the second half and tested on the first half.

Figures

1.1 This dissertation is an intersection of four different research areas and presents a domain specific language (DSL) to express streaming temporal graph and machine learning queries.
2.1 This figure represents a common pattern for vertex-centric computations in SAL programs. A stream of edges is partitioned into N vertices that are then operated on by o operators that compute over a sliding window of length n items.
2.2 Illustration showing the subgraph nature of a Watering Hole Attack. The target machine accesses a popular site, Bait. Shortly after accessing Bait, Target begins communicating with a new machine, the Controller.
2.3 This figure demonstrates the use of the STREAM BY statement to logically divide a single stream into multiple streams. The STREAM BY statement is followed by a FOREACH GENERATE statement that creates estimates of the top two most frequent destination ports and their frequencies. The FILTER statement uses this information to downselect to IPs where the top two ports account for 90% of the traffic. (A plain-Python sketch of this pattern follows the figure list.)
2.4 This figure demonstrates the use of the TRANSFORM and COLLAPSE BY statements to define parts of the Disclosure pipeline.
3.1 A subgraph showing the flow of messages for data exfiltration. A controller sends a small control message to an infected machine, which then sends large amounts of data to a drop box.
3.2 A subgraph showing an XSS attack.
4.1 SPARQL query number two from the Lehigh University Benchmark.
5.1 Architecture of the system: data comes in over a socket layer and is then distributed via ZeroMQ.
5.2 The push pull object.
5.3 Traditional Compressed Sparse Row (CSR) data structure.
5.4 Modified Compressed Sparse Row (CSR) data structure.
5.5 An overview of the algorithm employed to detect subgraphs of interest. Asynchronous threads are launched to consume each edge. The consume threads then perform five different operations. Another set of threads pulls requests from other nodes for edges within the Request Communicator. Finally, one more set of threads pulls edges from other nodes in reply to edge requests.
6.1 This graphic shows the AUC of the ROC curve for each of the CTU scenarios. We first train on the first half of the data and test on the second half, and then switch. (A sketch of this split-half evaluation also follows the figure list.)
6.2 Weak Scaling Results: For each run with n nodes, a total of n million netflows were run through the system, with a million contiguous netflows per node randomly selected from the CTU dataset. There were two types of runs, with renamed IPs (to create the illusion of a larger network) or with randomized IPs. For the renamed IPs, throughput continues to improve up to 61 nodes / 976 cores.
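The STREAM BY / FOREACH GENERATE / FILTER pattern referenced for Figure 2.3 can be approximated in plain Python as follows. This is an illustrative sketch, not SAL syntax or the SAM implementation: SAL computes streaming estimates over sliding windows, whereas the sketch uses exact counters, and the function name and 90% threshold parameter are assumptions made for the example.

```python
# Hypothetical, non-SAL sketch of the Figure 2.3 pattern: partition the
# stream by source IP (STREAM BY), track the top two destination ports
# per IP (FOREACH GENERATE), and keep only IPs whose top two ports cover
# at least 90% of their traffic (FILTER).
from collections import Counter, defaultdict

def top_two_port_ips(netflows, threshold=0.9):
    """netflows: iterable of (src_ip, dest_port) pairs."""
    ports_by_ip = defaultdict(Counter)
    for src_ip, dest_port in netflows:       # logical STREAM BY src_ip
        ports_by_ip[src_ip][dest_port] += 1
    selected = {}
    for ip, counts in ports_by_ip.items():   # FOREACH GENERATE top-2 ports
        top_two = counts.most_common(2)
        if sum(n for _, n in top_two) >= threshold * sum(counts.values()):
            selected[ip] = [port for port, _ in top_two]   # FILTER
    return selected

# Example: 192.0.2.1 sends 90% of its traffic to ports 443 and 80,
# while 192.0.2.2 spreads traffic evenly across ten ports.
flows = ([("192.0.2.1", 443)] * 8 + [("192.0.2.1", 80)] +
         [("192.0.2.1", 22)] + [("192.0.2.2", p) for p in range(10)])
print(top_two_port_ips(flows))   # {'192.0.2.1': [443, 80]}
```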
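The split-half evaluation referenced for Figure 6.1 and Table 6.1 can be sketched as follows. This is a hypothetical illustration using scikit-learn: roc_auc_score and the train/test slicing mirror the described procedure, but the random forest is an arbitrary stand-in, since this section does not name the classifier trained on the per-vertex features.

```python
# Sketch of the split-half AUC evaluation: train on the first half of the
# time-ordered data and test on the second, then swap. The classifier
# choice below is an assumption, not the one used in the dissertation.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def split_half_auc(X, y):
    """X: per-vertex feature rows, y: 0/1 botnet labels, both time-ordered."""
    mid = len(y) // 2
    halves = [(slice(0, mid), slice(mid, None)),    # train first, test second
              (slice(mid, None), slice(0, mid))]    # train second, test first
    aucs = []
    for train, test in halves:
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train], y[train])
        scores = clf.predict_proba(X[test])[:, 1]   # positive-class probability
        aucs.append(roc_auc_score(y[test], scores))
    return aucs
```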