Quality of Analytics Management of Data Pipelines for Retail Forecasting


Quality of Analytics Management of Data Pipelines for Retail Forecasting

Krists Kreics

School of Science

Thesis submitted for examination for the degree of Master of Science in Technology.
Espoo 29.07.2019

Thesis supervisor: Prof. Hong-Linh Truong
Thesis advisors: Dr.Sci. (Tech.) Mikko Ervasti, M.Sc. Teppo Luukkonen

Aalto University, School of Science — Abstract of the Master's Thesis

Author: Krists Kreics
Title: Quality of analytics management of data pipelines for retail forecasting
Date: 29.07.2019
Language: English
Number of pages: 54+3
Degree programme: Master's Programme in ICT Innovation
Major: Data Science
Code: SCI3095
Supervisor: Prof. Hong-Linh Truong
Advisors: Dr.Sci. (Tech.) Mikko Ervasti, M.Sc. Teppo Luukkonen

This thesis presents a framework for managing quality of analytics in data pipelines. The main research question of this thesis is the trade-off management between cost, time and data quality in retail forecasting. Generally, this trade-off in data analytics is defined as quality of analytics. The challenge is addressed by introducing a proof-of-concept framework that collects real-time metrics about data quality, resource consumption and other relevant properties of tasks within a data pipeline. The data pipelines within the framework are developed using Apache Airflow, which orchestrates Dockerized tasks. Different metrics of each task are monitored and stored in ElasticSearch. Cross-task communication is enabled by an event-driven architecture that uses RabbitMQ as the message queue and custom consumer images written in Python. With the help of these consumers the system can control the result with respect to quality of analytics. Empirical testing of the final system with retail datasets showed that this approach can help data science teams provide better services on demand with bounded resources, especially when dealing with big data.
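The cost/time/quality trade-off described in the abstract can be sketched as a weighted cost function over per-task metrics. This is a minimal illustration only; the metric names, weights and threshold below are assumptions for the sketch, not the formulation used in the thesis:

```python
# Minimal sketch of a quality-of-analytics (QoA) cost function.
# Metric names, weights and the threshold are illustrative assumptions.

def qoa_cost(metrics, w_time=0.4, w_cost=0.4, w_quality=0.2):
    """Combine normalized task metrics into a single QoA cost.

    metrics: dict of values normalized to [0, 1]; for 'runtime' and
    'resource_cost' lower is better, for 'data_quality' higher is better.
    """
    return (w_time * metrics["runtime"]
            + w_cost * metrics["resource_cost"]
            + w_quality * (1.0 - metrics["data_quality"]))

def needs_adjustment(metrics, threshold=0.5):
    """Flag a task whose combined QoA cost exceeds the threshold."""
    return qoa_cost(metrics) > threshold

fast_cheap_good = {"runtime": 0.2, "resource_cost": 0.1, "data_quality": 0.9}
slow_expensive = {"runtime": 0.9, "resource_cost": 0.8, "data_quality": 0.6}

print(needs_adjustment(fast_cheap_good))  # False
print(needs_adjustment(slow_expensive))   # True
```

A consumer watching the metrics stream could use such a score to decide whether a pipeline run should continue unchanged or be adjusted.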
Keywords: machine learning, offline learning, data pipelines, quality of analytics, Apache Airflow

Acknowledgements

I would like to thank my wife Kamilla for supporting me in the long hours that went into this work and allowing me to fully focus on writing. I really cannot believe in the way we flow.

I would also like to thank my professor Hong-Linh Truong for providing interesting discussion points and feedback. Finally, I would like to thank my advisors Teppo Luukkonen and Mikko Ervasti and all the wonderful people at Sellforte who provided me with a fun and challenging working environment that truly was one of a kind.

Otaniemi, 29.07.2019
Krists Kreics

Contents

Abstract
Acknowledgements
Contents
Abbreviations and Acronyms
1 Introduction
  1.1 Contributions
  1.2 Structure of the thesis
2 Background
  2.1 Case company
  2.2 Problem statement
  2.3 System requirements
  2.4 Test case
3 Literature review
  3.1 Machine learning pipeline management
  3.2 Managing quality of analytics
4 Overview of existing frameworks
  4.1 Comparison attributes
  4.2 Machine learning frameworks
    4.2.1 AWS SageMaker
    4.2.2 Azure ML Service
    4.2.3 Google ML Engine
    4.2.4 Pachyderm
    4.2.5 Apache PredictionIO
    4.2.6 Valohai
    4.2.7 Kubeflow
  4.3 Evaluation of machine learning frameworks
  4.4 Experiment management frameworks
    4.4.1 DVC
    4.4.2 Polyaxon
    4.4.3 Summary
5 Overview of related technologies
  5.1 Task orchestration
    5.1.1 AWS Step Functions
    5.1.2 Luigi
    5.1.3 Netflix Conductor
    5.1.4 Apache Airflow
  5.2 Task organization
  5.3 Data storage
  5.4 Model serving
  5.5 Resource monitoring
    5.5.1 cAdvisor
    5.5.2 Docker Stats API
    5.5.3 psutil
6 Technical solution
  6.1 Architecture overview
    6.1.1 Task runner
    6.1.2 Data storage
    6.1.3 Model serving
    6.1.4 Message consumers
    6.1.5 Task design
  6.2 Using the framework
    6.2.1 Setup
    6.2.2 Setting a custom cost function
    6.2.3 Pushing custom metrics
    6.2.4 Setting custom control rules
    6.2.5 Changing the storage
  6.3 Summary
7 Framework evaluation
  7.1 Description of data
  7.2 Description of QoA management strategies
  7.3 Description of metrics
  7.4 Description of pipelines
  7.5 Adjustment action strategy evaluation
  7.6 Resource control strategy evaluation
  7.7 Auxiliary evaluation results
  7.8 Summary
8 Discussion
  8.1 Meeting the requirements
  8.2 Future work
9 Conclusion

Abbreviations and Acronyms

API   Application programming interface
AWS   Amazon Web Services
DAG   Directed acyclic graph
ECS   Amazon Elastic Container Service
EC2   Amazon Elastic Compute Cloud
ML    Machine learning
QoA   Quality of analytics
SDK   Software development kit
S3    Amazon Simple Storage Service
vCPU  Virtual CPU

1 Introduction

With the rapid increase in computing power and the growth of available data, machine learning has become a mature topic in the last decade [1]. Nowadays enterprises of different sizes use some kind of machine learning solution to either improve their business or enhance their product. The development and deployment of such solutions bring new overhead to these enterprises. A typical machine learning solution does not consist only of the program code that is used for training models; it also has to be able to connect to a training data source, store the trained models and make these models available for use in different systems. Typically, machine learning systems are composed as data pipelines. A data pipeline orchestrates different data processing tasks. Usually, tasks are executed consecutively and the result of the pipeline is a function or model that is used to make predictions [2].

Offline learning is a subset of machine learning workflows [3]. Such workflows do not change the approximation of their target function once the initial training phase is done. This setting is usually used when data updates are infrequent and when the whole dataset can be used for training the model or target function.

The first focal point of any machine learning solution is to develop an algorithm that can approximate the given data with good enough accuracy. After the algorithm is constructed, it is evaluated. If it is good enough, an optimization process can be started. For example, in startups, one might want to have control over different aspects at the same time.
This is because requirements can change quickly and resources, for example time and money, have to be used carefully. This topic has not been studied extensively and requires deeper examination and more practical applications. Since data pipeline management is still a young topic, the available literature and online resources are more limited than for more general software engineering topics. This is why the goal of this thesis is to assess the current state-of-the-art practices and technologies and, based on that knowledge, to architect and develop a solution that provides trade-off management and quality assessment capabilities in an industrial setting.

1.1 Contributions

The main contribution of this thesis is a framework that gives control over trade-off management and provides task-level monitoring capabilities. The proof of concept is open-sourced and available at https://github.com/kristsellforte/qoa_framework. Currently, only a limited number of practical examples for such pipelines exist. The framework was tested with retail data in collaboration with Sellforte [4], a Finnish startup specialized in marketing and promotion analytics. A secondary contribution of the thesis is a thorough comparison of existing machine learning frameworks and common workflows.

1.2 Structure of the thesis

This thesis is structured in nine sections. The first section introduces the thesis and highlights contributions. The second section provides a deeper background and sets out clear goals for the thesis. The third section features a literature overview that highlights current challenges in model management and approaches for managing different trade-offs in data pipelines. The fourth section features an overview of the current state-of-the-art machine learning frameworks to
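The control loop outlined in the abstract — tasks publish metrics to a message queue and consumers apply control rules to steer the pipeline — can be sketched in plain Python. The message schema, rule shape and the in-memory queue standing in for RabbitMQ are all illustrative assumptions, not the framework's actual interfaces:

```python
import json
from queue import Queue

# Sketch of event-driven QoA control: a task publishes a metrics
# message, a consumer applies control rules and returns an adjustment
# action. Field names and thresholds are assumptions; an in-memory
# Queue stands in for RabbitMQ.

queue = Queue()

def publish_metrics(task_id, runtime_s, memory_mb, missing_ratio):
    """Serialize task metrics as JSON and put them on the queue."""
    queue.put(json.dumps({
        "task_id": task_id,
        "runtime_s": runtime_s,
        "memory_mb": memory_mb,
        "missing_ratio": missing_ratio,
    }))

def consume_and_decide(max_runtime_s=600, max_missing_ratio=0.1):
    """Read one metrics message and return a control decision."""
    metrics = json.loads(queue.get())
    if metrics["missing_ratio"] > max_missing_ratio:
        return (metrics["task_id"], "abort: data quality too low")
    if metrics["runtime_s"] > max_runtime_s:
        return (metrics["task_id"], "adjust: reduce input size")
    return (metrics["task_id"], "continue")

publish_metrics("train_model", runtime_s=720, memory_mb=2048,
                missing_ratio=0.02)
print(consume_and_decide())  # ('train_model', 'adjust: reduce input size')
```

Decoupling metric producers from rule-applying consumers in this way is what lets the framework add or change control rules without touching the pipeline tasks themselves.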