Amazon SageMaker - DP 2019

Total Pages: 16

File Type: PDF, Size: 1020 KB

Amazon Web Services and Domo: Make AI the Centerpiece of Your Digital Transformation
Kumar Venkateswar, Principal Product Manager, Amazon Web Services
Nihar Namjoshi, Director, Product Management, Domo

Amazon SageMaker: build, train, and deploy machine learning models quickly and easily, at scale.
© 2018, Amazon Web Services, Inc. or its Affiliates. All rights reserved. Amazon Confidential and Trademark

Our approach to machine learning:
• Customer-focused: 90%+ of our ML roadmap is defined by customers.
• Pace of innovation: 200+ new ML launches and major feature updates last year.
• Breadth and depth: a wide range of AI and ML services.
• Embedded R&D and a customer-centric approach.
• Multi-framework: support for the most popular frameworks.
• Security and analytics: a deep set of security and analytics capabilities with robust encryption.

The AWS machine learning stack:
• AI services: easily add intelligence to applications without machine learning skills (vision, documents, speech, language, chatbots, forecasting, recommendations). Services include Rekognition Image, Rekognition Video, Polly, Transcribe, Translate, Lex, Textract (new), Comprehend and Comprehend Medical (new), Forecast (new), and Personalize (new).
• ML services: build, train, and deploy machine learning models fast with Amazon SageMaker, including data labeling (Ground Truth, new), pre-built algorithms and notebooks, reinforcement learning (new), one-click training and hosting, AWS Marketplace, and model optimization (Neo, new).
• ML frameworks and infrastructure: flexibility, choice, and the highest-performing infrastructure, with support for ML frameworks and compute options purpose-built for ML: EC2 P3 and P3dn (new), EC2 C5, FPGAs, Greengrass, and Elastic Inference (new).

Amazon SageMaker: Build, Train, and Deploy ML Models at Scale.

Data science evolution: connect & combine → clean & manage → explore & visualize → develop model → deploy model → act.

Motivation:
• As a platform, Domo is designed to work with your existing investments.
• Amazon SageMaker is the leading machine learning cloud service that lets you build, train, and deploy models at scale.
• Currently, there is no easy way to leverage data science investments that are outside Domo.

Behind the scenes.

Availability:
• Early access preview summer
• Beta summer 2019
• Full GA 2H 2019

Questions?
Thank you.

Ongoing enhancements to Amazon SageMaker:
MXNet 1.3 container | CloudTrail integration for audit logs | TensorFlow 1.7 containers | Automatic Model Tuning—add/delete tags | Jupyter notebook IP filtering
Region expansion to SFO | Image Classification multi-label support | TensorFlow and MXNet containers—open sourcing and local mode | PyTorch pre-built container
Region expansion to PDT | Batch customer VPC | PCI DSS compliance | XGBoost instance weights | NTM—vocab, metrics, and subsampling
Anomaly detection (Random Cut Forest) algorithm | DeepAR algorithm | SageMaker region expansion to ICN | Hyperparameter tuning job cloning on the console
Autoscaling console | PyTorch 1.0 container | Customer VPC support for training and hosting | PrivateLink support for SageMaker inferencing APIs
Horovod support in TensorFlow container | Variable sizes for notebook EBS volumes | nbexample support in SageMaker notebook instances | Tag-based access control
Automatic Model Tuning—early stopping | IP Insights algorithm | Chainer 4.1 container | Region expansion to SIN
Built-in algorithms Pipe mode support | TensorFlow 1.8 container | Region expansion to FRA | Training job cloning in console | Algorithm Pipe mode enhancements | Pipe mode support for text, RecordIO, and images
TensorFlow 1.5, MXNet 1.0, and CUDA 9 support | DeepAR algorithm enhancements | Linear Learner multi-class classification | TensorFlow 1.10 container
Region expansion to YUL | BlazingText algorithm | Batch KMS | k-nearest neighbors | Object detection | Chainer pre-built container | Apache Airflow integration
Region expansion to BOM | GDPR compliance | BlazingText enhancements | TensorFlow 1.9 container | Notebook bootstrap script
Amazon SageMaker Hosting custom header attribute | Metrics support in training jobs | Object2Vec | TensorFlow container enhancements | CloudFormation support
PrivateLink support for SageMaker control plane | MXNet 1.2 container | HIPAA compliance | Ground Truth | Python SDK
Marketplace support | Git integration for SageMaker notebooks | Pipe mode support for TensorFlow | ml.p3.2xlarge notebook instances | Internet-free notebook instances
Semantic segmentation algorithm | SageMaker Reinforcement Learning support | Linear Learner improvements | SageMaker Batch Transform
Region expansion to NRT | High-performance I/O streaming in Pipe mode | Pause/resume for active learning algorithms | Pre-built scikit-learn container
Step Functions for SageMaker | KMS support for training and hosting | Incremental learning algorithm enhancements | TensorFlow 1.11 container | NTM feature release
Deep Learning Compiler | ONNX support for frameworks and algorithms | Full instance type support | Pipe mode CSV support | Region expansion to LHR
Incremental training platform support | Login anomaly detection algorithm | Serial inference pipeline | Experiment management | Region expansion to SYD
MXNet container enhancements | Automatic Model Tuning | Automatic Model Tuning—incremental tuning | Spark MLeap 1P container
TensorFlow 1.6 and MXNet 1.1 containers | Region expansion to SIN | Mead notebook PrivateLink support | Linear Learner support

Amazon SageMaker: Build, Train, and Deploy ML Models at Scale:
1. Collect and prepare training data; choose and optimize your ML algorithm
2. Set up and manage environments for training; train and tune models
3. Deploy model in production; scale and manage the production environment

Successful models require high-quality data.
Amazon SageMaker Ground Truth: build highly accurate training datasets and reduce data labeling costs by up to 70% using machine learning.

How it works (diagram slides).
Recommended publications
  • Programming Models to Support Data Science Workflows
    Universitat Politècnica de Catalunya (UPC) BarcelonaTech, Computer Architecture Department (DAC). Programming models to support Data Science workflows. Ph.D. thesis, 2020, spring semester. Author: Cristián Ramón-Cortés Vilarrodona ([email protected]). Advisors: Dra. Rosa M. Badia Sala ([email protected]), Dr. Jorge Ejarque Artigas ([email protected]).
    "Apenas él le amalaba el noema, a ella se le agolpaba el clémiso y caían en hidromurias, en salvajes ambonios, en sustalos exasperantes. Cada vez que él procuraba relamar las incopelusas, se enredaba en un grimado quejumbroso y tenía que envulsionarse de cara al nóvalo, sintiendo cómo poco a poco las arnillas se espejunaban, se iban apeltronando, reduplimiendo, hasta quedar tendido como el trimalciato de ergomanina al que se le han dejado caer unas fílulas de cariaconcia. Y sin embargo era apenas el principio, porque en un momento dado ella se tordulaba los hurgalios, consintiendo en que él aproximara suavemente sus orfelunios. Apenas se entreplumaban, algo como un ulucordio los encrestoriaba, los extrayuxtaba y paramovía, de pronto era el clinón, la esterfurosa convulcante de las mátricas, la jadehollante embocapluvia del orgumio, los esproemios del merpasmo en una sobrehumítica agopausa. ¡Evohé! ¡Evohé! Volposados en la cresta del murelio, se sentían balpamar, perlinos y márulos. Temblaba el troc, se vencían las marioplumas, y todo se resolviraba en un profundo pínice, en niolamas de argutendidas gasas, en carinias casi crueles que los ordopenaban hasta el límite de las gunfias." (Julio Cortázar, Rayuela; a passage in Cortázar's invented language, glíglico.)
    Dedication: This work would not have been possible without the effort and patience of the people around me.
  • Presto: the Definitive Guide
    Presto: The Definitive Guide. SQL at Any Scale, on Any Storage, in Any Environment. Compliments of Matt Fuller, Manfred Moser & Martin Traverso.
    Praise for Presto: The Definitive Guide:
    "This book provides a great introduction to Presto and teaches you everything you need to know to start your successful usage of Presto." —Dain Sundstrom and David Phillips, Creators of the Presto Projects and Founders of the Presto Software Foundation
    "Presto plays a key role in enabling analysis at Pinterest. This book covers the Presto essentials, from use cases through how to run Presto at massive scale." —Ashish Kumar Singh, Tech Lead, Bigdata Query Processing Platform, Pinterest
    "Presto has set the bar in both community-building and technical excellence for lightning-fast analytical processing on stored data in modern cloud architectures. This book is a must-read for companies looking to modernize their analytics stack." —Jay Kreps, Cocreator of Apache Kafka, Cofounder and CEO of Confluent
    "Presto has saved us all—both in academia and industry—countless hours of work, allowing us all to avoid having to write code to manage distributed query processing."
  • Summary Areas of Interest Skills Experience
    Gonçalo N. Paiva Amador
    Lisbon, Portugal · Portuguese · 27/07/1983 · Male · Single
    (+351) 962-816-858 | [email protected]
    github.com/g-amador | www.linkedin.com/in/g-amador
    "First, solve the problem. Then, write the code." (John Johnson)
    Summary
    Software developer in several programming languages, with experience as the primary technical contact point with one or more clients, assisting in planning, debugging, and supervising ongoing critical business applications. Former project researcher, lab instructor, and scientific presenter. Highly motivated, communicative, and self-sufficient professional with a solid academic background in C.S. & Engineering. Known as a team player and constant self-driven learner, striving to address novel and exciting challenges; prefers to work with teams in order to grow personally and professionally.
    Areas of Interest
    • Computer/video games & gamification. • Artificial intelligence, robotics, and cybernetics. • Touch/voice/camera-based HCI technologies. • Multi-core CPU/GPU and cloud computing. • 3D animation/modelling & geometrical computing. • Workforce scheduling & management software. • Mobile applications development.
    Skills
    Programming/meta languages & APIs/frameworks: C/C++[4], C#[2][6], Java[4][5], ASP Classic/.NET (Standard)/.NET Core[6], React Native[5], PL/SQL[5], T-SQL[2][6], Ajax/jQuery/Bootstrap[5], Angular & Node.js[5], Apache Math Commons[4], Apache Struts[5], OpenGL[4], CUDA/OpenMP/OpenCL/MPI[3], HTML/CSS/JavaScript/JSON/XML[3][6], UML[2], LaTeX[4].
    Productivity tools: Azure DevOps & Team Foundation Server (TFS)[6], Zendesk[5], Redmine[5], Git[3][6], SVN[2][5], Apache Airflow[5], Apache Maven[2][5], Apache Ant[4][5], Apache Tomcat[5], Blender[2], jMonkeyEngine[2], Eclipse[3][5], NetBeans[4], Oracle SQL Developer[6], Visual Studio 2005-current[4][6], Visual Studio Code[6], MS SQL Server 2012-current[2][6], PowerBI[5], Google Chrome[4][7], MS Edge[6], Internet Explorer[4][6], SoapUI[5], Postman[6], Swagger[5], Wireshark[1][5], Fiddler[5], IIS 6-8[5].
  • TR-4798: Netapp AI Control Plane
    Technical Report TR-4798: NetApp AI Control Plane. Pairing Popular Open-Source Tools with NetApp to Enable AI, ML, and DL Data and Experiment Management. Mike Oglesby, NetApp, October 2020.
    Abstract: As organizations increase their use of artificial intelligence (AI), they face many challenges, including workload scalability and data availability. This document demonstrates how to address these challenges through the use of NetApp® AI Control Plane, a solution that pairs NetApp data management capabilities with popular open-source tools and frameworks that are used by data scientists and data engineers. In this document, we show you how to rapidly clone a data namespace just as you would a Git repo. We demonstrate how to define and implement AI training workflows that incorporate the near-instant creation of data and model baselines for traceability and versioning. We also show how to seamlessly replicate data across sites and regions and swiftly provision Jupyter Notebook workspaces with access to massive datasets.
    Table of contents: 1 Introduction; 2 Concepts and Components (2.1 Artificial Intelligence; 2.2 Containers)
  • Using Amazon EMR with Apache Airflow: How & Why to Do It
    Using Amazon EMR with Apache Airflow: How & Why To Do It
    In this introductory article, I explore Amazon EMR and how it works with Apache Airflow.
    What is Amazon EMR? Amazon EMR is an orchestration tool to create a Spark or Hadoop big data cluster and run it on Amazon virtual machines. That's the original use case for EMR: MapReduce and Hadoop. We'll take a look at MapReduce later in this tutorial.
    What is Apache Airflow? Apache Airflow is a tool for defining and running jobs, i.e., a big data pipeline, on: Apache Hadoop; Snowflake (data warehouse); Amazon products including EMR, Redshift (data warehouse), S3 (file storage), and Glacier (long-term data archival); and many other products. Airflow can also start and tear down Amazon EMR clusters. That's important because your EMR clusters could get quite expensive if you leave them running when they are not in use.
    Benefits of Airflow: for most use cases, there are two main advantages to Airflow running on an Apache Hadoop and Spark environment: cost management, and optional surge capacity (which, of course, is one element of cost). Plus, it's simpler to use Airflow and its companion product Genie (developed by Netflix) to do things like run jobs using spark-submit or Hadoop queues, which have a lot of configuration options and require an understanding of things like YARN, a resource manager. In this setup, Airflow lets you bundle jar files, Python code, and configuration data into metadata, and provides a feedback loop in case any of that goes wrong. So, it might be better to use Airflow than the alternative: typing spark-submit into the command line and hoping for the best.
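MapReduce, the original EMR use case mentioned above, can be sketched in a few lines of plain Python. This is a toy word count, not EMR or Airflow code; all names here are illustrative:

```python
from collections import Counter
from functools import reduce

def map_phase(document: str) -> Counter:
    """Map: count words within a single document (one mapper's output)."""
    return Counter(document.lower().split())

def reduce_phase(a: Counter, b: Counter) -> Counter:
    """Reduce: merge the partial counts produced by two mappers."""
    return a + b

def word_count(documents: list[str]) -> dict[str, int]:
    """Run the map phase over every document, then fold the partial
    results together with the reduce phase."""
    partials = [map_phase(d) for d in documents]
    return dict(reduce(reduce_phase, partials, Counter()))

docs = ["spark and hadoop", "hadoop on emr", "spark on emr"]
print(word_count(docs))
```

On a real cluster the map calls run in parallel across nodes and the reduce step merges results per key; the fold above collapses both stages onto one machine.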
  • Migrating from Snowflake to Bigquery Data and Analytics
    Migrating from Snowflake to BigQuery. Data and analytics. Date: February 2021.
    Contents: About this document; 1. Introduction (1.1 Terminology map; 1.2 Architectural comparison); 2. Pre-migration (2.1 Capacity planning; 2.2 Data security; 2.3 Migration considerations: 2.3.1 Fully supported scenarios, 2.3.2 Partially supported scenarios, 2.3.3 Unsupported scenarios); 3. Migrating your data (3.1 Migration strategy: 3.1.1 Migration architecture, 3.1.2 Final state; 3.2 Preparing your Cloud Storage environment; 3.3 Building your schema: 3.3.1 Schema changes, 3.3.2 Clustered tables, 3.3.3 Updating a schema; 3.4 Supported data types, properties, and file formats: 3.4.1 Considerations for using CSV, 3.4.2 Considerations for using Parquet, 3.4.3 Considerations for using JSON; 3.5 Migration tools; 3.6 Migrating the data: 3.6.1 Migration using pipelines (3.6.1.1 Extract and load (EL), 3.6.1.2 Extract, transform, and load (ETL), 3.6.1.3 ELT, 3.6.1.4 Partner tools for migration), 3.6.2 Example of the export process for migrating Snowflake to BigQuery (3.6.2.1 Preparing for the export, 3.6.2.2 Exporting your Snowflake data, 3.6.2.3 Load data into BigQuery)); 4. Post-migration (4.1 Reporting and analysis; 4.2 Performance optimization).
    About this document: This document provides guidance on how to migrate your database from a Snowflake data warehouse to BigQuery.
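The schema-building step of such a migration implies translating Snowflake column types into BigQuery types. A minimal sketch in plain Python, assuming an illustrative and deliberately incomplete type map rather than the guide's actual mapping table:

```python
# Illustrative subset of a Snowflake -> BigQuery type map.
# This dictionary is an assumption for the sketch, not the
# authoritative mapping from the migration guide.
SNOWFLAKE_TO_BIGQUERY = {
    "NUMBER": "NUMERIC",
    "FLOAT": "FLOAT64",
    "VARCHAR": "STRING",
    "BOOLEAN": "BOOL",
    "DATE": "DATE",
    "TIMESTAMP_NTZ": "DATETIME",
}

def translate_schema(snowflake_schema: list[tuple[str, str]]) -> list[dict]:
    """Translate (column, snowflake_type) pairs into BigQuery field specs,
    failing loudly on any type the map does not cover so it can be
    reviewed by hand (a partially supported scenario)."""
    fields = []
    for name, sf_type in snowflake_schema:
        bq_type = SNOWFLAKE_TO_BIGQUERY.get(sf_type.upper())
        if bq_type is None:
            raise ValueError(f"unsupported Snowflake type: {sf_type}")
        fields.append({"name": name, "type": bq_type})
    return fields

schema = [("order_id", "NUMBER"), ("status", "VARCHAR"), ("paid", "BOOLEAN")]
print(translate_schema(schema))
```

The output shape mirrors a BigQuery JSON schema definition; in a real migration the per-type considerations (precision of NUMERIC, timezone handling) need checking against the guide.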
  • Spring Boot AMQP Starter 1.5.8.RELEASE
    Veritas Information Studio 1.0 Third-party Software License Agreements, April 2019. This document includes the following topics:
    ■ Third-party software license agreements ■ kubernetes-cni 0.5.1-00 ■ Jackson-annotations 2.9.5 ■ k8s 1.14.5 ■ kubernetes/ingress-nginx 0.9.0-beta.15 ■ pause-amd64 3.0 ■ kubernetes-client-python 3.0.0 ■ kubernetes/api 1.8.0 ■ kubernetes/apiserver 1.10.0-alpha.1 ■ Code Generation Library - cglib:cglib 3.1 ■ Apache CXF Runtime WS Security 3.1.4 ■ jackson-databind 2.9.8 ■ defaultbackend 1.3 ■ go-openapi/jsonpointer ■ Apache Commons Lang 3.2.1 ■ hadoop 2.9.0 ■ smbj 0.5.1 ■ ng2-idle 2.0.0-beta.13 ■ RxJS - ReactiveX/rxjs 5.5.2 ■ JSR-311 - JAX-RS - The Java API for RESTful Web Services (Jersey) 1.1.1-API ■ WSDL4J - wsdl4j:wsdl4j 1.6.3 ■ Jetty extensions to the Google OAuth Client Library for Java 1.23.0 ■ allblobstore 2.0.3 ■ jclouds blobstore core 2.0.3 ■ Admin Directory API directory_v1-rev86-1.23.0 ■ Drive API v3-rev87-1.23.0 ■ Apache CXF Runtime JAX-WS Frontend 3.1.11 ■ OpenCMIS Client Implementation 1.1.0 ■ flexboxgrid 6.3.1 ■ matttproud-golang_protobuf_extensions v1.0.0 ■ AWS Java SDK for Amazon CloudWatch 1.11.136 ■ DataStax Java Driver for Apache Cassandra - Object Mapping 3.2.0 ■ DataStax Java Driver for Apache Cassandra - Extras 3.2.0 ■ Bean Validation API - javax.validation:validation-api 2.0.0.CR1 ■ Apache HttpClient 4.5 ■ Apache Log4j API 2.8.2 ■ Apache Kafka - org.apache.kafka:kafka-clients
  • Amazon EMR Migration Guide How to Move Apache Spark and Apache Hadoop from On-Premises to AWS
    Amazon EMR Migration Guide: How to Move Apache Spark and Apache Hadoop From On-Premises to AWS. December 2, 2020.
    Notices: Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
    Contents: Overview; Starting Your Journey (Migration Approaches; Prototyping; Choosing a Team)
  • Building a Google Cloud Data Platform
    Building a Google Cloud Data Platform: A blueprint for success with BigQuery. April 2020.
    Table of Contents: 1. Overview (1.1 Introduction; 1.2 Objective; 1.3 Approach); 2. Cloud Data Platforms - Key Concepts (2.1 Platform objectives; 2.2 Conceptual architecture; 2.3 Platform user considerations; 2.4 Information management: 2.4.1 Data classification, 2.4.2 Data governance, 2.4.3 Data quality, 2.4.4 Data risks and privacy); 3. Google Cloud as a Data Platform (3.1 Google Cloud's data capabilities; 3.2 Solution architecture - Google Cloud data capabilities; 3.3 BigQuery: 3.3.1 Overview, 3.3.2 Data modelling and structuring BigQuery, 3.3.3 Ingestion, 3.3.4 Enrichment, processing and analysis, 3.3.5 Performance and cost optimisation; 3.4 Data transformation - ELT/ETL: 3.4.1 Cloud Dataflow, 3.4.2 Dataprep, 3.4.3 Other data manipulation tools on Google Cloud, 3.4.4 Scheduling and orchestration, 3.4.5 Data risk; 3.5 Machine Learning & AI: 3.5.1 Google ML & AI tooling used with applied data science, 3.5.2 Kubernetes for ML payloads; 3.6 Data accessibility, reporting and visualisation: 3.6.1 Data accessibility tools, 3.6.2 Data Studio); 4. Building a Cloud Data Blueprint (4.1 Principles of constructing a blueprint; 4.2 People, process and technology considerations; 4.3 Foundation cloud capabilities); 5.
  • Quality of Analytics Management of Data Pipelines for Retail Forecasting
    Krists Kreics. Quality of analytics management of data pipelines for retail forecasting. School of Science. Thesis submitted for examination for the degree of Master of Science in Technology. Espoo, 29.07.2019. Thesis supervisor: Prof. Hong-Linh Truong. Thesis advisors: Dr.Sci. (Tech.) Mikko Ervasti, M.Sc. Teppo Luukkonen.
    Aalto University, School of Science: abstract of the master's thesis. Author: Krists Kreics. Title: Quality of analytics management of data pipelines for retail forecasting. Date: 29.07.2019. Language: English. Number of pages: 54+3. Degree programme: Master's Programme in ICT Innovation. Major: Data Science. Code: SCI3095. Supervisor: Prof. Hong-Linh Truong. Advisors: Dr.Sci. (Tech.) Mikko Ervasti, M.Sc. Teppo Luukkonen.
    This thesis presents a framework for managing quality of analytics in data pipelines. The main research question of this thesis is the trade-off management between cost, time, and data quality in retail forecasting. Generally, this trade-off in data analytics is defined as quality of analytics. The challenge is addressed by introducing a proof-of-concept framework that collects real-time metrics about data quality, resource consumption, and other relevant metrics from tasks within a data pipeline. The data pipelines within the framework are developed using Apache Airflow, which orchestrates Dockerized tasks. Different metrics of each task are monitored and stored to Elasticsearch. Cross-task communication is enabled by using an event-driven architecture that uses RabbitMQ as the message queue and custom consumer images written in Python. With the help of these consumers, the system can control the result with respect to quality of analytics.
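The per-task monitoring the abstract describes can be sketched in plain Python. In this toy, an in-memory list stands in for the Elasticsearch store and RabbitMQ consumers (both are assumptions of the sketch, not the thesis's actual implementation), and "data quality" is reduced to one signal, the fraction of non-null rows:

```python
import time

# In-memory stand-in for the metrics store; the real framework
# ships per-task metrics off-process (Elasticsearch via RabbitMQ).
METRICS: list[dict] = []

def monitored(task_name: str):
    """Wrap a pipeline task so each run records elapsed time, input
    size, and a simple data-quality signal alongside its result."""
    def decorator(fn):
        def wrapper(rows):
            start = time.perf_counter()
            result = fn(rows)
            METRICS.append({
                "task": task_name,
                "seconds": time.perf_counter() - start,
                "rows_in": len(rows),
                "completeness": sum(r is not None for r in rows) / len(rows),
            })
            return result
        return wrapper
    return decorator

@monitored("clean")
def clean(rows):
    """A toy pipeline task: drop null rows."""
    return [r for r in rows if r is not None]

out = clean([1, None, 3, 4])
print(out, METRICS[0])
```

A controller consuming `METRICS` could then trade cost against quality, e.g. by skipping downstream tasks when completeness falls below a threshold, which is the kind of decision the quality-of-analytics framework automates.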
  • Airflow Documentation
    Airflow Documentation, Release 1.10.2. Apache Airflow, Jan 23, 2019.
    Contents: 1 Principles; 2 Beyond the Horizon; 3 Content: 3.1 Project (3.1.1 History; 3.1.2 Committers; 3.1.3 Resources & links; 3.1.4 Roadmap); 3.2 License; 3.3 Quick Start (3.3.1 What's Next?); 3.4 Installation (3.4.1 Getting Airflow; 3.4.2 Extra Packages; 3.4.3 Initiating Airflow Database); 3.5 Tutorial (3.5.1 Example Pipeline definition; 3.5.2 It's a DAG definition file; 3.5.3 Importing Modules; 3.5.4 Default Arguments; 3.5.5 Instantiate a DAG; 3.5.6 Tasks; 3.5.7 Templating with Jinja; 3.5.8 Setting up Dependencies; 3.5.9 Recap; 3.5.10
  • Apache Airflow Overview Viktor Kotliar
    Apache Airflow overview. Viktor Kotliar. Data Knowledge Catalog Meeting (TPU/NRC KI), Thursday 23 Jul 2020, 11:00 → 12:00 Europe/Moscow.
    Introduction
    ● Apache Airflow is an open-source workflow management platform.
    ● It started at Airbnb in October 2014 as a solution to manage the company's increasingly complex workflows.
    ● From the beginning, the project was made open source, becoming an Apache Incubator project in March 2016 and a Top-Level Apache Software Foundation project in January 2019.
    ● Written in Python; workflows are created via Python scripts.
    ● Designed under the principle of "configuration as code".
    https://github.com/apache/airflow
    https://airflow.apache.org/
    Principles
    ● Airflow is a platform to programmatically author, schedule, and monitor workflows.
    ● Use Airflow to author workflows as Directed Acyclic Graphs (DAGs) of tasks.
    ● The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies.
    ● Rich command-line utilities make performing complex surgeries on DAGs a snap.
    ● The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
    ● When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative.
    Principles (continued)
    ● Airflow is not a data-streaming solution.
    ● Tasks do not move data from one to the other (though tasks can exchange metadata!).
    ● Workflows are expected to be mostly static or slowly changing. You can think of the structure of the tasks in your workflow as slightly more dynamic than a database structure would be.
    ● Airflow workflows are expected to look similar from one run to the next; this allows for clarity around unit of work and continuity.
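The "DAGs of tasks" principle can be illustrated with a small pure-Python sketch using the standard library's `graphlib`, not Airflow itself; the task names and dependency graph are made up for the example:

```python
from graphlib import TopologicalSorter

# Toy stand-in for an Airflow DAG: task name -> set of upstream tasks.
# (Real Airflow DAGs are declared with operators and >> dependencies.)
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

def run(dag: dict[str, set[str]]) -> list[str]:
    """Execute tasks in an order that respects the declared
    dependencies, which is what the Airflow scheduler does across
    its workers (here, sequentially on one machine)."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        print(f"running {task}")
    return order

print(run(dag))
```

`TopologicalSorter` also rejects cycles with a `CycleError`, mirroring the "acyclic" requirement in DAG; a scheduler like Airflow additionally runs independent branches in parallel rather than in one serial order.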