Best Practices for Implementing a Data Lake on Amazon S3

Pages: 16
File type: PDF, size: 1,020 KB

Best Practices for Implementing a Data Lake on Amazon S3 (STG359-R)

Amy Che, Principal Technical Program Manager, Amazon S3, Amazon Web Services
John Mallory, Storage Business Development Manager, Amazon Web Services
Gerard Bartolome, Data Platform Architect, Sweetgreen

© 2019, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Data at scale
Data is growing exponentially, arriving from new sources, increasingly diverse, used by many people, and analyzed by many applications.

Agenda
- Data at scale and Data Lakes
- Sweetgreen's Data Lake best practices
- Data Lake foundation best practices
- Data Lake performance best practices
- Data Lake security best practices

Defining the Data Lake
A Data Lake complements the data warehouse. Transactional sources (OLTP, ERP, CRM, LOB) feed the data warehouse, while devices, web, sensors, and social feeds land in the Data Lake. A shared catalog serves business intelligence, machine learning, data warehouse queries, big data processing, and interactive and real-time analytics.

Amazon S3 as the foundation for Data Lakes
- Durable, available, exabyte scalable
- Secure, compliant, auditable
- High performance
- Low-cost storage and analytics
- Broad ecosystem integration: Amazon Athena, Amazon Elasticsearch Service, Amazon Kinesis (Data Streams and Data Firehose), Amazon EMR, AWS Lake Formation and AWS Glue, Amazon Redshift, Amazon Comprehend, Amazon SageMaker, Amazon Rekognition, AWS Snowball, and AWS Snowmobile

Best practices for implementing a Data Lake on Amazon S3 (Gerard Bartolome, sweetgreen | AWS re:Invent)
- Ecosystem (image)
- Extraction: "Adapt the language to the data; don't adapt the data to the language"
- Transform: S3 security
- Transform: ECS / serverless
- Transform: EMR
- Usage
- California Consumer Privacy Act: anonymize user data
- We are hiring! https://grnh.se/b4116be81

Data Lake on AWS
- Central storage (scalable, secure, cost-effective): Amazon S3
- Catalog & search: Amazon DynamoDB, Amazon Elasticsearch Service, AWS Glue, AWS Lake Formation
- Access & user interfaces: AWS AppSync, Amazon API Gateway, Amazon Cognito
- Data ingestion: AWS Direct Connect, AWS Snowball, AWS Database Migration Service, Amazon Kinesis Data Firehose, AWS Storage Gateway
- Processing: Amazon Athena, Amazon EMR, AWS Glue, Amazon Redshift
- Manage & secure: AWS Identity and Access Management, AWS Key Management Service, AWS CloudTrail, Amazon CloudWatch
- Analytics, machine learning & serving: Amazon QuickSight, Amazon Kinesis, Amazon Elasticsearch Service, Amazon DynamoDB, Amazon Rekognition, Amazon SageMaker, Amazon RDS, Amazon Neptune

Data Lake ingest and transform patterns
Pipelined architectures improve governance, data management, and efficiency:
raw data (Amazon S3 Standard or S3 Intelligent-Tiering) → ETL (AWS Glue or Amazon EMR) → production data (Data Lake) → data warehouse (Amazon Redshift).
The stages are connected by triggered code (AWS Lambda), with ETL and catalog management handled by AWS Glue and AWS Lake Formation.

Data management at scale best practices
- Utilize S3 object tagging: granularly control access, analyze usage, manage lifecycle policies, and replicate objects
- Implement lifecycle policies: automated, policy-driven archival and data expiration
- Utilize S3 Batch Operations: manage millions to billions of objects with a single request
- Plan for rapid growth, and automate management at any scale

Choosing the right Data Lake storage class
Select the storage class by data pipeline stage:
- Raw data: Amazon S3 Standard
- ETL: Amazon S3 Standard
- Production Data Lake: Amazon S3 Intelligent-Tiering
- Online cool data: Amazon S3 Standard-Infrequent Access (S3 Standard-IA or S3 One Zone-IA)
- Historical data: Amazon S3 Glacier or S3 Glacier Deep Archive
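The lifecycle-policy practice above can be sketched with boto3. This is a minimal sketch under stated assumptions: the bucket name, the `production/` and `etl-tmp/` prefixes, and the day counts are all hypothetical illustrations, not values from the talk.

```python
from typing import Optional

# A minimal lifecycle configuration for the practices above.
# Bucket name, prefixes, and day counts are illustrative assumptions.
LIFECYCLE_RULES = {
    "Rules": [
        {
            # Production data cools over time: S3 Standard -> IA -> Glacier.
            "ID": "archive-production-data",
            "Filter": {"Prefix": "production/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        },
        {
            # ETL intermediates are short lived: expire them after 30 days.
            "ID": "expire-etl-intermediates",
            "Filter": {"Prefix": "etl-tmp/"},
            "Status": "Enabled",
            "Expiration": {"Days": 30},
        },
    ]
}

def rule_for_key(key: str) -> Optional[dict]:
    """Return the first lifecycle rule whose prefix filter matches the key."""
    for rule in LIFECYCLE_RULES["Rules"]:
        if key.startswith(rule["Filter"]["Prefix"]):
            return rule
    return None

if __name__ == "__main__":
    # Applying the configuration requires boto3 and AWS credentials.
    import boto3
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket="my-data-lake-bucket",  # hypothetical bucket
        LifecycleConfiguration=LIFECYCLE_RULES,
    )
```

Keeping the rules as plain data makes the policy easy to review and test before it is applied to a bucket.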
Typical characteristics at each stage:
- Raw data (S3 Standard): small log files (MBs), overwrites if synced, short lived, moved and deleted, batched and archived
- ETL (S3 Standard): data churn, small intermediates, multiple transforms, deletes within 30 days, output to the Data Lake
- Production Data Lake (S3 Intelligent-Tiering): optimized sizes, many users, unpredictable access, long-lived assets, hot to cool
- Online cool data (S3 Standard-IA / One Zone-IA): replicated DR data, infrequently accessed, infrequent queries, ML model training
- Historical data (S3 Glacier / Deep Archive): historical assets, ML model training, compliance and audit, data protection, planned restores
Optimize costs for all stages of Data Lake workflows.

Efficiently ingest data from all sources
- IoT, sensor data, clickstream data, social media feeds, streaming logs → Amazon Kinesis → real-time predictive analytics, IoT, sentiment analysis, recommendation engines
- On-premises Data Lakes and EDW → AWS Direct Connect → large-scale data collection
- On-premises ERP, mainframes, lab equipment, NAS storage → AWS DataSync or AWS Storage Gateway → batch BI reporting, log analysis, data warehousing, usage optimization
- Oracle, MySQL, MongoDB, DB2, SQL Server, Amazon RDS → AWS Database Migration Service
- Bulk offline sensor data, NAS, on-premises Hadoop → AWS Snowball Edge → machine learning model training, ad hoc data discovery, data annotation
An S3 Data Lake accommodates a wide variety of concurrent data sources.

Batch relational data ingestion: event-driven batch ingest pipeline
Let Amazon CloudWatch Events and AWS Lambda drive the pipeline:
1. New raw data arrives in Amazon S3 (SLA: raw dataset ready before 22:00 UTC) → start crawler → crawl the raw dataset
2. Crawler succeeds → start job (or trigger) → run the 'optimize' job
3. Job succeeds → start crawler → crawl the optimized dataset
4. Reporting dataset ready for reporting (deadline: 02:00 UTC)

Real-time data ingestion
Collect, process, analyze, and aggregate data streams in real time:
- Streaming data is collected and processed in a fast layer (Spark on Amazon EMR, Amazon DynamoDB)
- Data is aggregated and batched before being ingested into S3
- Provides real-time insights and SQL query (Amazon Kinesis Data Streams → Kinesis Data Analytics → Kinesis Data Firehose)
- Aggregated raw data is stored in S3 for further analysis
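The event-driven batch pipeline above can be sketched as one Lambda handler fed by CloudWatch Events. The Glue state-change detail-types are the ones CloudWatch Events emits; the crawler and job names, and the custom "raw-data-arrived" event, are hypothetical assumptions for this sketch.

```python
# Sketch: glue each pipeline step to the next via CloudWatch Events + Lambda.
# Crawler/job names and the "raw-data-arrived" event are hypothetical.
NEXT_STEP = {
    # (detail-type, state)                        -> (action, target)
    ("raw-data-arrived", None):                      ("start_crawler", "raw-crawler"),
    ("Glue Crawler State Change", "Succeeded"):      ("start_job", "optimize-job"),
    ("Glue Job State Change", "SUCCEEDED"):          ("start_crawler", "optimized-crawler"),
}

def route(event):
    """Map an incoming pipeline event to the next action, or None when done."""
    key = (event.get("detail-type"), event.get("detail", {}).get("state"))
    return NEXT_STEP.get(key)

def handler(event, context):
    """Lambda entry point: start the next crawler or job in the pipeline."""
    action = route(event)
    if action is None:
        return {"status": "no-op"}
    verb, target = action
    import boto3  # requires AWS credentials at runtime
    glue = boto3.client("glue")
    if verb == "start_crawler":
        glue.start_crawler(Name=target)
    else:
        glue.start_job_run(JobName=target)
    return {"status": "started", "target": target}
```

Keeping the routing table as plain data lets the step-to-step wiring be unit tested without invoking any AWS service.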
The stream stages: ingest data streams; aggregate, filter, and enrich the data; egress the data to the store.

Running analytics on AWS Data Lakes
Lift & shift:
- What: run third-party analytics tools on EC2; use EBS and S3 as data stores; self-managed environments; simplify on-premises migrations
- Why: use existing tools, code, and customizations; minimize application changes
- Consider: you provision, manage, and scale; you monitor and manage availability; you own upgrades and versioning
AWS managed services (Redshift, Glue, EMR, Athena):
- What: AWS managed and serverless platforms (Glue, Athena, EMR, Redshift); more options to process data in place; focus on data outcomes, not infrastructure
- Why: speed adoption of new capabilities; more tightly integrated with AWS security; utilize AWS Lake Formation
- Consider: flexibility and choice with open data formats; leverage the AWS pace of innovation
Amazon S3 is the storage foundation for both approaches.

AWS Lake Formation: build a secure Data Lake in days
- Build Data Lakes quickly: move, store, catalog, and clean your data faster; transform to open formats like Parquet and ORC; ML-based de-duplication and record matching
- Simplify security management: build a data catalog that describes your data; centrally define security, governance, and auditing policies; enforce policies consistently across multiple services; integrates with IAM and KMS
- Provide self-service access to data: enable analysts and data scientists to easily find relevant data; analyze with multiple analytics services without moving data
Optimizing Data Lake performance: scaling request rates on S3
- S3 automatically scales request performance to thousands of transactions per second
- At least 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix in a bucket
- Horizontally scale parallel requests to S3 endpoints to distribute load over multiple network paths; for example, 10 prefixes in an S3 bucket scale read performance to 55,000 read requests per second
- Use the AWS SDK Transfer Manager to automate horizontally scaling connections
- There is no limit on the number of prefixes in a bucket
- The vast majority of analytics use cases don't require prefix customization

Using the AWS SDK Transfer Manager, the vast majority of applications can use any prefix naming scheme and still get thousands of requests per second (RPS) on ingest and egress. AWS SDK retry logic handles occasional 503 errors while S3 automatically scales for sustained high load. Only consider prefix customization if:
- Your application increases RPS exponentially within seconds or a few minutes (e.g., 0 RPS to 600K GET RPS in 5 minutes), or
- Your application requires high RPS on another S3 API, such as LIST.
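The Transfer Manager guidance above can be sketched with boto3's `TransferConfig`, which parallelizes a large upload across multipart connections. The bucket, key, and threshold values are hypothetical; `multipart_part_count` just shows how an object splits into parts at a given chunk size.

```python
MB = 1024 * 1024

def multipart_part_count(object_size: int, chunk_size: int = 64 * MB) -> int:
    """How many parts the Transfer Manager splits an object into
    at the given multipart chunk size (ceiling division)."""
    return max(1, -(-object_size // chunk_size))

if __name__ == "__main__":
    # Running an actual upload requires boto3 and AWS credentials.
    import boto3
    from boto3.s3.transfer import TransferConfig

    config = TransferConfig(
        multipart_threshold=64 * MB,  # use multipart above this size
        multipart_chunksize=64 * MB,  # size of each parallel part
        max_concurrency=10,           # parallel connections to S3
    )
    boto3.client("s3").upload_file(
        "local/file.bin",            # hypothetical local path
        "my-data-lake-bucket",       # hypothetical bucket
        "raw/file.bin",              # hypothetical key
        Config=config,
    )
```

With these settings a 200 MB object uploads as four 64 MB parts in parallel, spreading load over multiple connections as the slide recommends.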
Automatic request rate scaling on Amazon S3
[Chart: an autonomous-driving Data Lake with five cars (CAR01 through CAR05) uploading concurrently. Total PUTs/sec is throttled at around the 3,500 maximum TPS until new index partitions are created, raising the maximum and letting throughput climb to about 4,000 PUTs/sec.]

Optimizing Data Lake performance: use optimized object sizes and data formats
- Aim for 16–256 MB object sizes to optimize throughput and cost; this also reduces LISTs, metadata operations, and job setup time
- Aggregate during ingest with Kinesis, or during ETL with AWS Glue or EMR + Spark
- Utilize Parquet or ORC formats: they compress by default and are splittable, and Parquet enables parallel queries
- Utilize caching and tiering where appropriate: use the EMR HDFS namespace for small-file Spark workloads; consider Amazon DynamoDB and ElastiCache for low-latency data presentation; utilize Amazon CloudFront for distributing frequently accessed data
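The aggregation guidance above (combine small records into objects in the 16–256 MB range) can be sketched as a simple batching step. In practice Kinesis Data Firehose or a Glue/Spark job would do this; the 128 MB default here is just one point inside the recommended range.

```python
def batch_records(records, target_bytes=128 * 1024 * 1024):
    """Group small byte records into batches of roughly target_bytes each,
    so every S3 object lands near the recommended size range instead of
    producing many tiny objects."""
    batches, current, current_size = [], [], 0
    for rec in records:
        # Flush the current batch if adding this record would overshoot.
        if current and current_size + len(rec) > target_bytes:
            batches.append(b"".join(current))
            current, current_size = [], 0
        current.append(rec)
        current_size += len(rec)
    if current:
        batches.append(b"".join(current))
    return batches

# Example with a tiny 10-byte target for illustration:
# batch_records([b"aaaa", b"bbbb", b"cccc"], target_bytes=10)
# yields [b"aaaabbbb", b"cccc"]
```

Each returned batch would then be written as one S3 object (for example via a multipart upload), keeping object counts, LIST traffic, and job setup time down.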