Amazon Aurora: Amazon's New Relational Database Engine
©2015, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Amazon Aurora: Amazon's New Relational Database Engine
Carlos Conde – Technology Evangelist @caarlco

Current DB architectures are monolithic
• Multiple layers of functionality (SQL, transactions, caching, logging) all on a single box.
• Even when you scale it out, you're still replicating the same stack on every node, each node sitting on its own storage.
• This is a problem. For cost. For flexibility. And for availability.

Reimagining the relational database
• What if you were inventing the database today? You wouldn't design it the way we did in 1970. At least not entirely.
• You'd build something that can scale out, that is self-healing, and that leverages existing AWS services.

Relational databases reimagined for the cloud
• Speed and availability of high-end commercial databases
• Simplicity and cost-effectiveness of open-source databases
• Drop-in compatibility with MySQL
• Simple pay-as-you-go pricing
• Delivered as a managed service

A service-oriented architecture applied to the database
• Data plane: SQL, transactions, and caching stay in the database engine; the logging and storage layer moves into a multi-tenant, scale-out, database-optimized storage service.
• Control plane: integrated with other AWS services such as Amazon EC2, Amazon VPC, Amazon DynamoDB, Amazon SWF, and Amazon Route 53 for control-plane operations.
• Integrated with Amazon S3 for continuous backup with 99.999999999% durability.

Aurora preview
• Sign up for preview access at https://aws.amazon.com/rds/aurora/preview
• Now available in US West (Oregon) and EU (Ireland), in addition to US East (N. Virginia)
• Thousands of customers already invited to the limited preview
• Now moving to unlimited preview; accepting all requests within 2–3 weeks
• Full service launch in the coming months

Enterprise grade, open-source pricing

  Instance         vCPU   Memory (GiB)   Hourly price
  db.r3.large         2          15.25          $0.29
  db.r3.xlarge        4          30.5           $0.58
  db.r3.2xlarge       8          61             $1.16
  db.r3.4xlarge      16         122             $2.32
  db.r3.8xlarge      32         244             $4.64

• Simple pricing: no licenses, no lock-in, pay only for what you use
• Discounts: 44% with a 1-year Reserved Instance, 63% with a 3-year Reserved Instance
• Storage consumed, up to 64 TB, is billed at $0.10 per GB-month
• I/Os consumed are billed at $0.20 per million I/O requests
• Prices shown are for US East (N. Virginia)

Aurora Works with Your Existing Apps

Establishing our ecosystem
"It is great to see Amazon Aurora remains MySQL compatible; MariaDB connectors work with Aurora seamlessly. Today, customers can take MariaDB Enterprise with MariaDB MaxScale drivers and connect to Aurora, MariaDB, or MySQL without worrying about compatibility. We look forward to working with the Aurora team in the future to further accelerate innovation within the MySQL ecosystem."
– Roger Levy, VP Products, MariaDB
Ecosystem partners cover business intelligence, data integration, query and monitoring, and SI and consulting.
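Because Aurora is drop-in compatible with MySQL 5.6, existing drivers and tools connect without modification. The following is a minimal sketch of that point (the endpoint and credentials are placeholders, and PyMySQL is just one example of a stock MySQL driver):

    # Hypothetical example: connect to an Aurora cluster endpoint with a standard
    # MySQL driver (PyMySQL). Host, user, and password below are placeholders.
    import pymysql

    conn = pymysql.connect(
        host="my-cluster.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
        user="admin",
        password="example-password",
        database="mydb",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")  # Aurora reports a MySQL 5.6-compatible version string
        print(cur.fetchone())
    conn.close()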
Amazon Aurora Is Easy to Use

Simplify database management (delivered through Amazon RDS)
• Create a database in minutes
• Automated patching
• Push-button compute scaling
• Continuous backups to Amazon S3
• Automatic failure detection and failover

Simplify storage management
• Read replicas are available as failover targets, with no data loss
• Instantly create user snapshots, with no performance impact
• Continuous, incremental backups to Amazon S3
• Automatic storage scaling up to 64 TB, with no performance or availability impact
• Automatic restriping, mirror repair, hot-spot management, and encryption

Simplify data security
• Encryption to secure data at rest
  – AES-256, hardware accelerated
  – All blocks on disk and in Amazon S3 are encrypted
  – Key management via AWS KMS
• SSL to secure data in transit
• Network isolation via Amazon VPC by default
• No direct access to nodes
• Supports industry-standard security and data-protection certifications

Amazon Aurora Is Highly Available

Aurora storage
• Highly available by default
  – 6-way replication across 3 Availability Zones (AZs)
  – 4-of-6 write quorum, with automatic fallback to 3-of-4 if an AZ is unavailable
  – 3-of-6 read quorum (see the sketch after this list)
• SSD-based, scale-out, multi-tenant storage
  – Seamless storage scalability
  – Up to 64 TB database size
  – Only pay for what you use
• Log-structured storage
  – Many small segments, each with its own redo log
  – Log pages are used to generate data pages
  – Eliminates chatter between database and storage
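The quorum arithmetic behind these availability claims is easy to check. Below is a minimal, illustrative sketch in plain Python (a toy model, not Aurora code) of six copies spread two per AZ with a 4-of-6 write quorum and a 3-of-6 read quorum:

    # Toy model of the 6-copy / 3-AZ quorum scheme described above.
    # The copy counts and quorum sizes come from the slides; the code is illustrative only.

    COPIES = 6           # 6-way replication, two copies in each of 3 AZs
    COPIES_PER_AZ = 2
    WRITE_QUORUM = 4     # a write must be acknowledged by 4 of 6 copies
    READ_QUORUM = 3      # a read must be answered by 3 of 6 copies

    def availability(failed_azs=0, failed_copies=0):
        """Which operations stay available after losing whole AZs and/or single copies."""
        up = COPIES - failed_azs * COPIES_PER_AZ - failed_copies
        return {"writes": up >= WRITE_QUORUM, "reads": up >= READ_QUORUM}

    print(availability(failed_azs=1))      # {'writes': True, 'reads': True}   lose a whole AZ
    print(availability(failed_copies=2))   # {'writes': True, 'reads': True}   lose any two copies
    print(availability(failed_copies=3))   # {'writes': False, 'reads': True}  lose three copies

Losing an entire AZ leaves exactly four healthy copies, which is why the slides note the automatic fallback to a 3-of-4 write quorum among the survivors.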
Consistent, low-latency writes
• MySQL with a standby: a primary instance in AZ 1 replicates to a standby instance in AZ 2; each instance performs sequential writes to Amazon Elastic Block Store (EBS), which mirrors them to a second EBS volume, and Amazon S3 is used for point-in-time recovery (PiTR).
• Amazon Aurora: a primary instance and replicas span AZ 1, AZ 2, and AZ 3; writes go to the shared storage volume as asynchronous, distributed writes acknowledged on a 4-of-6 quorum, with continuous backup to Amazon S3.

Improvements
• Consistency: tolerance to outliers
• Latency: synchronous vs. asynchronous replication
• Significantly more efficient use of network I/O
• Types of writes in the MySQL path: log records, binlog, data pages, the double-write buffer, and FRM files/metadata; Aurora ships only log records to its storage layer.

Self-healing, fault-tolerant
• Lose two copies, or a whole AZ, without any impact on read or write availability
• Lose three copies without any impact on read availability
• Automatic detection, replication, and repair

Instant crash recovery
• Traditional databases have to replay the redo log since the last checkpoint; in MySQL this replay is single-threaded and requires a large number of disk accesses.
• In Amazon Aurora, the underlying storage replays redo records on demand as part of a disk read; the replay is parallel, distributed, and asynchronous.
• A crash at T0 in a traditional database requires re-applying everything in the redo log since the last checkpoint; in Aurora, a crash at T0 results in redo logs being applied to each segment on demand, in parallel, asynchronously.

Survivable caches
• The cache is moved out of the database process, so it remains warm across a database restart
• Lets you resume fully loaded operations much faster
• Instant crash recovery plus a survivable cache means quick and easy recovery from DB failures

Multiple failover targets, no data loss
• A MySQL replica applies the binlog single-threaded on its own data volume: it repeats the master's 70% write load and has only 30% of its capacity left for new reads.
• An Aurora replica shares the multi-AZ storage volume with the master and only needs to invalidate pages in its cache, so 100% of its capacity serves new reads.

MySQL read scaling (for comparison)
• Replicas must replay logs
• Replicas place additional load on the master
• Replica lag can grow indefinitely
• Failover results in data loss

Simulate failures using SQL
• To cause the failure of a component at the database node:
  ALTER SYSTEM CRASH [{INSTANCE | DISPATCHER | NODE}]
• To simulate the failure of disks:
  ALTER SYSTEM SIMULATE percent_failure DISK failure_type IN [DISK index | NODE index] FOR INTERVAL interval
• To simulate the failure of networking:
  ALTER SYSTEM SIMULATE percent_failure NETWORK failure_type [TO {ALL | read_replica | availability_zone}] FOR INTERVAL interval
A sketch of driving these statements from a test harness follows below.
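These fault-injection statements lend themselves to automated resilience tests. Here is a minimal, hypothetical sketch that issues the ALTER SYSTEM CRASH form from the grammar above and then times how long it takes before new connections succeed again. The endpoint and credentials are placeholders, PyMySQL is used only as an example driver, and this is obviously not something to run against a production cluster:

    # Hypothetical test-harness sketch: inject a crash, then measure recovery time.
    import time
    import pymysql

    ENDPOINT = "my-cluster.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com"  # placeholder
    CREDS = {"user": "admin", "password": "example-password", "database": "mydb"}

    def connect():
        return pymysql.connect(host=ENDPOINT, connect_timeout=5, **CREDS)

    # Crash the database instance, using one concrete form of the grammar on the slide above.
    conn = connect()
    try:
        conn.cursor().execute("ALTER SYSTEM CRASH INSTANCE")
    except pymysql.MySQLError:
        pass  # the connection is expected to drop when the instance goes down

    # Poll until the endpoint accepts connections again and report the downtime.
    start = time.monotonic()
    while True:
        try:
            connect().close()
            break
        except pymysql.MySQLError:
            time.sleep(1)
    print("recovered after %.1f seconds" % (time.monotonic() - start))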
Amazon Aurora Is Fast

Write performance (console screenshot)
• MySQL Sysbench, write-only workload
• R3.8XL with 32 cores and 244 GB RAM
• 4 client machines with 1,000 threads each

Read performance (console screenshot)
• MySQL Sysbench
• R3.8XL with 32 cores and 244 GB RAM
• Single client with 1,000 threads

Read replica lag (console screenshot)
• Aurora replica with 7.27 ms replica lag at 13.8K updates/second
• MySQL 5.6 on the same hardware has roughly 2 s of lag at 2K updates/second

Writes scale with table count
Writes per second by number of tables (write-only workload, 1,000 connections; query cache default on for Amazon Aurora, off for MySQL):

  Tables    Amazon Aurora   MySQL I2.8XL local SSD   MySQL I2.8XL RAM disk   RDS MySQL 30K IOPS (Single AZ)
  10               60,000                   18,000                  22,000                           25,000
  100              66,000                   19,000                  24,000                           23,000
  1,000            64,000                    7,000                  18,000                            8,000
  10,000           54,000                    4,000                   8,000                            5,000

Better concurrency
Writes per second by number of concurrent connections (OLTP workload, 250 tables; query cache default on for Amazon Aurora, off for MySQL):

  Connections   Amazon Aurora   RDS MySQL 30K IOPS (Single AZ)
  50                   40,000                           10,000
  500                  71,000                           21,000
  5,000               110,000                           13,000

Caching improves performance
Operations per second with the query cache on and off (OLTP workload, 1,000 connections, 250 tables, varying read/write ratio):

  R/W ratio   Aurora without caching   Aurora with caching   RDS MySQL 30K IOPS without caching   RDS MySQL 30K IOPS with caching
  100/0                      160,000               375,000                               35,000                            19,000
  50/50                      130,000                93,000                               24,000                            20,000
  0/100                       64,000                64,000                               16,000                            16,000

Replicas have up to 400 times less lag
Read replica lag by update rate (write workload, 250 tables; query cache on for Amazon Aurora, off for MySQL, i.e. the best settings for each):

  Updates/second   Amazon Aurora   RDS MySQL 30K IOPS (Single AZ)
  1,000                  2.62 ms                              0 s
  2,000                  3.42 ms                              1 s
  5,000                  3.94 ms                             60 s
  10,000                 5.38 ms                            300 s
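Replica lag like the figures above can also be observed on your own cluster. A minimal sketch using boto3 and the CloudWatch AuroraReplicaLag metric (namespace AWS/RDS, reported per replica in milliseconds); the replica identifier and region are placeholders, and the code assumes AWS credentials are already configured:

    # Sketch: fetch recent AuroraReplicaLag datapoints for one replica from CloudWatch.
    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="AuroraReplicaLag",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-aurora-replica"}],  # placeholder
        StartTime=datetime.utcnow() - timedelta(minutes=30),
        EndTime=datetime.utcnow(),
        Period=60,
        Statistics=["Average", "Maximum"],
    )

    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], "avg %.1f ms" % point["Average"], "max %.1f ms" % point["Maximum"])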