NewSQL: Flying on ACID

David Maier

Thanks to the H-Store folks, Mike Stonebraker, and Fred Holahan

NewSQL

• Keep SQL (some of it) and ACID
• But be speedy and scalable

Landscape

From: the 451 Group

OLTP Focus

• On-Line Transaction Processing
• Lots of small reads and updates
• Many transactions no longer have a human intermediary
  For example, buying sports or show tickets
• 100K+ xact/sec, maybe millions
• Horses for courses

Premises

• If you want a fast multi-node DBMS, you need a fast single-node DBMS.
• If you want a single-node DBMS to go 100x as fast, you need to execute 1/100 of the instructions.

  – You won’t get there on clever disk I/O: most of the data is living in memory

Where Does the Time Go?

• TPC-C CPU cycles
• On the Shore DBMS
• Instruction counts show a similar pattern

A Bit More Detail

Source: S. Harizopoulos, D. J. Abadi, S. Madden, M. Stonebraker, “OLTP Through the Looking Glass, and What We Found There”, SIGMOD 2008.

What Are These Different Parts?

Buffer manager: manages the slots that hold disk pages

• Locates pages by a hash lookup
• Employs an eviction strategy (clock scan, which approximates LRU; see the sketch below)
• Coordinates with the recovery system
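To make the clock scan concrete, here is a minimal sketch of second-chance eviction. It is not code from any particular DBMS; the Frame fields and pool size are invented for illustration.

final class ClockBufferPool {

    static final class Frame {
        int pageId = -1;     // disk page held in this slot; -1 if empty
        boolean referenced;  // set on access, cleared as the clock hand passes
        boolean pinned;      // pinned frames must not be evicted
    }

    private final Frame[] frames;
    private int hand = 0;    // the "clock hand" position

    ClockBufferPool(int nFrames) {
        frames = new Frame[nFrames];
        for (int i = 0; i < nFrames; i++) frames[i] = new Frame();
    }

    // Record an access: the frame gets a "second chance" before eviction.
    void touch(int frameNo) {
        frames[frameNo].referenced = true;
    }

    // Sweep the hand until an unpinned, unreferenced frame turns up.
    // (A real pool would block or fail if every frame were pinned.)
    int evict() {
        while (true) {
            Frame f = frames[hand];
            int candidate = hand;
            hand = (hand + 1) % frames.length;
            if (f.pinned) continue;
            if (f.referenced) {      // used since the hand last passed:
                f.referenced = false;
                continue;            // spare it this time around
            }
            return candidate;        // victim frame; caller evicts its page
        }
    }
}

The approximation to LRU comes from the single referenced bit: a recently touched frame survives exactly one sweep of the hand rather than being ordered by exact access time.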

Different Parts 2

Locks: logical-level shared and exclusive claims on data items and index nodes

• Locks are typically held until the end of a transaction (strict two-phase locking)
• The lock manager must also handle deadlocks (see the sketch below)
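For a sense of what the lock manager tracks, here is a minimal sketch of an S/X lock table. It is invented for illustration; a real lock manager would also queue waiting transactions and detect deadlocks over a waits-for graph.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a lock table with shared (S) and exclusive (X) modes.
// No waiter queues or deadlock detection here.
final class LockTable {

    enum Mode { S, X }

    // item -> (transaction id -> mode currently held)
    private final Map<Object, Map<Long, Mode>> held = new HashMap<>();

    // Try to grant txn a lock on item; false means the caller must wait
    // (and a real lock manager would then watch for deadlock).
    synchronized boolean tryLock(Object item, long txn, Mode want) {
        Map<Long, Mode> holders =
            held.computeIfAbsent(item, k -> new HashMap<>());
        for (Map.Entry<Long, Mode> h : holders.entrySet()) {
            if (h.getKey() == txn) continue;       // our own earlier lock
            if (want == Mode.X || h.getValue() == Mode.X)
                return false;                      // any S/X conflict
        }
        holders.put(txn, want);                    // grant (or upgrade)
        return true;
    }

    // Strict two-phase locking: everything is released at commit/abort.
    synchronized void releaseAll(long txn) {
        held.values().forEach(h -> h.remove(txn));
    }
}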

Different Parts 3

Latches: low-level locks on shared structures

• Free-space list
• Buffer-pool directory (hash table)
• Buffer “clock”

Also, “pinning” pages in the buffer pool

Different Parts 4

Logging: undo and redo information in case of transaction, application, or system failure

• Must be written to disk before the corresponding page can be removed from the buffer pool (write-ahead logging)

Strategies to Reduce Cost

• All data lives in main memory
  – Multiple copies for high assurance
  – Still need undo info (in memory) for rollback and disk-based information for recovery
• No user interaction in transactions
• Avoid run-time interpretation and planning: compile & register all transactions in advance

Strategies, cont.

• Serialize transactions
  Possible, since there aren’t waits for disk I/O or user input
• Parallelize
  – Between transactions
  – Between parts of a single transaction
  – Between primary and secondary copies

H-Store & VoltDB

• H-Store is the academic project (Brown/Yale/MIT)
  http://hstore.cs.brown.edu/
• VoltDB is the company (Velocity OnLine Transactions)
  http://docs.voltdb.com/
  Community and Enterprise editions

VoltDB Techniques

Data in main memory

• A 32-way cluster can have a terabyte of main memory
• Don’t need a buffer manager
• No waiting for disk
• All in-use data generally resides in main memory for OLTP systems anyway

VoltDB Techniques 2

Interact only via stored procedures

• No round trips to the client during multi-query transactions
• No waiting on users or the network
• Can compile & optimize in advance
• Results come back all at once – no cursors

Need to structure applications carefully

Discussion Problem

Want to support on-line course registration:

1. Search for courses by number or name
2. User gets a list of matching courses
3. User chooses a course
4. Show the enrollment status of the course
5. If not full, allow the user to register
   Validate prerequisites

Tables

Offering(CRN, Course#, Days, Limit)
Registered(CRN, SID)
Student(SID, First, Last, Status)
Prereq(Course#, PCourse#, MinMark)
Transcript(SID, Course#, Grade)

Constraints:
• Don’t over-enroll a course
• No user input inside transactions
• Don’t turn a student away if you’ve shown space in the course
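One way to approach step 5 is to fold the availability re-check and the insert into a single stored procedure, so the seat count is validated atomically at registration time. A minimal sketch, assuming Offering and Registered are both partitioned on CRN; the package name, procedure name, and return codes are invented:

package courses.procedures;                  // hypothetical package

import org.voltdb.*;

@ProcInfo(
    singlePartition = true,
    partitionInfo = "Registered.CRN: 0"      // parameter 0 carries the CRN
)
public class RegisterStudent extends VoltProcedure {

    public final SQLStmt GetLimit = new SQLStmt(
        "SELECT Limit FROM Offering WHERE CRN = ?;");
    public final SQLStmt CountEnrolled = new SQLStmt(
        "SELECT COUNT(SID) FROM Registered WHERE CRN = ?;");
    public final SQLStmt Enroll = new SQLStmt(
        "INSERT INTO Registered VALUES (?, ?);");

    public long run(int crn, int sid) throws VoltAbortException {
        // One batched round trip: fetch the limit and the current head count.
        voltQueueSQL(GetLimit, crn);
        voltQueueSQL(CountEnrolled, crn);
        VoltTable[] r = voltExecuteSQL();
        if (r[0].getRowCount() < 1) return -1;   // no such offering

        long limit = r[0].fetchRow(0).getLong(0);
        long enrolled = r[1].fetchRow(0).getLong(0);

        // Re-check here: the space shown to the user in step 4 may have
        // been taken by another transaction in the meantime.
        if (enrolled >= limit) return -2;        // course is now full

        // Prerequisite validation against Prereq/Transcript is omitted; if
        // Transcript is partitioned on SID, that check would force a
        // multi-partition transaction unless those tables are replicated.
        voltQueueSQL(Enroll, crn, sid);
        voltExecuteSQL(true);                    // final batch
        return limit - enrolled - 1;             // seats remaining
    }
}

Because each partition executes transactions serially, the count and the insert cannot interleave with another registration for the same CRN, so the course cannot be over-enrolled. A student can still be turned away if the course filled between steps 4 and 5, which is exactly the tension this discussion problem is probing.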

VoltDB Techniques 3

Serial execution of transactions

• Avoids locking and latching
• Avoids thread or process switches
• Avoids some logging
  Still need an undo buffer for rollback

VoltDB Techniques 4

Multiple copies for high availability

• Can specify a k-factor for redundancy: can tolerate up to k node failures
• For complete durability:
  – Snapshot of DB state to disk
  – Log commands to disk, either
    • Synchronously (higher latency)
    • Asynchronously (lower latency, possible loss)

VoltDB Techniques 5

Shared-nothing parallelism: tables can be partitioned (or replicated) and spread across multiple sites.

• Each site has its own execution engine and data structures
• No latching of shared structures
• Does incur some latency on multi-partition transactions

Can have partitions of several tables at each site

[Figure: schema-tree partitioning. The WAREHOUSE, DISTRICT, STOCK, CUSTOMER, ORDERS, and ORDER_ITEM tables are co-partitioned into P1–P5 along the schema tree; ITEM is replicated in every partition.]

Data Placement

Assign partitions to sites on nodes.

[Figure: partitions P1–P5, each holding a replicated copy of ITEM, assigned to sites (hardware threads HT1, HT2 on Core1 and Core2) across cluster nodes Node 1 … Node n.]

Results

• 45x a conventional RDBMS
• 7x Cassandra on a key-value workload
• Has been scaled to 3.3M (simple) transactions per second

What VoltDB Isn’t Doing

• Reducing latency: the aim is increased throughput
  Might take a while to get results back
• All of SQL (e.g., no NOT in WHERE)
• Big aggregates
• Dynamic DDL
• Ad hoc queries (possible, not fast)

System Structure

• Hosts (nodes), each with several sites (< #cores)
• Each site has data (partitions), indexes, views, stored procedures
• A client can connect to any host
  Encouraged for load balancing and availability
• Also, a request queue per host

In Operation

1. Client invokes stored procedures (SPs) with parameters
2. Sent to some host
3. Rerouted to the site with the correct partition
4. SPs execute serially (need a coordinator if more than one partition)
5. Partition forwards queries to redundant copies and waits
6. [Rollback if aborted]
7. Results come back in a VoltTable (array)

Setting up a Database

• Schema definition

  – Tables (strings are stored out of line)
  – Indexes, views
• Select a partitioning column (or replicate the table)
  – Can be different for different tables
  – Needn’t be a key
  – But you may want the same column across tables to keep transactions in one partition: use CRN for both Offering and Registered

Setting up a Database 2

• Stored procedures

  – In Java and a subset of SQL (some limits)
  – SQL can contain ‘?’ placeholders for parameters
  – Must be deterministic (don’t read the system clock or do network I/O)
  – Can submit groups of SQL statements

  – Can declare that a procedure runs in a single partition (fastest)
    Multi-partition, multi-round procedures can have waits and network delays

Stored Procedure Example

package fadvisor.procedures;

import org.voltdb.*;

@ProcInfo(
    singlePartition = true,
    partitionInfo = "Reservation.FlightID: 0"   // parameter 0 is the FlightID
)
public class HowManySeats extends VoltProcedure {

    public final SQLStmt GetSeatCount = new SQLStmt(
        "SELECT NumOfSeats, COUNT(ReserveID) " +
        "FROM Flight AS F, Reservation AS R " +
        "WHERE F.FlightID=R.FlightID AND R.FlightID=?;");

Stored Procedure Example cont.

    public long run(int flightid) throws VoltAbortException {
        long numofseats;
        long seatsinuse;
        VoltTable[] queryresults;

        voltQueueSQL(GetSeatCount, flightid);
        queryresults = voltExecuteSQL();

        VoltTable result = queryresults[0];
        if (result.getRowCount() < 1) { return -1; }   // no such flight
        numofseats = result.fetchRow(0).getLong(0);
        seatsinuse = result.fetchRow(0).getLong(1);
        numofseats = numofseats - seatsinuse;
        return numofseats;                             // return available seats
    }
}

Setting up a Database 3

• Compile stored procedures and client apps
• Set up a Project Definition File
  – Schema
  – Stored procedures
  – Partitioning
  – Groups & permissions

Project Definition File

Starting a Database

• Need a configuration file
• Ask a “lead node” to start VoltDB
  The lead becomes a peer after start-up
• Start client apps

From the Client Side

• Connect to the DB
• Call stored procedures:

VoltTable[] results;
try {
    results = client.callProcedure("LookupFlight",
                                   origin, dest, departtime).getResults();
} catch (Exception e) {
    e.printStackTrace();
    System.exit(-1);
}

• Can also be asynchronous, with a callback (see the sketch below)
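The asynchronous variant registers a callback instead of blocking on getResults(). A minimal sketch against the same LookupFlight call, assuming the callback form of callProcedure in the VoltDB client API:

// Asynchronous call: the client thread continues; VoltDB invokes the
// callback when results (or an error) arrive.
client.callProcedure(new ProcedureCallback() {
    @Override
    public void clientCallback(ClientResponse response) {
        if (response.getStatus() != ClientResponse.SUCCESS) {
            System.err.println(response.getStatusString());
            return;
        }
        VoltTable[] results = response.getResults();
        // ... process the results here ...
    }
}, "LookupFlight", origin, dest, departtime);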

What Can You Change?

• Can add or modify stored procedures while the DB is running
  Need to coordinate the change with client apps
• Add columns, tables
  Need to snapshot the DB, stop, restart, restore
• Add nodes, change partitions
  Same drill

High Availability

• If a site is unavailable, use a redundant copy
• A node can rejoin a cluster and rebuild the partitions it holds
  A partition being copied is locked for the duration
• Can specify that on a cluster split, only the larger group keeps running

Snapshots

Can make a consistent copy of the DB state on disk

• Manual (see the sketch below) or on a schedule
• Each node stores a file locally
• Transaction-consistent: will maintain multiple versions of data temporarily
• Can restore with changes
  – New column
  – Different partitioning
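As an example of the manual path, a snapshot can be requested from a client program. A sketch assuming the @SnapshotSave system procedure with (directory, unique ID, blocking flag) parameters; the path and ID here are made up:

import org.voltdb.VoltTable;
import org.voltdb.client.*;

// Sketch: trigger a manual, transaction-consistent snapshot.
public class TakeSnapshot {
    public static void main(String[] args) throws Exception {
        Client client = ClientFactory.createClient();
        client.createConnection("localhost");

        // blocking = 1: return only after every node has written its file
        VoltTable[] results = client.callProcedure(
                "@SnapshotSave", "/tmp/voltdb/snapshots", "nightly", 1)
            .getResults();
        System.out.println(results[0]);

        client.close();
    }
}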

Command Logging

Can log commands to disk, then play back from the last snapshot

• Don’t log data, just commands to re-run

• Don’t need to log SELECTs
• Can be synchronous, which will delay client responses

• Snapshot + synchronous command logging shouldn’t lose anything

• Replication: replay the log in the same order at the other site

Views

• Views are materialized
• Must have a GROUP BY and return all grouping columns
• Aggregates are COUNT and SUM (??)

Export

VoltDB can be the front end to a warehouse or map-reduce engine

Export-only tables:

• Can only insert into them (though a rollback will undo the insert)
• Contents are spooled to a Connector
• The export client polls the Connector
• Export data overflows to disk

There is an export client that uses Sqoop to populate HDFS

Client Languages

• C#
• C++
• Erlang
• Java
• JDBC
• JSON (HTTP from PHP, Python, Perl, C#)
• PHP
• Python
• Ruby

Minimal Configuration

• 64-bit OS
• Dual-core, 64-bit processor (4–8 cores better)
• 4 GB memory minimum
• Sun Java SDK 6
• Network Time Protocol (NTP)

NoSQL Features

• Document model: can have columns with JSON data
  field(json_data, ‘site’) = ‘pdx.edu’
  Can index JSON fields
• Key-value model: VoltCache (see the sketch below)
  – Main-memory K/V store
  – Memcached interface
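Since VoltCache speaks the memcached protocol, a standard memcached client should work against it. A minimal sketch using the third-party spymemcached library; the host, port, key, and value are assumptions:

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

// Sketch: talking to VoltCache through its memcached interface.
public class VoltCacheDemo {
    public static void main(String[] args) throws Exception {
        MemcachedClient cache =
            new MemcachedClient(new InetSocketAddress("localhost", 11211));

        cache.set("session:42", 3600, "maier");   // key, TTL seconds, value
        Object who = cache.get("session:42");     // -> "maier"
        System.out.println(who);

        cache.shutdown();
    }
}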

Analytical Processing

The Export feature allows sending out batches of data for large-scale processing

• Export to HDFS, use Hadoop
• Export to Vertica (a column store)

Another Take

[Figure: the 451 Research Data Platforms Map (October 2014), charting the relational, non-relational, and grid/cache zones of the database landscape; VoltDB appears among the NewSQL databases in the relational zone. Source: https://451research.com/dashboard/dpa. © 2014 by 451 Research LLC.]