Ceph, Xen, and CloudStack: Semper Melior

Xen User Summit | New Orleans, LA | 18 SEP 2013

C'est Moi
Accept no substitutes

• Patrick McGarry
• Community monkey
• Inktank / Ceph
• /. > ALU > P4
• @scuttlemonkey
• patrick@inktank.com

Welcome!
The plan, Stan

• Ceph in <30s
• Ceph, a little bit more
• Ceph in the wild
• Orchestration
• Community status
• What's Next?
• Questions

What is Ceph?
…besides wicked-awesome?

• Software (On commodity hardware): Ceph can run on any infrastructure, metal or virtualized, to provide a cheap and powerful storage cluster.
• All-in-1 (Object, block, and file): Low overhead doesn't mean just hardware, it means people too!
• CRUSH (Awesomesauce): Infrastructure-aware placement algorithm allows you to do really cool stuff.
• Scale (Huge and beyond): Designed for exabyte; current implementations in the multi-petabyte. HPC, Big Data, Cloud, raw storage.

That WAS fast
…but you can find out more

• Find out more: Ceph.com
• Use it today: Dreamhost.com/cloud/DreamObjects
• Get support: Inktank.com

OBJECTS: CEPH GATEWAY
A powerful S3- and Swift-compatible gateway that brings the power of the Ceph Object Store to modern applications.

VIRTUAL DISKS: CEPH BLOCK DEVICE
A distributed virtual block device that delivers high-performance, cost-effective storage for virtual machines and legacy applications.

FILES & DIRECTORIES: CEPH FILE SYSTEM
A distributed, scale-out filesystem with POSIX semantics that provides storage for legacy and modern applications.

CEPH OBJECT STORE
A reliable, easy-to-manage, next-generation distributed object store that provides storage of unstructured data for applications.
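To make the object store concrete, here is a minimal python-rados sketch that writes one object and reads it back. The conffile path and the 'data' pool name are placeholders for your own cluster, not values from this talk.

    # Minimal librados (python-rados) sketch: write and read one object.
    # Assumes a reachable Ceph cluster; /etc/ceph/ceph.conf and the 'data'
    # pool are placeholders for your own environment.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('data')               # pool name
        ioctx.write_full('greeting', b'Hello, RADOS')    # store an object
        print(ioctx.read('greeting'))                    # read it back
        ioctx.close()
    finally:
        cluster.shutdown()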

• CRUSH
  – Pseudo-random placement algorithm
  – Ensures even distribution
  – Repeatable, deterministic
  – Rule-based configuration
    • Replica count
    • Infrastructure topology
    • Weighting

pg = hash(object name) % num_pg


CRUSH(pg, cluster state, rule set)

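The two lines above are the whole placement model: objects hash to placement groups, and CRUSH maps each PG to OSDs. The toy sketch below (illustrative only, not the real CRUSH implementation; num_pgs, osd_ids, and replicas are made-up parameters) shows why the mapping is repeatable and deterministic with no central lookup table.

    # Toy illustration of Ceph's two-step placement: object -> PG -> OSDs.
    # NOT the real CRUSH algorithm; it only demonstrates that any client can
    # recompute the same placement from the same inputs.
    import hashlib

    def object_to_pg(object_name: str, num_pgs: int) -> int:
        # Step 1: pg = hash(object name) % num_pg
        h = hashlib.sha1(object_name.encode()).digest()
        return int.from_bytes(h[:4], 'big') % num_pgs

    def pick_osds(pg: int, osd_ids: list, replicas: int) -> list:
        # Step 2 (stand-in for CRUSH(pg, cluster state, rule set)):
        # deterministically choose `replicas` distinct OSDs for this PG.
        # Real CRUSH also honours topology, weights, and the rule set.
        chosen, attempt = [], 0
        while len(chosen) < replicas:
            h = hashlib.sha1(f'{pg}:{attempt}'.encode()).digest()
            osd = osd_ids[int.from_bytes(h[:4], 'big') % len(osd_ids)]
            if osd not in chosen:
                chosen.append(osd)
            attempt += 1
        return chosen

    pg = object_to_pg('my-object', num_pgs=128)
    print(pg, pick_osds(pg, osd_ids=list(range(12)), replicas=3))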

Ceph in the Wild
…with Marty Stouffer

Distros
No incendiary devices please…

OpenStack
Our BFF

• Object && Block: via RBD and RGW (Swift API)
• Identity: via Keystone
• More coming! Work continues with updates in Havana and Icehouse.
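As a rough illustration of the Swift API path, here is a minimal python-swiftclient sketch pointed at RGW with Keystone auth. The Keystone endpoint, tenant, user, and key are placeholders, not values from this talk.

    # Minimal python-swiftclient sketch against RGW's Swift-compatible API,
    # authenticating through Keystone. All credentials and endpoints below
    # are placeholders.
    from swiftclient.client import Connection

    conn = Connection(
        authurl='http://keystone.example.com:5000/v2.0',
        user='demo',
        key='secret',
        tenant_name='demo',
        auth_version='2',
    )
    conn.put_container('images')
    conn.put_object('images', 'hello.txt', contents=b'stored via RGW')
    headers, body = conn.get_object('images', 'hello.txt')
    print(body)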

CloudStack
Community maintained

• Block: alternate primary and secondary storage
• Community: Wido from 42on.com
• More coming in 4.2!
  – Snapshot & backup support
  – Cloning (layering) support
  – No NFS for system VMs
  – Secondary/Backup storage (S3)

Primary Storage Flow
A blatant ripoff!

•The mgmt server never talks to the Ceph cluster

•One mgmt server can manage 1000s of hosts

•Mgmt server can be clustered

•Multiple Ceph clusters/pools can be added to a CloudStack cluster
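For a feel of the snapshot and layering support mentioned above, here is a python-rbd sketch of the snapshot → protect → clone flow that template-based provisioning relies on. Pool and image names are placeholders.

    # Sketch of RBD snapshot + layering (clone) with python-rados/python-rbd.
    # Pool and image names are placeholders for your own setup.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('cloudstack')

    rbd_inst = rbd.RBD()
    # 10 GiB image in format 2, which is required for layering.
    rbd_inst.create(ioctx, 'template', 10 * 1024**3, old_format=False)

    with rbd.Image(ioctx, 'template') as image:
        image.create_snap('base')      # snapshot the golden image
        image.protect_snap('base')     # snapshots must be protected before cloning

    # Instant clone: the child shares the parent's data copy-on-write.
    rbd_inst.clone(ioctx, 'template', 'base', ioctx, 'vm-disk-1')

    ioctx.close()
    cluster.shutdown()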

Other Cloud
So many delicious flavors

• SUSE Cloud (A pretty package): A commercially packaged OpenStack solution backed by Ceph.
• Ganeti (RADOS for Archipelago): Virtual server management software tool on top of Xen or KVM.
• Proxmox (RBD backed): Complete management with KVM and containers.
• OpenNebula (BBC territory): Talk next week in Berlin.

Project Intersection
You can always use more friends

• Kernel (Since 2.6.35): Kernel clients for RBD and CephFS. Active development as a Linux file system.
• Wireshark (Love me!): Slightly out-of-date. Some work has been done, but could use some love.
• STGT (iSCSI ahoy!): One of the Linux iSCSI target frameworks. Emulates SBC (disk), SMC (jukebox), MMC (CD/DVD), SSC (tape), OSD.
• VMWare (Getting creative): A creative community member used Ceph to back their VMWare infrastructure via fibre channel.

Project Intersection
MOAR projects

• Hadoop (CephFS): CephFS can serve as a drop-in replacement for HDFS.
• Samba (Upstream): Ceph vfs module in upstream Samba.
• Ganesha (CephFS or RBD): Re-exporting CephFS or RBD for NFS/CIFS.
• XenServer (Recently Open Source): Commercially supported product from Citrix, recently open sourced. Still a bit of a tech preview.

Doing it with Xen*
Don't let the naming fool you, it's easy

• Support for libvirt: XenServer can manipulate Ceph!
• Blktap{2,3,asplode}
• Qemu: new boss, same as the old boss (but not really)

What's in a name?
• Ceph :: XenServer :: libvirt
• Block device :: VDI :: storage vol
• Pool :: Storage Repo :: storage pool
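A minimal libvirt-python sketch of the "Pool :: Storage Repo :: storage pool" column: defining a Ceph RBD pool as a libvirt storage pool. The monitor host, cephx user, secret UUID, and connection URI are placeholders.

    # Sketch: define a Ceph RBD pool as a libvirt storage pool.
    # Monitor hostname, cephx user, and secret UUID below are placeholders.
    import libvirt

    pool_xml = """
    <pool type='rbd'>
      <name>ceph-rbd</name>
      <source>
        <name>rbd</name>
        <host name='mon1.example.com' port='6789'/>
        <auth username='admin' type='ceph'>
          <secret uuid='00000000-0000-0000-0000-000000000000'/>
        </auth>
      </source>
    </pool>
    """

    conn = libvirt.open('xen:///system')           # or qemu:///system on KVM hosts
    pool = conn.storagePoolDefineXML(pool_xml, 0)  # persist the pool definition
    pool.create(0)                                 # start it; volumes map to RBD images
    print(pool.listVolumes())
    conn.close()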

XenServer host arch
Thanks David Scott!

• Client (CloudStack, OpenStack, XenDesktop)
• xapi (XenAPI)
• xenopsd + SM adapters
• libvirt, libxl, ceph, ocfs2, libxenguest, libxc
• xen

Gateway Drug
No matter what you use!

• Come for the block: stay for the object and file
• Reduced Overhead: easier to manage one cluster
• "Other Stuff": CephFS prototypes, fast development profile, ceph-devel, lots of partner action

Blocks are delicious
But what does that mean?

• Squash Hotspots: multiple hosts = parallel workload
• Instant Clones: no time to boot for many images
• Live Migration: shared storage allows you to move instances between compute nodes transparently.
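A hedged libvirt-python sketch of the live-migration point: with RBD as the shared storage, only memory and device state have to move between hosts. Host URIs and the domain name are placeholders.

    # Sketch: live-migrate a guest between two hosts sharing RBD-backed storage.
    # Connection URIs and the domain name are placeholders.
    import libvirt

    src = libvirt.open('xen+ssh://host1.example.com/system')
    dst = libvirt.open('xen+ssh://host2.example.com/system')

    dom = src.lookupByName('guest-01')
    # No block copy needed on shared storage; only memory/state moves.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    src.close()
    dst.close()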

Objects can juggle
And less filling!

• Flexible APIs: native support for Swift and S3
• Secondary Storage: coming with 4.2
• Horizontal Scaling: easy with HAProxy or others
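And the S3 side, as a small sketch with classic boto pointed at an RGW endpoint; the gateway host and the access/secret keys are placeholders for your own gateway users.

    # Sketch of the S3 path: classic boto against RGW. Host and keys are placeholders.
    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='rgw.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    bucket = conn.create_bucket('backups')
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('stored via the S3 API on RGW')
    print([b.name for b in conn.get_all_buckets()])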

Files are tricksy
You can dress them up, but you can't take them anywhere

• Neat prototypes: image distribution to hypervisors
• Still early: you can fix that!
• Outside uses: great way to combine resources.

Deploying this stuff
Where the metal meets the…software

Orchestration
The new hotness

• Chef (Procedural, Ruby): Written in Ruby, this tool is more on the dev side of DevOps. Once you get past the learning curve it's powerful, though.
• Puppet (Model-driven): Aimed more at the sysadmin, this procedural tool has very wide penetration (even on Windows!).
• Ansible (Agentless, whole stack): Using the built-in OpenSSH in your OS, this super easy tool goes further up the stack than most.
• Salt (Fast, 0MQ): Using ZeroMQ, this is designed for massive scale and fast, fast, fast. Unfortunately 0MQ has no built-in encryption.

Orchestration Cont'd
MOAR HOTNESS

• Juju (Canonical Unleashed): Being language agnostic, this tool can completely encapsulate a service. It can also handle provisioning all the way down to hardware.
• Crowbar (Dell has skin in the game): Complete operations platform that can dive all the way down to the BIOS/RAID level.
• ComodIT (Others are joining in): Custom provisioning and orchestration; just one example of how busy this corner of the market is.
• Ceph-deploy (Doing it w/o a tool): If you prefer not to use a tool, Ceph gives you an easy way to deploy your cluster by hand.

Ceph Community
All your space are belong to us

Code Contributions: up and to the right!
Commits: up and to the right!
List Participation: up and to the right!

What's Next?
This Ceph thing sounds hot.

The Ceph Train
Hop on board!

• Geo-Replication (An ongoing process): While the first pass for disaster recovery is done, we want to get to built-in, world-wide replication.
• Erasure Coding (Reception efficiency): Currently underway in the community!
• Tiering (Headed to dynamic): Can already do this in a static pool-based setup. Looking to get to a use-based migration.
• Governance (Making it open-er): Been talking about it forever. The time is coming!

Get Involved!
Open Source is Open!

• CDS (Quarterly Online Summit): Online summit puts the core devs together with the Ceph community.
• Ceph Day (Not just for NYC): More planned, including Santa Clara and London. Keep an eye out: http://inktank.com/cephdays/
• IRC (Geek-on-duty): During the week there are times when Ceph experts are available to help. Stop by oftc.net/ceph
• Lists (Email makes the world go): Our mailing lists are very active, check out ceph.com for details on how to join in!

Projects
Patches welcome

• Wiki (http://wiki.ceph.com/04Development/Project_Ideas): Lists, blueprints, sideboard, paper cuts, etc.
• Redmine (http://tracker.ceph.com/): All the things!
• IRC (New: #ceph-devel): Splitting off developer chatter to make it easier to filter discussions.
• Lists (http://ceph.com/resources/mailing-list-irc/): Our mailing lists are very active, check out ceph.com for details on how to join in!

Questions? Comments?
Anything for the good of the cause?

E-MAIL: patrick@inktank.com
WEBSITE: Ceph.com
SOCIAL: @scuttlemonkey | @ceph | Facebook.com/cephstorage