EDITORIAL

Storing Data

The cloud promises entry into the digital, virtual world. The old server racks are dismantled, the data is transferred, and users work on their laptops over the internet, wherever they like. Cloud providers promise that data and services are available around the clock, and they also guarantee that the data will never be lost.

In the cloud, however, the stored data does not hang in mid-air. It is still stored on hard disks in server racks in data centres, which simply no longer need to be located near the data's users. These data centres often hold the data of several departments, even of several companies. In public clouds, the data of countless private and commercial customers is stored side by side. And yet data security is guaranteed and enforced, so that only authorized users actually have access.

This requires a special infrastructure in these data centres, but also special technology and software. This issue focuses on exactly that technology and software. We present various storage solutions (Ceph, Rook) as well as storage vendors (NetApp) that try to solve the challenges described above.

The goal of these solutions and offerings is to make optimal use of the available storage. For companies aiming at a private cloud solution, the right tools make it possible to calculate an initial investment tailored to actual demand. After all, whoever can make optimal use of existing storage capacity does not have to invest in additional capacity.

Going beyond storage, we are starting the Project Delivery series. In it we cover topics that go hand in hand with cloudification: collaboration models, agile working, adapted processes, project delivery in the customer's interest without losing sight of the DevOps engineers involved, … Whoever commits to the cloud enters more than just a new technical world. And we want to help you take these steps not blindly, but consciously, with a plan, and digitally for the long term.

Of course, we have again tested and evaluated further cloud offerings in detail. As always, you will find the test results at the end of the issue.

On behalf of Cloudibility, I wish you a successful, cloudy, and nonetheless sunny year 2019! Enjoy reading!

Yours, Friederike Zelke, Editor in Chief

the cloud report 01—2019

IMPRESSUM

Publisher Cloudibility UG, Kurfürstendamm 21, 10719 Berlin
Managing Directors Michael Dombek and Karsten Samaschke
Head of Publication / Editor in Chief Friederike Zelke
Editors Julia Hahn, Emelie Gustafsson
Online Editor Stefan Klose
Distribution Linda Kräter
Advertising Julia Hahn
Art Direction and Design Anna Bakalovic
Production Regina Metz, Andreas Merkert

Editorial contact [email protected]
Advertising and distribution contact [email protected] / [email protected] / [email protected]

Copyright © Cloudibility UG

the cloud report is published by Cloudibility UG, Kurfürstendamm 21, 10719 Berlin
Managing Directors Michael Dombek and Karsten Samaschke
Phone: +49 30 88 70 61 718, e-mail: [email protected]

the-report.cloud

ISSN 2626-1200

The Cloud Report is published quarterly at the beginning of January, April, July and October. It is available in two versions: an online edition, accessible via the homepage or as a download, and a printed edition, which can be subscribed to for 20 euros per year via the personalized customer portal, where a personal account is set up. Please state during registration which edition you would like to receive. The subscription can be cancelled at any time via the subscriber's personal account, or by e-mail to [email protected] no later than two weeks before the next issue appears. For the subscription we collect the relevant personal customer data. Single issues can also be purchased without a subscription at a price of 5 euros; in this case, too, the relevant personal data is collected in order to fulfil the purchase contract. Further information: http://the-report.cloud/privacy-policy

CONTENTS

EDITORIAL
Storing Data 1

COMMENTARY
Around the globe and into trouble 4

FOCUS
Stay ahead of the pack 6
Interview with Kim-Norman Sahm about Ceph 11
Rook more than Ceph 14
Ceph Day Berlin 2018 19

PROJECT DELIVERY
Mind and skill set for Digital Leadership 20

CONFERENCE REPORTS
My first OpenStack Summit 22
Cloud world in Mannheim 24

TESTS
We test Traditional Clouds 26
And the Winner is … 27
Evaluation of the technical tests 28

COMMENTARY Around the globe and into trouble

As simple as it is to produce, consume, manipulate, and store data and applications in cloud environments, you need to ensure their availability, backup, and recovery. Surely, this can be handled by built-in mechanisms of the respective environments, such as multi-location storage, backup, and restore – but what about data safety, data integrity and privacy?

In modern cloud environments data can easily be stored and backed up in multiple locations around the globe. These approaches provide a lot of advantages: protection from local disasters such as earthquakes or fires, faster availability of data to clients and customers in different regions of the world, decentralized and parallel processing of data, better utilization of resources, etc. And it can be initiated easily, without having to program or to learn about cloud technologies.

Brave new world, problems solved, cloud technology for the win!

But, unfortunately, things are not that simple and easy, at least from a non-technical point of view, as cloud vendors have to comply with legal regulations at the location of their data centers. Which basically means: legal regulations such as data privacy laws, and the rights of local authorities, apply.

And that makes things complicated, as most organizations and individuals are not aware of the consequences and implications of utilizing geo-redundancy. Such consequences could be: authorities of countries may gain access to confidential or user-specific data; privacy regulations would no longer be fulfilled; data security might be hampered; business models and customers' trust would vanish – if those consequences have not been taken into account, mitigated, legally checked and communicated properly.

Is geo-redundancy a no-go then?

Of course not, but a proper planning process needs to be established and executed, involving legal and data compliance teams and clarifying these aspects with the same priority as solving technical issues. Actually, a process like this needs to be executed before, while and after solving technical issues, executing 3-2-1 backup strategies (3 copies of data, 2 storage media, at least 1 offsite) or processing any data in a cloud environment. This ongoing and permanent process is even more necessary when trying to utilize multi-cloud environments for better data isolation or to harness advantages of specific cloud environments. And it needs to remain in place, considering the ever-changing nature of laws and regulations.
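The 3-2-1 rule (3 copies, 2 media, at least 1 offsite) is mechanical enough to check in code. A minimal sketch in Python; the `Copy` record and `satisfies_3_2_1` helper are invented for illustration and are not part of any real backup tool:

```python
from dataclasses import dataclass

@dataclass
class Copy:
    """One stored copy of a dataset (hypothetical model for illustration)."""
    medium: str    # e.g. "disk", "tape", "object-store"
    offsite: bool  # stored outside the primary site?

def satisfies_3_2_1(copies: list[Copy]) -> bool:
    """Check the 3-2-1 rule: >= 3 copies, >= 2 distinct media, >= 1 offsite."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

# A primary disk copy, a local tape copy, and an offsite object-store copy:
plan = [Copy("disk", False), Copy("tape", False), Copy("object-store", True)]
print(satisfies_3_2_1(plan))  # True
```

Note that the rule says nothing about the legal side discussed here: the offsite copy may well land in another jurisdiction, which is exactly where the geo-redundancy concerns begin.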

So, when trying to secure data by storing and processing it in cloud environments, it is not enough to just press a button or execute a command. It is not enough just to think of a backup strategy. It is not enough to solve a technical issue or to provide faster transport of data to users. It may even be dangerous, if not critical, for a business to simply store data in cloud environments or to utilize awesome technical advantages such as Amazon S3, Azure Storage, blob and object stores, etc. It is not enough to hope to comply with GDPR or other privacy regulations – you actually need to comply with them, everywhere and anytime.

When executing businesses in cloud environments, governance and legal need to be involved. They need to be part of a – THE – process. Which again should make you think of how to set up and execute processes and ensure sustainability in your cloud strategy. DevOps and other modern collaboration approaches are required to sensitize for the problems and consequences of just "lifting and shifting" into cloud environments (which is way too often performed by setting up VMs, firewalls and infrastructures, bringing in applications, and not realizing the implications of moving from self-owned and self-operated data centers into vendor-owned and vendor-operated cloud environments) or of simply implementing and setting up technical approaches.

Cloud is complex. It often encapsulates technical complexity, making approaches such as geo-redundancy and data replication as easy as clicking a button. But it cannot and will not abstract away legal problems and data security aspects. Moving and executing in cloud environments, your responsibility does not decrease, it actually increases, and it therefore needs to be understood, accepted and managed continuously and as part of a process.

As with all cloud solutions and approaches, you remain in command and you remain responsible for everything your organization creates, operates and stores.

Literally everywhere on the planet.

Karsten Samaschke
Co-Founder and CEO of Cloudibility
Kurfürstendamm 21, 10719 Berlin
[email protected]

FOCUS Stay ahead of the pack and capture the full potential of your cloud business

In today's IT ecosystem, the cloud has become synonymous with flexibility and efficiency. However, all that glitters is not gold, since applications with fixed usage patterns often continue to be deployed on-premises. This leads to hybrid cloud environments, creating various data management challenges. This article describes available solutions to tackle risks such as scattered data silos, vendor lock-ins and lack-of-control scenarios.

The famous quote of Henry Ford, "If you always do what you've always done, you'll always get what you've always got", describes pretty accurately what happens once you stop challenging current situations to improve them for the future: you become rigid and restricted in your thinking, with the result of being unable to adapt to new situations. In today's business environment, data is considered the base that helps organizations succeed in their digital transformation by deriving valuable information, eventually leading to a competitive advantage. The value of data was also recognised by the Economist in May 2017, stating that data has replaced oil as the world's most valuable resource. Why is that? The use of smartphones and the internet has made data abundant, ubiquitous, and far more valuable, since nowadays almost any activity creates a digital trace, no matter if you are just taking a picture, having a phone call or browsing through the internet. Also, with the development of new devices, sensors, and emerging technologies, there is no doubt that the amount of data keeps growing. According to the IBM Marketing Cloud report "10 Key Marketing Trends For 2017," 90 % of the data in the world today has been created in the last two years, and the majority of it is unstructured. To get an understanding of this, figure 1 illustrates the number of transactions executed every 60 seconds for a variety of data-related products within the ecosystem of the internet.

Estimates suggest that by 2020 about 1.7 MB of new data will be created every second for every human on the planet, leading to 44 zettabytes of data (or 44 trillion gigabytes). The exploding volume of data changes the nature of competition in the corporate world. If an organization is able to collect and process data properly, the product scope can be improved based on specific customer needs, which attracts more customers, generating even more data, and so on.

Figure 1: 60 seconds in the internet (source: https://www.beingguru.com/2018/09/what-happens-on-internet-in-60-seconds-in-2018/)

The value of data can also be illustrated with the Data-Information-Knowledge-Wisdom (DIKW) pyramid, referring back to the initial quote (figure 2). Typically, information is defined in terms of data, knowledge in terms of information, and wisdom in terms of knowledge; data is hence considered the initial base for gaining wisdom. As a result, the key to success in the digital era is to maximize the value of data. That might mean improving the customer experience, making information more accessible to stakeholders, or identifying opportunities that lead to new markets and new customers.

All that glitters is not gold

Figure 2: DIKW pyramid (by Longlivetheux)

In addition to the observation that the quantity of data is growing exponentially, further challenges can be derived for the following three categories:
– Distributed: Data is no longer located at one location such as your local data centre. Data relevant for enterprises is distributed across multiple locations.

– Diverse: Data is no longer available just in a structured format. As already mentioned, most of the data being created is unstructured data such as images, audio/video files, emails, web pages, social media messages, etc.
– Dynamic: Given the described increase in quantity, data sets grow quickly and can change over time. Hence, it is difficult to keep track of where the data is located and where it came from.

According to the IDC study "Become a Data Thriver: Realize Data-Driven Digital Transformation (2007)", leading digital organizations have discovered that the cloud, with its power to deliver agility and flexibility, has the ability to tackle the described challenges and is indispensable for achieving their digital transformation. Cloud computing therefore helps the business stay flexible and efficient in an ever-changing environment. It enables customers to deploy services or run applications with varying usage needs, and to pay for what they need, when they need it. This realization leads most organizations to hybrid IT environments, in which data is generated and stored across a combination of on-premises, private cloud, and public cloud resources. The existence of a hybrid IT environment is probably the result of organic growth and might be more tactical than strategic. Different lines of business in the organisation are likely using whatever tools they need to get their jobs done without involving the IT department. This approach creates numerous challenges for IT teams, such as knowing what data is where, protecting and integrating data, securing data and ensuring compliance, figuring out how to optimize data placement, and seamlessly moving data into and out of the cloud as needed. The question of data movement becomes even more crucial with regard to potential vendor lock-ins. To address those challenges, organizations must invest in cloud services while developing new data services that are tailored to a hybrid cloud environment. Deploying data services across a hybrid cloud can help organizations respond faster and stay ahead of the competition. However, all the data in the world won't do your organization any good if the people who need it can't access it. Employees at every level, not just executive teams, must be able to make data-driven decisions. While trying to support their digital transformation by creating new and innovative business opportunities fuelled by distributed, diverse and dynamic data sets, organizations often find their most valuable data trapped in silos, hampered by complexity and too costly to harness (figure 3). To underline this statement, industry research from RightScale identified that organizations worldwide are wasting, on average, a staggering 35 % of their cloud investment. Or, to put it in monetary terms, globally over $10 billion is being misspent in the provisioning of cloud resources each year.

Every cloud has a silver lining

The ultimate goal should be that business data is shared, protected and integrated at corporate level, regardless of where the data is located. Although organizations can outsource infrastructure and applications to the cloud, they can never outsource the responsibility they have for their business data. Organizations have spent years controlling and aligning the
It enables customers this statement, industry research from RightScale identi- to deploy services or run applications with varying usage fied, that organizations worldwide are wasting, on average needs that allows you to pay what you need, when you a staggering 35 % of their cloud investment. Or, to put it in need it. This realization leads most organizations to hybrid monetary terms, globally over $10 billion is being misspent IT environments, in which data is generated and stored in the provisioning of cloud resources each year. across a combination of on-premises, private cloud, and public cloud resources. The existence of a hybrid IT envi- Every cloud has a silver lining ronment is probably the result of an organic growth and might be more tactical than strategical. Different lines of The ultimate goal of the described problem should be, that business in the organisation are likely using whatever tools business data needs to be shared, protected and integrat- they need to get their jobs done without involving the IT ed at corporate level, regardless where the data is locat- department. This approach creates numerous challeng- ed. Although organizations can outsource infrastructure es for IT teams, such as knowing what data is where, pro- and applications to the cloud, they can never outsource tecting and integrating data, securing data and ensuring the responsibility they have for their business data. Or- compliance, figuring out how to optimize data placement, ganizations have spent years controlling and aligning the

Figure 3: isolated resources / data silos

Figure 4: NetApp Data Fabric

appropriate levels of data performance, protection, and security in their data centre to support applications. Now, as they seek to pull in a mix of public cloud resources for infrastructure and apps, they need to maintain control of their data in this new hybrid cloud. They need a single, cohesive data environment, a vendor-agnostic platform for on-premises and hybrid clouds, to give them control over their data. A cloud strategy is only as good as the data management strategy that underpins it, and if you can't measure it, you can't manage it. The starting point for establishing an appropriate cloud strategy is to gain insight into the available data in order to be able to control it. This implies that the data locations need to be identified, along with additional attributes: what performance, capacity and availability the data requires, and what the storage costs are. Afterwards, the data can be integrated into cloud data services, extending the capabilities within the areas of backup and disaster recovery management, DevOps, production workloads, cloud-based analytics, etc.

The following section describes a data management solution using NetApp's Data Fabric as an example, though there are a variety of vendors offering similar solutions. (Editor's note)

NetApp's Data Fabric (figure 4) empowers organizations to use data to make intelligent decisions about how to optimize their business and get the most out of their IT infrastructure. It provides essential data visibility and insight, data access and control, and data protection and security. With it, you can simplify the deployment of data services across cloud and on-premises environments to accelerate digital transformation and gain the desired competitive advantage.1 A short description of use cases within the areas of storage, analytics and data provisioning in a hybrid cloud environment is given below, tackling the main issues described earlier.

Figure 5: multi-cloud use-case scenarios

Cloud Storage
NetApp offers several services and solutions to address data protection and security needs, including:
– Backup and restore services for SaaS services such as Office365 and Salesforce
– Cloud-integrated backup for on-premises data
– End-to-end protection services for hybrid clouds

NetApp Cloud Volumes Service offers consistent, reliable storage and data management with multi-protocol support for MS Azure, AWS and Google Cloud Platform, enabling existing file-based applications to be migrated at scale and new applications to consume data and extract value quickly (figure 5). Furthermore, it enables you to scale development activities in AWS and Google Cloud Platform, including building out developer workspaces in seconds rather than hours, and feeding pipelines to build jobs in a fraction of the time. Container-based workloads and microservices can also achieve better resiliency with persistent storage provided by Cloud Volumes Service.

Azure NetApp Files similarly enables you to scale development and DevOps activities in Microsoft Azure, all in a fully managed native Azure service. NetApp Cloud Volumes ONTAP® services enable developers and IT operators to use the same capabilities in the cloud as on-premises, allowing DevOps to easily span multiple environments.

Cloud Analytics
IT infrastructures are growing more complex, and administrators are asked to do more with fewer resources. While businesses depend on infrastructures that span on-premises and cloud, administrators responsible for these infrastructures are left with a growing number of inadequate tools, which leads to poor customer satisfaction, out-of-control costs, and an inability to keep pace with innovation. NetApp Cloud Insights is a simple-to-use SaaS-based monitoring and optimization tool designed specifically for cloud infrastructure and deployment technologies. It provides users with real-time data visualization of the topology, availability, performance and utilization of their cloud and on-premises resources (figure 6).

1 The products related to NetApp's Data Fabric can be found at cloud.netapp.com.

Figure 6: Cloud Insight performance dashboard

Cloud Data Services – Cloud Sync
Transferring data between disparate platforms and maintaining synchronisation can be challenging for IT. Moving from legacy systems to new technology, server consolidation and cloud migration all require large amounts of data to be moved between different domains, technologies and data formats. Existing methods, such as relying on simplistic copy tools or homegrown scripts that must be created, managed and maintained, can be unreliable or not robust enough, and fail to address challenges such as:
– Effectively and securely getting a dataset to the new target
– Transforming data to the new format and structure
– Timeframe and keeping it up to date
– Cost of the process
– Validating that migrated data is consistent and complete

One of the biggest difficulties in moving data is the slow speed of data transfers. Data movers must move data between on-premises data centres, production cloud environments and cloud storage as efficiently as possible. NetApp Cloud Sync is designed specifically to address those issues, making use of parallel algorithms to deliver speed, efficiency and data integrity. The objective is to provide an easy-to-use cloud replication and synchronisation service for transferring files between on-premises NFS or CIFS file shares, Amazon S3 object format, Azure Blob, IBM Cloud, or NetApp StorageGRID (figure 7).
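The article does not document Cloud Sync's internals beyond "parallel algorithms", but the general pattern of parallel, integrity-checked transfer can be sketched generically. The following Python sketch (all function names are illustrative, not NetApp APIs) copies a set of local files concurrently with a thread pool and verifies each copy with a SHA-256 checksum, addressing the "validating migrated data" challenge from the list above:

```python
import hashlib
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def _sha256(path: Path) -> str:
    """Checksum a file in 1 MiB chunks so large files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_and_verify(src: Path, dst_dir: Path) -> bool:
    """Copy one file and confirm the checksum matches at the target."""
    dst = dst_dir / src.name
    shutil.copy2(src, dst)
    return _sha256(src) == _sha256(dst)

def parallel_sync(sources: list[Path], dst_dir: Path, workers: int = 8) -> bool:
    """Transfer files concurrently; report whether every copy verified."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return all(pool.map(lambda s: copy_and_verify(s, dst_dir), sources))
```

A real data mover would add retries, bandwidth control and remote protocols (NFS, CIFS, S3); the sketch only shows the copy-in-parallel-then-verify shape.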

Erik Lau, Solutions Engineer

Erik has a love for technology and the ability to tie technical concepts back to underlying business needs. He works at NetApp as a Solutions Engineer, helping customers discover technical solutions to demanding business challenges in the fields of cloud computing and data science.

NetApp Deutschland GmbH Harburger Schloßstrasse 26 21079 Hamburg

Figure 7: Integrate the cloud with your existing infrastructure

INTERVIEW Interview with Kim-Norman Sahm about Ceph

Kim-Norman Sahm is Head of Cloud Technology at Cloudibility and an expert in OpenStack, Ceph and Kubernetes. As a typical ops person he is at home in the storage field and has already delivered a number of Ceph projects. Storage options and capacities have always played a big role in IT, but with the move to the cloud these options change considerably. In this interview we got to the bottom of how they change and how Ceph can be brought in.

Why is storage important? What is special about storage in the cloud? What has changed?
Storage has always been a topic. In the legacy world the requirement was to store all information that accrued, and all apps depended on having persistent storage available. In general, storage and compute resources were handled very generously: even for the smallest applications, oversized servers were often bought that normally remained 90 % unused. At the same time, systems were inflexible and divided into monolithic storage blocks. There were only a few storage manufacturers, and their offerings were often very expensive.

Over time the trend moved towards using resources more efficiently, both compute and storage. Virtualization was seen as the solution for using compute resources more efficiently. The storage sector shifted towards SDS solutions (Software Defined Storage), which bring a whole range of advantages: cost efficiency, flexibility, elasticity, … Software solutions break up the hard boundaries of classic storage systems and enable distributed systems and geo-redundancy; with Ceph, for example, they avoid vendor lock-in, meaning the storage system no longer depends on a single manufacturer. In the cloud, storage has meanwhile become a service: the customer only wants to pay for what they actually use. This also has advantages for the provider, who can use the available space flexibly and therefore efficiently.

Kim-Norman Sahm

Figure 1: Ceph overview

With regard to cloud native applications, the mentality has also changed: only what has to be stored is stored, no longer everything. Most microservices, for example, are stateless; they no longer store any data. In this respect, too, storage is used efficiently.

How does Ceph come into play here?
Ceph is a Software Defined Storage solution that originated in Sage Weil's doctoral thesis, and it successfully established itself in the then still sparsely populated Software Defined Storage market. The open source solution offers a highly available storage backend that runs on any x86 server hardware. Put simply, Ceph aggregates all physical disks in the cluster and provides them as a logical, highly available storage pool whose total capacity is the sum of all disks. This can then be divided into several logical pools that are made available to the applications.

A big advantage of Ceph is that it can provide block, object and file storage from a single backend. You are not in the situation of having to procure a separate storage solution for each storage type (figure 1).

"A big advantage of Ceph is that it can provide block, object and file storage from a single backend."

How does Ceph work?
When the Ceph project was started, the offering was limited to block and object storage. In contrast to other storage solutions, which connect the client to the storage system via gateway or proxy nodes, Ceph followed a "no single point of failure" design from the very beginning. The Ceph architecture initially consisted of the Ceph Monitor (Mon) and the Ceph OSD (Object Storage Daemon). Mons provide the cluster logic; there must be at least 3 and at most 11 monitors in the cluster, and their number must always be odd because of the quorum. The task of the Mons is to monitor the cluster state and to guarantee the highly available distribution of the objects. For this purpose, the Mons maintain the CRUSH map, a kind of location plan of the objects. The actual payload data is stored on the OSD nodes; one OSD always represents exactly one physical disk.

When a client wants to access block storage data, it first contacts one of the monitor nodes and requests the CRUSH map. Using this map and the CRUSH placement algorithm, the client is able to compute on its own which OSDs hold the data it needs, and then contacts the corresponding OSD nodes directly (figure 2).

Figure 2: Ceph flow

When data is written, the system behaves analogously. To guarantee high availability, objects are replicated three times by the Ceph cluster (the Ceph default value): an object is written once, but afterwards it exists three times in the cluster. The replication level is adjustable, but you are balancing high availability against cost efficiency. The special feature here is that the write is only acknowledged to the client once all replicas have been written. This, however, poses a difficulty when building geo-clusters, because packet round-trip times can lead to problems. That is why there are no Ceph geo-clusters. Among other things, the Ceph project is currently working on asynchronous writes to make geo-clusters possible.

A big step forward for the Ceph project came when the OpenStack community became aware of Ceph, which lends itself excellently as a backend for OpenStack Cinder (block storage) and as a replacement for OpenStack Swift (object storage). This development helped Ceph to a higher market share; to this day, Ceph is considered the standard storage backend for OpenStack. To complete the trinity of storage, Ceph introduced CephFS, a network-based filesystem whose client module has been part of the Linux kernel since version 2.6.

Why is Ceph used? What is it important for?
Thanks to its versatility, Ceph is a good fit for many companies. With its great scalability, Ceph makes it possible to start with a small setup and let it grow with increasing demand and usage. Whether as a pure object store for backups and other applications, as a backend for private cloud solutions based on OpenStack or KVM, or as an NFS replacement for Linux clients, Ceph can be used flexibly. Thanks to its good integration into Kubernetes, using Ceph in the container world is also feasible.

"With its great scalability, Ceph makes it possible to start with a small setup."

In most management meetings, the main argument for introducing Ceph is the price advantage over commercial closed source enterprise storage solutions. Inexpensive server hardware and community software allow a start with low capex. If running open source software with community support gives you sleepless nights, you can purchase commercial support from the Linux distributors. The subscription model is very differentiated and should be examined thoroughly in advance. As a general rule: the more versatile the Ceph setup, the greater the challenge in day-to-day operations. The ops team has to be correspondingly fit.

The interview was conducted by Friederike Zelke.

Figures on pages 12 and 13: http://docs.ceph.com/docs/master/architecture/

FOCUS Rook more than Ceph

Rook allows you to run Ceph and other storage backends in Kubernetes with ease. Storage, especially block and filesystem storage, can be consumed in Kubernetes-native ways. This allows users of a Kubernetes cluster to consume storage as easily as in "any" other standard Kubernetes cluster out there, letting them "switch" between Kubernetes offerings to run their containerized applications. Looking at storage backends such as Minio and CockroachDB, Rook can also potentially reduce costs if you use it to simply run CockroachDB yourself instead of consuming it through your cloud provider.
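Consuming Rook-provisioned storage "the Kubernetes-native way" usually means an ordinary PersistentVolumeClaim against a StorageClass that Rook created. A hypothetical manifest; the StorageClass name `rook-ceph-block` is an assumption and depends entirely on how your Rook/Ceph cluster was set up:

```yaml
# Hypothetical PVC consuming Ceph block storage provisioned by Rook.
# "rook-ceph-block" is an assumed StorageClass name; check your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
    - ReadWriteOnce        # block storage: one node mounts it read-write
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block
```

A pod then references `mysql-data` like any other claim, which is what makes the application portable across Kubernetes offerings.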

Data and Persistence

Aren't we all in love with the comfort of the cloud? Take simple backup and sharing of pictures as an example (ignoring, for now, the privacy concerns of using a company for that instead of, e.g., self-hosting, which would be a whole other topic). I love being able to take pictures of my cats, the landscape, and my food, and to share them. Sharing a picture with the world or just with your family is only a few clicks away. Best of all, even my mother can do it.

Now imagine the following situation: your phone has been stolen, and all your pictures in the cloud have been deleted due to a software bug. I, personally, would probably get a heart attack just thinking about that; I am a person who likes to look at old pictures from time to time to remember events and friends. You may ask yourself what this has to do with "data and persistence". There is a simple answer: pictures are data, and the persistence is, well, in this case gone, because your data has been deleted.

Persistence of data has a different importance to each of us. A student in America may hope for the persistence of his student debt to be lost, while a job agency may rely on keeping the data of its clients not only available and intact but also secure.

Storage: what is the right one?

Block storage
Block storage gives you block devices which you can format as you need, just like a "normal" disk attached to your system. It is used for applications such as MySQL, PostgreSQL, and more, which need the "raw" performance of block devices and the caching that comes with them.

Filesystem storage
Filesystem storage is basically a "normal" filesystem which can be consumed directly. It is a good way to share data between multiple applications in a read-and-write-heavy manner, and is commonly used to share AI models or scientific data between multiple running jobs or applications.

Technical note: if you have very old or legacy applications which are not really 64-bit compatible, you might run into problems (with the stat syscall) when the filesystem uses 64-bit inodes.

Object storage
Object storage is a very cloud-native approach to storing data. You don't store data on a block device and/or filesystem; you use an HTTP API. The most commonly known offering in the object storage field is Amazon Web Services' S3. There are also open source projects implementing (parts of) the S3 API to act as a drop-in replacement for AWS S3. Next to S3, there are other object store APIs/protocols, such as OpenStack Swift, Ceph RADOS and more.
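To make the put/get-by-key model concrete, here is a toy, in-memory sketch of the semantics behind S3-style APIs. The class and method names are illustrative only, chosen to mirror the HTTP verbs; this is not a real client library.

```python
# Toy illustration of the object-storage model: objects are opaque blobs
# addressed by (bucket, key), manipulated through simple verbs that map
# roughly onto HTTP methods (PUT, GET, DELETE) in S3-style APIs.
class ToyObjectStore:
    def __init__(self):
        # bucket name -> {key -> blob}
        self._buckets = {}

    def put_object(self, bucket: str, key: str, body: bytes) -> None:
        # PUT: store (or overwrite) a blob under a key.
        self._buckets.setdefault(bucket, {})[key] = body

    def get_object(self, bucket: str, key: str) -> bytes:
        # GET: retrieve the blob; raises KeyError if it does not exist.
        return self._buckets[bucket][key]

    def delete_object(self, bucket: str, key: str) -> None:
        # DELETE: remove the blob.
        del self._buckets[bucket][key]

    def list_objects(self, bucket: str, prefix: str = "") -> list:
        # LIST: keys are flat strings; "directories" are just key prefixes.
        return sorted(k for k in self._buckets.get(bucket, {}) if k.startswith(prefix))


store = ToyObjectStore()
store.put_object("photos", "2018/cat.jpg", b"jpeg bytes")
store.put_object("photos", "2018/food.jpg", b"jpeg bytes")
print(store.list_objects("photos", prefix="2018/"))  # -> ['2018/cat.jpg', '2018/food.jpg']
```

Note the absence of mount points or filesystems: every access goes through the API, which is what makes this model easy to expose over HTTP and to scale horizontally.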

In the end it boils down to the needs of your applications, but I would definitely keep in mind what the different storage types can offer. Once you have narrowed down which storage type can be used, look into the storage software market to see which "additional" possibilities each software can give you for your storage needs.

Storage in a cloud-native world
In a cloud-native world, where everything is dynamic, distributed, and must be resilient, it is more important than ever to preserve the feature set of the storage that holds your customer data. It must be highly available all the time, resilient to the failure of a server and/or application, and it must scale to the needs of your application(s).

This might seem like an easy task if you are in the cloud, but even clouds have limits at a certain point. If you have special needs for anything in the cloud you are using, it definitely helps to talk with your cloud provider to resolve problems. Talking to your cloud provider(s) is important before and while you are using their offering. As an example, if you experience problems with the platform itself, or scaling issues of, let's say, block storage, you can give them direct feedback and possibly work together with them to work out a fix for the issue, or they can provide another product which will be able to scale to your current and future needs.

Storage is especially problematic when it comes to scale, depending on the solution you are running/using. Assume your application in itself can scale without issues, but the storage runs into performance problems: in most cases you can't just add ten more storage servers and make the problem go away. "Zooming out" from storage to persistence in general, one must accept that there are always certain limits to persisting data. Be it the amount, speed, or consistency of data, there will always be a limit, or at least a trade-off.

Source: https://rook.io/docs/rook/v0.9/ceph-storage.html

Figure 1: Rook Architecture.

Ceph's priority will always be consistency, even if speed has to be sacrificed for that. Ceph is not the only storage backend that can be run using Rook, but more on that later.

A good example of such scaling limits is Facebook. To keep it short: Facebook at one point simply "admitted" that there will always be a delay when replicating data. They accept that when a user from Germany updates his profile, it can and will take up to 3-5 minutes before users from, e.g., Seattle, USA, will see those changes.

To summarize this section: your storage should be as cloud-native as your application. Talk with your cloud provider during testing and usage, and keep them in the loop when you run into issues. Also, don't try to push limits which can't be pushed at the current state of technology.

Rook Kubernetes integration
In Kubernetes you consume storage for your applications through these Kubernetes objects: PersistentVolumeClaim, PersistentVolume and StorageClass. Each of these objects has its own role. PersistentVolumeClaims are what users create to claim/request storage for their applications. A PersistentVolumeClaim is basically the user-facing side of storage in Kubernetes, as it stands in for a PersistentVolume behind it. To enable users to consume storage easily through PersistentVolumeClaims, a Kubernetes administrator should create StorageClasses. An administrator can create multiple StorageClasses and also define one as the default. A StorageClass holds parameters which can be "used" during the provisioning process by the specific storage provider/driver.
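As an illustration of how these objects fit together, a minimal PersistentVolumeClaim might look like the following sketch. The claim name and the StorageClass name rook-ceph-block are assumptions for a Rook-provisioned class, not names taken from the article.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data               # hypothetical claim name
spec:
  storageClassName: rook-ceph-block  # assumed StorageClass created by an administrator
  accessModes:
    - ReadWriteOnce               # one node may mount the volume read-write
  resources:
    requests:
      storage: 10Gi               # how much storage the application requests
```

The claim references the StorageClass; the provisioner named in that class then creates a matching PersistentVolume and binds it to the claim.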

You see, Rook enables you to consume storage the Kubernetes-native way. The way most operators integrate natively with Kubernetes is to "simply" watch for events happening to a certain selection of objects. "Events" are, e.g., that an object has been created, deleted, or updated. This allows the operator to react to certain "situations" and act accordingly: when a watched object is deleted, the operator could run its own cleanup routines, or, taking Rook as an example, the user creates a Ceph Cluster object and the operator begins to create all the components for the Ceph cluster in Kubernetes.

To be able to have custom objects in Kubernetes, Rook uses CustomResourceDefinitions. CustomResourceDefinitions are a Kubernetes feature which allows users to specify their own objects in their Kubernetes clusters. These custom objects allow the user to abstract certain applications/tasks; with Rook, for example, the user creates one Ceph Cluster object and has the Rook Ceph operator create all the other objects (ConfigMaps, Secrets, Deployments and so on) in Kubernetes.

What can Rook offer for your Kubernetes cluster?
Rook can turn all or selected nodes into "Ceph storage servers". This allows you to use "wasted" space on the nodes your Kubernetes cluster runs on. Besides utilizing "wasted" storage, you don't need to buy extra storage servers; you just keep this in mind when planning the hardware for the Kubernetes cluster (figure 1). With storage running on the same nodes your applications can run on, the hyperconverged aspect is also kind of covered. You might not get more performance just because your application runs on the same node as your storage with Ceph, but the Ceph and Rook projects are aware of this and will possibly look into ways of improving it.
Onto the topic of how Kubernetes mounts the storage to be consumed by your applications: if you have already heard a bit about storage for containers, you may have come across CSI (Container Storage Interface). CSI is a standardized API to request storage. Instead of having to maintain drivers per storage backend in the Kubernetes project, driver maintenance is moved to each storage backend itself, which allows faster fixes of driver issues. The normal process when there is an issue in an in-tree Kubernetes volume plugin is to go through the whole Kubernetes release process to get the fix out. The storage backend projects create a driver which implements the CSI driver interface/specification, through which Kubernetes and other platforms can then request storage.

For mounting Ceph volumes in Kubernetes, Rook currently uses the flexvolume driver, which may require a small configuration change in existing Kubernetes clusters. Using CSI with Rook Ceph clusters will hopefully soon be possible, once CSI support has been implemented in the Rook 0.9 release. Depending on how you see it, flexvolume is just the mount (and unmount) part of what CSI is.

Running Ceph with Rook in Kubernetes
Objects in Kubernetes describe a state; e.g., a Pod object contains the state (info) on how a Pod must be created (container image, command to be run, ports to be opened, and so on). The same applies to a Rook Ceph Cluster object: it describes the user's desired state of a Ceph cluster in their Kubernetes cluster.

Before going into what the Rook Ceph operator does with such an object, a quick overview of how a "standard" Ceph cluster looks. A Ceph cluster always has one or more Ceph Monitors, which are the brain of the cluster, and a Ceph Manager, which takes care of gathering metrics and doing other maintenance tasks. There are more components in a Ceph cluster, but the third, which next to the Monitors and the Manager is the most important, is the one that actually stores your data: the Ceph Object Storage Daemon (OSD) is the component which "talks" to a disk or directory to store and serve your data.

The Rook Ceph operator will start and manage the Ceph Monitors, the Ceph Manager and the Ceph OSDs for you. To store data in so-called pools in your Ceph cluster, the user can simply create a Pool object. Again the Rook Ceph operator will take care of it, in this case creating a Ceph pool. The pool can then be consumed directly, using a StorageClass and PersistentVolumeClaims to dynamically get PersistentVolumes provisioned for your applications. This is how simple it is to run a Ceph cluster inside Kubernetes and consume its storage.
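The article's original listing of the Cluster object did not survive extraction. A minimal sketch consistent with the details mentioned in the text (the ceph.rook.io/v1beta1 API, the Cluster kind, dataDirHostPath, and using all applicable nodes and empty devices) might look roughly like this; treat the exact field names as an assumption against Rook 0.8/0.9, not as the original snippet.

```yaml
apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook   # where configs and some state are stored on each node
  storage:
    useAllNodes: true              # use every applicable (untainted) node
    useAllDevices: true            # use every empty device on those nodes
```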

Not going into too much detail about such a Cluster object: it instructs the Rook Ceph operator to use all nodes in your cluster as long as they are applicable (they don't have taints and/or other "restrictions" on them). For each applicable node, the operator will try to use all empty devices, and it will store configs and some state data under the dataDirHostPath, /var/lib/rook.

Creating such a Cluster object on your Kubernetes cluster, with the Rook Ceph operator running, causes the operator to react to the event that an object of type/kind Cluster in the API ceph.rook.io/v1beta1 has been created. If you searched through the Kubernetes API reference, you would find neither this API (ceph.rook.io/v1beta1) nor the object kind Cluster. As written in the previous section, user-defined APIs and objects (kinds) are introduced to the Kubernetes API by CustomResourceDefinitions. All CustomResourceDefinitions of Rook are created during the installation of Rook in your Kubernetes cluster.

Rook is more than just Ceph
Rook is a framework to make it easy to bring storage backends to run inside of Kubernetes. The focus of Rook is not only on Ceph, which covers block, filesystem and object storage, but also on persistence at a more application-specific level, by running CockroachDB and Minio through a Rook operator. Thanks to the abstraction of complex tasks/applications through CustomResourceDefinitions in Kubernetes, deploying them is as simple as creating a single Kubernetes object, as shown for the Ceph cluster. To give a quick overview of the currently implemented storage backends besides Ceph:

- Minio: an open source object storage which implements the S3 API.
- CockroachDB: provides ultra-resilient SQL for global business. Rook allows you to run it through one object to ease its deployment.
- NFS: NFS exports are provided through the NFS Ganesha server on top of arbitrary PersistentVolumeClaims.

For more information on the state and availability of each storage backend, please look at the "Project Status" section in the README file of the Rook GitHub project. Please note that not all storage backends listed here are available in Rook version 0.8, which at the time of writing is the latest version; some are currently only in the latest development version, but a 0.9 release is targeted to happen soon.

Rook project roadmap
To give you an outlook on what may be coming up, a summary of the current Rook project roadmap:

- Further stabilization of the CustomResourceDefinition specifications and the managing/orchestration logic for Ceph, CockroachDB, Minio and NFS.
- Dynamic provisioning of filesystem storage for Ceph.
- Decoupling the Ceph version from Rook, to allow users to run "any" Ceph version.
- Simpler and better disk management, to allow adding, removing and replacing disks in a Rook Ceph cluster.
- Adding Cassandra as a new storage provider.
- An object storage user CustomResourceDefinition, to allow managing users by creating, deleting and modifying objects in Kubernetes.

There is more to come; for a more detailed roadmap, please look at the roadmap file in the Rook GitHub project.

How to get involved?
If you are interested in Rook, don't hesitate to connect with the Rook community and project in the following ways:

- Twitter: @rook_io
- Slack: https://rook-io.slack.com/ (for conferences and meetups, check out the #conferences channel)
- Contribute to Rook: https://github.com/rook/rook and https://rook.io/
- Forums: https://groups.google.com/forum/#!forum/rook-dev
- Community meetings

For questions, just hop on the Rook.io Slack and ask in the #general channel.

Alexander Trost
Rook Maintainer and DevOps Engineer
[email protected]


Ceph Day Berlin 2018

Berlin. At the CityCube Berlin, in the run-up to the OpenStack Summit, a whole day, 12 November 2018, was dedicated solely to Ceph. Cloudibility was there as well with three people, albeit only as visitors. Ceph Day Berlin was a full-day event dedicated to spreading the transformative power of Ceph and promoting the vibrant Ceph community, hosted by that community and its friends. Ceph is a scalable, open-source, software-defined storage system that can fundamentally improve the economics and the management of data storage for companies.

An estimated 350 participants from a wide range of language areas were present on this beautiful autumn day, devoting themselves almost exclusively to the talks, and to conversations with each other during the breaks and at the end of the event.

Sage Weil, Red Hat, founder and chief architect of Ceph, opened the event with a talk on the State of Ceph and took the opportunity to announce the founding of the Ceph Foundation, organized as a directed fund under the Linux Foundation. Its task is the financial support of the Ceph project community; it serves as a forum for coordination activities and investments and provides the technical teams with guidance on the roadmap and the further development of project governance.

In further talks, Ceph users such as the MeerKAT radio telescope / SKA Africa (Bennett, SARAO), CERN (Dan van der Ster), the Human Brain Project (StackHPC, Stig Telfer) and SWITCHengines (Simon Leinen) presented what they have learned from their implementations and how they work with them. Partners and customers of Ceph also had their say, among them:

- Martin Verges, croit
- Aaron Joue, Ambedded Technology
- Tom Barron, Red Hat
- Jeremy Wei, Prophetstor
- Sebastian Wagner and Lenz Grimmer, SUSE
- Phil Straw, SoftIron
- Robert Sander, Heinlein Support

It was a successful event. The Ceph Days are organized by the Ceph community (and friends) in selected cities around the world and serve to promote this lively community. Alongside Ceph experts, community members and vendors, you also hear from production users of Ceph, who share what they have learned from their deployments.

Anna Filipiak

PROJECT DELIVERY

A Mind-Set and Skill-Set for Digital Leadership

The process of digital transformation currently occupies Germany as an industrial location; numerous programme and project leaders¹ are working intensively to establish individual products or applications in new, digital contexts. The ubiquity of the term tires many observers. Ultimately, however, it is evidence of the multilayered nature of the technical, economic and organizational change processes that project leaders and employees experience as digital transformation.

To shape this complexity effectively, project leaders need an adequate skill-set that enables them not only to organize the transformation process technically, but also to win employees over to it. Using cloud computing as an example, this article sketches both the potential for change inherent in the technology and the resulting areas of competence required of project leaders.

Enabling Technology

Cutting-edge technologies such as cloud computing open up new ways of organizing work, driving innovation, manufacturing products and serving customers. The seemingly unlimited elasticity and flexibility of the technical infrastructures shake the traditional boundaries of what is entrepreneurially possible. Cloud environments create the preconditions and the design space for highly innovative processes and product solutions that were previously not economically feasible. To turn this space of possibilities into a competitive advantage, all departments must be equally involved in shaping the cloud environment. This gives rise to new collaboration models and approaches that are anchored in the company in parallel with the technical migration to cloud environments. Change in the context of an enabling technology is not only a technical change, but also, and above all, a cultural change in the company.

Digital Leadership

Traditional leadership concepts focused on the technical competence of project leaders; project success was grounded above all in substantial steering capability. The effective implementation of enabling technologies such as cloud, however, requires a more agile mind-set and skill-set. The focus is not on planning and steering work, but on the communicative involvement of all colleagues concerned. Project success cannot be ensured merely by keeping to schedules, and economic added value for the company is not achieved by ticking off requirement documents or work orders. Instead, digitally adept project leaders must enable everyone involved to use the changed technology effectively and purposefully in new contexts as well.

1 In this article we use only one form of the female or male designation at a time, but of course we always mean everyone: female, male, diverse.

Figure: Traditional Project Leadership vs. Digital Project Leadership (time, scope, cost).

Adaptability also applies to leadership qualities

Digital transformation has become a widely used umbrella term subsuming very different change processes. Although the introduction of cloud environments involves a specific technology, the individual projects take place within a broad field of economic and organizational conditions. The areas of competence of digital project leaders must be conceived with corresponding flexibility. So that companies can realize the opportunities of enabling technologies individually, there is no static one-size-fits-all leadership model.

For enabling technologies to unfold their power, project leaders must empower the employees involved to become independent shapers and confident users of the technology. Personal strengths in the following four areas of competence help them do so:

Inspire to Grow
Transformation processes in cloud environments are complex tasks in a dynamic working and knowledge environment. The colleagues involved, not only in the IT departments, must learn to deal with fundamentally new concepts and applications. Project leaders inspire the team to learn. They support the team on this path of personal growth and act as coaches, mentors and teachers.

Trustworthy Determination
The migration to cloud environments and the development of new collaboration models are profound changes for an organization. A certain skepticism towards the demanding technology is understandable; some colleagues may even fundamentally doubt that increasing the flexibility of business processes is desirable or feasible. Digital project leaders reflect on this tension and include even the skeptical employees in the communication. At the same time, they drive the change process forward with determination. Their energy carries the team and the organization along. They tirelessly explain the meaning and significance of the project. Their openness and untiring communication, especially with the skeptics, creates a resilient basis of trust.

Energetic Curiosity
In an environment of rapidly changing, specialized knowledge, even experienced leaders have a limited knowledge base. Digital project leaders turn this limitation into a positive impulse of sharing knowledge and responsibility. They value critical questions more highly than uncritical answers; they are skeptical of immovable truths. They create an environment in which everyone involved in the project can develop new answers and formulate innovative solutions.

Focused Vision
Complex transformation processes cannot be planned in detail; many imponderables and dependencies influence the course of the project. Digital project leaders accept this uncertainty. They have a clear idea of the result. They set a clear direction and act flexibly and adaptably along the way.

Whoever accepts the ambiguity of the competence model sketched above and instead asks, "Which concrete skills are most effective in this concrete project, in this concrete project phase?", is on the right path to digital leadership.

Felix Evert
Head of Consulting
[email protected]

My first OpenStack Summit

After a couple of months as a Cloudibility employee, I felt brave enough to visit the OpenStack Summit. Mid-November, rain in the air, and a lemming migration towards the CityCube Berlin. What could possibly go wrong?

What?
OpenStack is a free and open-source software platform for cloud computing, and the community pertaining to it. This community comprises companies, organisations and individuals that develop software for, or in other ways support, OpenStack. In 2010 OpenStack started as a joint project of Rackspace Hosting and NASA, but as of 2016 it is managed by the OpenStack Foundation. Since then, more than 500 companies have joined the project.

How come?
Servers and storage are a must-have. But sooner or later the limits of the physical servers start to show, and dealing with this requires a lot of time and money, something that very few companies appreciate: adding RAM, bigger disks, and CPUs. The next step, virtualization, amounted to adding the hypervisor, a program that allows multiple operating systems and applications to share a single hardware processor. What a treat! But later on this solution also started to chafe, as the system administrator had to add several types of hypervisors and virtual servers from different vendors. The more was added, the trickier it got to keep up and keep an overview. Being able to handle the physical servers virtually through a dedicated program was a positive change, but it did not eliminate the frustration, as developers and users still did not manage to get their requests handled and receive the results within a reasonable timeframe.

And so! OpenStack comes in as the next layer on top of the already virtualized infrastructure and the hypervisors. Through OpenStack, all the different parts of the infrastructure become accessible to the user. Now it is possible to handle the IT environment without having to order the infrastructure through an IT architect. No worries, the physical servers are still there, but access to them is simplified.

For who?
But who wants to work in additional layers? VMware, KVM, Xen, Hyper-V etc., and THEN OpenStack on top? Complex to implement, a steep learning curve, and, as a constantly developing solution, there is always the risk of users coming across inaccurate and/or outdated documents within the community. What a headache! Wouldn't it be better to just place that order with the IT architect and then get yourself a coffee?

This we need to sort out!
Do you want to virtualize and scale your IT structure? If you are still considering whether you are going to virtualize or not, maybe OpenStack isn't what you need for the moment.
Are you looking to automate repetitive chores while updating your infrastructure? Now OpenStack could be interesting!
Do you have several teams involved in your processes? Every addition is a new opportunity for miscommunication and complication. Let's minimize this.
Would you like to have more control with a unitary dashboard? Yes. The answer to this is yes. A unitary dashboard will help you to manage your IT infrastructure.
Do you like open source? I think you will find that you do! You can rely on and contribute to the community. You can do it yourself, and you can adjust and integrate with open APIs. Be independent and rely on strength in numbers at the same time!

The Summit
The OpenStack Foundation arranges several events every year, among them the OpenStack Summit. This November, it was time for the Summit to take place in Berlin. As a first-time visitor, I was excited to see what this Summit had in store for us. It turned out I was not alone: 2700 happy campers from 63 different countries came to participate in the three-day summit. Apart from the five headline sponsors, the lineup was full of OpenStack users, new as well as growing. With a program of 200 sessions and workshops, and also an exhibition area, I have to admit it was kind of tricky for a newbie to navigate.

Start from the beginning.
When in doubt, starting from the beginning is always helpful. The keynote sessions gave a good introduction. The speakers let us in on user and success stories as well as updates and pilot projects. The keynotes also serve as a guide when you are having difficulty making up your mind about which further sessions you wish to attend.

The exhibition area.
Here you can easily work the room. A friendly and chatty atmosphere characterized the exhibition: the perfect opportunity to make contacts, ask questions and see what the vendors have to offer. (More than the goodies, of course, even though I took the time to collect those as well.)

Bring a friend.
Keynotes and exhibitions in all their glory, but the best hack for a first-time visitor is to bring a friend. A friend who knows the environment, what to see and where to be. A friend who also happens to be the Head of Cloud Technology at Cloudibility. Please read the following from our very own Kim-Norman:

The OpenStack Summit is transforming more and more into an Open Infrastructure Summit. OpenStack is just one part of it; every second talk is about container infrastructure topics like Docker or Kubernetes. Software-defined network tools combine and connect the whole stack: bare metal, virtual machines and containers.

OpenStack ❤ Kubernetes
The combination of OpenStack and Kubernetes is more than a passing idea; both ecosystems work very well together. Run Kubernetes on top of OpenStack to use dynamic instance provisioning and easy autoscaling for your Kubernetes clusters, or use Kubernetes to deploy OpenStack easily. OpenStack Kolla-Kubernetes is obsolete, but there is a very nice new project to do this: OpenStack-Helm. It deploys and upgrades containerized OpenStack environments using standardized Helm charts. In combination with Rook, you are able to run OpenStack and Ceph on Kubernetes with all the benefits of microservices.

Vanilla vs. Distribution
Each enterprise OpenStack vendor uses his own deployment strategy: Ansible (SUSE), TripleO (Red Hat), Juju (Canonical), or a bunch of tools like Jenkins and Salt (Mirantis). If you ask the vendors about their upgrade strategy, you always get the same answer: upgrading is no problem. In real life, the situation looks a little different. The upgrade is still one of the trickiest parts of operating OpenStack. In my experience, there is no difference between vanilla and distribution when it comes to updates: both work well if you know what you are doing. If not, you'll have a big problem. The advantage of vanilla is that you have written the automation yourself; you know what your code is doing, which makes troubleshooting easier. The complexity gets higher if you include third-party solutions like storage or software-defined networks. The subscription models of the OpenStack distributions are very different; you need to inform yourself about scaling costs.

Emelie Gustafsson and Kim-Norman Sahm

Cloud World in Mannheim: Continuous Lifecycle and Container Conf 2018

Two big topics in one conference, Continuous Lifecycle and Container Conf, drew more than 700 participants to Mannheim, who for four days immersed themselves in the world of CD, deployment, clusters, migration, Docker, Istio, Kubernetes, cloud-native development, DevOps, and more, learning and discussing. Questions were raised and answered: How do you bring Dev and Ops together in an agile process? What can development for and in the cloud look like? Which tools are there, and how can they best be used: Helm, Draft, Skaffold? How can the CI/CD pipeline be optimized? What can sensibly be automated? What about infrastructure, testing and monitoring? What does it take to build containers "cleanly", so that the apps inside them run well, can be updated, can be tested and are secure? What should a container be allowed to do, and what should it not be allowed to do? Where can the leading edge still lead?

There were 16 session slots, and in 14 of them at least four talks ran in parallel. The program was varied and exciting, and the talks were very well attended. Unfortunately, I could only sit in one talk at a time, and the decision which one to attend was made really hard for me. All the talks I visited were of a high standard, technically exciting and innovative, and above all focused on the technology rather than promotional. Nowhere did I have the feeling that a product was actually just being sold.

Following my own interests, I spent this double conference mostly among the developers, who presented the first results on development in the cloud. The cloud world is taking its first successful steps in coding in the browser, in browser-based editors that can even be used with Chromebooks. Even Kubernetes now provides a development environment in which various tools such as Helm or Draft support setting up the development environment.
Einhellige Meinung verschiedener Sprecher: Es ist schon ziemlich cool und holt die Entwickler aus ihren dunk- len Kammern, die Tools sind insgesamt aber noch unausgereift.

Die dunklen Kammern der Entwickler waren immer wieder in verschiedenen Zusammenhängen ein Thema. Das Konzept DevOps wurde diskutiert. Viele Entwickler haben sich über Jahre ihre Arbeit organisiert, ihre Tools auf ihrem Rechner gehabt, haben geco-

24 Konferenzbericht det, und wenn sie fertig waren, haben schlag, auch im Sinne der Gesamt- schon Cloud-ready, was muss um- sie alles weitergeschoben. Cloud-ba- qualität, war das regelmäßige gebaut werden? Was sollte auf keinen siertes Entwickeln ermöglicht nicht Ausliefern von Entwicklungsergeb- Fall mit in die Cloud umziehen? Wer nur, dass man mit anderen Entwicklern nissen, auch von Teilergebnissen, hat gebaut, ist dieser noch vorhanden leichter zusammenarbeiten kann. Man diese können dann getestet und ge- und kann in den Umbauprozess ein- muss nicht darauf warten, bis der Ent- prüft werden, ob das in die richtige bezogen werden? Es gibt ver- wickler seinen Code “abgibt”, mit den Richtung geht, ob Kunden- schiedene Ansätze, diese Fragen zu richtigen Zugängen ist er schon da … anforderungen erfüllt werden, das er- beantworten. Einer sind die 12-Fac- Kann getestet werden, die Umgebung haltene Feedback kann dann zeitnah tor-App-Principles (vgl. https://12fac- kann darauf basierend schon mal auf- in die Entwicklung integriert werden. tor.net/de/). gesetzt werden, … Es gibt schöne neue Man codet nicht erst ein halbes Jahr Möglichkeiten. Aber wie kann der Ent- und bekommt dann eine riesige Das ist nur ein kleiner Einblick in die wickler abgeholt werden? Wie kann er Bug-Liste, die immer frustrieren muss. Ergebnisse der Konferenz, die un- trotz der neuen Zusammenarbeits- Natürlich ist negatives Feedback nie bedingt Lust auf mehr machte. Ins- möglichkeiten sein “Kammergefühl” angenehm, je eher das aber kommt, gesamt war alles sehr gut organisiert, behalten? Menschen sind nun einmal desto erfolgreicher und effizienter die Location toll, die Versorgung Gewohnheitstiere. 
Es gibt Tools, die es wird die Entwicklung, man nimmt Ops großartig, die Vorträge sehr gut aus- erlauben, sich seine Umgebung so und Kunde direkt von Anfang an mit, gewählt, die Ausstellung der Sponso- zu gestalten, dass es sich wie immer gerade Letzterer bekommt so das Ge- ren lud zum Wandeln ein und die an- anfühlt, sogar einige Desktop-­ fühl, dass es in seinem Sinne läuft und geregten Diskussionen in und Programme können integriert wer- voran geht. Frühzeitiges Feedback, zwischen den Vorträgen zeigte, wie den. Dennoch ändert sich etwas. Und das angenommen und umgesetzt angeregt das Publikum war. Ein gro- das muss der Entwickler auch wollen. werden kann, erleichtert die Zu- ßes Lob gilt deswegen den Organisa- sammenarbeit für alle Seiten. Zu Feh- toren! Schon am Bahnhof wurde man Ein weiteres übergreifendes Thema lern stehen und Korrekturen wollen mit Plakaten “abgeholt” und zum waren Automatisierungen im Umfeld gelingt nicht häufig problemlos, das Congress Center geleitet und auch der Continuous Delivery. Wie kann die muss von allen Beteiligten gewollt und dort blieben keine Fragen offen, man Delivery so gestaltet werden, dass sie gefördert werden, und häufig braucht wusste, wo was stattfand und wohin tatsächlich Continuous ist? Auto- es dafür einen Anstoß von außen. man sich wenden muss, die Pausen matisierte Tests während der Ent- waren genau richtig lang und das wicklung waren ein Vorschlag. Von Der Blick von außen ist auch beim Niveau der Vorträge rundete alles ab. vornherein eingeplante automatisierte Thema Migration in die Cloud in vie- Ich freue mich schon auf das nächste Tests und ein ausgeklügeltes Monito- len Fällen der sinnvollste Ansatz. Be- Jahr in Mannheim! ring des gesamten Systems, das immer vor überhaupt migriert werden kann, komplexer und dynamischer wird, was sollte das vorhandene System gründ- Friederike Zelke das Monitoring vor neue Heraus- lich analysiert werden. Was gibt es? forderungen stellt. 
Ein weiterer Vor- Was kann erhalten bleiben und ist
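The Twelve-Factor App principles mentioned above include, among other things, keeping configuration in the environment rather than in the code (factor III). A minimal sketch of that idea; the variable names DATABASE_URL and LOG_LEVEL are illustrative, not taken from the talks:

```python
import os

# Factor III of the Twelve-Factor App: configuration lives in the
# environment, not in the code or in per-environment config files.
def load_config() -> dict:
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }

if __name__ == "__main__":
    os.environ["LOG_LEVEL"] = "DEBUG"   # e.g. exported by the deployment
    print(load_config()["log_level"])   # DEBUG
```

The same build can then run unchanged in every environment; only the environment variables differ between laptop, test cluster and production.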

TESTS

We Test Clouds

The offerings around cloud computing are changing rapidly. Even the offerings of individual providers evolve regularly, so it is hardly possible to keep an overview. We at Cloudibility want to remedy this and, step by step, examine the offerings and evaluate them against objective criteria. To this end we have developed a questionnaire that providers who share our wish for transparency fill out for us, giving our readers insight into their offering. These data are complemented by test results gathered by our engineers.

The questions to the providers and the tests cover a broad range of cloud computing: we ask about and test general information on onboarding, availability, SLAs, data centers, compute, storage, network, limitations, scaling and technologies, but also more internal information such as backup, security, image service, patch management, monitoring, CI/CD, as-a-service offerings and, of course, cost.

The result is rankings and tables that help customers inform themselves independently and find the right provider for their needs. But it is not only readers of the report who receive such comprehensive, independent data: the providers, too, can learn about their market, see where they stand and where their comparative strengths lie. They can also recognize possible weaknesses and potential, see where they may need to catch up, or discover starting points for further specialization and improvement. And of course they present themselves to an interested readership and potential customer base.

Alongside the already tested providers AWS, Microsoft Azure, Google Cloud Platform and the SysEleven Stack, we have tested three new providers: the Open Telekom Cloud, Noris Cloud and IBM Cloud.

On the following pages you will find the evaluations sorted by topic. The complete evaluations are printed here. We will gradually add further clouds, so that in the coming issues only exemplary test evaluations will be reproduced; the detailed tables will then be found online: the-report.cloud

If you have suggestions for additional questions, please write to us at: [email protected].

Note: three differently sized virtual machines are used in the evaluations.

Small means:
- OS Ubuntu 16.04
- 2 vCPUs
- 8 GB RAM
- min. 50 GB HDD
- Location: Germany, if not Western Europe, if not Europe

Medium means:
- OS Ubuntu 16.04
- 4 vCPUs
- 16 GB RAM
- min. 50 GB HDD
- Location: Germany, if not Western Europe, if not Europe

Large means:
- OS Ubuntu 16.04
- 8 vCPUs
- 32 GB RAM
- min. 50 GB HDD
- Location: Germany, if not Western Europe, if not Europe
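The three test sizes above can be written down as a small test matrix. A sketch of how we might encode it; the dictionary layout is illustrative, not part of the test harness itself:

```python
# The three VM sizes used throughout the tests, as described above.
# Each size doubles vCPUs and RAM relative to the previous one;
# disk size and location policy are identical for all three.
FLAVORS = {
    "small":  {"os": "Ubuntu 16.04", "vcpus": 2, "ram_gb": 8,  "min_disk_gb": 50},
    "medium": {"os": "Ubuntu 16.04", "vcpus": 4, "ram_gb": 16, "min_disk_gb": 50},
    "large":  {"os": "Ubuntu 16.04", "vcpus": 8, "ram_gb": 32, "min_disk_gb": 50},
}

# Fallback order for the VM location, as used in the evaluations.
LOCATION_PREFERENCE = ["Germany", "Western Europe", "Europe"]

for name, spec in FLAVORS.items():
    print(f"{name}: {spec['vcpus']} vCPUs, {spec['ram_gb']} GB RAM")
```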

And the winners are ...

As with every edition, our team looked into many different aspects of several cloud providers. We analyzed their pros and cons, discussed our experiences, and decided on the winner in several categories.

Interestingly, we found that none of the vendors we tested turned out to be unrecommendable: each one has specific strengths and (of course) potential for improvement. We were especially impressed by the performance of smaller and/or not-so-well-known cloud vendors, such as SysEleven or Noris Cloud: often they offer comparable performance and sometimes even more options than their bigger competitors, combined with more personal support and very reasonable pricing.

That being said, let's look into the winners. And don't forget to check out our detailed comparison tables on the next pages for more details!

Category: Winner(s) and reason

- Storage: SysEleven Stack (highest average IOPS)
- Software-as-a-Service: Microsoft Azure (most available services across all areas); Google Cloud Platform (best and most seamless integration of services)
- Security: Microsoft Azure (ticking all the boxes and executing regular penetration tests against their own environment)
- Network: SysEleven Stack (highest measured bandwidth)
- Image Services: Noris Cloud (most comprehensive image format support)
- Database-as-a-Service: Noris Cloud (fastest MySQL and PostgreSQL performance)
- Container-as-a-Service: Google Cloud Platform (best integration and most convenient usage)
- Compute: Amazon AWS (highest CPU scores); Google Cloud Platform (best price-performance ratio); Microsoft Azure (best RAM throughput); Noris Cloud (ease of creation); IBM Cloud (most supported hypervisor types)
- Backup & Recovery: OTC (most available options at a moderate price)

As you can see, each one of our vendors has its strengths. From our perspective there is no right or wrong among cloud vendors nowadays; it is only a matter of needs and how a vendor fulfills them. The best thing is: nowadays, multi-cloud approaches can be implemented more easily than ever before, allowing for a right-tool-for-the-job approach and preventing vendor lock-in. And that is good news for 2019!

Storage

Which kinds of storage are available?
- Object / Blob Storage: AWS: yes (S3 / Glacier); Azure: yes (Azure Blob Storage); Google Cloud Platform: yes (Google Cloud Storage); SysEleven Stack: yes (S3); OTC: yes; Noris Cloud: yes; IBM Cloud: yes (IBM Cloud Object Storage)
- File Storage: AWS: yes (EFS); Azure: yes (Azure Files); Google Cloud Platform: yes (Google Drive / Persistent Disk); SysEleven Stack: yes (Manila); OTC: yes; Noris Cloud: yes; IBM Cloud: yes (IBM Cloud File Storage)
- Block Storage: AWS: yes (EBS); Azure: yes (Azure Disk Storage); Google Cloud Platform: yes (Google Persistent Disk); SysEleven Stack: yes (Cinder/Nova); OTC: yes; Noris Cloud: yes; IBM Cloud: yes (IBM Cloud Block Storage)

Block: different tier classes? AWS: yes; Azure: yes; Google Cloud Platform: yes; SysEleven Stack: no (Local SSD is planned); OTC: yes (SATA, SSD, SAS); Noris Cloud: yes; IBM Cloud: yes

Object: S3 and/or Swift? AWS: S3; Azure: Azure Blob Storage; Google Cloud Platform: Buckets (as S3); SysEleven Stack: Object S3, OpenStack Swift; OTC: S3; Noris Cloud: S3; IBM Cloud: S3, Swift

File: accessing file storage via a (cluster) file system? AWS: EFS; Azure: GlusterFS, BeeGFS, Lustre; Google Cloud Platform: Google Cloud Storage FUSE, beta: Google Cloud Filestore; SysEleven Stack: not provided as a service; OTC: NFS; Noris Cloud: CephFS; IBM Cloud: NFS

Storage capacity limits: AWS: overall size unlimited, 5 TB per S3 object; Azure: 500 TB per storage account, 200 storage accounts per subscription; Google Cloud Platform: overall size unlimited, 5 TB per individual object; SysEleven Stack: overall size 15 TB+, 32 TB of block storage; OTC: 50 TB of object storage, 10 PB of file storage; Noris Cloud: 1 TB of block storage; IBM Cloud: unlimited object storage for the standard plan, 12 TB of block storage, 12 TB of file storage

Duration of provisioning: AWS: 1.18 min; Azure: 1.32 min; Google Cloud Platform: 20 sec; remaining measured values: 10 sec and 2.5 min

Throughput IOPS (block and file storage only):
- AWS: random read bw=12311 KB/s, iops=3077; random write bw=158779 KB/s, iops=39694; random read+write: read bw=9205.6 KB/s, iops=2301, write bw=3069.9 KB/s, iops=767; sequential read bw=3537.2 MB/s, iops=452753; sequential write bw=62963 KB/s, iops=1967
- Azure: random read bw=19565 KB/s, iops=2445; random write bw=5685.7 KB/s, iops=710; random read+write: read bw=13835 KB/s, iops=1729, write bw=1576.3 KB/s, iops=197; sequential read bw=19551 KB/s, iops=2443; sequential write bw=11801 KB/s, iops=1475
- Google Cloud Platform: random read bw=2881.2 KB/s, iops=360; random write bw=11490 KB/s, iops=1436; random read+write: read bw=5086.4 KB/s, iops=635, write bw=598117 B/s, iops=73; sequential read bw=30820 KB/s, iops=3852; sequential write bw=30820 KB/s, iops=3852
- SysEleven Stack: random read bw=111019 KB/s, iops=13877; random write bw=30406 KB/s, iops=3800; random read+write: read bw=74293 KB/s, iops=9286, write bw=8464.4 KB/s, iops=1058; sequential read bw=96140 KB/s, iops=12017; sequential write bw=32778 KB/s, iops=4097
- OTC: random read bw=8041.8 KB/s, iops=1005; random write bw=64736 KB/s, iops=1011; random read+write: read bw=14454 KB/s, iops=903, write bw=1646.9 KB/s, iops=102; sequential read bw=11034 KB/s, iops=1379; sequential write bw=64540 KB/s, iops=2016
- Noris Cloud: random read bw=80730 KB/s, iops=10091; random write bw=134696 KB/s, iops=2104; random read+write: read bw=55438 KB/s, iops=3464, write bw=6316.2 KB/s, iops=394; sequential read bw=61150 KB/s, iops=7643; sequential write bw=141092 KB/s, iops=4409
- IBM Cloud: random read bw=92883 KB/s, iops=11610; random write bw=243007 KB/s, iops=3796; random read+write: read bw=78238 KB/s, iops=4889, write bw=8913.9 KB/s, iops=557; sequential read bw=79440 KB/s, iops=9929; sequential write bw=85321 KB/s, iops=2666

Limitations (IOPS; limitations because of the storage technology):
- AWS: if the volume size (SSD) is 1 GiB-16 TiB, the maximum IOPS per volume is 10,000; for 4 GiB-16 TiB it is 32,000. If the volume size (HDD) is 500 GiB-16 TiB, the maximum IOPS per volume is 500, respectively 250.
- Azure: a standard disk is expected to handle 500 IOPS or 60 MB/s; a P30 premium disk 5000 IOPS or 200 MB/s.
- Google Cloud Platform: results: the-report.cloud
- OTC: a Common I/O (SATA) disk is expected to handle 1000 IOPS or 40 MB/s; a High I/O (SAS) disk 3000 IOPS or 120 MB/s, respectively 550 MB/s; an Ultra-High I/O (SSD) disk 20,000 IOPS or 320 MB/s, respectively 30,000 IOPS or 1 GB/s.
- Noris Cloud: BSS Mass Storage is expected to handle 200 IOPS or 100 MB/s; BSS Performance Storage 1000 IOPS or 160 MB/s; BSS Ultra-SSD Storage 10,000 IOPS or 300 MB/s.
- IBM Cloud: a block storage volume from 25 GB to 12,000 GB capacity is expected to handle up to 48,000 IOPS.

Costs (total price for the 50 GB disk mounted to the VM; storage price with an additional 50 GB): AWS: €71.30; Azure: €72.03; Google Cloud Platform: €85.10; SysEleven Stack: €2.85; OTC: €6.90; Noris Cloud: €6.00; IBM Cloud: €8.23
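The bandwidth and IOPS figures above come in pairs; dividing bandwidth by IOPS yields the effective block size of each test run, which makes results from different providers easier to compare. A small sketch using two random-read pairs from the table:

```python
def block_size_kb(bw_kb_s: float, iops: int) -> float:
    """Effective block size of a test run: bandwidth divided by IOPS."""
    return bw_kb_s / iops

# Two random-read pairs from the Storage table above.
aws = block_size_kb(12311, 3077)          # AWS
syseleven = block_size_kb(111019, 13877)  # SysEleven Stack

print(round(aws), "KB and", round(syseleven), "KB")  # 4 KB and 8 KB
```

This also explains why comparing raw bandwidth alone can mislead: a run with larger blocks shows higher MB/s at the same IOPS.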


Compute

Small VM (OS Ubuntu 16.04; 2 vCPUs; 8 GB RAM; min. 50 GB HDD; location: Germany, if unavailable Western Europe, if unavailable Europe): available from all seven providers.

Medium VM (OS Ubuntu 16.04; 4 vCPUs; 16 GB RAM; min. 50 GB HDD; same location policy): available from all seven providers.

Large VM (OS Ubuntu 16.04; 8 vCPUs; 32 GB RAM; min. 50 GB HDD; same location policy): available from all seven providers.

GPU support for the VM? AWS: yes; Azure: yes; Google Cloud Platform: yes; SysEleven Stack: no; OTC: yes; Noris Cloud: no; IBM Cloud: yes

AutoScaling for VMs? yes for all seven providers

Availability zones (i.e. availability sets) possible? yes for all seven providers

Startup time (until availability):
- Small: AWS 28 sec; Azure 151 sec; Google Cloud Platform 31 sec; SysEleven Stack 26 sec; OTC 80 sec; Noris Cloud 70 sec; IBM Cloud 120 sec
- Medium: AWS 30 sec; Azure 192 sec; Google Cloud Platform 44 sec; SysEleven Stack 28 sec; OTC 83 sec; Noris Cloud 80 sec; IBM Cloud 156 sec
- Large: AWS 33 sec; Azure 203 sec; Google Cloud Platform 46 sec; SysEleven Stack 31 sec; OTC 100 sec; Noris Cloud 100 sec; IBM Cloud 322 sec

Number of steps until the VM is created: AWS: 7; Azure: 4; Google Cloud Platform: 5; SysEleven Stack: 2; OTC: 3; Noris Cloud: 2; IBM Cloud: 4

RAM throughput (sysbench, block size 1k):
- Read: AWS 792.57 MB/sec; Azure 4224.71 MB/sec; Google Cloud Platform 3199.60 MB/sec; SysEleven Stack 3500.09 MB/sec; OTC 2616.52 MB/sec; Noris Cloud 2591.06 MB/sec; IBM Cloud 590.88 MB/sec
- Write: AWS 759.62 MB/sec; Azure 2801.53 MB/sec; Google Cloud Platform 2283.16 MB/sec; SysEleven Stack 2539.60 MB/sec; OTC 1936.31 MB/sec; Noris Cloud 2256.63 MB/sec; IBM Cloud 557.13 MB/sec

CPU speed (Geekbench):
- Small single core: AWS 3340; Azure 3268; Google Cloud Platform 2909; SysEleven Stack 3303; OTC 2765; Noris Cloud 2822; IBM Cloud 2374
- Small multi core: AWS 6343; Azure 3027; Google Cloud Platform 3818; SysEleven Stack 5927; OTC 5397; Noris Cloud 5154; IBM Cloud 4552
- Medium single core: AWS 3310; Azure 2975; Google Cloud Platform 3022; SysEleven Stack 3141; OTC 2804; Noris Cloud 2825; IBM Cloud 2647
- Medium multi core: AWS 11546; Azure 9669; Google Cloud Platform 7227; SysEleven Stack 9875; OTC 9913; Noris Cloud 9781; IBM Cloud 9006
- Large single core: AWS 3363; Azure 3315; Google Cloud Platform 3065; SysEleven Stack 3302; OTC 2799; Noris Cloud 2802; IBM Cloud 2663
- Large multi core: AWS 21687; Azure 19689; Google Cloud Platform 13705; SysEleven Stack 16897; OTC 18470; Noris Cloud 15780; IBM Cloud 16253

VM accessible via console? AWS: no; all others: yes

Total cost of the VM per month (732 hrs; * = converted from USD):
- Small: AWS €65.67*; Azure €60.98; Google Cloud Platform €46.05; SysEleven Stack n/a; OTC €74.57; Noris Cloud n/a; IBM Cloud €78.77
- Medium: AWS €129.44*; Azure €121.97; Google Cloud Platform €92.10; SysEleven Stack n/a; OTC €150.28; Noris Cloud n/a; IBM Cloud €141.19
- Large: AWS €256.97*; Azure €243.94; Google Cloud Platform €184.21; SysEleven Stack n/a; OTC €292.42; Noris Cloud n/a; IBM Cloud €301.21

Supported disk formats / images (varies by provider): ISO, OVA, VMDK, VHD, VHDX, VDI, QCOW2, RAW, PLOOP, AKI, ARI, AMI, KRI, DOCKER

Can bare-metal servers be deployed via the cloud? AWS: yes; Azure: no; Google Cloud Platform: no; SysEleven Stack: no; OTC: yes; Noris Cloud: no; IBM Cloud: yes

Which hypervisor is used? AWS: KVM, Xen; Azure: Hyper-V; Google Cloud Platform: KVM; SysEleven Stack: KVM; OTC: Xen, KVM; Noris Cloud: KVM, VMware ESXi; IBM Cloud: PowerVM, VMware ESX Server, Xen, KVM, z/VM
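One way to read the cost rows is to normalize by vCPU count. A sketch with the large-VM prices from the table (732 hours per month, the large size having 8 vCPUs as defined in the test setup); providers without a price are omitted:

```python
LARGE_VCPUS = 8  # the "large" test VM has 8 vCPUs

# Monthly large-VM prices from the Compute table (EUR, 732 hrs).
monthly_eur = {"AWS": 256.97, "Azure": 243.94, "GCP": 184.21,
               "OTC": 292.42, "IBM": 301.21}

per_vcpu = {p: round(c / LARGE_VCPUS, 2) for p, c in monthly_eur.items()}

# Cheapest per-vCPU price first.
for provider, cost in sorted(per_vcpu.items(), key=lambda kv: kv[1]):
    print(f"{provider}: {cost} EUR per vCPU per month")
```

By this simple metric Google Cloud Platform is the cheapest per vCPU; note that the report's own price-performance verdict also weighs benchmark scores, not just list price.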


Backup & Recovery

Who is responsible for backups of the cloud resources (cloud provider or owner of the resources)? Owner of the resources, for all seven providers

Which types of backups are supported for VMs? AWS: snapshots, incremental backups; Azure: full, differential and incremental backups; Google Cloud Platform: snapshots, incremental backups; SysEleven Stack: instance snapshots, volume snapshots; OTC: snapshots, full and incremental backups; Noris Cloud: snapshots, full and incremental backups; IBM Cloud: snapshots, full and incremental backups

Where will the backup be stored? AWS: Amazon S3; Azure: Recovery Services Vault; Google Cloud Platform: Google Cloud Storage; SysEleven Stack: storage volume backup service; OTC: different datacenters; Noris Cloud: Ceph object storage system, different datacenters; IBM Cloud: EVault, R1 CDP

Can backups be scheduled? yes for all providers except the SysEleven Stack (no)

Usage costs (500 GB backup storage, HDD, 20 % change per month, Frankfurt / Western Europe): AWS: €40.96; Azure: €21.57; Google Cloud Platform: €13.25; SysEleven Stack: n/a; OTC: €5.00; Noris Cloud: n/a; IBM Cloud: €260.79

Is it possible to restore data from a previous date? AWS: yes; Azure: yes; Google Cloud Platform: yes; SysEleven Stack: no; OTC: yes; Noris Cloud: no; IBM Cloud: yes

Container as a Service

Which technologies are being provided/supported? AWS: Kubernetes, Docker; Azure: Kubernetes, Docker, Mesosphere; Google Cloud Platform: Kubernetes, Docker; OTC: Kubernetes, Mesosphere, Cloud Container Engine; Noris Cloud: Kubernetes; IBM Cloud: Kubernetes

Is a managed container service available? AWS: yes (EKS); Azure: yes (AKS); Google Cloud Platform: yes; OTC: yes; Noris Cloud: yes; IBM Cloud: yes

Can worker nodes be accessed directly by customers? yes for all providers offering the service

Can master nodes be accessed directly by customers? AWS: no; Azure: yes; Google Cloud Platform: yes; OTC: no; Noris Cloud: yes; IBM Cloud: yes

Which version of the technologies/Kubernetes is being offered? AWS: 1.10.3; Azure: 1.10.6 (West Europe), 1.11.1 (Canada / USA); Google Cloud Platform: 1.10.6 (West Europe), 1.11.1 (Canada / USA); SysEleven Stack: n/a; OTC: 1.9.2-r2; Noris Cloud: 1.11.3; IBM Cloud: 1.10.8, 1.9.8

How much time does it take to provide the container service?
- Cluster: AWS 11 min; Azure <2 min; Google Cloud Platform <3 min; SysEleven Stack n/a; OTC 8 min; IBM Cloud <2 min
- Worker: AWS 8 min; Azure n/a; Google Cloud Platform n/a; SysEleven Stack n/a; OTC 8 min; IBM Cloud 13 min

Costs (managed service, 732 hrs per month, overall min. 8+ GB RAM, default HDD, 1-2 vCPUs, hosted in Frankfurt or Western Europe, storage / IPs not included; * = prices in USD converted to EUR):
- AWS: €194.34 per month; machines: 4x t2.small (2 GB RAM, 1 vCPU); note: the EKS cluster itself is €0.17* per hour
- Azure: €101.24 per month; machines: 4x A1 v2 (2 GB RAM, 1 vCPU); note: AKS is free
- Google Cloud Platform: €80.23 per month; machines: 3x n1-standard-1 (3.5 GB RAM, 1 vCPU)
- SysEleven Stack: n/a
- IBM Cloud: €247.68 per month; machines: u2c.2X4 (4 GB RAM, 2 vCPU); notes: worker nodes are €0.34 per hour; this example has less computing power but more RAM than AWS's and Azure's examples

Shared or dedicated container engine cluster? AWS: dedicated; Azure: shared; Google Cloud Platform: shared; SysEleven Stack: shared; OTC: shared; Noris Cloud: dedicated; IBM Cloud: dedicated

Limitations: how many master and worker nodes can be deployed? AWS: max. 3 clusters; Azure: max. 100 nodes per cluster, max. 110 pods per node, max. 100 clusters per subscription; Google Cloud Platform: max. 1000 nodes per cluster, max. 100 pods per node; OTC: max. 100 nodes per cluster; IBM Cloud: n/a

Do you have full access to all K8s resources (no RBAC restriction)? AWS: no; Azure: no; Google Cloud Platform: no; SysEleven Stack: n/a; OTC: yes; IBM Cloud: no
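The cost notes mix hourly and monthly figures; the conversion used throughout the report is simply rate times 732 hours. As a sketch, the EKS cluster fee quoted in the table note as €0.17 per hour works out like this:

```python
HOURS_PER_MONTH = 732  # the billing month used throughout the report

def monthly_cost(rate_eur_per_hour: float, hours: int = HOURS_PER_MONTH) -> float:
    """Convert an hourly rate in EUR to a monthly total, rounded to cents."""
    return round(rate_eur_per_hour * hours, 2)

# EKS control-plane fee from the table note: EUR 0.17 per hour.
print(monthly_cost(0.17))  # 124.44 EUR per month, before any worker nodes
```

This makes it easy to see why a "free" control plane (as with AKS) can matter for small clusters: the fixed fee dominates when only a few small workers run behind it.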


IaaS: Patch Management

Does the cloud provide a managed patch service? AWS: yes (Amazon Systems Management Service); Azure: yes (Azure Automation); Google Cloud Platform: no; SysEleven Stack: no; OTC: no; Noris Cloud: no; IBM Cloud: yes (IBM BigFix Patch Management, bookable)

Which operating systems are supported?
- AWS: Linux: Red Hat Enterprise Linux (RHEL) 6 (x86/x64), 7 (x64); SUSE Linux Enterprise Server (SLES) 12 (x64); Amazon Linux 2 2-2.0 (x86/x64); Amazon Linux 2012.03-2017.03 (x86/x64); Amazon Linux 2015.03-2018.03; Ubuntu Server 14.04 LTS, 16.04 LTS and 18.04 LTS (x86/x64); CentOS 6 (x86/x64), 7 (x64); Raspbian Jessie (x86); Raspbian Stretch (x86). Windows: Windows Server 2008 through Windows Server 2016, including R2 versions
- Azure: Linux: CentOS 6 (x86/x64), 7 (x64); Red Hat Enterprise 6 (x86/x64), 7 (x64); SUSE Linux Enterprise Server 11 (x86/x64), 12 (x64); Ubuntu 14.04 LTS and 16.04 LTS (x86/x64). Windows: Windows Server 2008 R2 SP1 and later; Windows Server 2008, Windows Server 2008 R2 RTM
- Google Cloud Platform: Linux: CentOS; Container-Optimized OS from Google; CoreOS; Debian; Red Hat Enterprise Linux (RHEL); RHEL for SAP; SUSE Enterprise Linux Server (SLES); SLES for SAP; Ubuntu. Windows: Windows Server
- OTC: Linux: openSUSE 42.x; CentOS 6.x, 7.x; Debian 8.x, 9.x; Fedora 24, 25, 26, 27; EulerOS 2.x; Ubuntu 14.04.x, 16.04.x; SUSE Enterprise Linux 11, 12; Oracle Linux 6.8, 7.2; Red Hat Enterprise Linux 6.8, 7.3. Windows: Windows 2008; Windows 2012; Windows Server 2016
- Noris Cloud: Linux: openSUSE 42.x; CentOS 6.x, 7.x; Debian 8.x, 9.x; Fedora 25, 26, 27; EulerOS 2.x; Ubuntu 14.04.x, 16.04.x, 18.04.x; SUSE Enterprise Linux 11, 12; Oracle Linux 6.8, 7.2; Red Hat Enterprise Linux 6.8, 7.3. Windows: Windows Server 2012 R2; Windows Server 2016
- IBM Cloud: Linux: CentOS-Minimal 7.X; CentOS-LAMP 7.X; CentOS-Minimal 6.X; CentOS-LAMP 6.X; Debian Minimal Stable 9.X; Debian Minimal Stable 8.X; Debian LAMP Stable 8.X; Red Hat Minimal 7.x; Red Hat LAMP 7.x; Red Hat Minimal 6.x; Red Hat LAMP 6.x; Ubuntu Minimal 18.04-LTS; Ubuntu LAMP 18.04-LTS; Ubuntu Minimal 16.04-LTS; Ubuntu LAMP 16.04-LTS; Ubuntu Minimal 14.04-LTS; Ubuntu LAMP 14.04-LTS. Windows: Standard 2016; Standard 2012; R2 Standard 2012

Is the OS of the deployed VM at a current patch level? yes for all seven providers

What is the currently available patch level in our sample VM (Ubuntu 16.04 LTS with latest patches applied)? 0.04 for all seven providers

Can the cloud show/provide the patch level of existing machines? yes for all seven providers

Is an overview of the patch level of all provided images available? yes for all providers

Is a centralized update / repo server available? OTC: yes; all others: no

Tests: IaaS Patch Management

Providers compared: AWS, Azure, Google Cloud Platform, SysEleven Stack, OTC, Noris Cloud, IBM Cloud

Does the cloud provide a managed patch service?
–– AWS: yes (Amazon Systems Management Service)
–– Azure: yes (Azure Automation)
–– Google Cloud Platform: no
–– SysEleven Stack: no
–– OTC: no
–– Noris Cloud: no
–– IBM Cloud: yes (IBM BigFix Patch Management, bookable)

Which operating systems are supported?

AWS (Linux):
–– Red Hat Enterprise Linux (RHEL) 6 (x86/x64), 7 (x64)
–– SUSE Linux Enterprise Server (SLES) 12 (x64)
–– Amazon Linux 2 2-2.0 (x86/x64)
–– Amazon Linux 2012.03 - 2017.03 (x86/x64)
–– Amazon Linux 2015.03 - 2018.03 (x64)
–– Ubuntu Server 14.04 LTS, 16.04 LTS and 18.04 LTS (x86/x64)
–– CentOS 6 (x86/x64), 7 (x64)
–– Raspbian Jessie (x86)
–– Raspbian Stretch (x86)
AWS (Windows):
–– Windows Server 2008 through Windows Server 2016, including R2 versions

Azure (Linux):
–– CentOS 6 (x86/x64), 7 (x64)
–– Red Hat Enterprise Linux 6 (x86/x64), 7 (x64)
–– SUSE Linux Enterprise Server 11 (x86/x64), 12 (x64)
–– Ubuntu 14.04 LTS and 16.04 LTS (x86/x64)
Azure (Windows):
–– Windows Server 2008 R2 SP1 and later
–– Windows Server 2008, Windows Server 2008 R2 RTM

Google Cloud Platform (Linux):
–– CentOS
–– Container-Optimized OS from Google
–– CoreOS
–– Debian
–– Red Hat Enterprise Linux (RHEL)
–– RHEL for SAP
–– SUSE Linux Enterprise Server (SLES)
–– SLES for SAP
–– Ubuntu
Google Cloud Platform (Windows):
–– Windows Server

OTC (Linux):
–– openSUSE 42.x
–– CentOS 6.x, 7.x
–– Debian 8.x, 9.x
–– Fedora 25, 26, 27
–– EulerOS 2.x
–– Ubuntu 14.04.x, 16.04.x, 18.04.x
–– SUSE Enterprise Linux 11, 12
–– Oracle Linux 6.8, 7.2
–– Red Hat Enterprise Linux 6.8, 7.3
OTC (Windows):
–– Windows 2008
–– Windows 2012
–– Windows Server 2016

Noris Cloud (Linux):
–– openSUSE 42.x
–– CentOS 6.x, 7.x
–– Debian 8.x, 9.x
–– Fedora 24, 25, 26, 27
–– EulerOS 2.x
–– Ubuntu 14.04.x, 16.04.x
–– SUSE Enterprise Linux 11, 12
–– Oracle Linux 6.8, 7.2
–– Red Hat Enterprise Linux 6.8, 7.3
Noris Cloud (Windows):
–– Windows Server 2012 R2

IBM Cloud (Linux):
–– CentOS-Minimal 7.X, CentOS-LAMP 7.X
–– CentOS-Minimal 6.X, CentOS-LAMP 6.X
–– Debian Minimal Stable 9.X
–– Debian Minimal Stable 8.X, Debian LAMP Stable 8.X
–– Red Hat Minimal 7.x, Red Hat LAMP 7.x
–– Red Hat Minimal 6.x, Red Hat LAMP 6.x
–– Ubuntu Minimal 18.04-LTS, Ubuntu LAMP 18.04-LTS
–– Ubuntu Minimal 16.04-LTS, Ubuntu LAMP 16.04-LTS
–– Ubuntu Minimal 14.04-LTS, Ubuntu LAMP 14.04-LTS
IBM Cloud (Windows):
–– Standard 2016
–– Standard 2012
–– Standard 2012 R2

Is the operating system of the deployed VM at a current patch level? Yes for all seven providers.

What is the current available patch level in our sample VM (Ubuntu 16.04 LTS with latest patches applied)? 0.04 for all seven providers.

Can the cloud show/provide the patch level of existing machines? Yes for all seven providers.

Is an overview of the patch level of all provided images available? Yes for six of the seven providers; one answer is missing in the table.

Is a centralized update/repo server available? OTC: yes; all other providers: no.
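Only OTC answers yes here. Where such an internal mirror exists, VMs are usually pointed at it through the package manager configuration instead of the public archives; a hypothetical /etc/apt/sources.list fragment for an Ubuntu 16.04 VM (the mirror hostname repo.example.otc is invented for illustration):

```
# /etc/apt/sources.list: use an internal mirror instead of the public Ubuntu archive
deb http://repo.example.otc/ubuntu xenial main restricted universe multiverse
deb http://repo.example.otc/ubuntu xenial-updates main restricted universe multiverse
deb http://repo.example.otc/ubuntu xenial-security main restricted universe multiverse
```

This keeps patch traffic inside the provider network and lets the operator control which package versions are published to the fleet.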

the cloud report 01—2019

Databases (DB-as-a-Service)

Providers compared: AWS, Azure, Google Cloud Platform, SysEleven Stack, OTC, Noris Cloud, IBM Cloud

Which DB engines are offered?

AWS:
–– Relational DB: MySQL, PostgreSQL, MariaDB, Oracle, Microsoft SQL Server, Amazon Aurora
–– Non-Relational DB: Amazon DynamoDB, Amazon ElastiCache, Amazon Neptune
–– Data Warehouse / Big Data: Amazon Redshift, Amazon Athena, Amazon EMR (Hadoop, Spark, HBase, Presto, etc.), Amazon Kinesis, Amazon Elasticsearch Service, Amazon Quicksight

Azure:
–– Relational DB: Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL, Azure Database for MariaDB
–– Non-Relational DB: Azure Cosmos DB, Azure Table Storage, Redis
–– Data Warehouse / Big Data: SQL Data Warehouse, HDInsight (Hadoop, Spark, Hive, LLAP, Kafka, Storm, R), Azure Databricks (Spark), Azure Data Factory, Azure Stream Analytics

Google Cloud Platform:
–– Relational DB: PostgreSQL, MySQL, Google Cloud Spanner
–– Non-Relational DB: Google Cloud Datastore, Google Cloud BigTable
–– Data Warehouse / Big Data: Google Cloud BigQuery, Google Cloud Dataflow, Google Cloud Dataproc (Hadoop / Spark), Google Cloud Datalab, Google Cloud Dataprep

OTC:
–– Relational DB: PostgreSQL, MySQL, Microsoft SQL Server
–– Non-Relational DB: MongoDB, Redis

IBM Cloud:
–– Relational DB: Db2 on Cloud, PostgreSQL, MySQL, Microsoft SQL Server
–– Non-Relational DB: Cloudant, MongoDB, ScyllaDB, Redis, JanusGraph, etcd, Elasticsearch, MemCached
–– Data Warehouse / Big Data: Db2 Warehouse on Cloud

SysEleven Stack and Noris Cloud: none listed.

Performance of MySQL (MySQL Sysbench; table size (row data): 1,000,000; threads: 16)

Read:
–– AWS: 59354 transactions (988.96/sec)
–– Azure: 52354 transactions (988.96/sec)
–– Google Cloud Platform: 49084 transactions (817.85/sec)
–– OTC: 52545 transactions (875.53/sec)
–– IBM Cloud: 353 transactions (5.65/sec)
Write:
–– AWS: 42052 transactions (699.07/sec)
–– Azure: 41002 transactions (683.25/sec)
–– Google Cloud Platform: n/a
–– OTC: 75435 transactions (1256.91/sec)
–– IBM Cloud: 873 transactions (14.29/sec)
Read / Write:
–– AWS: 28325 transactions (471.91/sec)
–– Azure: 30412 transactions (506.86/sec)
–– Google Cloud Platform: 29329 transactions (488.66/sec)
–– OTC: 28676 transactions (477.75/sec)
–– IBM Cloud: 273 transactions (4.31/sec)
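For most columns, dividing the transaction total by the reported per-second rate gives almost exactly 60, which suggests a fixed 60-second Sysbench run (the run length is an inference from the numbers, not stated in the report). A small sanity check:

```python
# Sanity check: reported rate should be close to total transactions / run duration.
# The 60 s duration is an assumption inferred from the figures, not from the report.
results = {
    "AWS read": (59354, 988.96),
    "GCP read": (49084, 817.85),
    "OTC read": (52545, 875.53),
    "AWS read/write": (28325, 471.91),
}

ASSUMED_DURATION_S = 60

for name, (total, reported_rate) in results.items():
    implied = total / ASSUMED_DURATION_S
    print(f"{name}: implied {implied:.2f}/sec vs reported {reported_rate}/sec")
```

Note that the Azure read row (52354 transactions at 988.96/sec) implies a run of only about 53 seconds, so one of those two printed figures is likely a misprint in the original table.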

Supported DB Versions

AWS:
–– MySQL 5.7, 5.6, 5.5
–– MariaDB 10.2, 10.1, 10.0
–– Microsoft SQL Server 2017 RTM, 2016 SP1, 2014 SP2, 2012 SP4, 2008 R2 SP3
–– Oracle 12c (12.1.0.2, 12.1.0.1), Oracle 11g (11.2.0.4, 11.2.0.3, 11.2.0.2)
–– PostgreSQL 11 Beta 1, 10.4, 10.3, 10.1, 9.6.x, 9.5.x, 9.4.x, 9.3.x

Azure:
–– MySQL 5.7, 5.6
–– MariaDB currently on wait list as beta
–– Azure SQL Database: Microsoft SQL Server 2017
–– Microsoft SQL Server 2017, 2016 SP1, 2014 SP2, 2012 SP4, 2008 R2 SP3
–– PostgreSQL 10.3, 9.6.x, 9.5.x

Google Cloud Platform:
–– MySQL 5.7, 5.6
–– PostgreSQL 9.6.x

OTC:
–– PostgreSQL 9.6.5, 9.6.3, 9.5.5
–– MySQL 5.7.20, 5.7.17, 5.6.35, 5.6.34, 5.6.33, 5.6.30
–– Microsoft SQL Server 2014 SP2 SE
–– MongoDB
–– Redis 3.0.7

IBM Cloud:
–– Db2
–– PostgreSQL 9.6.10, 9.6.9, 9.5.14, 9.5.13, 9.4.19, 9.4.18
–– MySQL 5.7.22
–– Cloudant
–– MongoDB 3.4.10, 3.2.18, 3.2.11, 3.2.10
–– ScyllaDB 2.0.3
–– Redis 4.0.10, 3.2.12
–– JanusGraph 0.1.1 beta
–– etcd 3.3.3, 3.2.18
–– Elasticsearch 6.2.2, 5.6.9
–– Db2 Warehouse

Additional Services and Support
–– AWS: Roll Back (Amazon Redshift); Support
–– Azure: Troubleshooting as Service; Rollback
–– Google Cloud Platform: Rollback; Support
–– OTC: Rollback; Support
–– IBM Cloud: Rollback; Support

Total price for the database (MySQL; 2 vCores; 100 GB storage; Frankfurt / Western Europe; 100% active per month; no dedicated backup)
–– AWS: € 62.45 for db.t2.medium
–– Azure: € 59.04 for Gen 4, 2 vCores, Basic Tier
–– Google Cloud Platform: € 118.64 for db-n1-standard-2 (Generation 2)
–– OTC: MySQL: € 298.40 total for 100 GB storage; PostgreSQL: € 312.80 total for 100 GB storage
–– IBM Cloud: MySQL: € 812 total for 100 GB storage; PostgreSQL: € 542 total for 100 GB storage

How does backup/restore work?
–– AWS: Amazon RDS creates a storage volume snapshot of the DB instance for backup and restore
–– Azure: Point-in-Time Restore, Geo-Restore
–– Google Cloud Platform: automatic backups, on-demand
–– OTC: MySQL daily backups
–– IBM Cloud: MySQL daily backups


Logging as a Service

Providers compared: AWS, Azure, Google Cloud Platform, SysEleven Stack, OTC, Noris Cloud, IBM Cloud

Does the cloud platform provide a Logging-as-a-Service functionality? AWS: yes; Azure: yes; Google Cloud Platform: yes; SysEleven Stack: yes; OTC: no; Noris Cloud: no; IBM Cloud: yes

Is the data stored in encrypted form? AWS: yes; Azure: yes; Google Cloud Platform: yes; SysEleven Stack: yes; OTC: no; Noris Cloud: no; IBM Cloud: yes

Which logging technology is used?
–– AWS: AWS CloudWatch; AWS CloudTrail; AWS VPC flow logs; Amazon CloudFront access logs; Amazon S3 access logs
–– Azure: Activity Logs; Diagnostics Logs; Azure AD Reporting; virtual machines and cloud services; Azure Storage Analytics; Network Security Group (NSG) flow logs; Application Insights
–– Google Cloud Platform: Stackdriver Logging
–– SysEleven Stack: Cloud trace
–– OTC: no
–– Noris Cloud: no
–– IBM Cloud: Bluemix UI; Cloud Foundry Command Line Interface (CLI); external logging

On what basis is logging billed?
–– AWS: based on resources, region, API, event type, and alarm/resources
–– Azure: Data Ingestion: 5 GB/month, € 2.38/GB; Data Retention: 31 days, € 0.10/GB/month
–– Google Cloud Platform: Logging: € 0.43/GB
–– SysEleven Stack: n/a
–– OTC: n/a
–– Noris Cloud: n/a
–– IBM Cloud: Lite plan free; Log Collection: € 0.38/GB ingested plus € 0.08/GB stored per month; with searchable retention additionally € 3.01/day (2 GB/day search), € 7.52/day (5 GB/day search) or € 15.04/day (10 GB/day search)
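Taking the IBM rates above at face value, a monthly bill can be estimated from the ingested volume, the stored volume, and the day rate of the chosen searchable tier. A small illustrative calculator (the function and the flat 30-day month are my own simplifications, not IBM's billing logic):

```python
# Illustrative cost model for the IBM "Log Collection" tiers quoted above.
# Rates come from the table; the 30-day month is a simplification.
INGEST_EUR_PER_GB = 0.38
STORAGE_EUR_PER_GB_MONTH = 0.08
SEARCH_EUR_PER_DAY = {2: 3.01, 5: 7.52, 10: 15.04}  # keyed by GB/day search tier

def monthly_log_cost(ingested_gb, stored_gb, search_tier_gb=None, days=30):
    """Estimate one month of log-collection cost in EUR."""
    cost = ingested_gb * INGEST_EUR_PER_GB + stored_gb * STORAGE_EUR_PER_GB_MONTH
    if search_tier_gb is not None:
        cost += SEARCH_EUR_PER_DAY[search_tier_gb] * days
    return round(cost, 2)

# 100 GB ingested and retained, 2 GB/day searchable, 30-day month:
print(monthly_log_cost(100, 100, search_tier_gb=2))
```

The searchable-day fee dominates small workloads: at 100 GB/month the € 3.01/day search charge is roughly twice the combined ingest and storage cost.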


Network

Providers compared: AWS, Azure, Google Cloud Platform, SysEleven Stack, OTC, Noris Cloud, IBM Cloud

Is network monitoring available? Yes for all seven providers.

Is a Content Delivery Network (CDN) available? AWS: yes; Azure: yes; Google Cloud Platform: yes; SysEleven Stack: no; OTC: yes; Noris Cloud: yes; IBM Cloud: yes

Sample measurements (iperf)
TCP bandwidth (sender / receiver):
–– AWS: 959 / 958 Mbits/sec
–– Azure: 906 / 904 Mbits/sec
–– Google Cloud Platform: 3.85 / 3.85 Gbits/sec
–– SysEleven Stack: 9.97 / 9.97 Gbits/sec
–– OTC: 101 / 99.6 Mbits/sec
–– Noris Cloud: 3.74 / 3.74 Gbits/sec
–– IBM Cloud: 101 / 99.8 Mbits/sec
UDP bandwidth (sent):
–– AWS: 990 Mbits/sec; Azure: 923 Mbits/sec; Google Cloud Platform: 3.85 Gbits/sec; SysEleven Stack: 7.26 Gbits/sec; OTC: 99.0 Mbits/sec; Noris Cloud: 2.16 Gbits/sec; IBM Cloud: 99.0 Mbits/sec

Public IPs
–– Public IPs for VMs? yes for all seven providers
–– Available kinds of public IPs for VMs: AWS: floating/static; Azure: floating/static; Google Cloud Platform: floating/static; SysEleven Stack: floating; OTC: static; Noris Cloud: floating; IBM Cloud: floating/static
–– Public IPs for LoadBalancers? yes for all seven providers
–– Available kinds of public IPs for LoadBalancers: static for all providers except Noris Cloud (floating)

Is a dedicated network connection from the datacenter to the public cloud possible? AWS: yes (AWS Direct Connect); Azure: yes (Azure ExpressRoute); Google Cloud Platform: yes (Google Cloud Interconnect); SysEleven Stack: no; OTC: yes; Noris Cloud: yes; IBM Cloud: yes

Network Security features (network traffic analysis, network security groups)
–– AWS: AWS Web Application Firewall; network security groups; network traffic analysis
–– Azure: Network Access Controls; User-Defined Routes; Network Security Appliance; Application Gateway DDoS mitigation; Azure Web Application Firewall; Network Availability Control; network security groups (NSG) and NSG flow logs; Log Analytics workspace; Network Watcher
–– Google Cloud Platform: firewall; network security groups; network traffic analysis
–– SysEleven Stack: security groups
–– OTC: network security groups; firewalls
–– Noris Cloud: network security groups; firewalls
–– IBM Cloud: network security groups; firewalls (Multi VLAN, Single VLAN and Web App)

Traffic costs per GB
–– AWS: up to 10 TB per month: € 0.15; next 40 TB: € 0.095; next 100 TB: € 0.078; next 350 TB: € 0.043
–– Azure: up to 5 GB per month: free; next 5 GB: € 0.075; next 100 TB: € 0.06
–– Google Cloud Platform: up to 10 TB per month: € 0.073; next 140 TB: € 0.063; next 350 TB: € 0.039
–– SysEleven Stack: up to 50 TB per month: € 0.06; next 150 TB: € 0.04; over 150 TB: € 0.069
–– OTC: € 0.05
–– Noris Cloud: € 0.078
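Tiered egress prices such as the AWS column above apply marginally: each tranche of traffic is billed at its own rate, not the whole volume at the last rate reached. A small calculator using the AWS tiers from the table (tier boundaries converted with the simplifying assumption 1 TB = 1000 GB; this is my own illustration, not a billing tool):

```python
# Marginal tiered pricing using the AWS egress tiers quoted above.
# Each tuple is (size of tranche in GB, EUR per GB); 1 TB = 1000 GB for simplicity.
AWS_TIERS = [
    (10_000, 0.15),    # up to 10 TB per month
    (40_000, 0.095),   # next 40 TB
    (100_000, 0.078),  # next 100 TB
    (350_000, 0.043),  # next 350 TB
]

def egress_cost_eur(gb):
    """Bill each tranche at its own rate until the volume is exhausted."""
    cost = 0.0
    remaining = gb
    for tranche_gb, rate in AWS_TIERS:
        if remaining <= 0:
            break
        billed = min(remaining, tranche_gb)
        cost += billed * rate
        remaining -= billed
    return round(cost, 2)

# 50 TB in a month: 10 TB at EUR 0.15 plus 40 TB at EUR 0.095
print(egress_cost_eur(50_000))
```

The same function works for any of the tiered columns by swapping in that provider's tranche list; flat-rate columns (OTC, Noris Cloud) are a single multiplication.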

Security

Providers compared: AWS, Azure, Google Cloud Platform, SysEleven Stack, OTC, Noris Cloud, IBM Cloud

–– Integration with a SIEM (Security Information and Event Management) possible? AWS: yes; Azure: yes; Google Cloud Platform: yes; SysEleven Stack: no; OTC: yes; Noris Cloud: no; IBM Cloud: yes
–– Security Groups: yes for all seven providers
–– Disk Encryption: AWS: yes; Azure: yes; Google Cloud Platform: yes; SysEleven Stack: no; OTC: yes; Noris Cloud: yes; IBM Cloud: yes
–– Network Traffic Analysis: AWS: yes; Azure: yes; Google Cloud Platform: yes; SysEleven Stack: no; OTC: yes; Noris Cloud: no; IBM Cloud: yes

Protection against denial-of-service attacks: yes for all seven providers.

Firewall: does the cloud provider offer additional integrated security features, e.g. a Next Generation Firewall? AWS: yes; Azure: yes; Google Cloud Platform: no; SysEleven Stack: no; OTC: yes; Noris Cloud: yes; IBM Cloud: yes

Does the cloud provider keep an eye on current threats and take action? Yes for all seven providers.

Does the cloud provider support additional integrated security features for cloud resources using 3rd-party tools?
–– IDS (Intrusion Detection System): yes for all providers except Noris Cloud (no)
–– IPS (Intrusion Prevention System): yes for all providers except Noris Cloud (no)
–– ATP (Advanced Threat Protection): yes for all providers except Noris Cloud (no)

Does the provider carry out regular penetration tests against the platform? Azure: yes; all other providers: no.


Image Service

Providers compared: AWS, Azure, Google Cloud Platform, SysEleven Stack, OTC, Noris Cloud, IBM Cloud

Which operating systems are offered by the provider?

AWS (Windows):
–– Windows Server 2016
–– Windows Server 2012
–– Windows Server 2012 R2
–– Windows Server 2008
–– Windows Server 2008 R2
–– Windows Server 2003 R2
–– Windows 10
AWS (Linux):
–– CentOS
–– Amazon Linux
–– Gentoo
–– Mint
–– Debian
–– SUSE
–– SUSE Linux Enterprise Server
–– FreeBSD
–– RHEL
–– Ubuntu 18.04, 16.04, 14.04

Azure (Windows):
–– Windows Server, version 1709
–– Windows Server 2016
–– Windows Server 2012 R2
–– Windows Server 2012
–– Windows Server 2008 R2 SP1
–– Windows Server 2008 SP2
Azure (Linux):
–– CentOS-based 6.9
–– CentOS-based 7.4
–– ClearLinux
–– Container Linux
–– Debian 8 “Jessie”
–– Debian 9 “Stretch”
–– Red Hat Enterprise Linux 7.x
–– SLES 11 SP4
–– SLES 12 SP3
–– Ubuntu 14.04-LTS, 16.04-LTS, 18.04-LTS

Google Cloud Platform (Windows Server):
–– windows-1709-core
–– windows-1709-core-for-containers
–– windows-1803-core
–– windows-1803-core-for-containers
–– windows-2016
–– windows-2016-core
–– windows-2012-r2
–– windows-2012-r2-core
–– windows-2008-r2
Google Cloud Platform (Linux):
–– centos-6,7
–– Container-Optimized OS from Google (cos-stable, beta, dev)
–– coreos-stable, beta, alpha
–– debian-9
–– rhel-6,7
–– rhel-7-sap-apps
–– rhel-7-sap-hana
–– sles-11,12
–– sles-12-sp3-sap
–– sles-12-sp2-sap
–– ubuntu-1804-lts, 1710, 1604-lts, 1404-lts
–– ubuntu-minimal-1804-lts, 1604-lts

SysEleven Stack (Linux):
–– Ubuntu 18.04 LTS / Ubuntu 18.04 LTS sys11 optimized
–– Ubuntu 16.04 LTS / Ubuntu 16.04 LTS sys11 optimized
–– Ubuntu 14.04 LTS / Ubuntu 14.04 LTS sys11 optimized
–– Rescue Ubuntu 16.04 sys11
–– Rescue Ubuntu 18.04 sys11

OTC (Windows):
–– Windows 2008
–– Windows 2012
–– Windows Server 2016
OTC (Linux):
–– openSUSE 42.x
–– CentOS 6.x, 7.x
–– Debian 8.x, 9.x
–– Fedora 25, 26, 27
–– EulerOS 2.x
–– Ubuntu 14.04.x, 16.04.x, 18.04.x
–– SUSE Enterprise Linux 11, 12
–– Oracle Linux 6.8, 7.2
–– Red Hat Enterprise Linux 6.8, 7.3

Noris Cloud (Windows):
–– Windows Server 2012 R2
Noris Cloud (Linux):
–– openSUSE 42.x
–– CentOS 6.x, 7.x
–– Debian 8.x, 9.x
–– Fedora 24, 25, 26, 27
–– EulerOS 2.x
–– Ubuntu 14.04.x, 16.04.x
–– SUSE Enterprise Linux 11, 12
–– Oracle Linux 6.8, 7.2
–– Red Hat Enterprise Linux 6.8, 7.3

IBM Cloud (Windows):
–– Standard 2016
–– Standard 2012
–– Standard 2012 R2
IBM Cloud (Linux):
–– CentOS-Minimal 7.X / CentOS-LAMP 7.X
–– CentOS-Minimal 6.X / CentOS-LAMP 6.X
–– Debian Minimal Stable 9.X
–– Debian Minimal Stable 8.X / Debian LAMP Stable 8.X
–– Red Hat Minimal 7.x / Red Hat LAMP 7.x
–– Red Hat Minimal 6.x / Red Hat LAMP 6.x
–– Ubuntu Minimal 18.04-LTS / Ubuntu LAMP 18.04-LTS
–– Ubuntu Minimal 16.04-LTS / Ubuntu LAMP 16.04-LTS
–– Ubuntu Minimal 14.04-LTS / Ubuntu LAMP 14.04-LTS

Can own images be uploaded? Yes for all seven providers.
Supported formats:
–– AWS: OVA file, VMDK, VHD, RAW
–– Azure: VHD / VHDX
–– Google Cloud Platform: VMDK, VHDX, VPC, RAW
–– SysEleven Stack: ISO, QCOW2, RAW, VDI, AKI, ARI, AMI
–– OTC: VHD, VMDK, VHDX, QCOW2, AKI, KRI, AMI
–– IBM Cloud: VHD, VMDK, QCOW2

Can existing licenses be used to minimize costs? Yes for all providers except Noris Cloud (n/a).

Is there an image build service? Yes for all seven providers.
Supported formats:
–– AWS: OVA file, VMDK, VHD
–– Azure: VHD / VHDX
–– Google Cloud Platform: VMDK, VHDX, VPC, VDI
–– SysEleven Stack: ISO, QCOW2, RAW, VDI, AKI, ARI, AMI, OVA, DOCKER
–– OTC: VHD, VMDK, VHDX, QCOW2, AKI, KRI, AMI
–– Noris Cloud: ISO, PLOOP, QCOW2, RAW, VHD, VMDK
–– IBM Cloud: VHD, VMDK, QCOW2, RAW

Can images be created from existing cloud instances? Yes for all seven providers.

Are different patch levels of images available? Yes for all seven providers.


Software as a Service / Applications

Providers compared: AWS, Azure, Google Cloud Platform, SysEleven Stack, OTC, Noris Cloud, IBM Cloud

Is an office suite offered? AWS: no; Azure: yes; Google Cloud Platform: yes; OTC: yes; IBM Cloud: yes
Is it deeply integrated with other services? AWS: n/a; Azure: no; Google Cloud Platform: yes; OTC: no; IBM Cloud: yes

Managed App Services
–– AWS: AWS Step Functions; Amazon API Gateway; Amazon Elastic Transcoder; Amazon SWF
–– Azure: Azure Stack; Security and Compliance; Backups and Archives; Disaster Recovery; Cosmos DB; Networks; Active Directory Services; Development and Testing Services; Mobile Services
–– Google Cloud Platform: Google App Engine; GSuite; Workspace
–– OTC: Distributed Message Service; Simple Message Notification
–– IBM Cloud: Mobile Foundation; AppId; Mobile Analytics; Push Notifications

Mobile App Services (AWS Mobile; Azure Mobile App Service; Google Firebase / App Engine; IBM Mobile Foundation)
–– Push Notifications: yes for all four
–– User Management: yes for all four
–– NoSQL database: yes for all four
–– File Storage: yes for all four
–– Messaging: yes for all four
–– Social Networks: AWS: no; Azure: yes; Google: yes; IBM: yes

Application Environments
–– Websites: AWS: yes (AWS Lightsail); Azure: yes (Azure Web Sites); Google Cloud Platform: no; IBM Cloud: no
–– Microservices: AWS: yes (AWS Elastic Beanstalk); Azure: yes (Azure Service Fabric); Google Cloud Platform: yes (App Engine); IBM Cloud: yes
–– Messaging: AWS: yes (AWS SQS); Azure: yes (Azure Service Bus); Google Cloud Platform: yes (Cloud Pub/Sub); IBM Cloud: yes (IBM Message Hub)
–– Serverless: AWS: yes (AWS Lambda); Azure: yes (Azure Functions); Google Cloud Platform: yes (Cloud Functions); IBM Cloud: yes (Cloud Functions)
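The serverless row can be made concrete with a minimal function in the style AWS Lambda expects: a plain handler that receives an event and a context object. The event payload here (a "name" field) is invented for illustration; Azure Functions, Cloud Functions and IBM Cloud Functions use analogous entry points.

```python
# Minimal AWS-Lambda-style handler: the platform invokes handler(event, context).
# The "name" field in the event is a made-up example payload.
import json

def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation for a quick check; on the platform the runtime
# supplies the event and context arguments.
print(handler({"name": "cloud report"}, None))
```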

Rollback to a previous application version? Yes for AWS, Azure, Google Cloud Platform and IBM Cloud.

Monitoring

Providers compared: AWS, Azure, Google Cloud Platform, SysEleven Stack, OTC, Noris Cloud, IBM Cloud

Dashboard: yes for all seven providers.

Which cloud resources are monitored?
–– VMs: yes for all seven providers
–– Apps: yes for all providers except SysEleven Stack (no)
–– Network: yes for all seven providers
–– LoadBalancer: yes for all seven providers
–– Storage: yes for all providers except SysEleven Stack (no)

Connection/usage of external monitoring solutions: AWS: yes; Azure: yes; Google Cloud Platform: yes; SysEleven Stack: no; OTC: no; Noris Cloud: no; IBM Cloud: yes

