Analyzing Performance in the Cloud: Solving an Elastic Problem with a Scientific Approach


Analyzing Performance in the Cloud: Solving an Elastic Problem with a Scientific Approach
Nicholas Wakou (Dell EMC), Alex Krzos (Red Hat)
Thursday, October 27, 2016, OpenStack Summit Barcelona 2016

Presenters
Nicholas Wakou is a Principal Performance Engineer with the Dell EMC Open Source Solutions team. [email protected]
Alex Krzos is a Senior Performance Engineer at Red Hat working on OpenStack. [email protected]
https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16204/analyzing-performance-in-the-cloud-solving-an-elastic-problem-with-a-scientific-approach

Agenda
➢ CLOUD DEFINITION & CHARACTERISTICS
➢ PERFORMANCE MEASURING TOOLS
➢ SPEC CLOUD IaaS 2016 BENCHMARK
➢ PERFORMANCE MONITORING TOOLS
➢ PERFORMANCE CHARACTERIZATION
➢ TUNING TIPS

CLOUD DEFINITION & CHARACTERISTICS

DEFINING A CLOUD: NIST SPECIAL PUBLICATION 800-145
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf

CLOUD CHARACTERISTICS

PERFORMANCE MEASURING TOOLS

RALLY: OpenStack benchmarking tool
➢ Runs as-an-App and as-a-Service
➢ Verification
➢ Benchmarking
➢ Profiling
➢ Reports
➢ SLAs for benchmarks
➢ Many plugins
Source: What is Rally?, https://rally.readthedocs.io/en/latest/

PERFKIT BENCHMARKER: open-source, living benchmarking framework containing a set of benchmarks used to compare cloud offerings/environments
➢ 10+ cloud providers/environments
➢ 34+ benchmarks
➢ Large community involvement
➢ Captures cloud elasticity with benchmark results
➢ Uses cloud/environment CLI tooling
➢ Publishes results to BigQuery for comparison
Source: Introduction to PerfKit Benchmarker and How to Extend It, https://github.com/GoogleCloudPlatform/PerfKitBenchmarker/wiki/Tech-Talks

PERFKIT EXPLORER: dashboarding and performance-analysis tool for PerfKit Benchmarker results
➢ Multiple chart options
➢ Uses BigQuery as the backend data store
➢ Hosted in Google App Engine
Source: https://github.com/GoogleCloudPlatform/PerfKitExplorer

CLOUDBENCH
➢ Framework that automates cloud-scale evaluation and benchmarking
➢ Benchmark harness
▪ Requests the Cloud Manager to create instance(s)
▪ Submits a configuration plan and steps to the Cloud Manager describing how the test will be performed
▪ At the end of the test, collects and logs applicable performance data and logs
▪ Destroys instances no longer needed for the test

HARNESS AND WORKLOAD CONTROL
The benchmark harness comprises CloudBench (CBTOOL), baseline/elasticity drivers, and report generators; in the architecture diagram, each group of boxes represents an application instance running on the cloud SUT. For white-box clouds the benchmark harness is outside the SUT; for black-box clouds it can be in the same location or campus. (A minimal driver sketch that ties the tools above together is shown after this section.)
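The slides above describe the harness pattern (provision, run workloads, collect, report) in terms of each tool's CLI. The sketch below is a hedged, minimal illustration of that pattern in Python: it shells out to Rally for a control-plane scenario and to PerfKit Benchmarker for a data-plane benchmark. The task file name `boot-and-delete.yaml`, the `m1.small` flavor, and the exact flag spellings are assumptions to be checked against the tool versions you run.

```python
"""Minimal harness sketch: provision, run, report.

Hedged example only: it assumes Rally and PerfKit Benchmarker are installed
and configured for your OpenStack cloud, that `boot-and-delete.yaml` is a
Rally task file you have written, and that the flavor `m1.small` exists.
"""
import subprocess


def run(cmd):
    """Echo a command and run it, failing loudly on a non-zero exit."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def control_plane_pass():
    # Rally exercises the OpenStack control plane (boot/delete style scenarios)
    # and can render an HTML report of the task it just ran.
    run(["rally", "task", "start", "boot-and-delete.yaml"])
    run(["rally", "task", "report", "--out", "rally-report.html"])


def data_plane_pass():
    # PerfKit Benchmarker provisions its own instances, runs the workload,
    # collects the results, and tears the instances down again.
    run(["./pkb.py",
         "--cloud=OpenStack",         # provider name as PKB expects it (assumed)
         "--benchmarks=iperf",        # network data-plane workload
         "--machine_type=m1.small"])  # flavor to boot (assumed to exist)


if __name__ == "__main__":
    control_plane_pass()
    data_plane_pass()
```

Tools such as Browbeat (next section) wrap exactly this kind of orchestration, plus metrics collection and dashboards, so a hand-rolled driver like this is only needed for small ad-hoc experiments.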
BROWBEAT: orchestration tool for existing OpenStack workloads
➢ Combines workloads, metrics, and results into a single tool
➢ Runs performance workloads:
▪ Rally: control plane
▪ Rally plugins & Rally+pbench plugins: control + data plane
▪ Shaker: network data plane
▪ PerfKitBenchmarker: data plane + cloud elasticity
➢ Provides performance-infrastructure installation and configuration for:
▪ Carbon/Graphite/Grafana
▪ Collectd
▪ ELK
▪ FluentD
➢ Provides dashboards for visualizing and comparing results and system performance metrics

BROWBEAT - RESULTS
BROWBEAT - METRICS

SPEC CLOUD IAAS 2016 BENCHMARK
➢ Measures the performance of Infrastructure-as-a-Service (IaaS) clouds
➢ Measures both the control and the data plane
▪ Control: management operations, e.g., instance provisioning time
▪ Data: virtualization, network performance, runtime performance
➢ Uses workloads that
▪ resemble "real" customer applications
▪ benchmark the cloud, not the application
➢ Produces metrics ("elasticity", "scalability", "provisioning time") that allow comparison
SPEC Cloud IaaS Benchmarking: Dell Leads the Way
http://en.community.dell.com/techcenter/cloud/b/dell-cloud-blog/archive/2016/06/24/spec-cloud-iaas-benchmarking-dell-leads-the-way

SCALABILITY AND ELASTICITY ANALOGY: climbing a mountain
Scalability
• Mountain: keep on climbing
• Cloud: keep on adding load without errors
Elasticity
• Mountain: each step takes identical time
• Cloud: performance stays within limits as load increases
(Figure annotations: elasticity is the time taken for each step; scalability is conquering an infinitely high mountain.)

WHAT IS MEASURED?
➢ Measures the number of AIs (application instances) that can be loaded onto a cluster before SLA violations occur
➢ Measures the scalability and elasticity of the Cloud under Test (CuT)
➢ Not a measure of instance density
➢ SPEC Cloud workloads can individually be used to stress the CuT:
▪ KMeans: CPU/memory
▪ YCSB: I/O

SPEC CLOUD BENCHMARK PHASES
Baseline phase
▪ Determine the baseline results for a single application instance of each workload (KMeans baseline AI, YCSB baseline AI)
▪ AI = stream of 5 runs
Elasticity phase
▪ Determine the cloud's elasticity and scalability results when multiple workloads are run

BENCHMARK STOPPING CONDITIONS
➢ 20% of AIs fail to provision
➢ 10% of AIs have errors in any run
➢ The maximum number of AIs set by the cloud provider is reached
➢ 50% of AIs have QoS violations:
▪ KMeans completion time ≤ 3.33 x baseline completion time
▪ YCSB throughput ≥ baseline throughput / 3
▪ YCSB read response time ≤ 20 x baseline read response time
▪ YCSB insert response time ≤ 20 x baseline insert response time
(A small script that applies these thresholds is sketched at the end of this section.)

HIGH LEVEL REPORT SUMMARY

PUBLISHED RESULTS WEBSITE
https://www.spec.org/cloud_iaas2016/results/cloudiaas2016.html

PERFORMANCE MONITORING TOOLS

CEILOMETER: another familiar OpenStack project
➢ https://wiki.openstack.org/wiki/Telemetry
➢ Goal is to efficiently collect, normalize, and transform data produced by OpenStack services
➢ Interacts directly with the OpenStack services through defined interfaces
➢ Applications can leverage Ceilometer to gather OpenStack performance data
Source: http://docs.openstack.org/developer/ceilometer/architecture.html

COLLECTD/GRAPHITE/GRAFANA
➢ Collectd
▪ Daemon that collects system performance statistics
▪ Plugins for CPU, memory, disk, network, processes, ...
➢ Graphite/Carbon
▪ Carbon receives metrics and flushes them to Whisper database files
▪ Graphite is the web-app frontend to Carbon
➢ Grafana
▪ Visualizes metrics from multiple backends
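As promised under the stopping conditions, the following is a minimal sketch of how those four rules could be applied to elasticity-phase results. It is not the SPEC Cloud kit itself: the dictionary field names (`kmeans_time`, `ycsb_ops`, `provision_failed`, ...) are illustrative rather than the official report schema, and whether the comparisons are strict or inclusive should be taken from the benchmark's run rules. Only the thresholds listed on the slide above are used.

```python
"""Apply the SPEC Cloud IaaS 2016 stopping conditions to elasticity-phase results."""
from dataclasses import dataclass


@dataclass
class Baseline:
    kmeans_time: float     # seconds: baseline KMeans completion time
    ycsb_ops: float        # ops/s:  baseline YCSB throughput
    ycsb_read_ms: float    # ms:     baseline YCSB read response time
    ycsb_insert_ms: float  # ms:     baseline YCSB insert response time


def violates_qos(ai, base):
    """True if one application instance breaks any of the four QoS limits."""
    return (ai["kmeans_time"] > 3.33 * base.kmeans_time
            or ai["ycsb_ops"] < base.ycsb_ops / 3
            or ai["ycsb_read_ms"] > 20 * base.ycsb_read_ms
            or ai["ycsb_insert_ms"] > 20 * base.ycsb_insert_ms)


def should_stop(ais, base, max_ais):
    """True if any of the four benchmark stopping conditions is met."""
    n = len(ais)
    failed = sum(bool(a.get("provision_failed")) for a in ais)
    errored = sum(a.get("run_errors", 0) > 0 for a in ais)
    violated = sum(violates_qos(a, base)
                   for a in ais if not a.get("provision_failed"))
    return (failed >= 0.20 * n        # 20% of AIs fail to provision
            or errored >= 0.10 * n    # 10% of AIs have errors in any run
            or n >= max_ais           # provider-set maximum number of AIs reached
            or violated >= 0.50 * n)  # 50% of AIs violate QoS
```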
GANGLIA
Ganglia is a scalable, distributed monitoring system for high-performance computing systems such as server nodes, clusters, and grids.
- Relatively easy to set up
- Tracks many hardware-centric metrics
- Low operational burden

PERFORMANCE CHARACTERIZATION

PROVISIONING TIME: SPEC CLOUD
➢ The time needed to bring up a new instance, or to add more resources (such as CPU or storage) to an existing instance
➢ Instance: the time FROM the request to create a new instance TO the time when the instance responds to a netcat probe on port 22 (a timing sketch appears at the end of these notes)
➢ Application instance: the time FROM the request to create a new instance TO the time when the AI reports readiness to accept client requests
➢ Provisioning-time characterization using the Baseline phase:
▪ Increase the number of VMs (vary YCSB seeds and/or KMeans Hadoop slaves) and note the impact on provisioning time
▪ Vary the instance configuration (flavor)

IO LIMITS
PCI-E limits (for PCI-E Gen-3 capable slots): http://www.tested.com/tech/457440-theoretical-vs-actual-bandwidth-pci-express-and-thunderbolt/
SAS limit: LSI whitepaper, Switched SAS: Sharable, Scalable SAS Infrastructure, http://www.abacus.cz/prilohy/_5025/5025548/SAS_Switch_White%20Paper_US-EN_092210.pdf

NETWORK/IO CHARACTERIZATION
➢ SPEC Cloud YCSB baseline tests: throughput (ops/s)
▪ Vary the number of seeds
▪ Increase the number of YCSB records and operations
▪ Increase the number of YCSB threads
➢ CloudBench fio
➢ CloudBench netperf
➢ Understand network utilization under load
▪ Management networks
▪ Data networks (Neutron tenant)
➢ Monitor with Ganglia, collectd, and Linux tools (vmstat, iostat, etc.)

CPU CHARACTERIZATION
➢ Understand CPU utilization under load
➢ Monitor with Ganglia, collectd, Grafana
➢ Linux tools (top, vmstat), SPEC Cloud KMeans
Note:
✓ CPU user time
✓ CPU system time
✓ CPU iowait time
✓ CPU irq time
➢ Use SPEC Cloud baseline tests for CPU characterization
▪ Vary the number of Hadoop slaves
▪ Increase the sample size, number of dimensions, and number of clusters

SCALABILITY/ELASTICITY
➢ Understand the scalability/elasticity of the CuT
➢ SPEC Cloud Elasticity phase
➢ Vary the number of AIs
➢ Monitor with the FDR HTML report

TUNING TIPS

HARDWARE/OS TUNING
➢ Latest BIOS and firmware revisions
➢ Appropriate BIOS settings
➢ RAID/JBOD
➢ Disk controller
➢ NIC driver: interrupt coalescing and affinitization
➢ NIC bonding
➢ NIC jumbo frames
➢ OS configuration settings

CLOUD TUNING
▪ HW/OS tuning
▪ Cloud configs/settings
▪ Workload tuning

INSTANCE CONFIGURATION
Performance is impacted by
▪ instance type (flavor)
▪ number of instances

OVER-SUBSCRIPTION
Beware of over-subscription!

LOCAL STORAGE
Use of local storage instead of shared storage (such as Ceph) could improve performance by over 50%, depending on Ceph replication.
Source: OpenStack: Install and configure a storage node, OpenStack Kilo. http://docs.openstack.org/kilo/install-guide/install/yum/content/cinder-install-storage-node.html (2015)

NUMA NODES
Pinning instance CPUs to physical CPUs (NUMA nodes) on local storage further improves performance.
Source: Red Hat: CPU pinning and NUMA topology awareness in OpenStack Compute. http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/ (2015)
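Referring back to the instance provisioning-time definition above (from the create request until the guest answers on port 22), the sketch below shows one way to take that measurement outside the benchmark kit. It is a hedged illustration, not SPEC Cloud's own harness: it assumes an authenticated `openstack` CLI, that the flavor `m1.small`, image `cirros`, and network `private` exist, that the runner can reach the instance's address directly, and that the address output uses the classic `net=ip` format.

```python
"""Time instance provisioning: create request until port 22 answers."""
import socket
import subprocess
import time


def wait_for_ssh_port(host, port=22, timeout=600.0):
    """Poll the port (like `nc -z`) and return seconds until it answers."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        try:
            with socket.create_connection((host, port), timeout=2):
                return time.monotonic() - start
        except OSError:
            time.sleep(1)
    raise TimeoutError(f"{host}:{port} did not open within {timeout}s")


def provision_and_time(name="instance-under-test"):
    t0 = time.monotonic()
    # Boot an instance; flavor/image/network names are placeholders.
    subprocess.run(
        ["openstack", "server", "create",
         "--flavor", "m1.small",
         "--image", "cirros",
         "--network", "private",
         "--wait", name],
        check=True,
    )
    # Look up the address assigned to the instance; the parsing below assumes
    # the classic "net=ip" output format and should be adapted to your client.
    out = subprocess.run(
        ["openstack", "server", "show", "-f", "value", "-c", "addresses", name],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    ip = out.split("=")[-1].split(",")[0]
    wait_for_ssh_port(ip)
    return time.monotonic() - t0


if __name__ == "__main__":
    print(f"provisioning time: {provision_and_time():.1f}s")
```

Repeating this while varying the flavor or the number of concurrent creations gives the provisioning-time characterization described in the slides.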