OpenStack Day 2018

ISTI-CNR, NeMIS laboratory, InfraScience group
InfraScience Data Center, OpenStack Day 2018
Andrea Dell’Amico <[email protected]>, Sep 21st, 2018

From pure PV Xen and AoE to OpenStack and Ceph: a journey

D4Science, https://www.d4science.org
Integrated technologies that provide elastic access to, and usage of, data and data-management capabilities.
• Virtual Research Environments (VREs) that give access to multiple services
• Data discovery, access, analysis, and transformation into standard formats
• Powered by gCube: https://www.gcube-system.org/

OpenAIRE, https://www.openaire.eu
• Operates a pan-European (and global) network for Open Science, giving access to articles and research data across countries and across research communities
• Defines and disseminates guidelines for sharing scholarly products and the links between them
• Provides services for populating, and making available to the public, an information graph of interlinked scholarly entities
• Provides services for assessing the research impact of funders (the Commission in primis) and for monitoring open access trends
• Powered by D-Net: http://www.d-net.research-infrastructures.eu/

InfraScience Data Center, 2015: the infrastructure when the project started

Virtualization servers, 2015
27 servers hosting virtual machines, all running PV Xen:
• Circa 250 Xen PV VMs
• Most of the VMs run on the newer and bigger servers
• The older servers did not support hardware virtualization

Storage, 2015
10 servers used as a storage area network (SAN):
• Block devices exported via ATA over Ethernet (AoE)
• Some block devices locally redundant using software RAID
• Some block devices with no redundancy

InfraScience Data Center, early 2018

Virtualization servers, early 2018
19 servers hosting Xen-based virtual machines, all running PV Xen:
• More than 400 Xen PV VMs
• 4.8 TB of RAM, 960 CPU cores
5 servers hosting oVirt (KVM) based virtual machines:
• They host the corporate services
• Hyperconverged setup: Gluster and the hypervisors run on the same hosts
• The oVirt Manager runs on top of Gluster too

AoE storage, early 2018
11 servers used as a storage area network (SAN):
• Block devices exported via ATA over Ethernet (AoE)
• Some block devices locally redundant using software RAID
• Some block devices with no redundancy
• Network split into two bonding groups

InfraScience infrastructure, early 2018 (continued)
2 OpenStack regions on the GARR cluster:
• 100 VMs
• Local instances of LDAP, DNS resolver, and Prometheus
• 1.3 TB of RAM in each region
• 650 CPU cores in each region

Migration to OpenStack and Ceph

Resources reserved to the migration experiment when we started
• OpenStack resources: 3 physical hosts reserved to OpenStack
• Ceph resources: spare disks on the current SANs, to be used as Ceph OSDs

Ceph storage
A 7-server Ceph cluster:
• Based on Luminous
• Block devices (see the RBD sketch below)
• Object storage (testing)
• POSIX FS (testing not started yet)
• Almost half of the disks still used by AoE
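Since the migration relies on the block device (RBD) side of this cluster, here is a minimal sketch of how a pool and an image can be created on a Luminous cluster. It is an illustration only, not the exact setup used here: the pool name volumes, the image name test-volume, the placement-group count and the client.cinder user are all made-up examples.

```bash
# Minimal RBD sketch for a Luminous cluster. Assumptions: the ceph CLI runs on a
# node holding the admin keyring; pool/image/user names and the PG count are examples.

# Create a replicated pool and tag it for the rbd application
# (Luminous requires an application tag on new pools).
ceph osd pool create volumes 128
ceph osd pool application enable volumes rbd

# Create a 10 GiB block device image in the pool and inspect it
rbd create --size 10G volumes/test-volume
rbd info volumes/test-volume

# A restricted key that an OpenStack service (e.g. Cinder) could use for this pool
ceph auth get-or-create client.cinder \
    mon 'profile rbd' osd 'profile rbd pool=volumes'
```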
OpenStack: software choices
• RDO as the OpenStack distribution
• Ceph as storage
• Global GUI cloud management using ManageIQ

First steps, based on PackStack
• Install PackStack
• Configure the private network
• Configure the floating IP network
• Add a compute node
• Repeat

TripleO quickstart
• Test the installation
• Experiment with the configuration options
• Test deploying the overcloud
• Add Nagios checks

TripleO production installation
• Installed on bare metal on three physical hosts: undercloud, overcloud controller, and overcloud compute node (a minimal command-line sketch follows)
• A fourth node, for block storage, installed on a Ceph node
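As a rough illustration of the TripleO flow on three hosts, the sketch below walks through undercloud installation, node registration and a small overcloud deployment. It is not the exact production procedure: the package and file names are RDO/Queens-era defaults, ~/instackenv.json and the scale counts are examples, and the environment file that points the overcloud at the pre-existing Ceph cluster depends on the release.

```bash
# Minimal TripleO sketch, not the exact production procedure. Assumptions:
# a CentOS 7 undercloud host with the RDO repositories enabled, and the
# bare-metal nodes described in ~/instackenv.json; names and counts are examples.

# 1. Install the undercloud on the first physical host
sudo yum -y install python-tripleoclient
cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
openstack undercloud install

# 2. Register and introspect the bare-metal nodes that will become the overcloud
source ~/stackrc
openstack overcloud node import ~/instackenv.json
openstack overcloud node introspect --all-manageable --provide

# 3. Deploy a small overcloud (one controller, one compute node), attached to an
#    existing external Ceph cluster; the exact environment file depends on the release
openstack overcloud deploy --templates \
    --control-scale 1 --compute-scale 1 \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml
```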
Starting the migration
The first candidates are VMs that:
• Can be reprovisioned from scratch
• Have no significant local storage
• Are already redundant (behind a load balancer)

Hardware migration
• When a hypervisor has been emptied of its old VMs, convert it to a compute node
• Start the migration of a new batch of VMs
Still a work in progress: some VMs are too old to be migrated, so a couple of Xen hypervisors will remain operational until those VMs can be dismissed.

OpenStack upgrade
• Not tested yet
• Plan: install a new TripleO undercloud and test on it

Monitoring
• Nagios checks and Prometheus exporters on the VMs (a minimal exporter sketch follows)
• Plan: integrate Nagios, and hopefully Prometheus, with the OpenStack components
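As an example of the per-VM monitoring mentioned above, here is a minimal sketch of running the Prometheus node exporter on a VM and registering it as a scrape target. The exporter version, the host names and the Prometheus configuration path are assumptions for the example, not the values used in this infrastructure.

```bash
# Minimal node_exporter sketch. Assumptions: a Linux VM, a Prometheus server with
# its configuration in /etc/prometheus/prometheus.yml whose last section is
# scrape_configs; the version and host names below are examples only.

# On the VM: install and start the node exporter (listens on port 9100 by default)
curl -LO https://github.com/prometheus/node_exporter/releases/download/v0.16.0/node_exporter-0.16.0.linux-amd64.tar.gz
tar xzf node_exporter-0.16.0.linux-amd64.tar.gz
sudo cp node_exporter-0.16.0.linux-amd64/node_exporter /usr/local/bin/
sudo /usr/local/bin/node_exporter &   # a systemd unit would be used in practice

# On the Prometheus server: append the VM as a scrape target and reload
sudo tee -a /etc/prometheus/prometheus.yml > /dev/null <<'EOF'
  - job_name: 'infrascience-vms'
    static_configs:
      - targets: ['vm1.example.org:9100']
EOF
sudo systemctl reload prometheus   # or send SIGHUP to the prometheus process
```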
