Indexes for Distributed File/Storage Systems as a Large Scale Virtual Machine Disk Image Storage in a Wide Area Network


Keiichi Shima, IIJ Innovation Institute, Chiyoda-ku, Tōkyō 101-0051, Japan. Email: [email protected]
Nam Dang, Tokyo Institute of Technology, Meguro-ku, Tōkyō 152-8550, Japan. Email: [email protected]

Abstract—In this paper, we show throughput measurement results of I/O operations of Ceph, Sheepdog, GlusterFS, and XtreemFS when they are used as virtual disk image stores in a large scale virtual machine hosting environment. When these systems are used as the backend storage infrastructure of a virtual machine hosting environment, we need evaluation indexes other than raw I/O performance, since the number of machines is huge and the latency between machines and the storage infrastructure may be longer in such an environment. The simple I/O performance measurements show that GlusterFS and XtreemFS perform better in read operations than Ceph and Sheepdog, even though the latter two are supported natively by the virtualization software (QEMU/KVM). For write operations, Sheepdog performed better than the others. This paper introduces two indexes, ParallelismImpactRatio and LatencyImpactRatio, to evaluate robustness against the number of virtual machines in operation and against network latency. We found that GlusterFS and XtreemFS are more robust than Ceph and Sheepdog against both an increasing number of virtual machines and increasing network latency.

I. BACKGROUND

The hosting service is one of the major Internet services and has been provided by many Internet service providers for a long time. The service originally used physical computers located in a datacenter to host the services of customers; however, virtualization technology has changed the hosting infrastructure drastically. Many hosting services, except some mission critical services or services which require high performance, now provide their services using virtual host computers.

The important point of virtualization in service operation is the efficiency of multiplexing computing resources. We want to run as many virtual machines as possible as long as we can keep the service level agreement. Virtual machine migration technology contributes to this goal. If we have multiple virtual machines which do not consume many resources spread over multiple physical machines, we can aggregate those virtual machines onto a smaller number of physical machines using live migration technology without stopping them.

Usually, such a migration operation is utilized within a single datacenter only; however, considering the total efficiency of service provider operations, we need a global resource mobility strategy among multiple datacenters ([1], [2], [3]). One of the essential technologies to achieve this goal is a storage system for virtual machines. When a virtual machine moves from one physical computer to another, both physical computers must keep providing the same virtual disk image to the virtual machine. We typically use NFS or iSCSI technologies for this at the moment; however, it is difficult to design a redundant and robust storage infrastructure for wide area service operation with them. Recent research papers show that distributed storage management systems ([4], [5], [6], [7]) are becoming mature. We are now at the stage where we can start considering other choices when designing a virtual storage infrastructure.

When considering distributed storage systems as backend storage for virtual machines, focusing only on I/O performance is not enough. We need evaluation indexes which are suitable for the virtualization environment.

In this paper, we investigate the possibility and potential of recent distributed file/storage systems as virtual machine backend storage systems and evaluate their I/O throughput in a realistic large scale storage infrastructure which consists of 88 distributed storage nodes. We then define two evaluation indexes for the virtualization environment: ParallelismImpactRatio and LatencyImpactRatio. Using them, we evaluate the impact on throughput caused by the number of virtual machines running in parallel and by the network latency, assuming that virtual machines are operated in multiple geographically distributed datacenters.
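The precise definitions of the two indexes are given later in the paper and are not part of this excerpt. The sketch below is only a plausible formalization, assuming that ParallelismImpactRatio compares throughput with n parallel virtual machines against single-VM throughput, and that LatencyImpactRatio compares throughput under added network latency against throughput at the baseline latency; the paper's actual definitions may differ.

    # Hypothetical helpers computing the two robustness indexes from measured
    # throughputs. These ratios are an assumed formalization for illustration;
    # they are not taken from the paper.

    def parallelism_impact_ratio(throughput_single_vm: float,
                                 throughput_n_vms: float) -> float:
        """Assumed: per-VM throughput with n parallel VMs divided by the
        throughput observed with a single VM (1.0 = no degradation)."""
        return throughput_n_vms / throughput_single_vm

    def latency_impact_ratio(throughput_baseline: float,
                             throughput_with_latency: float) -> float:
        """Assumed: throughput under added network latency divided by the
        throughput at the baseline latency (1.0 = no degradation)."""
        return throughput_with_latency / throughput_baseline

    if __name__ == "__main__":
        # Hypothetical numbers in MB/s, not taken from the paper.
        print(parallelism_impact_ratio(throughput_single_vm=95.0,
                                       throughput_n_vms=40.0))    # ~0.42
        print(latency_impact_ratio(throughput_baseline=95.0,
                                   throughput_with_latency=60.0))  # ~0.63

Under this assumed form, a value close to 1.0 means the storage system is robust against parallelism or latency, which matches how the paper uses the indexes to compare the four systems.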
II. TARGET SYSTEMS

A. Virtualization

The virtual machine hosting environment we used in this measurement was QEMU ([8], [9]) and KVM ([10], [11]). The host operating system was Linux (Ubuntu 12.04.1 LTS) and the version of the QEMU/KVM software was 1.0 (the kvm package provided for the Ubuntu 12.04.1 LTS system).

B. File/Storage Systems

We selected the following four stores for this experiment, with the first three belonging to the hash-based distribution family and the last one belonging to the metadata-based family. The reason for this choice is that these four systems are well known and their implementations are publicly available. We focus on the performance of real implementations in this paper.

• Ceph
Ceph ([4], [12]) is a distributed object-based storage system designed to be POSIX-compatible and highly distributed without a single point of failure. In March 2010, Ceph was merged into Linux kernel 2.6.34, enabling easy integration with popular Linux distributions, although the software is still under development. Ceph provides multiple interfaces to access the data, including a file system, rbd, RADOS [13], and a C/C++ binding. In this experiment, instead of the file system interface, we utilized the rbd interface, which is also supported by QEMU. The rbd interface relies on CRUSH [14], a variant of Replication Under Scalable Hashing (RUSH) [15] with support for uniform data distribution according to device weights.

• Sheepdog
Sheepdog ([5], [16]) is a distributed object-based storage system specifically designed for QEMU. Sheepdog utilizes a cluster management framework such as Corosync [17] or Apache ZooKeeper [18] for easy maintenance of storage node grouping. Data distribution is based on the consistent hashing algorithm (see the illustrative sketch at the end of this section). QEMU supports Sheepdog's object interface natively (the sheepdog method), similar to Ceph's rbd interface.

• GlusterFS
GlusterFS ([6], [19]) is a distributed filesystem based on a stackable user space design. It relies on consistent hashing to distribute data based on the file name. The data is stored on disk using the native formats of the backend storage node. One of the interesting features of GlusterFS is that metadata management is fully distributed; therefore, there are no special nodes for metadata management. GlusterFS provides POSIX semantics to its client nodes.

• XtreemFS
XtreemFS ([7], [20]) is an open source object-based distributed file system for wide area networks. The file system replicates objects for fault tolerance and caches data and metadata to improve performance over high-latency links. XtreemFS relies on a centralized point for metadata management and for data access. The file system provided by XtreemFS is guaranteed to have POSIX semantics even in the presence of concurrent access.

TABLE I summarizes the properties of the target systems.
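Both Sheepdog and GlusterFS place data by hashing rather than by consulting a central metadata service. The sketch below is not taken from either implementation; it is a minimal, self-contained illustration of consistent hashing with hypothetical node names and object keys, showing how an object key maps to a storage node and why adding or removing a node only remaps a small fraction of objects.

    # Minimal consistent-hashing ring, for illustration only. Node names, the
    # number of virtual points, and object keys are hypothetical; neither
    # Sheepdog nor GlusterFS is implemented exactly this way.
    import bisect
    import hashlib

    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class ConsistentHashRing:
        def __init__(self, nodes, vnodes=64):
            # Each physical node is mapped to many points on the ring so that
            # objects spread evenly across nodes.
            self._ring = sorted((_hash(f"{n}#{i}"), n)
                                for n in nodes for i in range(vnodes))
            self._keys = [h for h, _ in self._ring]

        def node_for(self, obj_key: str) -> str:
            # An object is stored on the first node found clockwise from the
            # object's own position on the ring.
            idx = bisect.bisect(self._keys, _hash(obj_key)) % len(self._ring)
            return self._ring[idx][1]

    ring = ConsistentHashRing([f"k{n:03d}" for n in range(12, 20)])
    print(ring.node_for("vm-disk-image-0042"))  # e.g. "k017"

Because only the ring positions owned by a joining or leaving node change ownership, hash-based systems of this family can grow or shrink the storage cluster without rewriting the placement of most existing data.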
III. TESTBED ENVIRONMENT

We utilized StarBED [21], which is operated by the National Institute of Information and Communications Technology (NICT), Japan. StarBED provides more than 1000 server nodes interconnected via 1Gbps switches, which can be reconfigured into topologies suitable for our experiment plans. More information is available from the web page.

In this experiment, we used 89 server nodes to build the distributed file/storage system infrastructure. The server specification is shown in TABLE II. The upstream switch of the servers was a Brocade MLXe-32.

TABLE II
THE SPECIFICATION OF SERVER NODES
Model: Cisco UCS C200 M2
CPUs: Intel(R) Xeon(R) CPU X5670 @ 2.93GHz × 2
Cores: 12 (24 when Hyper-Threading is on)
Memory: 48GB
HDD: SATA 500GB × 2
NIC 0, 1, 2, 3: Broadcom BCM5709 Gigabit Ethernet
NIC 4, 5: Intel 82576 Gigabit Ethernet

IV. LAYOUT OF SERVICE ENTITIES

A. Measurement Management

Fig. 1 shows the layout of the service entities of each distributed file/storage system. The measurement topology consisted of 89 nodes.

[Fig. 1. Node layout of distributed file/storage systems. The figure shows the Management Network (172.16.23.0/24) and the Experiment Network (10.1.0.0/24) connecting nodes k011 through k100, with the following per-system configurations: GlusterFS — 3 replicas × 2 stripes; cluster k012–k019, k021–k100; non-brick holders k040, k060, k080, k100; each KVM host connected to a local GlusterFS entity providing virtual machine storage as a filesystem. Ceph — 2 replicas; object storage nodes k012–k019, k021–k100; metadata servers k012, k014, k016, k018; monitor servers k013, k015, k017, k019; KVM directly uses Ceph storage as virtual machine storage. XtreemFS — 3 replicas; directory server k013; metadata and replica catalog k012; each KVM host connected to a local XtreemFS entity providing virtual machine storage as a filesystem. Sheepdog — 3 replicas; ZooKeeper on k012–k019.]

The k011 node was used as a controller node from which we sent measurement operation commands. k011 was also used as an NFS server to provide shared space for recording the measurement results taken on each server node. All the traffic to control measurement scenarios and all the NFS traffic were exchanged over the Management Network (the left hand side network of Fig. 1) to avoid any effect on the measurement.

B. Distributed File/Storage Systems

All the nodes were attached to the Experiment Network using a BCM5709 network interface port (the right hand side network of Fig. 1).
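Section IV-A describes a controller-driven measurement: k011 issues commands over the Management Network, and each node records its results on an NFS share exported by k011. The paper's actual measurement scripts and benchmark tool are not included in this excerpt; the sketch below only illustrates that control pattern, assuming password-less ssh from the controller, a dd-based sequential-write probe, and two hypothetical paths, /mnt/vmstore (the storage backend under test, mounted on each node) and /mnt/results (the NFS share from k011).

    # Illustrative controller-side driver, not the paper's actual tooling.
    # Assumes password-less ssh from k011 over the Management Network and the
    # hypothetical mount points /mnt/vmstore and /mnt/results on every node.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # The 88 storage nodes of Fig. 1: k012-k019 and k021-k100 (k011 is the controller).
    NODES = [f"k{n:03d}" for n in range(12, 101) if n != 20]

    def run_write_probe(node: str) -> None:
        # dd writes 1 GiB sequentially and reports throughput on stderr; the
        # paper's real benchmark, block size, and transfer size are not
        # specified in this excerpt.
        remote_cmd = (
            f"dd if=/dev/zero of=/mnt/vmstore/{node}.img bs=1M count=1024 oflag=direct "
            f"2> /mnt/results/{node}.write.log"
        )
        subprocess.run(["ssh", node, remote_cmd], check=True)

    with ThreadPoolExecutor(max_workers=16) as pool:
        list(pool.map(run_write_probe, NODES))

Keeping the ssh control channel and the NFS result traffic on the Management Network, as the paper does, ensures that the probe traffic on the Experiment Network is the only load seen by the storage backend under test.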
