GMount: Building Ad-hoc Distributed File Systems by GXP and SSHFS-MUX

Vol. 49 No. 4, IPSJ Journal, Apr. 2008 — Regular Paper

Nan Dun,†1 Kenjiro Taura†1 and Akinori Yonezawa†1
†1 Graduate School of Information Science and Technology, the University of Tokyo

GMount is a file sharing utility built with GXP and SSHFS-MUX for wide-area Grid environments. With GMount, non-privileged users can build their own ad-hoc distributed file systems in seconds. File lookup in GMount can be made efficient in terms of communication cost if the network topology is given. In this paper, we present the design and implementation of GMount, as well as its evaluation on a large-scale Grid platform, InTrigger.

1. Introduction

Network file systems1) and conventional distributed file systems2)–5) provide data-sharing approaches for parallel data processing in cluster or Grid environments. They are usually deployed in advance by system administrators, and non-privileged users can hardly change the system configuration to meet the demands of specific applications. For example, users are unable to add or remove storage nodes, or to change the previously assigned global namespace.

GMount is an ad-hoc distributed file system built using GXP and SSHFS-MUX. GMount works in userspace, and any user can easily create his/her own distributed file system on arbitrary nodes across multiple clusters in seconds. Since it uses SSH as its communication channel, GMount remains usable under NAT or firewall conditions. If the network topology is available, GMount becomes more efficient in terms of communication cost because it uses this information to make file lookup behave in a locality-aware manner.

2. Building Blocks

2.1 GXP

GXP (Grid and Cluster Shell)6) is a parallel shell that allows users to interactively execute commands on many remote machines across multiple clusters. It provides a master-worker style parallel processing framework with which users can quickly develop scalable parallel programs. We employ this framework as GMount's workflow and let GXP be the front-end via which users manipulate GMount.

2.2 SSHFS-MUX

2.2.1 Overview

SSHFS (Secure Shell FileSystem)7) enables users to seamlessly manipulate files on remote machines as if they were local, via an SSH channel. SSHFS is implemented using FUSE (Filesystem in Userspace)8) and SFTP (Secure File Transfer Protocol)9). Since recent Linux kernels (version >= 2.6.14) include FUSE by default, the installation and configuration of SSHFS do not require superuser privilege.

SSHFS-MUX (SSHFS Multiplex) is an extension of SSHFS. It retains all the merits of SSHFS and adds extra features that make it more convenient for users to handle multiple remote machines. The difference between SSHFS and SSHFS-MUX can be intuitively illustrated by their usages, shown in Fig. 1.

    A$ sshfs B:/data /mnt/B
    A$ sshfs C:/data /mnt/C
    A$ unionfs /mnt/B /mnt/C /mnt/all
    (a) SSHFS with UnionFS

    A$ sshfsm B:/data C:/data /mnt/all
    (b) SSHFS-MUX Merge Operation

    B$ sshfsm C:/data /inter
    A$ sshfsm B:/inter /mnt
    (c) SSHFS-MUX Relay Operation

Fig. 1 Usages of SSHFS and SSHFS-MUX

SSHFS can mount only one host to a mount point at a time, unless it is combined with UnionFS10) as in Fig. 1(a). In contrast, mounting multiple hosts to the same mount point at once is natural in SSHFS-MUX; this is referred to as Op-Merge in Fig. 1(b). In Op-Merge, a mount target that appears later in the argument list is checked first on a file request. For example, in Fig. 1(b), a file lookup request destined to /mnt/all is redirected to branch C:/data first. Only when the target file is not found in C:/data does the request go next to branch B:/data.
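The later-targets-first lookup order of Op-Merge can be sketched in a few lines of Python. This is a minimal illustrative model under stated assumptions, not SSHFS-MUX's actual implementation: the helper name merge_lookup is hypothetical, and branches are modeled as plain dicts mapping paths to contents.

```python
# Minimal model of SSHFS-MUX Op-Merge lookup order (illustrative only;
# merge_lookup is a hypothetical name, branches are dicts of path -> data).

def merge_lookup(branches, path):
    """Check branches in reverse argument order: the target listed
    last on the sshfsm command line is consulted first."""
    for name, tree in reversed(branches):
        if path in tree:
            return name, tree[path]
    raise FileNotFoundError(path)

# sshfsm B:/data C:/data /mnt/all  -> C:/data is checked first.
branches = [
    ("B:/data", {"a.txt": "from B", "b.txt": "from B"}),
    ("C:/data", {"a.txt": "from C"}),
]

print(merge_lookup(branches, "a.txt"))  # found in C:/data first
print(merge_lookup(branches, "b.txt"))  # not in C:/data, falls back to B:/data
```

The reversed iteration mirrors the behavior described above: a request to /mnt/all goes to the last-listed branch first and only falls through to earlier branches on a miss.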
Another usage scenario of SSHFS-MUX, called Op-Relay, is shown in Fig. 1(c). SSHFS-MUX can mount a target host indirectly, via an intermediate mount point on which the target host has already been mounted. There are two reasons to do so: 1) the load of a target host serving many SFTP clients can be migrated to intermediate hosts; 2) if the target host is behind a NAT router or firewall, we can use a gateway as the intermediate node and let it relay the mount. Meanwhile, Op-Relay introduces overheads, such as context switches in FUSE and potential network delay, that need to be taken into consideration.

2.2.2 SSHFS + Union File System

SSHFS-MUX can be replaced by the combination of SSHFS and a union file system to implement the same functionality, but it provides advantages over SSHFS+unionfs in the following aspects.

First, SSHFS-MUX is easy to use. It saves users from creating intermediate mount points for the SSHFS branches merged by UnionFS. Besides, a kernel-space implementation of a union file system (e.g., UnionFS10)) requires additional privilege to perform the mount operation.

Second, SSHFS-MUX has better performance. To compare the I/O performance of SSHFS+unionfs and SSHFS-MUX, we let them mount localhost, to remove network latency, and performed read/write tests under the same conditions. In this experiment, we chose UnionFS10) as the test candidate for the kernel-space implementation of a union file system and unionfs-FUSE11) for the user-space one.

Figure 2 shows the results. SSHFS-MUX is slightly below SSHFS in read performance and has the best write performance. SSHFS+unionfs-FUSE has the lowest overall performance, since it needs to perform a costly context switch twice through the FUSE kernel module.

[Figure: bar chart of read/write throughput (MB/sec) for SSHFS, SSHFS+UnionFS, SSHFS+unionfs-FUSE, and SSHFS-MUX]
Fig. 2 I/O Performance Comparison between SSHFS+unionfs and SSHFS-MUX

Third, SSHFS-MUX compacts the interaction among components, reduces the deployment effort, and makes GMount easier to debug.

3. Design Details

3.1 Overview

In the GXP master-worker framework, the root node of the GXP login spanning tree becomes the master, and all nodes are workers. Figure 3 demonstrates the three stages of GMount execution.

[Figure: diagram of the GMount workflow]
Fig. 3 GMount Workflow

Planning Stage: The master first gathers information about the workers, such as the total number of nodes, and generates a mount spanning tree (different from the GXP spanning tree) based on the specified algorithm. The master then broadcasts this tree structure, along with the mount flag, to all workers.

Executing Stage: After receiving instructions from the master, the workers (including the master itself) determine their mount phase and mount targets according to their position in the mount spanning tree. When every node is ready, all nodes synchronize their execution using a GXP barrier and then carry out the proper operations (i.e., SSHFS-MUX Op-Merge and Op-Relay) according to the predefined mount algorithms.

Aggregating Stage: Each worker sends its exit status and error messages, if any, back to the master after the mount phase is finished. Finally, the master aggregates the results and reports the success or failure of the GMount execution to the user.

Figure 4 shows the usage of GMount. First, the user selects as many nodes as desired with GXP. Then the user specifies GMount actions (e.g., "aa" for all-mount-all and "u" for umount) with arguments to the GXP mw command.

    $ gxpc explore node[[000-010]].cluster.net
    $ gxpc mw gmnt -a aa /share /mnt
    $ gxpc mw gmnt -a u

Fig. 4 Usages of GMount

In order to facilitate the description, the details of the mount phases are first given in 3.2, and the GMount spanning tree construction algorithms are discussed in 3.3.

3.2 Mount Phase

There are four kinds of mount phase in GMount: one-mounts-all, all-mount-one, all-mount-all, and umount. Throughout this section, we use the simple spanning tree shown in Fig. 5 to illustrate the details of the SSHFS-MUX executions on each host. We assume that each host has three directories: the sharing directory /share, the mount point /mnt, and the intermediate directory /inter.

          A
         / \
        B   C
       / \
      D   E
          |
          F

Fig. 5 Exemplary Spanning Tree

3.2.1 One-Mounts-All Phase

Figure 6 shows the algorithm and Figure 7 shows the executions in the one-mounts-all phase. After the one-mounts-all phase, the root node is able to see all contents in /share of all nodes via its mount point /mnt.

    OneMountsAll
    01 if self is leaf then
    02     noop
    03 else
    04     if self is root then
    05         mount_point ← "/mnt"
    06     else
    07         mount_point ← "/inter"
    08     targets ← ∅
    09     t ← self:/share
    10     targets ← targets ∪ {t}
    11     foreach child in self.children
    12         if child is leaf then
    13             t ← child:/share
    14         else
    15             t ← child:/inter
    16         targets ← targets ∪ {t}
    17     exec("sshfsm targets mount_point")

Fig. 6 Algorithm in the One-Mounts-All Phase

    A$ sshfsm B:/inter C:/share A:/share /mnt
    B$ sshfsm E:/inter D:/share B:/share /inter
    E$ sshfsm F:/share E:/share /inter
    C, D, F$ No Operation

Fig. 7 Exemplary Executions in the One-Mounts-All Phase

    AllMountOne
    01 mount_point ← "/mnt"
    02 targets ← ∅
    03 if self is root then
    04     t ← self:/share
    05     targets ← targets ∪ {t}
    06 else
    07     if self.parent is root then
    08         t ← self.parent:/share
    09     else
    10         t ← self.parent:/mnt
    11     targets ← targets ∪ {t}
    12 exec("sshfsm targets mount_point")

Fig.
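The one-mounts-all algorithm of Fig. 6 can be simulated over the Fig. 5 tree to reproduce the commands of Fig. 7. The sketch below is illustrative Python, not GMount code: the Node class is an assumption, the child ordering is chosen to match Fig. 7, and self:/share is appended last (as in Fig. 7's command lines) so that local files take lookup priority under Op-Merge.

```python
# Sketch of the one-mounts-all phase run over the example tree of
# Fig. 5.  Node and one_mounts_all are illustrative names, not GMount's.

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    @property
    def is_leaf(self):
        return not self.children

def one_mounts_all(node, is_root=True):
    """Return the sshfsm command each host would execute (None = noop)."""
    cmds = {}
    if node.is_leaf:
        cmds[node.name] = None            # leaves do nothing
    else:
        mount_point = "/mnt" if is_root else "/inter"
        targets = []
        for child in node.children:
            # leaf children export /share directly; inner children
            # are relayed through their intermediate directory /inter
            suffix = "/share" if child.is_leaf else "/inter"
            targets.append(f"{child.name}:{suffix}")
        # self listed last => checked first under Op-Merge
        targets.append(f"{node.name}:/share")
        cmds[node.name] = f"sshfsm {' '.join(targets)} {mount_point}"
    for child in node.children:
        cmds.update(one_mounts_all(child, is_root=False))
    return cmds

# Fig. 5 tree: A -> (B, C), B -> (E, D), E -> (F)
f = Node("F"); e = Node("E", [f]); d = Node("D")
b = Node("B", [e, d]); c = Node("C")
a = Node("A", [b, c])

for host, cmd in one_mounts_all(a).items():
    print(host, cmd or "no operation")
```

Running this yields exactly the per-host commands of Fig. 7: the root A merges at /mnt, inner nodes B and E merge at /inter, and the leaves C, D, and F perform no operation.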
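The AllMountOne pseudocode can be sketched the same way: every node mounts the root's /share at its local /mnt, either directly (when its parent is the root) or by relaying through its parent's /mnt, which already exposes the root's share. Again, the Node class and function name are illustrative assumptions, and the resulting commands are derived from the pseudocode, not listed in the paper.

```python
# Sketch of the all-mount-one phase over the Fig. 5 tree.  Each host
# ends up with the root's /share visible at its own /mnt.

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def all_mount_one(node):
    mount_point = "/mnt"
    if node.parent is None:              # self is root: mount own share
        target = f"{node.name}:/share"
    elif node.parent.parent is None:     # parent is root: mount its share
        target = f"{node.parent.name}:/share"
    else:                                # otherwise relay via parent's /mnt
        target = f"{node.parent.name}:/mnt"
    return f"sshfsm {target} {mount_point}"

a = Node("A")
b, c = Node("B", a), Node("C", a)
d, e = Node("D", b), Node("E", b)
f = Node("F", e)

for n in (a, b, c, d, e, f):
    print(f"{n.name}$ {all_mount_one(n)}")
```

Note how the deeper nodes (D, E, F) rely on Op-Relay: F mounts E:/mnt, which in turn relays B:/mnt and ultimately A:/share, at the cost of the extra FUSE hops discussed in 2.2.1.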
