High Performance AFS


High Performance AFS

Hartmut Reuter, [email protected]
RechenZentrum Garching of the Max Planck Society
Geneva, February 5, 2004

· Supercomputing environment at RZG
· Why AFS is slow compared to NFS and SAN filesystems
· Direct I/O from the client to the fileserver partition
· Implementation in MR-AFS and OpenAFS
· Performance measurements and results
· „fs import“
· MR-AFS and Castor

RZG

RZG is the supercomputing center of the Max Planck Society in Germany. It also acts as the local computing center for a number of Max Planck institutes located at Garching, especially for IPP (Institut für Plasmaphysik). The local AFS cell therefore historically has the name ipp-garching.mpg.de. Using MR-AFS, this AFS cell also provides archival space for the MPG.

MPI for Polymer Research, Mainz

Multiscale model of bisphenol-A-polycarbonate (BPA-PC) on nickel: (a) the coarse-grained representation of a BPA-PC segment, (b) a coarse-grained model of an N=20 BPA-PC molecule, (c) phenol adsorbed on the bridge site of a (111) nickel surface. Code: CPMD

MPI for Astrophysics, Garching

Core-collapse supernova simulation: snapshots of the hydrodynamic evolution of a rotating massive star, 0.25 s after the start of the explosion. Code: Rady/2D

MPI for Metals Research, Stuttgart

Large-scale atomistic study of the inertia properties of mode I cracks: a crack propagating at several kilometers per second is suddenly brought to rest.

MPI of Plasma Physics, Garching and Greifswald

Simulation of the time development of the turbulent radial heat flux. Code: TORB

4 Decades of Supercomputing Tradition at RZG

1962: IBM 7090 (0.1 Mflop/s, 128 kB RAM)
1969: IBM 360/91 (15 Mflop/s, 2 MB RAM)
1979: Cray-1 (80 Mflop/s, 8 MB RAM)
1998: Cray T3E/816 (0.47 TFlop/s, 104 GB RAM)
2002/2003: IBM p690 (4 TFlop/s, 2 TB RAM)

IBM p690 (4 TFlop/s, 2 TB RAM)

24 compute nodes and 2 I/O nodes; each node has 32 POWER4 processors. The I/O nodes attach 22 TB of FC disks and 5 TB of SSA disks. Federation switch: measured throughput 4.4 GB/s bidirectional between 2 nodes, measured latency 12 µs.

AFS is too slow on the Regatta cluster

For large files AFS is much slower than GPFS on the Regatta cluster:
+ GPFS stripes data over multiple nodes.
- AFS exchanges data with a single fileserver.
- With AFS all data go through the AFS cache.
- AFS is also slower than NFS for protocol reasons.

Why AFS is slow compared to NFS

· Disk caches on local disks are slower than the network.
· write() sleeps while data are transferred to the server.
· Unnecessary read RPCs before a chunk is written.
· Memory mapping of cache files breaks large I/O down into hundreds of requests.
· The Rx protocol is considered sub-optimal.

How to make AFS faster for large files

· Use the fastest filesystem for the /vicep-partition on the server
  – on the Regatta cluster, use GPFS.
· Bypass AFS caching on the client by doing direct I/O to the fileserver's /vicep-partitions:
  – helps on all fileserver machines for files in volumes stored there,
  – helps in clusters if the /vicep-partitions are mounted cluster-wide,
  – requires modifications in the client and server code,
  – should be done only on trusted hosts.

Writing a new file to AFS

1) create_file RPC
2) write chunks into the cache
This process is interrupted and followed by store_data RPCs, each one doing:
3) read from the cache
4) transfer over the network
5) write to /vicepa on the fileserver
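To make the cost of this conventional store path concrete, here is a small user-space sketch of steps 3 to 5: each chunk is read back from the local cache file and shipped to the fileserver with a store RPC, which in turn writes it into /vicepa. This is only an illustration of the flow described in the slide; the chunk size and the rpc_store_data() helper are assumptions, not actual OpenAFS code.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#define CHUNK_SIZE (64 * 1024)   /* illustrative; the real chunk size is configurable */

/* Hypothetical stand-in for the StoreData RPC: the fileserver receives the
 * bytes over Rx and writes them into the corresponding file in /vicepa. */
static int rpc_store_data(long fid, off_t offset, const char *buf, size_t len)
{
    (void)fid; (void)offset; (void)buf; (void)len;
    return 0;
}

/* Store a file that already sits in the local AFS disk cache (steps 3-5). */
int store_cached_file(long fid, const char *cache_file)
{
    char buf[CHUNK_SIZE];
    off_t offset = 0;
    ssize_t n;
    int fd = open(cache_file, O_RDONLY);

    if (fd < 0)
        return -1;
    while ((n = read(fd, buf, sizeof(buf))) > 0) {   /* 3) read from the cache   */
        if (rpc_store_data(fid, offset, buf,         /* 4) transfer over network */
                           (size_t)n) != 0) {        /* 5) server writes /vicepa */
            close(fd);
            return -1;
        }
        offset += n;
    }
    close(fd);
    return (n < 0) ? -1 : 0;
}
```

Every byte thus crosses the local cache disk and the network before it reaches the fileserver partition, which is what the direct path on the next slide avoids.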
Writing a file directly to the AFS server partition

1) Create the file.
2) The fileserver checks meta-data, permissions, and quota and returns the file's path in /vicepa.
3) The client writes the file directly into /vicepa.
4) The meta-data are updated on the server.

Design of direct I/O to /vicep-partitions

· Fileservers owning /vicep-partitions are identified by a sysid file in the partition.
· afsd with the option “-vicepaccess“ informs the AFS kernel extension (new subcall).
· Volumes with instances on fileservers with visible partitions are flagged.
· Open of files in these volumes first tries a new RPC to get the path information from the fileserver
  – if that, or the open of the vnode/dentry, fails, open resumes in the old way.
· I/O is done directly using the opened vnode/dentry.
· Close for write informs the fileserver about the new file length.

Implementation of direct I/O in MR-AFS and OpenAFS

· Why MR-AFS?
  – because RZG runs only MR-AFS fileservers,
  – because the existing ResidencyCmd RPC could be used without changing afsint.xg,
  – because MR-AFS has large file support.
· Which version of OpenAFS?
  – the CVS version from July 2003, in which my last patches regarding the AIX 5.2 port had been committed.

MR-AFS server modifications

· partition.c copies /usr/afs/local/sysid into all active /vicep-partitions.
· In src/viced/afsfileprocs.c:
  – For direct read and write, new RPC subcalls of SAFS_ResidencyCmd were implemented which return the path in the /vicep-partition as a string.
  – They need the same checks as all flavours of SAFS_StoreData and SAFS_FetchData.
  – Therefore the common code was put into generic routines StoreData() and FetchData().
  – In the long run, new RPCs SAFS_DirectStore and SAFS_DirectFetch should be implemented, also in the OpenAFS fileserver.
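A minimal sketch of that server-side shape is shown below: run the same checks that every flavour of StoreData/FetchData performs, then return the file's location inside the partition as a string instead of transferring data over Rx. The types and helpers here (afs_fid, check_store_access, namei_path_of) and the path layout are hypothetical stand-ins for illustration, not the actual code in src/viced/afsfileprocs.c.

```c
#include <stdio.h>

struct afs_fid {
    unsigned int volume;   /* volume id     */
    unsigned int vnode;    /* vnode number  */
    unsigned int unique;   /* uniquifier    */
};

/* Same checks every flavour of StoreData performs: ACLs, quota, locks (stub). */
static int check_store_access(const struct afs_fid *fid, const char *caller)
{
    (void)fid; (void)caller;
    return 0;                    /* 0 = access granted */
}

/* Map the vnode to its file inside the partition. Illustrative only; the
 * real namei layout used by the fileserver is more involved. */
static int namei_path_of(const struct afs_fid *fid, char *buf, size_t len)
{
    snprintf(buf, len, "/vicepa/AFSIDat/vol%u/vn%u.%u",
             fid->volume, fid->vnode, fid->unique);
    return 0;
}

/*
 * Handler for a "give me the path" subcall: returns 0 and fills 'path'
 * if the caller may store into the file, a non-zero error code otherwise.
 */
int direct_store_path(const struct afs_fid *fid, const char *caller,
                      char *path, size_t pathlen)
{
    int code = check_store_access(fid, caller);
    if (code != 0)
        return code;
    /* Instead of moving data over Rx, tell the client where the file
     * lives inside the partition so it can open it directly. */
    return namei_path_of(fid, path, pathlen);
}
```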
Open() on the client

In src/afs/VNOPS/afs_vnop_open.c, after everything else is done: if the volume's server partition is visible, a GetServerPath RPC is sent to the fileserver; on success the file is opened in the /vicep-partition, and if that also succeeds the dentry pointer is saved in the vcache. If any of these steps fails, open() continues in the old way.

OpenAFS client modification for open()

· The /vicep-partitions are scanned for sysid files. UUIDs found there are handed over to the kernel (afsd.c, afs_call.c).
· Some additional flag bits in some structs make it possible to identify the AFS files which might be read or written directly in a visible /vicep-partition.
· If these flag bits are set, the open vnode operation tries to get the path information from the fileserver using the new RPC (afs_vnop_open.c).
  – If the RPC succeeds, the file's vnode/dentry is looked up and the pointer is stored in the vcache struct.
  – If the RPC or the lookup of the file's path fails, the old way of open is resumed.

write() on the client

In src/afs/LINUX/osi_vnodeops.c, before anything else is done: if the vcache has a pointer to a dentry, the dentry pointer in struct file is exchanged, generic_file_write() is called, and the original dentry pointer is restored. Whenever a pointer to a vnode/dentry is available in struct vcache it is used to do the I/O directly, bypassing the AFS cache and the RPCs to the fileserver. If this fails, the old way is used instead.

read() on the client

The same applies to reads in src/afs/LINUX/osi_vnodeops.c: if the vcache has a dentry pointer, the dentry pointer in struct file is exchanged, generic_file_read() is called, and the pointer is restored. Whenever a vnode/dentry pointer is available in struct vcache it is used to do the I/O directly, bypassing the AFS cache and the RPCs to the fileserver. If this fails, the old way is used instead.

close() on the client

After everything else is done in src/afs/VNOPS/afs_vnop_write.c: if the vcache has a dentry pointer, the dentry is released with dput(), and if the file was open for write, storemini() in src/afs/afs_segments.c does a StoreData RPC which updates the file length in the AFS vnode of the file.
· Close for write triggers a dummy StoreData64 (SAFS_StoreData) RPC after the close to update the meta-data (file size, modification time).
· Any close() releases the vnode/dentry and clears the field in the vcache.
A consolidated sketch of this open/read/write/close fallback logic is given at the end of this section.

How we started

· 1st successful implementation on my laptop for Linux 2.4.
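As a rough user-space model of the client-side fallback logic in the open(), read(), write(), and close() slides above: keep a direct handle if the fileserver revealed the file's path at open time, prefer it for reads and writes, fall back to the cached path on any error, and report the new length at close. In the real client this is done with vnode/dentry pointers inside the kernel; here a plain file descriptor plays that role, and the rpc_* and cached_* helpers are hypothetical stubs.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

struct afs_file {
    int direct_fd;     /* fd for the file inside /vicepXX, or -1          */
    int dirty;         /* written via the direct path -> tell the server  */
    long afs_handle;   /* opaque handle for the normal cached path        */
};

/* Hypothetical stand-ins for the RPCs and for the cache-based code path. */
static int rpc_get_server_path(long handle, char *path, size_t len)
{ (void)handle; snprintf(path, len, "/vicepa/some/namei/file"); return 0; }
static int rpc_store_mini(long handle, off_t new_length)   /* dummy StoreData */
{ (void)handle; (void)new_length; return 0; }
static ssize_t cached_read(long handle, void *buf, size_t n)        /* old way */
{ (void)handle; (void)buf; (void)n; return 0; }
static ssize_t cached_write(long handle, const void *buf, size_t n) /* old way */
{ (void)handle; (void)buf; return (ssize_t)n; }

/* open(): try to obtain the direct path; on any failure fall back silently. */
void afs_open(struct afs_file *f, long handle)
{
    char path[1024];

    f->afs_handle = handle;
    f->direct_fd = -1;
    f->dirty = 0;
    if (rpc_get_server_path(handle, path, sizeof(path)) == 0)
        f->direct_fd = open(path, O_RDWR);  /* stays -1 if not visible */
}

/* read()/write(): prefer the direct fd, bypassing the cache and data RPCs. */
ssize_t afs_read(struct afs_file *f, void *buf, size_t n)
{
    if (f->direct_fd >= 0) {
        ssize_t r = read(f->direct_fd, buf, n);
        if (r >= 0)
            return r;
    }
    return cached_read(f->afs_handle, buf, n);    /* the old way */
}

ssize_t afs_write(struct afs_file *f, const void *buf, size_t n)
{
    if (f->direct_fd >= 0) {
        ssize_t r = write(f->direct_fd, buf, n);
        if (r >= 0) {
            f->dirty = 1;
            return r;
        }
    }
    return cached_write(f->afs_handle, buf, n);   /* the old way */
}

/* close(): release the direct handle; after a direct write, tell the server
 * the new file length with a dummy StoreData-style RPC. */
int afs_close(struct afs_file *f)
{
    int code = 0;

    if (f->direct_fd >= 0) {
        off_t len = lseek(f->direct_fd, 0, SEEK_END);
        if (f->dirty)
            code = rpc_store_mini(f->afs_handle, len);
        close(f->direct_fd);
        f->direct_fd = -1;
    }
    return code;
}
```

Keeping the decision per open file, as the real client keeps the dentry pointer in the vcache, means a single failed direct attempt degrades gracefully to the cached path without affecting other files.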