Guide to OS X SAN Software


A Survey of the Landscape: Technology for Shared Storage
A Guide to SAN Management Software for Mac OS X
Updated August 2005
Copyright © 2004-2005 by CommandSoft, Inc.

Abstract: This paper gathers and discusses known facts regarding different software products. Authored by the CommandSoft® people, who make the FibreJet® product line, it was written in the "work together" spirit and is recommended background reading for all customers researching technology for shared storage. Customers are encouraged to perform their own due diligence and ask questions. This paper will be updated from time to time to include new relevant information. We would like to hear from you, so send your questions, comments, or suggestions to [email protected].

Version 1.0, released January 2004.
Version 1.0.1, released March 2004. Per readers' comments, updated the FibreJet section to describe in more detail its ability to "claim" and also "unclaim" storage. Also updated that section to describe some database failure situations, which should clarify how FibreJet behaves in those situations.
Version 1.0.2, released August 2005. Added an Xsan section. Updated the FibreJet, ImageSAN, SANmp and Charismac sections to reflect the latest findings.

SAN Background

A Storage Area Network (SAN) is a network that connects computers to storage devices in a fashion that allows many computers to connect to many storage devices. Before SANs, storage was normally connected to only one computer at any given time, with few exceptions.

[Figure: ONE-TO-MANY — STORAGE CONNECTED TO ONE COMPUTER]

As more general-purpose, longer-distance connection technologies such as fiber optics and the Fibre Channel protocol were developed, these SAN options became available to general computer users.

[Figure: MANY-TO-MANY — A SAN, STORAGE CONNECTED TO MANY COMPUTERS]

Although there are different hardware and protocols you can use to build a SAN, one popular method uses fiber optic cabling and the ANSI Fibre Channel (FC) protocol standard. A network is built using FC switches that connect the computers and storage into what is known as an FC fabric. Each computer typically connects to the fabric switches through a PCI-based Host Bus Adapter (HBA) and fiber optic cabling.

[Figure: A FIBRE CHANNEL FABRIC WITH CONNECTIONS TO COMPUTERS AND STORAGE]

The SAN Problem: Operating Systems grab all disks

The problem with many-computers-to-many-storage networks is that computers and their Operating Systems (OS) are accustomed to assuming that all the storage a computer can access is fair game for the OS. This creates a multiple-writer situation in which a computer writing data to the storage is not aware of any other computer also writing data to the same storage, at the same time and at the same locations. This quickly corrupts the file systems. For SANs to work, something is needed to manage the file system traffic so that data is safe and doesn't get clobbered when everyone tries to write to the storage. That something is SAN management software, which controls how the computers connected to the SAN use the shared storage.
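To make the multiple-writer hazard concrete, the short sketch below simulates two computers that each cache a volume's allocation bitmap, mark a block as used in their own copy, and write the whole structure back. The hosts, block numbers, and data structure are purely illustrative assumptions and are not drawn from any product discussed in this paper; the point is only that the last writer silently discards the other computer's update.

    # Illustrative sketch (not from any product discussed here): two hosts on an
    # unmanaged SAN each read-modify-write the same on-disk allocation bitmap.
    # Because neither host knows about the other, the last write-back wins and
    # the earlier allocation is silently lost, seeding file system corruption.

    shared_bitmap = bytearray(1)            # stands in for on-disk metadata

    def read_bitmap():
        return bytearray(shared_bitmap)     # each host caches its own private view

    def mark_used(cached_view, block):
        cached_view[0] |= (1 << block)      # flip the bit for the chosen block
        return cached_view

    def write_bitmap(cached_view):
        shared_bitmap[:] = cached_view      # write the whole structure back to disk

    host_a_view = read_bitmap()             # both hosts start from the same state
    host_b_view = read_bitmap()

    write_bitmap(mark_used(host_a_view, 3)) # Host A allocates block 3
    write_bitmap(mark_used(host_b_view, 5)) # Host B allocates block 5, clobbering A

    print(f"final bitmap: {shared_bitmap[0]:08b}")   # 00100000, not the expected 00101000

Real corruption is messier than a single lost bit, but the failure mode is the same: each operating system assumes it is the only writer.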
How SANs benefit customers

For many environments, SANs allow significant gains in efficiency and cost savings compared to storage directly attached to a single computer.

Utilization of resources — Storage and Server Consolidation

As computers consume storage, SAN storage pools are well suited to the changing capacity demands of the various applications. It is much easier to allocate a piece of SAN storage to a computer through software than with the old method of upgrading the computer with more direct-attached storage.

[Figure: POOLS OF STORAGE ALLOW GREATER UTILIZATION]

A SAN also addresses the problem of computers whose unused storage simply goes to waste. Because all the storage is in the SAN pool, utilization is maximized at all times. By consolidating storage resources into a dynamically reconfigurable, high-speed shared pool, the storage investment is optimized.

SAN Implementation Types

Shared Nothing - Storage Islands

Some systems approach the SAN problem by implementing a method called "shared nothing". In this approach the physical network connects everything, but access is restricted, either by software or by hardware such as switch zoning, so that each computer is allowed to see only the portion of storage it has been granted. The following summarizes the disadvantages of this approach and why it is not appropriate in some situations:

• It defeats the ability to actually share the storage, at the same time, with others. If sharing data at the same time is one of the goals, the remaining reasons are moot.
• Changes are not dynamic, and often require restarting one or more computers.
• It can be a management nightmare, requiring trained, full-time administration to make changes.
• The granularity of control typically does not match real-world use of storage at the individual file system level, but is limited to an entire storage device, or a storage device logical unit (LU), per access permission. This is not practical in today's world, where a single LUN can easily be 1 terabyte (TB) or larger.

So the main issue with this approach is that it defeats the main benefit of a workgroup SAN, which is the ability to actually share the storage, at the same time, with others. Another equally important issue is management and flexibility. With this approach, in almost all circumstances today, changing the configuration, access permissions, or zones requires each editing station to do something before the changes will be seen, which inevitably involves restarting the computer. Even worse, while the changes are being made, some of the computers involved might need to shut down and thus stop working. Simply making the changes is complex and requires a trained administrator to manage the storage system, its allocations, and its permissions full-time. Lastly, these approaches suffer because the granularity of control is often an entire storage device, and at best a single logical unit (LU) of the storage device, as opposed to an individual file system.

Nonetheless, some SANs designed for consolidation purposes use the storage pool but restrict access to specific pieces of storage for specific computers. Often done with switch zoning, this method prevents a computer from accessing any storage outside its zone membership, which prevents the multiple-writer data corruption problem. In this environment the storage pool resources are available for allocation to many computers, but nothing is actually shared at any one time.
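The granularity point is easy to quantify. The figures below are invented for illustration (a workgroup of four editing stations drawing from 500 GB LUNs) and do not describe any particular installation, but they show how whole-LUN zoning strands capacity that a shared pool would keep usable.

    # Illustrative arithmetic only: how per-LUN zoning strands capacity compared
    # with drawing from a shared pool. The station names and sizes are assumptions.
    import math

    LUN_SIZE_GB = 500
    stations = {"Edit-1": 120, "Edit-2": 340, "Edit-3": 60, "Edit-4": 700}  # GB needed

    # Shared nothing: each station is zoned to whole LUNs that only it can see.
    luns_per_station = {name: math.ceil(need / LUN_SIZE_GB)
                        for name, need in stations.items()}
    zoned_total = sum(luns_per_station.values()) * LUN_SIZE_GB

    # Shared pool: every station draws only what it actually needs.
    pooled_total = sum(stations.values())

    print(f"capacity consumed with per-LUN zoning: {zoned_total} GB")                 # 2500 GB
    print(f"capacity consumed from a shared pool : {pooled_total} GB")                # 1220 GB
    print(f"capacity stranded by zoning          : {zoned_total - pooled_total} GB")  # 1280 GB

Reclaiming that stranded capacity under zoning means re-zoning and restarting stations; in a shared pool it is simply available.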
[Figure: SHARED NOTHING — STORAGE ISLANDS COURTESY OF SWITCHED ZONING]

As mentioned earlier, most switch-based zoning available today is limited in granularity to an entire storage device (see the later section called "Granularity of control — LUN structure background" for examples). This severely limits utilization choices, as an entire storage device may be many TB in size, which is not always practical to assign to a single computer. Some expensive higher-end switches allow zoning at the sub-device level, known as Logical Unit (LU) zoning, but an LU is still a very large chunk for a single computer.

The ANSI Small Computer System Interface (SCSI) storage protocol that storage devices use allows for a sub-device addressing unit called the Logical Unit Number (LUN). LUN addressing is mostly used by expensive storage devices known as RAID (Redundant Array of Independent Disks) arrays. A user configures the storage device and creates a number of LUs. This process is similar to partitioning a storage device using the OS, which is required before a file system can be placed on the device. Once an LU is created, it still needs to be partitioned in the OS, and it is often broken down into multiple smaller pieces, each containing a separate file system. This is because an LU usually has a minimum size, which is usually very large.

Shared Everything - Shared Storage

Another approach to the SAN problem, and the idea behind Shared Storage, is to allow a means for everything to be shared. What is needed is a method to transparently and dynamically reconfigure the storage network at will, without restarting any computers. The method should also prevent multiple writers to a single file system at any given time while still allowing multiple computers to read the same data at the same time. Another desirable attribute would allow unrestricted multiple writers to the same file system at the same time without corrupting any storage.

Storage capacity can be sized to your shared data requirements, not individual computer requirements. The cost of the SAN is paid for many times over in time saved and storage efficiency. Users can work together, share their work with others, and all access the same data at the same time at incredible speeds.

[Figure: SHARED EVERYTHING — PROVIDING THE IMPORTANT OPTION TO WORK TOGETHER]

Client / Server Architectures — Multiple writers to a file system at a time

In a variation on traditional client / server file sharing, some SAN architectures use third-party transfer, which enables the client itself to transfer file system data directly to and from the SAN storage. One must have a basic understanding of traditional Network Attached Storage (NAS) and how it fits into the client / server model before these SAN architecture modifications can be understood.

[Figure: TRADITIONAL CLIENT / SERVER NETWORK FOR FILE SYSTEM SHARING VIA NAS]

To understand the overhead involved with this type of network, and thus the SAN optimizations that can be done with it, we give some illustrated examples.
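As a rough back-of-the-envelope sketch of that overhead, the comparison below assumes an illustrative 10 GB file, a Gigabit Ethernet LAN, and a 2 Gb/s storage link; the rates are assumptions rather than measurements, and server-side pipelining and caching are ignored. It contrasts the traditional NAS path, where the data crosses both the server's storage link and the LAN, with a SAN third-party-transfer path, where only metadata crosses the LAN and the data moves directly between the client and the storage.

    # Back-of-the-envelope comparison, with assumed (not measured) link rates.
    FILE_GB     = 10     # size of the file the client reads
    LAN_MBS     = 100    # assumed usable LAN payload rate (Gigabit Ethernet), MB/s
    STORAGE_MBS = 180    # assumed usable storage-link payload rate (2 Gb/s FC), MB/s

    file_mb = FILE_GB * 1024

    # Traditional NAS: storage -> file server over its storage link,
    # then file server -> client over the LAN (pipelining ignored).
    nas_seconds = file_mb / STORAGE_MBS + file_mb / LAN_MBS

    # SAN third-party transfer: the data moves once, storage -> client, over the
    # Fibre Channel fabric; only small metadata requests cross the LAN.
    san_seconds = file_mb / STORAGE_MBS

    print(f"NAS path: ~{nas_seconds:.0f} seconds (data crosses two links)")
    print(f"SAN path: ~{san_seconds:.0f} seconds (data crosses one link)")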