Panache: A Parallel File System Cache for Global File Access


Marc Eshel, Roger Haskin, Dean Hildebrand, Manoj Naik, Frank Schmuck, Renu Tewari
IBM Almaden Research
{eshel, roger, manoj, schmuck}@almaden.ibm.com, {dhildeb, tewarir}@us.ibm.com

Abstract

Cloud computing promises large-scale and seamless access to vast quantities of data across the globe. Applications will demand the reliability, consistency, and performance of a traditional cluster file system regardless of the physical distance between data centers.

Panache is a scalable, high-performance, clustered file system cache for parallel data-intensive applications that require wide area file access. Panache is the first file system cache to exploit parallelism in every aspect of its design: parallel applications can access and update the cache from multiple nodes while data and metadata are pulled into and pushed out of the cache in parallel. Data is cached and updated using pNFS, which performs parallel I/O between clients and servers, eliminating the single-server bottleneck of vanilla client-server file access protocols. Furthermore, Panache shields applications from fluctuating WAN latencies and outages, and it is easy to deploy because it relies on open standards for high-performance file serving and does not require any proprietary hardware or software to be installed at the remote cluster.

In this paper, we present the overall design and implementation of Panache and evaluate its key features with multiple workloads across local and wide area networks.

1 Introduction

Next generation data centers, global enterprises, and distributed cloud storage all require sharing of massive amounts of file data in a consistent, efficient, and reliable manner across a wide-area network. The two emerging trends of offloading data to a distributed storage cloud and using the MapReduce [11] framework for building highly parallel data-intensive applications have highlighted the need for an extremely scalable infrastructure for moving, storing, and accessing massive amounts of data across geographically distributed sites. While large cluster file systems, e.g., GPFS [26], Lustre [3], and PanFS [29], and Internet-scale file systems, e.g., GFS [14] and HDFS [6], can scale in capacity and access bandwidth to support a large number of clients and petabytes of data, they cannot mask the latency and fluctuating performance of accessing data across a WAN.

Traditionally, NFS (for Unix) and CIFS (for Windows) have been the protocols of choice for remote file serving. Originally designed for local area access, both are rather "chatty" and therefore unsuited for wide-area access. NFSv4 has numerous optimizations for wide-area use, but its scalability continues to suffer from the "single server" design. NFSv4.1, which includes pNFS, improves I/O performance by enabling parallel data transfers between clients and servers. Unfortunately, while NFSv4 and pNFS can improve network and I/O performance, they cannot completely mask WAN latencies nor operate during intermittent network outages.

As "storage cloud" architectures evolve from a single high-bandwidth data center towards a larger multi-tiered storage delivery architecture, e.g., Nirvanix SDN [7], file data needs to be efficiently moved across locations and be accessible using standard file system APIs. Moreover, for data-intensive applications to function seamlessly in "compute clouds", the data needs to be cached closer to or at the site of the computation. Consider a typical multi-site compute cloud architecture that presents a virtualized environment to customer applications running at multiple sites within the cloud. Applications run inside a virtual machine (VM) and access data from a virtual LUN, which is typically stored as a file, e.g., VMware's .vmdk file, in one of the data centers. Today, whenever a new virtual machine is configured, migrated, or restarted on failure, the OS image and its virtual LUN (greater than 80 GB of data) must be transferred between sites, causing long delays before the application is ready to be online. A better solution would be to store all files at a central core site and then dynamically cache the OS image and its virtual LUN at an edge site closer to the physical machine. The machine hosting the VMs (e.g., the ESX server) would connect to the edge site to access the virtual LUNs over NFS while the data would move transparently between the core and edge sites on demand. This enormously simplifies both the time and complexity of configuring new VMs and dynamically moving them across a WAN.
Research efforts on caching file system data have mostly been limited to improving the performance of a single client machine [18, 25, 22]. Moreover, most available solutions are NFS client based caches [15, 18] and cannot function as a standalone file system (without network connectivity) that can be used by a POSIX-dependent application. What is needed is the ability to pull and push data in parallel across a wide-area network and store it in a scalable underlying infrastructure, while guaranteeing file system consistency semantics.

In this paper we describe Panache, a read-write, multi-node file system cache built for scalability and performance. The distributed and parallel nature of the system completely changes the design space and requires re-architecting the entire stack to eliminate bottlenecks. The key contribution of Panache is a fully parallelizable design that allows every aspect of the file system cache to operate in parallel. These include:

• parallel ingest, wherein, on a miss, multiple files and multiple chunks of a file are pulled into the cache in parallel from multiple nodes,

• parallel access, wherein a cached file is accessible immediately from all the nodes of the cache,

• parallel update, where all nodes of the cache can write and queue, for remote execution, updates to the same file in parallel or update the data and metadata of multiple files in parallel,

• parallel delayed data write-back, wherein the written file data is asynchronously flushed in parallel from multiple nodes of the cache to the remote cluster, and

• parallel delayed metadata write-back, where all metadata updates (file creates, removes, etc.) can be made from any node of the cache and asynchronously flushed back in parallel from multiple nodes of the cache. The multi-node flush preserves the order in which dependent operations occurred to maintain correctness.

There is, by design, no single metadata server and no single network end point to limit scalability, as is the case in typical NAS systems. In addition, all data and metadata updates made to the cache are asynchronous. This is essential to support WAN latencies and outages, as high-performance applications cannot function if every update operation requires a WAN round-trip (with latencies running from 30 ms to more than 200 ms).
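To make the parallel ingest described above concrete, the following is a minimal user-level sketch of the idea: on a miss, the chunks of a file are fetched concurrently and written into the shared cache at their offsets. It is an illustration under stated assumptions, not Panache code; the 4 MiB chunk size, the fetch_chunk() helper, and the thread-per-gateway-node model are hypothetical stand-ins, and the real system performs these reads with pNFS from multiple cache nodes inside the file system rather than from a user-space script.

```python
# Illustrative sketch of parallel ingest on a cache miss (not actual Panache code).
# Assumptions: each worker thread stands in for a cache gateway node, fetch_chunk()
# stands in for a pNFS read of one byte range from the remote cluster, and
# cache_path lives in the shared cluster file system visible to all cache nodes.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4 * 1024 * 1024  # hypothetical 4 MiB ingest unit


def fetch_chunk(remote_path: str, offset: int, length: int) -> bytes:
    """Stand-in for the wide-area read a gateway node would issue over pNFS."""
    return b"\x00" * length  # placeholder data


def ingest_file(remote_path: str, cache_path: str, size: int, num_nodes: int) -> None:
    """On a miss, pull one file's chunks into the cache in parallel."""
    with open(cache_path, "wb") as f:
        f.truncate(size)  # pre-allocate so each chunk can land at its own offset
    with ThreadPoolExecutor(max_workers=num_nodes) as pool:
        futures = {
            pool.submit(fetch_chunk, remote_path, off, min(CHUNK_SIZE, size - off)): off
            for off in range(0, size, CHUNK_SIZE)
        }
        with open(cache_path, "r+b") as f:
            for fut, off in futures.items():
                f.seek(off)
                f.write(fut.result())  # a chunk becomes readable as soon as it is cached
```

The write-back direction is symmetric in spirit: updates are queued at the cache nodes and flushed asynchronously, so no application operation has to wait for a WAN round-trip.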
While the focus of this paper is on the parallel aspects of the design, Panache is a fully functioning POSIX-compliant caching file system with additional features including disconnected operations, persistence across failures, and consistency management, all of which are needed for a commercial deployment. Panache also borrows from Coda [25] the basic premise of conflict handling and conflict resolution when supporting disconnected mode operations, and manages them in a clustered setting.

In this paper we present the overall design and implementation of Panache and evaluate its key features with multiple workloads across local and wide area networks.

The rest of the paper is organized as follows. In the next two sections we provide a brief background of pNFS and GPFS, the two essential components of Panache. Section 4 provides an overview of the Panache architecture. The details of how synchronous and asynchronous operations are handled are described in Section 5 and Section 6. Section 7 presents the evaluation of Panache using different workloads. Finally, Section 8 discusses the related work and Section 9 presents our conclusions.

2 Background

In order to better understand the design of Panache, let us review its two basic components: GPFS, the parallel cluster file system used to store the cached data, and pNFS, the nascent industry-standard protocol for transferring data between the cache and the remote site.

GPFS: General Parallel File System [26] is IBM's high-performance shared-disk cluster file system. GPFS achieves its extreme scalability through a shared-disk architecture. Files are wide-striped across all disks in the file system, where the number of disks can range from tens to several thousand in the largest GPFS installations. In addition to balancing the load on the disks, striping achieves the full throughput of which the disk subsystem is capable by reading and writing data blocks in parallel.

The switching fabric that connects file system nodes to disks may consist of a storage area network (SAN), e.g., Fibre Channel or iSCSI, or a general-purpose network using I/O server nodes. GPFS uses distributed locking to synchronize access to shared disks, where all nodes share responsibility for data and metadata consistency. The GPFS distributed locking protocols ensure that file system consistency is maintained regardless of the number of nodes simultaneously reading from and writing to the file system, while at the same time allowing the parallelism necessary to achieve maximum throughput.

pNFS: The pNFS protocol, now an integral part of NFSv4.1, enables clients to perform direct and parallel access to storage while preserving operating system, hardware platform, and file system independence [16]. pNFS clients and servers are responsible for control and file management operations, but delegate I/O functionality to a storage-specific layout driver on the client.

To perform direct and parallel I/O, a pNFS client first requests layout information from a pNFS server. A layout contains the information required to access any byte range of a file.
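As a rough illustration of this split between control and data paths, the sketch below walks through the read side: the client asks the server for a layout, then issues the byte-range reads directly, and in parallel, to the storage nodes the layout names. The Layout type, get_layout(), and read_from_data_server() are hypothetical stand-ins for the NFSv4.1 LAYOUTGET result and the layout driver's direct I/O; the striping rule shown is just one simple possibility.

```python
# Conceptual sketch of a pNFS read (not a protocol implementation).
# Hypothetical stand-ins: Layout models a LAYOUTGET result, get_layout() the
# control-path request to the pNFS server, and read_from_data_server() the
# direct I/O that the client's layout driver performs against one storage node.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import List


@dataclass
class Layout:
    data_servers: List[str]  # storage nodes the client may access directly
    stripe_size: int         # bytes of the file mapped to each server in turn


def get_layout(metadata_server: str, path: str) -> Layout:
    """Stand-in for LAYOUTGET: ask the pNFS server how the file is laid out."""
    return Layout(data_servers=["ds1", "ds2", "ds3"], stripe_size=1 << 20)


def read_from_data_server(server: str, path: str, offset: int, length: int) -> bytes:
    """Stand-in for the direct, parallel I/O done by the layout driver."""
    return b"\x00" * length  # placeholder data


def pnfs_read(metadata_server: str, path: str, offset: int, length: int) -> bytes:
    """Read a byte range by splitting it on stripe boundaries and fetching in parallel."""
    layout = get_layout(metadata_server, path)  # control operation on the pNFS server
    jobs, cur, end = [], offset, offset + length
    while cur < end:
        stripe = cur // layout.stripe_size
        n = min((stripe + 1) * layout.stripe_size, end) - cur
        server = layout.data_servers[stripe % len(layout.data_servers)]
        jobs.append((server, cur, n))
        cur += n
    with ThreadPoolExecutor(max_workers=len(layout.data_servers)) as pool:
        parts = pool.map(lambda j: read_from_data_server(j[0], path, j[1], j[2]), jobs)
    return b"".join(parts)
```

In Panache, this is the property that lets the cache move data to and from the remote cluster through many nodes at once instead of through a single server.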