IBM Spectrum Scale 5.0.3: Concepts, Planning, and Installation Guide, Chapter 8: Configuration of an IBM Spectrum Scale Stretch Cluster

Total pages: 16

File type: PDF; size: 1020 KB

IBM Spectrum Scale Version 5.0.3
Concepts, Planning, and Installation Guide
IBM SC27-9567-03

Note: Before using this information and the product it supports, read the information in “Notices” on page 477.

This edition applies to version 5 release 0 modification 3 of the following products, and to all subsequent releases and modifications until otherwise indicated in new editions:
• IBM Spectrum Scale Data Management Edition ordered through Passport Advantage® (product number 5737-F34)
• IBM Spectrum Scale Data Access Edition ordered through Passport Advantage (product number 5737-I39)
• IBM Spectrum Scale Erasure Code Edition ordered through Passport Advantage (product number 5737-J34)
• IBM Spectrum Scale Data Management Edition ordered through AAS (product numbers 5641-DM1, DM3, DM5)
• IBM Spectrum Scale Data Access Edition ordered through AAS (product numbers 5641-DA1, DA3, DA5)
• IBM Spectrum Scale Data Management Edition for IBM ESS (product number 5765-DME)
• IBM Spectrum Scale Data Access Edition for IBM ESS (product number 5765-DAE)

Significant changes or additions to the text and illustrations are indicated by a vertical line (|) to the left of the change.

IBM welcomes your comments; see the topic “How to send your comments” on page xxv. When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

© Copyright IBM Corporation 2015, 2019. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Figures .... vii
Tables .... ix
About this information .... xi
  Prerequisite and related information .... xxiv
  Conventions used in this information .... xxiv
  How to send your comments .... xxv
Summary of changes .... xxvii
Chapter 1. Introducing IBM Spectrum Scale .... 1
  Overview of IBM Spectrum Scale .... 1
  The strengths of IBM Spectrum Scale .... 1
  The basic IBM Spectrum Scale structure .... 5
  IBM Spectrum Scale cluster configurations .... 7
  GPFS architecture .... 9
  Special management functions .... 9
  Use of disk storage and file structure within a GPFS file system .... 12
  GPFS and memory .... 15
  GPFS and network communication .... 17
  Application and user interaction with GPFS .... 19
  NSD disk discovery .... 25
  Failure recovery processing .... 25
  Cluster configuration data files .... 26
  GPFS backup data .... 27
  Clustered configuration repository .... 27
  Protocols support overview: Integration of protocol access methods with GPFS .... 28
  NFS support overview .... 30
  SMB support overview .... 32
  Object storage support overview .... 33
  Cluster Export Services overview .... 38
  CES iSCSI support .... 40
  Active File Management .... 41
  Introduction to Active File Management (AFM) .... 41
  Overview and concepts .... 42
  Active File Management (AFM) features .... 53
  AFM Limitations .... 77
  AFM-based Asynchronous Disaster Recovery (AFM DR) .... 78
  Introduction .... 79
  Recovery time objective (RTO) .... 81
  Modes and concepts .... 81
  AFM-based Asynchronous Disaster Recovery features .... 81
  AFM DR Limitations .... 87
  AFM DR deployment considerations and best practices .... 89
  Data protection and disaster recovery in IBM Spectrum Scale .... 97
  Data backup options in IBM Spectrum Scale .... 97
  Data restore options in IBM Spectrum Scale .... 97
  Data mirroring in IBM Spectrum Scale .... 98
  Protecting file data using snapshots .... 98
  Introduction to Scale Out Backup and Restore (SOBAR) .... 98
  Introduction to protocols cluster disaster recovery (DR) .... 98
  Commands for data protection and recovery in IBM Spectrum Scale .... 99
  Introduction to IBM Spectrum Scale GUI .... 100
  IBM Spectrum Scale management API .... 103
  Functional overview .... 104
  API requests .... 105
  API responses .... 109
  List of IBM Spectrum Scale management API commands .... 111
  Introduction to Cloud services .... 116
  How Transparent cloud tiering works .... 118
  How Cloud data sharing works .... 119
  How Write Once Read Many (WORM) storage works .... 122
  Supported cloud providers .... 123
  Interoperability of Transparent cloud tiering with other IBM Spectrum Scale features .... 123
  Interoperability of Cloud data sharing with other IBM Spectrum Scale features .... 125
  Introduction to file audit logging .... 127
  Producers in file audit logging .... 127
  The message queue in file audit logging .... 127
  Consumers in file audit logging .... 128
  The file audit logging fileset .... 128
  File audit logging records .... 129
  File audit logging events' descriptions .... 130
  JSON attributes in file audit logging .... 130
  Remotely mounted file systems in file audit logging .... 132
  Introduction to watch folder .... 133
  Introduction to clustered watch .... 133
  Producers in clustered watch .... 134
  The message queue in clustered watch .... 134
  Conduits in clustered watch .... 135
  The message queue services monitor and clustered watch .... 136
  Interaction between clustered watch and the external Kafka sink .... 136
  JSON attributes in clustered watch .... 137
  IBM Spectrum Scale in an OpenStack cloud deployment .... 139
  IBM Spectrum Scale product editions .... 142
  IBM Spectrum Scale license designation .... 143
  Capacity-based licensing .... 146
  IBM Spectrum Storage Suite .... 147
Chapter 2. Planning for IBM Spectrum Scale .... 149
  Planning for GPFS .... 149
  Hardware requirements .... 149
  Software requirements .... 150
  Recoverability considerations .... 151
  GPFS cluster creation considerations .... 158
  Disk considerations .... 162
  File system creation considerations .... 167
  Fileset considerations for creating protocol data exports .... 183
  Backup considerations for using IBM Spectrum Protect .... 185
  Planning for Quality of Service for I/O operations (QoS) .... 194
  Planning for extended attributes .... 195
  Planning for the Highly Available Write Cache feature (HAWC) .... 198
  Planning for systemd .... 201
  Planning for protocols .... 202
  Authentication considerations .... 202
  Planning for NFS .... 214
  Planning for SMB .... 216
  Planning for object storage deployment .... 221
  Planning for Cloud services .... 227
  Hardware requirements for Cloud services .... 227
  Software requirements for Cloud services .... 228
  Network considerations for Cloud services .... 229
  Cluster node considerations for Cloud services .... 229
  IBM Cloud Object Storage considerations .... 230
  Firewall recommendations for Cloud services .... 232
  Performance considerations .... 233
  Security considerations .... 237
  Planning for maintenance activities .... 237
  Backup considerations for Transparent cloud tiering .... 238
  Quota support for tiering .... 239
  Client-assisted recalls .... 240
  Planning for AFM .... 240
  Requirements for UID and GID on the cache and home clusters .... 240
  Recommended workerthreads on cache cluster .... 240
  Inode limits to set at cache and home .... 240
  Planning for AFM DR .... 241
  Requirements for UID/GID on primary and secondary clusters .... 241
  Recommended worker1threads on primary cluster .... 241
  NFS setup on the secondary cluster .... 241
Chapter 3. Steps for establishing and starting your IBM Spectrum Scale cluster .... 243
Chapter 4. Installing IBM Spectrum Scale on Linux nodes and deploying protocols .... 245
  Deciding whether to install IBM Spectrum Scale and deploy protocols manually or with the installation toolkit .... 246
  Installation prerequisites .... 246
  Preparing the environment on Linux nodes .... 251
  IBM Spectrum Scale packaging overview .... 252
  Preparing to install the GPFS software on Linux nodes .... 252
  Accepting the electronic license agreement on Linux nodes .... 253
  Extracting the IBM Spectrum Scale software on Linux nodes .... 253
  Extracting IBM Spectrum Scale patches (update SLES or Red Hat Enterprise Linux RPMs or Ubuntu Linux packages) .... 255
  Installing the IBM Spectrum Scale man pages on Linux nodes .... 255
  For Linux on Z: Changing the kernel settings .... 255
  Manually installing the IBM Spectrum Scale software packages on Linux nodes .... 256
  Building the GPFS portability layer on Linux nodes .... 261
  Manually installing IBM Spectrum Scale and deploying protocols on Linux nodes .... 263
  Verifying the IBM Spectrum Scale installation on Ubuntu Linux nodes .... 270
  Verifying the IBM Spectrum Scale installation on SLES and Red Hat Enterprise Linux nodes .... 270
  Manually installing IBM Spectrum Scale for object storage on Red Hat Enterprise Linux 7.x nodes .... 271
  Manually installing IBM Spectrum Scale for object storage on Ubuntu 16.04 LTS and 18.04 LTS nodes .... 272
  Manually installing the performance monitoring tool .... 273
  Manually installing IBM Spectrum Scale management GUI .... 276
  Installing IBM Spectrum Scale on Linux nodes with the installation toolkit .... 283
  Overview of the installation toolkit ....
Recommended publications
  • A Versatile Persistent Caching Framework for File Systems Gopalan Sivathanu and Erez Zadok Stony Brook University
    A Versatile Persistent Caching Framework for File Systems. Gopalan Sivathanu and Erez Zadok, Stony Brook University. Technical Report FSL-05-05.

    Abstract: We propose and evaluate an approach for decoupling persistent-cache management from general file system design. Several distributed file systems maintain a persistent cache of data to speed up accesses. Most of these file systems retain complete control over various aspects of cache management, such as granularity of caching, and policies for cache placement and eviction. Hard-coding cache management into the file system often results in sub-optimal performance as the clients of the file system are prevented from exploiting information about their workload in order to tune cache management. We introduce xCachefs, a framework that allows clients to transparently augment the cache management

    An IDE RAID array that has an Ext2 file system can cache the recently-accessed data into a smaller but faster SCSI disk to improve performance. The same could be done for a local disk; it can be cached to a faster flash drive. The second problem with the persistent caching mechanisms of present distributed file systems is that they have a separate name space for the cache. Having the persistent cache directory structure as an exact replica of the source file system is useful even when xCachefs is not mounted. For example, if the cache has the same structure as the source, disconnected reads are possible directly from the cache, as in Coda [2]. In this paper we present a decoupled caching mechanism called xCachefs. With xCachefs, data can be cached from any file system to a faster file system.
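A short illustrative sketch of the idea in the xCachefs excerpt above: the persistent cache mirrors the source file system's directory structure, so reads can fall back to the cache when the source is unreachable. This is a toy user-space model in Python, not xCachefs itself; the class name, paths, and behavior are assumptions made only for illustration.

```python
import os
import shutil


class MirroredCache:
    """Toy read-through cache whose directory tree mirrors the source tree.

    Illustrative only: real systems such as xCachefs or Coda do this in the
    kernel, with eviction policies, consistency checks, and finer granularity.
    """

    def __init__(self, source_root: str, cache_root: str):
        self.source_root = source_root
        self.cache_root = cache_root

    def _paths(self, rel_path: str) -> tuple:
        # Same relative path under both roots: the cache mirrors the source namespace.
        return (os.path.join(self.source_root, rel_path),
                os.path.join(self.cache_root, rel_path))

    def read(self, rel_path: str) -> bytes:
        src, cached = self._paths(rel_path)
        if os.path.exists(src):
            # Source reachable: populate the mirrored cache path on a miss.
            if not os.path.exists(cached):
                os.makedirs(os.path.dirname(cached), exist_ok=True)
                shutil.copy2(src, cached)
        elif not os.path.exists(cached):
            raise FileNotFoundError(rel_path)
        # Either freshly cached or a "disconnected" hit: because the cache has
        # the same structure as the source, the same relative path works here.
        with open(cached, "rb") as f:
            return f.read()


# Example usage (hypothetical paths):
# cache = MirroredCache("/mnt/slow_nfs", "/var/cache/mirror")
# data = cache.read("projects/report.txt")
```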
  • Implementing NFSv4 in the Enterprise: Planning and Migration Strategies
    Front cover. Implementing NFSv4 in the Enterprise: Planning and Migration Strategies. Planning and implementation examples for AFS and DFS migrations; NFSv3 to NFSv4 migration examples; NFSv4 updates in AIX 5L Version 5.3 with 5300-03 Recommended Maintenance Package. Gene Curylo, Richard Joltes, Trishali Nayar, Bob Oesterlin, Aniket Patel. ibm.com/redbooks, International Technical Support Organization, December 2005, SG24-6657-00.

    Note: Before using this information and the product it supports, read the information in “Notices” on page xi.

    First Edition (December 2005). This edition applies to Version 5, Release 3, of IBM AIX 5L (product number 5765-G03). © Copyright International Business Machines Corporation 2005. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

    Contents
    Notices .... xi
    Trademarks .... xii
    Preface .... xiii
      The team that wrote this redbook .... xiv
      Acknowledgments .... xv
      Become a published author .... xvi
      Comments welcome .... xvii
    Part 1. Introduction .... 1
    Chapter 1. Introduction .... 3
      1.1 Overview of enterprise file systems .... 4
      1.2 The migration landscape today .... 5
      1.3 Strategic and business context .... 6
      1.4 Why NFSv4? .... 7
      1.5 The rest of this book .... 8
    Chapter 2. Shared file system concepts and history .... 11
      2.1 Characteristics of enterprise file systems .... 12
        2.1.1 Replication .... 12
        2.1.2 Migration .... 12
        2.1.3 Federated namespace .... 13
        2.1.4 Caching .... 13
      2.2 Enterprise file system technologies .... 13
        2.2.1 Sun Network File System (NFS) .... 13
        2.2.2 Andrew File System (AFS) ....
  • System Administration Guide: Devices and File Systems
    System Administration Guide: Devices and File Systems. Part No: 817–5093–24, January 2012. Copyright © 2004, 2012, Oracle and/or its affiliates. All rights reserved.

    This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

    If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable: U.S. GOVERNMENT RIGHTS. Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.
  • IBM Spectrum Scale 5.1.0: Concepts, Planning, and Installation Guide Summary of Changes
    IBM Spectrum Scale Version 5.1.0: Concepts, Planning, and Installation Guide. IBM SC28-3161-02.

    Note: Before using this information and the product it supports, read the information in “Notices” on page 551. This edition applies to version 5 release 1 modification 0 of the following products, and to all subsequent releases and modifications until otherwise indicated in new editions:
    • IBM Spectrum Scale Data Management Edition ordered through Passport Advantage® (product number 5737-F34)
    • IBM Spectrum Scale Data Access Edition ordered through Passport Advantage (product number 5737-I39)
    • IBM Spectrum Scale Erasure Code Edition ordered through Passport Advantage (product number 5737-J34)
    • IBM Spectrum Scale Data Management Edition ordered through AAS (product numbers 5641-DM1, DM3, DM5)
    • IBM Spectrum Scale Data Access Edition ordered through AAS (product numbers 5641-DA1, DA3, DA5)
    • IBM Spectrum Scale Data Management Edition for IBM® ESS (product number 5765-DME)
    • IBM Spectrum Scale Data Access Edition for IBM ESS (product number 5765-DAE)

    Significant changes or additions to the text and illustrations are indicated by a vertical line (|) to the left of the change. IBM welcomes your comments; see the topic “How to send your comments” on page xxxii. When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you. © Copyright International Business Machines Corporation 2015, 2021. US Government Users Restricted Rights – Use,
  • Dell EMC Avamar Administration Guide
    Dell EMC Avamar Administration Guide, 19.2. Dell Inc., June 2020, Rev. 04.

    Notes, cautions, and warnings:
    NOTE: A NOTE indicates important information that helps you make better use of your product.
    CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
    WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

    © 2001 - 2020 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners.

    Contents
    Figures .... 12
    Tables .... 13
    Preface .... 20
    Chapter 1: Introduction .... 23
      Avamar system overview .... 23
        Avamar server .... 23
        Avamar
  • A File System Component Compiler
    A File System Component Compiler. Erez Zadok ([email protected]), Computer Science Department, Columbia University, New York, NY 10027. CUCS-033-97, April 28, 1997. THESIS PROPOSAL.

    File system development is a difficult and time consuming task, the results of which are rarely portable across operating systems. Several proposals to improve the vnode interface to allow for more flexible file system design and implementation have been made in recent years, but none is used in practice because they require costly fundamental changes to kernel interfaces, only operating system vendors can make those changes, they are still non-portable, they tend to degrade performance, and they do not appear to provide an immediate return on such an investment.

    This proposal advocates a language for describing file systems, called FiST. The associated translator can generate portable C code — kernel resident or not — that implements the described file system. No kernel source code is needed and no existing vnode interface must change. The performance of the file systems automatically generated by FiST can be within a few percent of comparable hand-written file systems. The main benefits to automation are that development and maintenance costs are greatly reduced, and that it becomes practical to prototype, implement, test, debug, and compose a vastly larger set of such file systems with different properties. The proposed thesis will describe the language and its translator, use it to implement a few file systems on more than one platform, and evaluate the performance of the automatically generated code.

    Contents
    1 Introduction .... 1
      1.1 The Problem .... 2
      1.2 My Solution .... 2
      1.3 Advantages of My Solution ....
  • AIX 5L Differences Guide Version 5.1 Edition
    AIX 5L Differences Guide Version 5.1 Edition. AIX 5L - The industrial strength UNIX operating system. Intel Itanium-based and IBM POWER-based platform support. Version 5.0 and Version 5.1 enhancements explained. René Akeret, Anke Hollanders, Stuart Lane, Antony Peterson. ibm.com/redbooks, SG24-5765-01, International Technical Support Organization, June 2001.

    Take Note! Before using this information and the product it supports, be sure to read the general information in Appendix B, “Special notices” on page 473.

    Second Edition (June 2001). This edition applies to AIX 5L for POWER Version 5.1, program number 5765-E61, and for Itanium-based systems Version 5.1, program number 5799-EAR, available as a PRPQ. This document was updated on September 28, 2001.

    Comments may be addressed to: IBM Corporation, International Technical Support Organization, Dept. JN9B Building 003 Internal Zip 2834, 11400 Burnet Road, Austin, Texas 78758-3493. When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

    © Copyright International Business Machines Corporation 2001. All rights reserved. Note to U.S. Government Users – Documentation related to restricted rights – Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.

    Contents
    Figures .... xiii
    Tables .... xix
    Preface .... xxi
      The team that wrote this redbook .... xxi
      Comments welcome .... xxvi
    Chapter 1. AIX 5L introduction and overview .... 1
      1.1 AIX 5L kernel and application development differences summary ....
  • FS-Cache Aware CIFS
    FS-Cache aware CIFS. SUSE Labs Conference 2010, Jul 14, 2010. Suresh Jayaraman, SUSE Labs.

    Overview
    • Why network filesystem caching?
    • Some use-cases/scenarios
    • Caching data flow
    • FS-Cache
    • Cache back-ends
    • CIFS current implementation
    • Q & A

    Why network filesystem caching?
    • Network-based filesystems over a slow network
      - Re-reading over a loaded network/server can be slow
      - Local caching can help: reduce network traffic and network latency, reduce server load, improve performance and scalability
    • First step towards making disconnected operations a reality!
      - Still a lot needed from the network filesystem
    • Not for every workload or I/O pattern
      - Trade-offs

    Some use-cases/scenarios
    • Render farms in the entertainment industry: used to distribute textures to individual rendering units
    • Read-only multimedia workloads
    • Accelerate distributed web servers: web server cluster nodes serve content from the cache
    • /usr distributed by a network file system: avoid spamming servers when there is a power outage
    • Caching server with SSDs re-exporting netfs data
    • Persistent cache remains across reboots

    Caching data flow (source: http://www.linux-mag.com)

    FS-Cache (1)
    • Terminology: FS-Cache, CacheFS
    • Caching facility for network filesystems and slow media
      - Mediates between the netfs and the cache back-end
      - Allows persistent local storage for caching data and metadata
      - Cache is transparent
      - Serves out pages, does not load the file entirely
      - Hides cache I/O errors from the netfs
      - Allows the netfs to decide on the index hierarchy
      - Index hierarchy used to locate file objects/discard a subset of cached files
    • Written by David Howells, upstream since 2.6.30
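The FS-Cache notes above stress that the cache mediates between the network filesystem and a cache back-end and serves out pages rather than whole files. The sketch below models that page-granular, read-through behavior in plain Python; it is not the Linux FS-Cache kernel API, and the page size, class name, and fetch callback are assumptions made for illustration only.

```python
from typing import Callable, Dict, Tuple

PAGE_SIZE = 4096  # assumed page size for this illustration


class ToyPageCache:
    """Caches fixed-size pages of remote files, loosely mimicking the idea
    that a cache can serve out pages instead of loading whole files."""

    def __init__(self) -> None:
        # (file_id, page_number) -> page bytes
        self._pages: Dict[Tuple[str, int], bytes] = {}

    def read(self, file_id: str, offset: int, length: int,
             fetch_page: Callable[[str, int], bytes]) -> bytes:
        """Read bytes, pulling missing pages via fetch_page(file_id, page_no)."""
        out = bytearray()
        first = offset // PAGE_SIZE
        last = (offset + length - 1) // PAGE_SIZE
        for page_no in range(first, last + 1):
            key = (file_id, page_no)
            if key not in self._pages:          # cache miss: ask the "netfs"
                self._pages[key] = fetch_page(file_id, page_no)
            out += self._pages[key]             # cache hit or freshly fetched
        start = offset - first * PAGE_SIZE
        return bytes(out[start:start + length])


# Usage with a fake fetcher standing in for the network filesystem:
# cache = ToyPageCache()
# data = cache.read("file-42", 8000, 500, lambda f, n: b"x" * PAGE_SIZE)
```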
  • Oracle Solaris Administration: Network Services
    Oracle® Solaris Administration: Network Services. Part No: 821–1454–10, November 2011. Copyright © 2002, 2011, Oracle and/or its affiliates. All rights reserved.

    This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

    If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable: U.S. GOVERNMENT RIGHTS. Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.
  • FiST: A System for Stackable File-System Code Generation, Erez Zadok
    FiST: A System for Stackable File-System Code Generation. Erez Zadok. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences, Columbia University, May 2001. © 2001 Erez Zadok. All Rights Reserved.

    ABSTRACT

    File systems often need to evolve and require many changes to support new features. Traditional file-system development is difficult because most of the work is done in the kernel—a hostile development environment where progress is slow, debugging is difficult, and simple mistakes can crash systems. Kernel work also requires deep understanding of system internals, resulting in developers spending a lot of time becoming familiar with the system’s details. Furthermore, any file system written for one system requires significant effort to port to another system.

    Stackable file systems promise to ease the development of file systems by offering a mechanism for incremental development building on existing file systems. Unfortunately, existing stacking methods often require writing complex low-level kernel code that is specific to a single operating system platform and also difficult to port.

    We propose a new language, FiST, to describe stackable file systems. FiST uses operations common to file-system interfaces and familiar to user-level developers: creating a directory, reading a file, removing a file, listing the contents of a directory, etc. From a single description, FiST’s compiler produces file-system modules for multiple platforms. FiST does that with the assistance of platform-specific stackable templates. The templates handle many of the internal details of operating systems, and free developers from dealing with these internals.
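The FiST abstract above builds on stackable file systems, where new behavior is layered incrementally on top of an existing file system and only the changed operations are overridden. The following Python sketch illustrates that stacking idea at user level; it is a conceptual toy, not FiST output or generated kernel code, and all class and method names are invented for the example.

```python
class LowerFS:
    """Stand-in for an existing (lower) file system: a dict of path -> bytes."""

    def __init__(self) -> None:
        self.files = {}

    def write(self, path: str, data: bytes) -> None:
        self.files[path] = data

    def read(self, path: str) -> bytes:
        return self.files[path]


class PassThroughLayer:
    """Stackable layer: by default every operation is forwarded to the layer below."""

    def __init__(self, lower) -> None:
        self.lower = lower

    def write(self, path: str, data: bytes) -> None:
        self.lower.write(path, data)

    def read(self, path: str) -> bytes:
        return self.lower.read(path)


class ReverseLayer(PassThroughLayer):
    """Incremental development: override only the operations whose behavior changes."""

    def write(self, path: str, data: bytes) -> None:
        self.lower.write(path, data[::-1])    # store the bytes reversed

    def read(self, path: str) -> bytes:
        return self.lower.read(path)[::-1]    # undo the transformation on the way up


# Layers compose like a stack over the existing file system:
# fs = ReverseLayer(LowerFS())
# fs.write("/note.txt", b"hello")
# assert fs.read("/note.txt") == b"hello"
```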
  • File System Virtualization and Service for Grid Data Management
    FILE SYSTEM VIRTUALIZATION AND SERVICE FOR GRID DATA MANAGEMENT. By Ming Zhao. A dissertation presented to the Graduate School of the University of Florida in partial fulfillment of the requirements for the degree of Doctor of Philosophy. University of Florida, 2008. © 2008 Ming Zhao. To my wife and my parents.

    ACKNOWLEDGMENTS

    First and foremost, I would like to express my deepest gratitude to my advisor, Prof. Renato Figueiredo, for his excellent guidance throughout my Ph.D. study. I am greatly indebted to him for providing me such an exciting research opportunity, constantly supporting me on things no matter big or small, and patiently giving me time and helping me grow. I would also like to gratefully and sincerely thank Prof. José Fortes for his exceptional leadership of the ACIS lab and his invaluable advice in every aspect of my academic pursuit. I have learned enormously from them about research, teaching, and advising, and they will always be examples for me to follow in my future career.

    I am very grateful to the other members of my supervisory committee, Prof. Sanjay Ranka and Prof. Tao Li, for taking time out of their busy schedules to review my work. Their constructive criticism and comments are very helpful to improving my work and are highly appreciated. I also wish to thank Prof. Alan George and Prof. Oscar Boykin for their advice and help.

    My heartfelt thanks and appreciation are extended to my current and former fellow ACIS lab members. The lab is where I have obtained solid support of my research, gained precious knowledge and experience, and grown from a student to a professional.
  • (12) United States Patent (10) Patent No.: US 8,458,239 B2, Ananthanarayanan et al.
    USOO8458239B2
    (12) United States Patent    (10) Patent No.: US 8,458,239 B2    (45) Date of Patent: Jun. 4, 2013
    Ananthanarayanan et al.

    (54) DIRECTORY TRAVERSAL IN A SCALABLE MULTI-NODE FILE SYSTEM CACHE FOR A REMOTE CLUSTER FILE SYSTEM

    (75) Inventors: Rajagopol Ananthanarayanan, San Jose, CA (US); Roger L. Haskin, Morgan Hill, CA (US); Dean Hildebrand, San Jose, CA (US); Manoj P. Naik, San Jose, CA (US); Frank B. Schmuck, Campbell, CA (US); Renu Tewari, San Jose, CA (US)

    (73) Assignee: International Business Machines Corporation, Armonk, NY (US)

    (*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 165 days.

    Referenced U.S. patent documents (excerpt):
    6,012,085 A 1/2000 Yohe et al.
    6,023,706 A 2/2000 Schmuck et al.
    6,122,629 A 9/2000 Walker et al.
    6,490,615 B1 12/2002 Dias et al.
    6,535,970 B1 3/2003 Bills et al.
    6,816,891 B1 11/2004 Vahalia et al.
    6,965,972 B2 11/2005 Nanda et al.
    7,043,478 B2 5/2006 Martin et al.

    OTHER PUBLICATIONS: Batsakis et al., “NFS-CD: Write-Enabled Cooperative Caching in NFS,” IEEE Transactions on Parallel and Distributed Systems, Mar.