Nache: Design and Implementation of a Caching Proxy for NFSv4


Ajay Gulati (Rice University), Manoj Naik (IBM Almaden Research Center), Renu Tewari (IBM Almaden Research Center)

Abstract

In this paper, we present Nache, a caching proxy for NFSv4 that enables a consistent cache of a remote NFS server to be maintained and shared across multiple local NFS clients. Nache leverages the features of NFSv4 to improve the performance of file accesses in a wide-area distributed setting by bringing the data closer to the client. Conceptually, Nache acts as an NFS server to the local clients and as an NFS client to the remote server. To provide cache consistency, Nache exploits the read and write delegations support in NFSv4. Nache enables the cache and the delegation to be shared among a set of local clients, thereby reducing conflicts and improving performance. We have implemented Nache in the Linux 2.6 kernel. Using Filebench workloads and other benchmarks, we present an evaluation of Nache and show that it can reduce the NFS operations at the server by 10-50%.

1 Introduction

Most medium to large enterprises store their unstructured data in filesystems spread across multiple file servers. With ever increasing network bandwidths, enterprises are moving toward distributed operations where sharing presentations and documents across office locations, multi-site collaborations, and joint product development have become increasingly common. This requires sharing data in a uniform, secure, and consistent manner across the global enterprise with reasonably good performance.

Data and file sharing has long been achieved through traditional file transfer mechanisms such as FTP or distributed file sharing protocols such as NFS and CIFS. While the former are mostly ad hoc, the latter tend to be "chatty", with multiple round trips across the network for every access. Both NFS and CIFS were originally designed for a local area network with low-latency, high-bandwidth access between servers and clients, and as such are not optimized for access over a wide area network. Other filesystem architectures such as AFS [17] and DCE/DFS [20] have attempted to solve the WAN file sharing problem through a distributed architecture that provides a shared namespace by uniting disparate file servers at remote locations into a single logical filesystem. However, these technologies, with their proprietary clients and protocols, incur substantial deployment expense and have not been widely adopted for enterprise-wide file sharing. In more controlled environments, data sharing can also be facilitated by a clustered filesystem such as GPFS [31] or Lustre [1]. While these are designed for high performance and strong consistency, they are either expensive, or difficult to deploy and administer, or both.

Recently, a new market has emerged primarily to serve the file access requirements of enterprises with outsourced partnerships, where knowledge workers are expected to interact across a number of locations over a WAN. Wide Area File Services (WAFS) is fast gaining momentum and recognition, with leading storage and networking vendors integrating WAFS solutions into new product offerings [5][3]. File access is provided through standard NFS or CIFS protocols with no modifications required to the clients or the server. To compensate for the high latency, low bandwidth, and lossy links of WAN accesses, WAFS offerings rely on custom devices at both the client and server, with a custom protocol optimized for WAN access in between.

One approach often used to reduce WAN latency is to cache the data closer to the client. Another is to use a WAN-friendly access protocol. Version 4 of the NFS protocol added a number of features to make it more suitable for WAN access [29]. These include: batching multiple operations in a single RPC call to the server, enabling read and write file delegations to reduce cache consistency checks, and support for redirecting clients to other, possibly closer, servers.

In this paper, we discuss the design and implementation of a caching file server proxy called Nache. Nache leverages the features of NFSv4 to improve the performance of file serving in a wide-area distributed setting. Basically, the Nache proxy sits between a local NFS client and a remote NFS server, bringing the remote data closer to the client. Nache acts as an NFS server to the local client and as an NFS client to the remote server. To provide cache consistency, Nache exploits the read and write delegations support in NFSv4. Nache is ideally suited for environments where data is commonly shared across multiple clients. It provides a consistent view of the data by allowing multiple clients to share a delegation, thereby removing the overhead of a recall on a conflicting access. Sharing files across clients is common for read-only data front-ended by web servers, and is becoming widespread for presentations, videos, documents, and collaborative projects across a distributed enterprise. Nache is beneficial even when the degree of sharing is small, as it reduces both the response time of a WAN access and the overhead of recalls.
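To make the proxy's open path concrete, the following minimal sketch shows how an open from a local client might be handled: if the proxy already holds a delegation that covers the requested access, the open is satisfied from the local cache with no round trip to the remote server; otherwise the proxy, acting as an NFS client, opens the file at the remote server. This is an illustrative sketch only, not the Nache source; the types and helpers (cached_file, lookup_cached, remote_open) are hypothetical.

```c
/*
 * Illustrative sketch of a caching proxy's open path as described
 * above: a delegation held by the proxy is shared by all local
 * clients, so repeated opens need no round trip to the remote server.
 * All types and helpers are hypothetical, not the Nache or Linux NFS
 * sources.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct cached_file {
    char path[256];
    bool read_deleg;    /* proxy holds an NFSv4 read delegation   */
    bool write_deleg;   /* proxy holds an NFSv4 write delegation  */
    int  local_opens;   /* local clients sharing this cache entry */
};

/* One-entry "cache" so the sketch runs end to end. */
static struct cached_file entry;
static bool entry_valid;

static struct cached_file *lookup_cached(const char *path)
{
    return (entry_valid && strcmp(entry.path, path) == 0) ? &entry : NULL;
}

/* Stand-in for the proxy's NFS-client side: open the file at the remote
 * server, which may hand back a delegation along with the first open. */
static struct cached_file *remote_open(const char *path, bool for_write)
{
    snprintf(entry.path, sizeof(entry.path), "%s", path);
    entry.read_deleg  = !for_write;  /* assume the server grants one */
    entry.write_deleg = for_write;
    entry.local_opens = 1;
    entry_valid = true;
    return &entry;
}

/* Handle an OPEN from a local client: serve it locally when a held
 * delegation covers the requested access, else go to the server. */
static struct cached_file *proxy_open(const char *path, bool for_write)
{
    struct cached_file *cf = lookup_cached(path);

    if (cf && (for_write ? cf->write_deleg
                         : cf->read_deleg || cf->write_deleg)) {
        cf->local_opens++;  /* shared delegation: no server traffic */
        return cf;
    }
    return remote_open(path, for_write);
}

int main(void)
{
    proxy_open("/export/talk.ppt", false);  /* miss: forwarded to server  */
    proxy_open("/export/talk.ppt", false);  /* hit: served from the cache */
    printf("local opens sharing one delegation: %d\n", entry.local_opens);
    return 0;
}
```

The point of the sharing in proxy_open is that a single delegation held at the proxy stands in for many per-client delegations, so a conflicting access triggers at most one recall.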
In this paper, we highlight our three main contributions. First, we explore the performance implications of read and write open delegations in NFSv4. Second, we detail the implementation of an NFSv4 proxy cache architecture in the Linux 2.6 kernel. Finally, we discuss how delegations are leveraged to provide consistent caching in the proxy. Using our testbed infrastructure, we demonstrate the performance benefits of Nache using the Filebench benchmark and other workloads. For these workloads, Nache is shown to reduce the number of NFS operations seen at the server by 10-50%.

The rest of the paper is organized as follows. In the next section we provide a brief background of consistency support in various distributed filesystems. Section 3 analyzes the delegation overhead and benefits in NFSv4. Section 4 provides an overview of the Nache architecture. Its implementation is detailed in section 5 and evaluated in section 6 using different workloads. We discuss related work in section 7. Finally, section 8 presents our conclusions and describes future work.

2 Background: Cache Consistency

An important consideration in the design of any caching solution is cache consistency. Consistency models in caching systems have been studied in depth in various large distributed systems and databases [9]. In this section, we review the consistency characteristics of some widely deployed distributed filesystems.

• Network File System (NFS): Since perfect coherency among NFS clients is expensive to achieve, the NFS protocol falls back to a weaker model known as close-to-open consistency [12]. In this model, the NFS client checks for file existence and permissions on every open by sending a GETATTR or ACCESS operation to the server. If the file attributes are the same as those just after the previous close, the client assumes its data cache is still valid; otherwise, the cache is purged. On every close, the client writes back any pending changes to the file so that the changes are visible on the next open (see the sketch after this list). NFSv3 introduced weak cache consistency [28], which provides a way, albeit imperfect, of checking a file's attributes before and after an operation to allow a client to identify changes that could have been made by other clients. NFS, however, never implemented distributed cache coherency or concurrent write management [29] to differentiate between updates from one client and those from multiple clients. Spritely NFS [35] and NQNFS [23] added stronger consistency semantics to NFS through server callbacks and leases, but these never made it into the official protocol.

• Andrew File System (AFS): Compared to NFS, AFS is better suited to WAN accesses as it relies heavily on client-side caching for performance [18]. An AFS client does whole-file caching (or caches large chunks in later versions). Unlike an NFS client, which checks with the server on a file open, in AFS the server establishes a callback to notify the client of updates that may happen to the file. Changes made to a file become visible at the server when the client closes the file. When concurrent closes by multiple clients conflict, the file at the server reflects the data of the last client's close. DCE/DFS improves upon AFS caching by letting the client specify the type of file access (read, write) so that the callback is issued only when there is an open-mode conflict.

• Common Internet File System (CIFS): CIFS enables stronger cache consistency [34] by using opportunistic locks (OpLocks) as a mechanism for cache coherence. The CIFS protocol allows a client to request three types of OpLocks at the time of file open: exclusive, batch, and level II. An exclusive oplock enables the client to perform all file operations on the file without any communication with the server until the file is closed. If another client requests access to the same file, the oplock is revoked and the client is required to flush all modified data back to the server. A batch oplock permits the client to hold on to the oplock even after a close if it plans to reopen the file very soon. Level II oplocks are shared and are used for read-only data caching, where multiple clients can simultaneously read the locally cached version of the file.
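As referenced in the NFS item above, close-to-open consistency reduces to two hooks: an attribute revalidation on open and a write-back (plus attribute snapshot) on close. The following sketch illustrates that logic; the attribute set, helper routines, and in-memory "server" are simplified assumptions of ours, not the actual Linux NFS client code, which keys its validation on a richer set of attributes.

```c
/*
 * Minimal sketch of NFS close-to-open consistency. The attribute set,
 * helpers, and "server" below are simplified stand-ins, not the Linux
 * NFS client implementation.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

struct nfs_attrs {              /* subset of what GETATTR returns */
    time_t mtime;
    time_t ctime;
    long   size;
};

struct nfs_file {
    struct nfs_attrs at_close;  /* snapshot taken at the last close */
    bool cache_valid;
};

/* Stand-in for the server's view of the file's attributes. */
static struct nfs_attrs server_attrs;

static struct nfs_attrs server_getattr(void) { return server_attrs; }
static void purge_cached_pages(struct nfs_file *f) { (void)f; }
static void writeback_dirty_pages(struct nfs_file *f) { (void)f; }

/* On open: revalidate with the server (GETATTR or ACCESS). If the file
 * is unchanged since our last close, keep the cached data; else purge. */
static void nfs_open_revalidate(struct nfs_file *f)
{
    struct nfs_attrs cur = server_getattr();

    f->cache_valid = cur.mtime == f->at_close.mtime &&
                     cur.ctime == f->at_close.ctime &&
                     cur.size  == f->at_close.size;
    if (!f->cache_valid)
        purge_cached_pages(f);  /* another client changed the file */
}

/* On close: flush pending writes so they are visible to the next open,
 * then remember the resulting attributes for the next revalidation. */
static void nfs_close_flush(struct nfs_file *f)
{
    writeback_dirty_pages(f);
    f->at_close = server_getattr();
}

int main(void)
{
    struct nfs_file f = { {0, 0, 0}, false };

    nfs_close_flush(&f);        /* close: flush and snapshot attributes */
    server_attrs.mtime = 1;     /* another client modifies the file     */
    nfs_open_revalidate(&f);    /* next open: snapshot differs, purge   */
    printf("cache valid after reopen: %d\n", f.cache_valid);
    return 0;
}
```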
3 Delegations and Caching in NFSv4

The design of version 4 of the NFS protocol [29] includes a number of features to improve performance in a wide area network with high-latency, low-bandwidth links.

[Figure: comparison of server operations (in '000s) with no delegations vs. with read delegations.]
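Before quantifying delegation costs and benefits, it helps to fix the mechanics: an NFSv4 server may grant a read or write delegation at open time and must recall it via a callback when a conflicting open arrives. The sketch below is a simplified model of that grant/recall decision; it is our own illustration, not protocol or server source code. In particular, it tracks a single delegation holder per file, whereas a real server can grant read delegations to many clients and applies more conservative grant heuristics.

```c
/*
 * Simplified model of the NFSv4 delegation grant/recall decision at
 * the server. Our own illustration, not the protocol specification.
 */
#include <stdbool.h>
#include <stdio.h>

enum deleg_type { DELEG_NONE, DELEG_READ, DELEG_WRITE };

struct file_state {
    enum deleg_type deleg;  /* delegation currently outstanding */
    int holder;             /* client id holding it, if any     */
};

/* Stand-in for the CB_RECALL callback to the delegation holder, after
 * which the holder flushes its state and returns the delegation. */
static void cb_recall(struct file_state *f)
{
    printf("recall delegation from client %d\n", f->holder);
    f->deleg = DELEG_NONE;
}

/* Handle an OPEN: recall on a conflicting access, then (simplified)
 * grant a delegation matching the access mode if none is outstanding. */
static void server_open(struct file_state *f, int client, bool for_write)
{
    /* A write delegation conflicts with any open by another client;
     * a read delegation conflicts only with an open for write. */
    if (f->deleg != DELEG_NONE && f->holder != client &&
        (for_write || f->deleg == DELEG_WRITE))
        cb_recall(f);

    if (f->deleg == DELEG_NONE) {
        f->deleg  = for_write ? DELEG_WRITE : DELEG_READ;
        f->holder = client;
    }
}

int main(void)
{
    struct file_state f = { DELEG_NONE, -1 };

    server_open(&f, 1, false);  /* client 1 is granted a read delegation */
    server_open(&f, 2, false);  /* read after read: no conflict, no recall */
    server_open(&f, 3, true);   /* write open: recall client 1's grant   */
    return 0;
}
```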
Recommended publications
  • A Versatile Persistent Caching Framework for File Systems, Gopalan Sivathanu and Erez Zadok (Stony Brook University)
  • Implementing NFSv4 in the Enterprise: Planning and Migration Strategies
  • System Administration Guide: Devices and File Systems
  • IBM Spectrum Scale 5.1.0: Concepts, Planning, and Installation Guide
  • Dell EMC Avamar Administration Guide
  • A File System Component Compiler
  • IBM Spectrum Scale 5.0.3: Concepts, Planning, and Installation Guide
  • AIX 5L Differences Guide Version 5.1 Edition
  • FS-Cache Aware CIFS
  • Oracle Solaris Administration: Network Services
  • FiST: A System for Stackable File-System Code Generation, Erez Zadok
  • File System Virtualization and Service for Grid Data Management