A Model to Manage Shared Mutable Data in a Distributed Environment

THE SEA OF STUFF: A MODEL TO MANAGE SHARED MUTABLE DATA IN A DISTRIBUTED ENVIRONMENT

Simone Ivan Conte

A Thesis Submitted for the Degree of PhD at the University of St Andrews
2018

Full metadata for this thesis is available in St Andrews Research Repository at:
http://research-repository.st-andrews.ac.uk/

Please use this identifier to cite or link to this thesis:
http://hdl.handle.net/10023/16827

This item is protected by original copyright.
This item is licensed under a Creative Commons Licence:
https://creativecommons.org/licenses/by-nc-nd/4.0/

The Sea of Stuff: a Model to Manage Shared Mutable Data in a Distributed Environment

Simone Ivan Conte

This thesis is submitted in partial fulfilment for the degree of Doctor of Philosophy (PhD) at the University of St Andrews

August 2018

Abstract

Managing data is one of the main challenges in distributed systems and in computer science in general. Data is created, shared, and managed across heterogeneous distributed systems of users, services, applications, and devices without a clear and comprehensive data model. This technological fragmentation and lack of a common data model result in a poor understanding of what data is, how it evolves over time, how it should be managed in a distributed system, and how it should be protected and shared. From a user perspective, for example, backing up data over multiple devices is a hard and error-prone process, and synchronising data with a cloud storage service can result in conflicts and unpredictable behaviours.

This thesis identifies three challenges in data management: (1) how to extend the current data abstractions so that content, for example, is accessible irrespective of its location, versionable, and easy to distribute; (2) how to enable transparent data storage relative to locations, users, applications, and services; and (3) how to allow data owners to protect data against malicious users and automatically control content over a distributed system. These challenges are studied in detail in relation to the current state of the art and addressed throughout the rest of the thesis.

The artefact of this work is the Sea of Stuff (SOS), a generic data model of immutable, self-describing, location-independent entities that allows the construction of a distributed system in which data is accessible and organised irrespective of its location, easy to protect, and can be automatically managed according to a set of user-defined rules. The evaluation of this thesis demonstrates the viability of the SOS model for managing data in a distributed system and for using user-defined rules to automatically manage data across multiple nodes.

The code for this work can be found online at the following URL: https://github.com/sea-of-stuff (GNU GPL v3).

Declaration

Candidate's Declaration

I, Simone Ivan Conte, do hereby certify that this thesis, submitted for the degree of PhD, which is approximately 66,400 words in length, has been written by me, and that it is the record of work carried out by me, or principally by myself in collaboration with others as acknowledged, and that it has not been submitted in any previous application for any degree. I was admitted as a research student at the University of St Andrews in September 2014. I received funding from an organisation or institution and have acknowledged the funder(s) in the full text of my thesis.
Date                              Signature of candidate

Supervisor's Declaration

I hereby certify that the candidate has fulfilled the conditions of the Resolution and Regulations appropriate for the degree of PhD in the University of St Andrews and that the candidate is qualified to submit this thesis in application for that degree.

Date                              Signature of supervisor

Permission for Electronic Publication

In submitting this thesis to the University of St Andrews we understand that we are giving permission for it to be made available for use in accordance with the regulations of the University Library for the time being in force, subject to any copyright vested in the work not being affected thereby. We also understand, unless exempt by an award of an embargo as requested below, that the title and the abstract will be published, and that a copy of the work may be made and supplied to any bona fide library or research worker, that this thesis will be electronically accessible for personal or research use and that the library has the right to migrate this thesis into new electronic forms as required to ensure continued access to the thesis.

I, Simone Ivan Conte, confirm that my thesis does not contain any third-party material that requires copyright clearance.

The following is an agreed request by candidate and supervisor regarding the publication of this thesis:

• No embargo on print copy.
• No embargo on electronic copy.

Date                              Signature of candidate

Date                              Signature of supervisor

Underpinning Research Data or Digital Outputs

Candidate's declaration

I, Simone Ivan Conte, hereby certify that no requirements to deposit original research data or digital outputs apply to this thesis and that, where appropriate, secondary data used have been referenced in the full text of my thesis.

Date                              Signature of candidate

Acknowledgments

I would like to thank my supervisors, Alan Dearle and Graham Kirby, for their guidance and endless patience throughout my doctoral studies. Thanks go to Adrian O'Lenskie and Ian Paterson, from Adobe Systems Inc., for their precious advice and support.

I would like to acknowledge the School of Computer Science, which has made my stay in St Andrews over the last eight years academically enjoyable and has given me the opportunity to meet people who have been a true inspiration to me. Thanks go to Stuart Norcross and the Fixit team, who have helped me with the provision and management of the testbed used for the experiments. Thanks to my office mates Masih, Tom, Ward, and Ryo, who have provided me with challenging, interesting, and fun discussions on a daily basis.

I would like to thank my aunt Giusy for inspiring me, when I was ten, to pursue a career in science. I am forever grateful to my parents and my brother for their immense and invaluable love and support, which helped me arrive where I am today. Finally, the biggest thanks goes to Giulia, who makes me smile every day.

Funding

This work was supported by Adobe Systems, Inc. and EPSRC [grant number EP/M506631/1].

Contents

Abstract
Declaration
Permission for Electronic Publication
Underpinning Research Data or Digital Outputs
Acknowledgments
1 Introduction
  1.1 Introduction
  1.2 The Three Challenges
    1.2.1 Limitation of the Current Data Storage Abstractions
    1.2.2 Transparent Data Storage
    1.2.3 Data Ownership, Protection and Control
  1.3 Hypothesis
  1.4 Thesis Contributions
  1.5 Thesis Structure

2 Background
  2.1 Data Storage Concepts
    2.1.1 Data
    2.1.2 Location and Naming
    2.1.3 Metadata
    2.1.4 Caching
    2.1.5 CAP Theorem
    2.1.6 Replication, Erasure Coding, and Resiliency
    2.1.7 Scalability
    2.1.8 Security
  2.2 Data Management Systems
    2.2.1 File Systems
    2.2.2 Database Systems
    2.2.3 Versioning in Storage Systems
    2.2.4 Networked File Systems
    2.2.5 Cloud Storage
    2.2.6 Object Storage

3 Literature Review
  3.1 File Systems
    3.1.1 Extended Attributes Support
    3.1.2 Tagged Files
  3.2 Networked File Systems
    3.2.1 The Hadoop File System and Google File System
    3.2.2 GlusterFS
  3.3 Versioning in Storage Systems
    3.3.1 Manual Data Versioning
    3.3.2 Versioning in Backup Applications
    3.3.3 Version Control Systems
  3.4 Cloud Storage
    3.4.1 Infrastructure as a Service Storage
    3.4.2 Software as a Service Storage
    3.4.3 Multi-Cloud Storage
  3.5 P2P
    3.5.1 Overlay Networks
    3.5.2 P2P Storage Systems
  3.6 Context-Aware Storage
    3.6.1 The Semantic File System
    3.6.2 The quFile
  3.7 Conclusions

4 Design Requirements
  4.1 End-User Requirements
  4.2 Model Requirements
  4.3 Architecture Requirements

5 The Sea of Stuff
  5.1 The Sea of Stuff Model
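As a rough illustration of the abstract's central idea of an immutable, self-describing, location-independent entity, the minimal Java sketch below shows an entity whose identifier is the cryptographic hash of its own content, so any node can name, retrieve, and verify it without knowing where it is stored. This is an assumption-laden sketch for illustration only, not code from the SOS repository; the class and method names are invented here.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.HexFormat;

    // Illustrative sketch only (not the SOS implementation): a content-addressed,
    // immutable entity. Its identifier is derived from its content, so the entity
    // can be referenced and verified irrespective of where it is stored.
    public final class ContentAddressedEntity {

        private final byte[] content; // payload; never modified after construction
        private final String guid;    // self-describing identifier: SHA-256 of the content

        public ContentAddressedEntity(byte[] content) {
            this.content = content.clone();
            this.guid = sha256Hex(this.content);
        }

        public String guid() { return guid; }

        public byte[] content() { return content.clone(); }

        private static String sha256Hex(byte[] data) {
            try {
                MessageDigest digest = MessageDigest.getInstance("SHA-256");
                return HexFormat.of().formatHex(digest.digest(data));
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException("SHA-256 is required but unavailable", e);
            }
        }

        public static void main(String[] args) {
            ContentAddressedEntity entity =
                    new ContentAddressedEntity("some data".getBytes(StandardCharsets.UTF_8));
            // The same bytes always produce the same identifier, on any node.
            System.out.println(entity.guid());
        }
    }

Because the identifier is a function of the content, such an entity is immutable by construction: any modification yields a different identifier, which is one simple way to support the location independence and versioning that the thesis argues for.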