Fault Tolerance Protection and RAID Technology for Networks: A Primer


51-30-25 DATA COMMUNICATIONS MANAGEMENT

FAULT TOLERANCE PROTECTION AND RAID TECHNOLOGY FOR NETWORKS: A PRIMER

Jeff Leventhal

10/97 Auerbach Publications © 1997 CRC Press LLC

PAYOFF IDEA
What can an organization do to increase server uptime and reduce, or even eliminate, network downtime? In many cases, a RAID system — a collection of disks in which data is copied onto multiple drives — is added to a network to speed access to mission-critical data and protect it in the event of a hard disk crash. This article discusses RAID technology and the use of fault-tolerance protection to preserve the availability and integrity of data stored on network servers.

INTRODUCTION

According to a recent Computer Reseller News/Gallup poll, most networks are down for at least 2 hours per week. The situation has not gotten any better for most companies in the past 3 years. If an organization has 1000 users per network, this equals one man-year per week of lost productivity. Even if a network is a fraction of that size, this number is imposing. For nearly a decade, many companies responded by deploying expensive fault-tolerant servers and peripherals.

Until the early 1990s, the fault-tolerant label was generally affixed to expensive and proprietary hardware systems for mainframes and minicomputers, where the losses associated with a system's downtime were costly. The advent of client/server computing created a market for similar products for local area networks (LANs), because the cost of network downtime can be just as economically devastating.

Network downtime can be caused by anything from a bad network card or a failed communication gateway to a tape drive failure or loss of a tape used for backing up critical data. The chances that a LAN may fail increase as more software applications, hardware components, and users are added to the network.

This article describes products that offer fault tolerance at the system hardware level and those that use fault-tolerant methods to protect the integrity of data stored on network servers. The discussion concludes with a set of guidelines to help communications managers select the right type of fault-tolerant solution for their network. The article also discusses RAID (redundant array of independent [formerly "inexpensive"] disks) technology, which is used to coordinate multiple disk drives to protect against loss of data availability if one of the drives fails.

DEFINING FAULT TOLERANCE

PC Week columnist Peter Coffee noted the proliferation of fault tolerance in vendor advertising and compiled a list of seven factors that define fault tolerance: safety, reliability, confidentiality, integrity, availability, trustworthiness, and correctness. Two of the factors — integrity and availability — can be defined as follows:

• Availability is expressed as the percentage of uptime and is related to reliability (which Coffee defined as mean time between failures), because infinite time between failures would mean 100% availability. But when the inevitable occurs and a failure does happen, how long does it take to get service back to normal?

• Integrity refers to keeping data intact (as opposed to keeping data secret). Fault tolerance may mean rigorous logging of transactions, or the capacity to reverse any action so that data can always be returned to a known good state.
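To make the link Coffee draws between reliability (mean time between failures) and availability concrete, the short sketch below computes availability from an assumed MTBF and mean time to repair (MTTR), and relates it to the "2 hours of downtime per week" figure cited above. The numbers are illustrative assumptions, not figures from the article.

```python
# Availability from mean time between failures (MTBF) and mean time to repair (MTTR):
#   availability = MTBF / (MTBF + MTTR)
# The figures below are illustrative assumptions, not measurements.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Return the fraction of time a system is expected to be up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

if __name__ == "__main__":
    # A server that fails once every 1000 hours and takes 2 hours to restore:
    print(f"Availability: {availability(1000.0, 2.0):.4%}")   # about 99.80%

    # The poll's figure of 2 hours of downtime in a 168-hour week:
    print(f"Weekly availability at 2 h downtime: {(168 - 2) / 168:.2%}")
```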
This article uses Coffee's descriptions of availability and integrity to distinguish between products that offer fault tolerance at the system hardware level and those that use fault-tolerant methods to protect the data stored on network servers.

Availability

The proliferation of hardware products with fault-tolerant features may be attributable to the ease with which a vendor can package two or more copies of a hardware component in a system. Network servers are an example of this phenomenon. Supercharged personal computers equipped with multiple power supplies, processors, and input/output (I/O) buses provide greater dependability in the event that one power supply, processor, or I/O controller fails. In this case, it is relatively easy to synchronize multiple copies of each component so that one mechanism takes over if its twin fails.

Cubix's ERS/FT II. For example, Cubix's ERS/FT II communications server has redundant, load-bearing, hot-swappable power supplies; multiple cooling fans; and failure alerts that notify the administrator audibly and through management software. The product's Intelligent Environmental Sensor tracks fluctuations in voltage and temperature and transmits an alert if conditions exceed a safe operating range. A hung or failed system will not adversely affect any of the other processors in the system.

Vinca Corp.'s StandbyServer. Vinca Corp. has taken this supercharged PC/network server one step further by offering machines that duplicate any server on the network; if one crashes, an organization simply moves all its users to its twin. Vinca's StandbyServer exemplifies this process, known as mirroring. However, mirroring has a significant drawback: if a software bug causes the primary server to crash, the same bug is likely to cause the secondary (mirrored) server to crash as well. (Mirroring is an iteration of RAID technology, which is explained in greater detail later in this article.)

Network Integrity, Inc.'s LANtegrity. An innovative twist on the mirrored server, without its bug-sensitivity drawback, is Network Integrity's LANtegrity product, in which hard disks are not directly mirrored. Instead, there is a many-to-one relationship, similar to a RAID system, which has the advantage of lower hardware cost. LANtegrity handles backup by maintaining current and previous versions of all files in its Intelligent Data Vault. The vault manages the most active files in disk storage and offloads the rest to the tape autoloader. Copies of changed files are made when LANtegrity polls the server every few minutes, and any file can be retrieved as needed. If the primary server fails, the system can be running smoothly again in about 15 seconds without rebooting. Because not all of the software is replicated, any bugs that caused the first server to crash should not affect the second server.

NetFRAME Servers. The fault tolerance built into NetFRAME's servers is attributable to its distributed, parallel software architecture. This fault tolerance allows peripherals to be added and changed without shutting down the server, allows for dynamic isolation and correction of I/O problems (which are prime downtime culprits), distributes the processing load between the I/O server and the central processing unit (CPU), and prevents driver failures from bringing down the CPU.
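Vendors such as Vinca implement server mirroring with proprietary hardware and software that the article does not detail. Purely as a toy sketch of the general standby-server idea, the following Python snippet writes every update to both a primary store and a standby twin and redirects reads when the primary fails; the class, keys, and failure simulation are hypothetical stand-ins, not any vendor's actual mechanism.

```python
# Toy sketch of server mirroring with failover (not any vendor's actual protocol).
# Two in-memory dictionaries stand in for a primary server and its standby twin.

class MirroredStore:
    def __init__(self):
        self.primary = {}
        self.standby = {}
        self.primary_alive = True

    def write(self, key, value):
        # Every write goes to both copies so the standby is always current.
        if self.primary_alive:
            self.primary[key] = value
        self.standby[key] = value

    def read(self, key):
        # Reads fall back to the standby if the primary has failed.
        source = self.primary if self.primary_alive else self.standby
        return source[key]

    def fail_primary(self):
        # Simulate a primary crash; users are "moved" to the twin.
        self.primary_alive = False


store = MirroredStore()
store.write("payroll.dat", b"critical data")
store.fail_primary()
print(store.read("payroll.dat"))  # still served, now from the standby copy
```

Note that this sketch also exhibits the drawback described above: both copies run the same logic, so a software bug that corrupts the primary would likely corrupt the standby as well.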
Compaq's SMART. Many of Compaq's PCs feature its SMART (Self-Monitoring Analysis and Reporting Technology) client technology, although it is limited to client hard drives. If a SMART client believes that a crash may occur on a hard disk drive, it begins backing up the hard drive to the NetWare file server backup device. The downside is that the software cannot predict disk failures that give off no warning signals or failures caused by the computer itself.

DIAL RAID FOR INTEGRITY

In each of the previous examples, the fault tolerance built into the systems is generally designed to preserve the availability of the hardware system. RAID is probably the most popular means of ensuring the integrity of corporate data.

RAID (redundant array of independent disks) is a way of coordinating multiple disk drives to protect against loss of data availability if one of the drives fails. RAID software:

• Presents the array's storage capacity to the host computer as one or more virtual disks with the desired balance of cost, data availability, and I/O performance.

• Masks the array's internal complexity from the host computer by transparently mapping its available storage capacity onto its member disks and converting I/O requests directed to virtual disks into operations on member disks.

• Recovers data from disk and path failures and provides continuous I/O service to the host computer.

RAID technology is based on work that originated at the University of California at Berkeley in the late 1980s. Researchers analyzed various performance, throughput, and data protection aspects of different arrangements of disk drives and different redundancy algorithms. The following list describes the RAID levels recognized by the RAID Advisory Board (RAB), which sets standards for the industry.

RAID 0 (disk striping). Data is written across multiple disk drives. Benefits: storage capacity is maximized across all drives, with good performance and low price. Disadvantages: virtually no fault tolerance.

RAID 1 (disk mirroring). Data is copied from one drive to the next. Benefits: data redundancy is increased 100%; fast read performance. Disadvantages: slower write performance; requires twice the disk drive capacity, so it is more expensive.

RAID 2. Spreads redundant data across multiple disks; includes bit and parity data checking. Benefits: no physical benefits. Disadvantages: high overhead with no significant reliability gain.

RAID 3. Data striping at the bit level; requires a dedicated parity drive. Benefits: increased fault tolerance and fast performance. Disadvantages: limited to one write at a time.

RAID 4. Disk striping of data blocks; requires a dedicated parity drive. Benefits: increased fault tolerance and fast read performance. Disadvantages: slower write performance; not used very much.

RAID 5. Disk striping of both data and parity information. Benefits: increased fault tolerance and efficient performance; very commonly used. Disadvantages: slow write performance.

The redundancy in RAID is achieved by dedicating parts of an array's storage capacity to check data. Check data can be used to regenerate individual blocks of data from a failed disk as they are requested by the applications, or to reconstruct the entire contents of a failed disk to restore data protection after a failure.
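To make the notion of check data concrete, the sketch below illustrates the XOR parity scheme used by RAID 3/4/5-style arrays: the parity block is the byte-wise XOR of the data blocks in a stripe, so any single missing block can be regenerated from the surviving blocks. It is an illustrative sketch under those assumptions, with hypothetical block contents, not the RAB specification or a production implementation.

```python
# Illustrative XOR parity: the check-data scheme behind RAID 3/4/5-style arrays.
# The parity block is the byte-wise XOR of the data blocks in a stripe, so any
# single missing block can be regenerated by XOR-ing the surviving blocks.

from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# A stripe written across three data drives plus one parity drive (contents are hypothetical).
data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data_blocks)

# The drive holding the second block fails; rebuild its contents from the survivors.
survivors = [data_blocks[0], data_blocks[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data_blocks[1]
print("Rebuilt block:", rebuilt)  # b'BBBB'
```

The same regeneration can be done on demand for individual blocks as applications request them, or drive by drive to rebuild the entire contents of the failed disk onto a replacement.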