Disk Array Controller for Data Storage System

Total pages: 16

File type: PDF, size: 1020 KB

European Patent Office / Office européen des brevets
EP 0 508 604 B1
(12) EUROPEAN PATENT SPECIFICATION
(45) Date of publication and mention of the grant of the patent: 17.11.1999 Bulletin 1999/46
(51) Int. Cl.6: G06F 3/06, G06F 13/42, G06F 11/10
(21) Application number: 92302098.6
(22) Date of filing: 11.03.1992
(54) Disk array controller for data storage system
     Speicherplattenanordnungsteuerungsvorrichtung für eine Datenspeicherungsanordnung
     Dispositif de commande d'un réseau de disques pour un système de stockage de données
(84) Designated Contracting States: DE FR GB
(30) Priority: 13.03.1991 US 668660
(43) Date of publication of application: 14.10.1992 Bulletin 1992/42
(73) Proprietors: NCR International, Inc., Dayton, Ohio 45479 (US); HYUNDAI ELECTRONICS AMERICA, Milpitas, California 95035 (US); Symbios, Inc., Fort Collins, Colorado 80525 (US)
(72) Inventors: Jibbe, Mahmoud K., Wichita, KS 67208 (US); McCombs, Craig C., Wichita, KS 67213 (US); Thompson, Kenneth J., Wichita, KS 67226 (US)
(74) Representative: Gill, David Alan et al, W.P. Thompson & Co., Celcon House, 289-293 High Holborn, London WC1V 7HU (GB)
(56) References cited: EP-A-0 294 287, EP-A-0 320 107, EP-A-0 369 707, US-A-4 899 342, US-A-4 914 656

Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).

Printed by Jouve, 75001 PARIS (FR)

Description

[0001] This invention relates to data storage systems of the kind including host bus means adapted to be connected to a host device, and disk drive means adapted to be connected to a plurality of disk drive devices.

[0002] Recent and continuing increases in computer processing power and speed, in the speed and capacity of primary memory, and in the size and complexity of computer software have resulted in the need for faster operating, larger capacity secondary memory storage devices; magnetic disks form the most common external or secondary memory storage means utilized in present day computer systems. Unfortunately, the rate of improvement in the performance of large magnetic disks has not kept pace with processor and main memory performance improvements. However, significant secondary memory storage performance and cost improvements may be obtained by replacing single large expensive disk drives with a multiplicity of small, inexpensive disk drives interconnected in a parallel array, which to the host appears as a single large fast disk.

[0003] Several disk array design alternatives were presented in an article titled "A Case for Redundant Arrays of Inexpensive Disks (RAID)" by David A. Patterson, Garth Gibson and Randy H. Katz; University of California Report No. UCB/CSD 87/391, December 1987. This article discusses disk arrays and the improvements in performance, reliability, power consumption and scalability that disk arrays provide in comparison to single large magnetic disks.

[0004] Five disk array arrangements, referred to as RAID levels, are described in the article. The first level RAID comprises N disks for storing data and N additional "mirror" disks for storing copies of the information written to the data disks. RAID level 1 write functions require that data be written to two disks, the second "mirror" disk receiving the same information provided to the first disk. When data is read, it can be read from either disk.

[0005] RAID level 3 systems comprise one or more groups of N+1 disks. Within each group, N disks are used to store data, and the additional disk is utilized to store parity information. During RAID level 3 write functions, each block of data is divided into N portions for storage among the N data disks. The corresponding parity information is written to a dedicated parity disk. When data is read, all N data disks must be accessed. The parity disk is used to reconstruct information in the event of a disk failure.

[0006] RAID level 4 systems are also comprised of one or more groups of N+1 disks wherein N disks are used to store data, and the additional disk is utilized to store parity information. RAID level 4 systems differ from RAID level 3 systems in that data to be saved is divided into larger portions, consisting of one or many blocks of data, for storage among the disks. Writes still require access to two disks, i.e., one of the N data disks and the parity disk. In a similar fashion, read operations typically need only access a single one of the N data disks, unless the data to be read exceeds the block length stored on each disk. As with RAID level 3 systems, the parity disk is used to reconstruct information in the event of a disk failure.

[0007] RAID level 5 is similar to RAID level 4 except that parity information, in addition to the data, is distributed across the N+1 disks in each group. Although each group contains N+1 disks, each disk includes some blocks for storing data and some blocks for storing parity information. Where parity information is stored is controlled by an algorithm implemented by the user. As in RAID level 4 systems, RAID level 5 writes require access to at least two disks; however, no longer does every write to a group require access to the same dedicated parity disk, as in RAID level 4 systems. This feature provides the opportunity to perform concurrent write operations.
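The parity scheme described in paragraphs [0005] to [0007] reduces, at its core, to a bytewise exclusive-OR across the data portions of a stripe, and the same XOR regenerates a lost portion from the survivors and the parity. The minimal Python sketch below only illustrates that idea; the patent implements parity generation in dedicated hardware, and none of the names here come from the specification.

```python
from functools import reduce

def xor_parity(blocks: list[bytes]) -> bytes:
    """Compute the parity block as the bytewise XOR of N equal-length data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def reconstruct(surviving_blocks: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing block: XOR the parity with every surviving block."""
    return xor_parity(surviving_blocks + [parity])

# Example: a RAID level 3/4/5 style group with N = 3 data blocks.
data = [b"AAAA", b"BBBB", b"CCCC"]
p = xor_parity(data)
assert reconstruct([data[0], data[2]], p) == data[1]  # recover the "failed" block
```

RAID level 3 applies this across the N portions of each block, while levels 4 and 5 apply it across the blocks that share a stripe.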
[0008] In a known disk array system the host operates as the RAID controller and performs the parity generation and checking. Having the host compute parity is expensive in host processing overhead. Furthermore, the known system includes a fixed data path structure interconnecting the plurality of disk drives with the host system. Rearrangement of the disk array system to accommodate different quantities of disk drives or different RAID configurations is not easily accomplished.

[0009] US-A-4,914,656 discloses a data storage system including a host bus means adapted to be connected to a host device, disk drive means adapted to be connected to a plurality of disk drive devices, selectively controllable coupling means connected to the host bus means and to a plurality of buses and which includes connecting means adapted to connect the host bus means to a first group of buses. EP-A-0,369,707 discloses a data storage system likewise including a host bus means for connection to a host device, disk drive means for connection to a plurality of disk drive devices and selectively controllable coupling means connected to the host bus means and the disk drive means.

[0010] EP-A-0,294,287 and EP-A-0,320,107 relate to data storage systems of the type defined within the present application, but all of these aforementioned systems are disadvantageously restricted having regard to their adaptability and versatility for controlling data flow between a host and a disk array.
[0011] It is an object of the present invention to provide a data storage system of the kind specified which may be readily adapted to support various disk drive array configurations.

[0012] Therefore, according to the present invention, there is provided a data storage system, including host bus means adapted to be connected to a host device, disk drive means adapted to be connected to a plurality of disk drive devices, selectively controllable coupling means connected to said host bus means and to a plurality of array buses coupled to said disk drive means, first connecting means adapted to connect said host bus means to a first group of selected array buses, characterized in that said first connecting means includes a plurality of register means each associated with a respective array bus and connected to said host bus means for receiving data therefrom, each register means having a respective bus driver connected thereto and also connected to the associated array bus, the bus drivers being selectively controllable to connect said host bus means to said first group of selected array buses, and in that said coupling means includes parity generation means, second connecting means adapted to connect a second group of selected array buses to the input of said parity generation means, and third connecting means adapted to connect the output of said parity generation means to a third group of selected array buses.

[…]

… bit line. Bus 16 includes 18 data lines, two of which are bus parity lines. Additional control, acknowledge and message lines, not shown, are also included with the data busses.

[0016] SCSI adapter 14, SCSI bus interface chips 31 through 36, SRAM 36 and microprocessor 51 are commercially available items. For example, SCSI adapter 14 may be a Fast SCSI 2 chip, SCSI bus interface chips 31 through 35 may be NCR 53C96 chips and microprocessor 51 could be a Motorola MC68020, 16 megahertz microprocessor. Also residing on the microprocessor bus are a one megabyte DRAM, a 128 kilobyte EPROM, an eight kilobyte EEPROM, a 68901 multifunction peripheral controller and various registers.

[0017] SCSI data path chip 10 is an application spe- …
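Paragraph [0012] describes the claimed coupling arrangement structurally: per-bus registers latch host-bus data, selectable bus drivers place it on a first group of array buses, and parity generation means reads a second group of buses and drives its result onto a third group. The toy model below is only a behavioural illustration of that data flow; the class name, bus count, and word values are assumptions rather than details taken from the patent.

```python
from functools import reduce

class CouplingModel:
    """Toy model of the claimed coupling means: host words are latched into
    per-bus registers, selected bus drivers forward them to array buses, and a
    parity engine XORs a second group of buses onto a third group."""

    def __init__(self, num_array_buses: int):
        self.registers = [0] * num_array_buses    # one register per array bus
        self.array_buses = [0] * num_array_buses  # value currently driven on each bus

    def host_write(self, word: int, first_group: list[int]) -> None:
        # Latch the host-bus word into the registers of the selected buses,
        # then enable those bus drivers so the word appears on the array buses.
        for i in first_group:
            self.registers[i] = word
            self.array_buses[i] = self.registers[i]

    def generate_parity(self, second_group: list[int], third_group: list[int]) -> None:
        # Parity generation means: XOR the buses in the second group and drive
        # the result onto every bus in the third group.
        parity = reduce(lambda a, b: a ^ b, (self.array_buses[i] for i in second_group), 0)
        for i in third_group:
            self.array_buses[i] = parity

# Example: drive data onto buses 0-3, place their parity on bus 4.
m = CouplingModel(num_array_buses=6)
for bus, word in zip(range(4), [0x11, 0x22, 0x44, 0x88]):
    m.host_write(word, first_group=[bus])
m.generate_parity(second_group=[0, 1, 2, 3], third_group=[4])
assert m.array_buses[4] == 0x11 ^ 0x22 ^ 0x44 ^ 0x88
```

The point of the sketch is simply that the three bus groups are independently selectable; in the patent this steering is done by hardware registers and bus drivers rather than software.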
Recommended publications
  • VIA RAID Configurations
    The motherboard includes a high-performance IDE RAID controller integrated in the VIA VT8237R southbridge chipset. It supports RAID 0, RAID 1 and JBOD with two independent Serial ATA channels.

    RAID 0 (data striping) optimizes two identical hard disk drives to read and write data in parallel, interleaved stacks. Two hard disks perform the same work as a single drive but at a sustained data transfer rate roughly double that of a single disk alone, thus improving data access and storage. Two new, identical hard disk drives are required for this setup.

    RAID 1 (data mirroring) copies and maintains an identical image of data from one drive on a second drive. If one drive fails, the disk array management software directs all applications to the surviving drive, as it contains a complete copy of the data. This RAID configuration provides data protection and increases fault tolerance for the entire system. Use two new drives, or an existing drive and a new drive, for this setup; the new drive must be the same size as or larger than the existing drive.

    JBOD (spanning) stands for Just a Bunch of Disks and refers to hard disk drives that are not yet configured as a RAID set. In this configuration the drives simply appear as a single disk to the operating system; the data is not duplicated. Spanning does not deliver any advantage over using separate disks independently and does not provide fault tolerance or other RAID performance benefits.

    If you use either the Windows® XP or Windows® 2000 operating system (OS), first copy the RAID driver from the support CD to a floppy disk before creating RAID configurations.
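As a purely illustrative aside (a toy model, not VIA's firmware or driver behaviour), the sketch below shows why the RAID 1 arrangement described above keeps data available when either member drive fails:

```python
class Raid1Mirror:
    """Toy RAID 1 set: every write goes to both members, reads are served by
    any surviving member, and either drive may fail without data loss."""

    def __init__(self):
        self.drives = [{}, {}]          # block number -> data, one dict per drive
        self.failed = [False, False]

    def write(self, block: int, data: bytes) -> None:
        for i, drive in enumerate(self.drives):
            if not self.failed[i]:
                drive[block] = data     # identical copy kept on each member

    def read(self, block: int) -> bytes:
        for i, drive in enumerate(self.drives):
            if not self.failed[i]:
                return drive[block]     # either copy is valid
        raise IOError("both mirror members have failed")

mirror = Raid1Mirror()
mirror.write(7, b"payroll")
mirror.failed[0] = True                 # simulate the first drive failing
assert mirror.read(7) == b"payroll"     # the surviving copy still serves the read
```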
  • D:\Documents and Settings\Steph
    P45D3 Platinum Series MS-7513 (v1.X) Mainboard, G52-75131X1.

    Copyright Notice: The material in this document is the intellectual property of MICRO-STAR INTERNATIONAL. We take every care in the preparation of this document, but no guarantee is given as to the correctness of its contents. Our products are under continual improvement and we reserve the right to make changes without notice.

    Trademarks: All trademarks are the properties of their respective owners. NVIDIA, the NVIDIA logo, DualNet, and nForce are registered trademarks or trademarks of NVIDIA Corporation in the United States and/or other countries. AMD, Athlon™, Athlon™ XP, Thoroughbred™, and Duron™ are registered trademarks of AMD Corporation. Intel® and Pentium® are registered trademarks of Intel Corporation. PS/2 and OS/2® are registered trademarks of International Business Machines Corporation. Windows® 95/98/2000/NT/XP/Vista are registered trademarks of Microsoft Corporation. Netware® is a registered trademark of Novell, Inc. Award® is a registered trademark of Phoenix Technologies Ltd. AMI® is a registered trademark of American Megatrends Inc.

    Revision History: V1.0, first release for PCB 1.X (P45D3 Platinum), March 2008.

    Technical Support: If a problem arises with your system and no solution can be obtained from the user's manual, please contact your place of purchase or local distributor. Alternatively, try the following help resources for further guidance. Visit the MSI website for FAQs, technical guides, BIOS updates, driver updates, and other information: http://global.msi.com.tw/index.php?func=faqIndex. Contact our technical staff at: http://support.msi.com.tw/

    Safety Instructions …
  • Designing Disk Arrays for High Data Reliability
    Garth A. Gibson, School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh PA 15213. David A. Patterson, Computer Science Division, Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, CA 94720. This research was funded by NSF grant MIP-8715235, NASA/DARPA grant NAG 2-591, a Computer Measurement Group fellowship, and an IBM predoctoral fellowship.

    Proposed running head: Designing Disk Arrays for High Data Reliability. Please forward communication to: Garth A. Gibson, School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh PA 15213-3890; 412-268-5890, FAX 412-681-5739, [email protected].

    Abstract: Redundancy based on a parity encoding has been proposed for insuring that disk arrays provide highly reliable data. Parity-based redundancy will tolerate many independent and dependent disk failures (shared support hardware) without on-line spare disks and many more such failures with on-line spare disks. This paper explores the design of reliable, redundant disk arrays. In the context of a 70-disk strawman array, it presents and applies analytic and simulation models for the time until data is lost. It shows how to balance requirements for high data reliability against the overhead cost of redundant data, on-line spares, and on-site repair personnel in terms of an array's architecture, its component reliabilities, and its repair policies.

    Recent advances in computing speeds can be matched by the I/O performance afforded by parallelism in striped disk arrays [12, 13, 24]. Arrays of small disks further utilize advances in the technology of the magnetic recording industry to provide cost-effective I/O systems based on disk striping [19].
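A commonly used first-order estimate for the "time until data is lost" mentioned in the abstract (stated here as an assumption, not necessarily the exact model Gibson and Patterson develop) treats data as lost only when a second disk in a parity group fails before the first failure has been repaired:

```python
def mttdl_parity_group(mttf_hours: float, group_size: int, mttr_hours: float) -> float:
    """First-order mean time to data loss for one parity group: data is lost
    only if a second disk fails while the first failure is still being repaired."""
    return mttf_hours ** 2 / (group_size * (group_size - 1) * mttr_hours)

# Illustrative numbers only: a 70-disk array organized as 7 groups of 10 disks,
# 150,000-hour disk MTTF, 24-hour repair time.
groups, group_size = 7, 10
array_mttdl = mttdl_parity_group(150_000, group_size, 24) / groups
print(f"estimated array MTTDL: {array_mttdl:,.0f} hours (~{array_mttdl / 8760:.0f} years)")
```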
  • Disk Array Data Organizations and RAID
    Guest lecture for 15-440: Disk Array Data Organizations and RAID (October 2010, Greg Ganger).

    Plan for today: Why have multiple disks? (storage capacity, performance capacity, reliability); the load distribution problem and approaches (disk striping); fault tolerance (replication, parity-based protection); "RAID" and the disk array matrix; rebuild.

    Why multi-disk systems? A single storage device may not provide enough storage capacity, performance capacity, or reliability. So, what is the simplest arrangement? Just a bunch of disks (JBOD): four independent disks, one holding blocks A0 through A3, the next B0 through B3, then C0 through C3 and D0 through D3. Yes, it's a goofy name; the industry really does sell "JBOD enclosures."

    Disk subsystem load balancing: I/O requests are almost never evenly distributed. Some data is requested more than other data, depending on the apps, usage, time, and so on. What is the right data-to-disk assignment policy? The common approach is fixed data placement: your data is on disk X, period (for good reasons too: you bought it, or you're paying more). A fancier approach is dynamic data placement: if some of your files are accessed a lot, the admin (or even the system) may separate the "hot" files across multiple disks. In this scenario, entire file systems (or even files) are manually moved by the system admin to specific disks …
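To make the striping approach listed in the plan concrete, a small sketch (the four-block stripe unit and disk count are assumptions, not values from the lecture) maps a logical block number to a disk and a physical block under round-robin striping:

```python
def striped_location(logical_block: int, num_disks: int, stripe_unit_blocks: int = 4):
    """Map a logical block to (disk index, physical block) under round-robin striping."""
    unit = logical_block // stripe_unit_blocks          # which stripe unit overall
    disk = unit % num_disks                             # units rotate across the disks
    physical = (unit // num_disks) * stripe_unit_blocks + logical_block % stripe_unit_blocks
    return disk, physical

# With 4 disks and 4-block stripe units, consecutive units land on different disks,
# so a long sequential access keeps every spindle busy instead of hammering one disk.
for lb in (0, 4, 8, 12, 16):
    print(lb, striped_location(lb, num_disks=4))
```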
  • Identify Storage Technologies and Understand RAID
    Lesson 4.1/4.2, 98-365 Windows Server Administration Fundamentals: Identify Storage Technologies and Understand RAID.

    Lesson overview. In this lesson, you will learn: local storage options; network storage options; Redundant Array of Independent Disks (RAID) options.

    Anticipatory set: List three different RAID configurations. Which of these three bus types has the fastest transfer speed: Parallel ATA (PATA), Serial ATA (SATA), or USB 2.0?

    Local storage options can range from a simple single disk to a Redundant Array of Independent Disks (RAID), and can be broken down by bus type: Serial Advanced Technology Attachment (SATA); Integrated Drive Electronics (IDE, now called Parallel ATA or PATA); Small Computer System Interface (SCSI); Serial Attached SCSI (SAS).

    SATA drives have taken the place of traditional PATA drives. SATA has several advantages over PATA: reduced cable bulk and cost, faster and more efficient data transfer, and hot-swapping technology. SAS drives have taken the place of the traditional SCSI and Ultra SCSI drives in server-class machines. SAS has several …
  • RAID-II: Design and Implementation of a Large Scale Disk Array Controller
    RAID-II: Design and Implementation of a Large Scale Disk Array Controller. R. H. Katz, P. M. Chen, A. L. Drapeau, E. K. Lee, K. Lutz, E. L. Miller, S. Seshan, D. A. Patterson. Computer Science Division (EECS), University of California, Berkeley, Berkeley, California 94720. Report No. UCB/CSD-92-705, October 1992 (NASA-CR-192911, N93-25233).

    Abstract: We describe the implementation of a large scale disk array controller and subsystem incorporating over 100 high performance 3.5" disk drives. It is designed to provide 40 MB/s sustained performance and 40 GB capacity in three 19" racks. The array controller forms an integral part of a file server that attaches to a Gb/s local area network. The controller implements a high bandwidth interconnect between an interleaved memory, an XOR calculation engine, the network interface (HIPPI), and the disk interfaces (SCSI). The system is now functionally operational, and we are tuning its performance.
  • Techsmart Representatives
    Wave TechSmart representatives. RAID basics: are your security solutions fault tolerant?

    Redundant Array of Independent Disks (RAID) is a storage technology used to improve the processing capability of storage systems. This technology is designed to provide reliability in disk array systems and to take advantage of the performance gains offered by an array of multiple disks over single-disk storage. RAID's two primary underlying concepts are (1) that distributing data over multiple hard drives improves performance and (2) that using multiple drives properly allows for any one drive to fail without loss of data and without system downtime. In the event of a disk failure, disk access will continue normally and the failure will be transparent to the host system.

    Originally designed and implemented for SCSI drives, RAID principles have been applied to SATA and SAS drives in many video systems. Redundancy of any system, especially of components that have a lower tolerance in MTBF, makes sense.

    Enclosure: The "box" which contains the controller, drives/drive trays and bays, power supplies, and fans is called an "enclosure." The enclosure includes various controls, ports, and other features used to connect the RAID to a host, for example.

    Wave Representatives has experience with both high-performance computing and enterprise storage, providing solutions to large financial institutions and research laboratories. The security industry adopted superior computing and storage technologies after the transition from analog systems to IP-based networks. This evolution has created robust and resilient systems that can handle high bandwidth, from video surveillance solutions to availability for access control and emergency communications.
  • Memory Systems : Cache, DRAM, Disk
    Chapter 24, Storage Subsystems. Up to this point, the discussions in Part III of this book have been on the disk drive as an individual storage device and how it is directly connected to a host system. This direct attach storage (DAS) paradigm dates back to the early days of mainframe computing, when disk drives were located close to the CPU and cabled directly to the computer system via some control circuits. This simple model of disk drive usage and configuration remained unchanged through the introduction of, first, the minicomputers and then the personal computers. Indeed, even today the majority of disk drives shipped in the industry are targeted for systems having such a configuration. However, this simplistic view of the relationship between the disk drive and the host system does not …

    … with how multiple drives within a subsystem can be organized together, cooperatively, for better reliability and performance. This is discussed in Sections 24.1–24.3. A second aspect deals with how a storage subsystem is connected to its clients and accessed. Some form of networking is usually involved. This is discussed in Sections 24.4–24.6. A storage subsystem can be designed to have any organization and use any of the connection methods discussed in this chapter. Organization details are usually made transparent to user applications by the storage subsystem presenting one or more virtual disk images, which logically look like disk drives to the users. This is easy to do because logically a disk is no more than a drive ID and a logical address space associated with it.
  • Which RAID Level Is Right for Me?
    Storage Solutions white paper. Contents: Introduction; RAID Level Descriptions; RAID 0 (Striping); RAID 1 (Mirroring); RAID 1E (Striped Mirror); RAID 5 (Striping with parity); RAID 5EE (Hot Space); RAID 6 (Striping with dual parity); RAID 10 (Striped RAID 1 sets); RAID 50 (Striped RAID 5 sets); RAID 60 (Striped RAID 6 sets); RAID Level Comparison; About Adaptec RAID.

    Data is the most valuable asset of any business today. Lost data means lost business. Even if you back up regularly, you need a fail-safe way to ensure that your data is protected and can be …

    … of users. This white paper intends to give an overview of the performance and availability of various RAID levels in general and may not be accurate in all user …
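The comparison the white paper builds toward can be summarised with generic formulas; the sketch below uses standard RAID properties rather than Adaptec-specific figures and reports usable capacity plus the number of drive failures each level is guaranteed to survive:

```python
def raid_summary(level: str, n_drives: int, drive_tb: float):
    """Usable capacity and guaranteed fault tolerance for common RAID levels."""
    if level == "0":            # striping only
        usable, tolerated = n_drives * drive_tb, 0
    elif level == "1":          # mirroring (n-way copies of one drive's worth of data)
        usable, tolerated = drive_tb, n_drives - 1
    elif level == "5":          # single distributed parity
        usable, tolerated = (n_drives - 1) * drive_tb, 1
    elif level == "6":          # dual parity
        usable, tolerated = (n_drives - 2) * drive_tb, 2
    elif level == "10":         # striped 2-way mirrors; worst case is one failure per mirror pair
        usable, tolerated = n_drives // 2 * drive_tb, 1
    else:
        raise ValueError(f"unhandled RAID level: {level}")
    return usable, tolerated

for level in ("0", "1", "5", "6", "10"):
    drives = 2 if level == "1" else 8
    usable, tolerated = raid_summary(level, drives, drive_tb=4.0)
    print(f"RAID {level:>2}: {drives} x 4 TB -> {usable:4.1f} TB usable, survives {tolerated} failure(s)")
```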
  • Computational Scalability of Large Size Image Dissemination
    Rob Kooper and Peter Bajcsy, National Center for Supercomputing Applications, University of Illinois, 1205 W. Clark St., Urbana, IL 61801.

    ABSTRACT: We have investigated the computational scalability of the image pyramid building needed for dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term "large" is understood from a user perspective, meaning either larger than a display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with the total number expected to be around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number but larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
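For the benchmark image cited above (17591 x 15014 pixels), the size of an image pyramid follows from repeated halving. The sketch below is a back-of-the-envelope estimate only; the 256-pixel tile size is an assumption, not a figure from the paper:

```python
import math

def pyramid_stats(width: int, height: int, tile: int = 256):
    """Number of levels and total tiles for a power-of-two image pyramid."""
    levels = math.ceil(math.log2(max(width, height))) + 1   # halve down to a 1x1 top level
    tiles = 0
    w, h = width, height
    for _ in range(levels):
        tiles += math.ceil(w / tile) * math.ceil(h / tile)
        w, h = max(1, (w + 1) // 2), max(1, (h + 1) // 2)
    return levels, tiles

levels, tiles = pyramid_stats(17591, 15014)
print(f"{levels} levels, {tiles} tiles of up to 256x256 pixels")
```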
  • Adaptec Disk Array Administrator User's Guide
    Copyright © 2001 Adaptec, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written consent of Adaptec, Inc., 691 South Milpitas Blvd., Milpitas, CA 95035.

    Trademarks: Adaptec and the Adaptec logo are trademarks of Adaptec, Inc., which may be registered in some jurisdictions. Windows 95, Windows 98, Windows NT, and Windows 2000 are trademarks of Microsoft Corporation in the US and other countries, used under license. All other trademarks are the property of their respective owners.

    Changes: The material in this document is for information only and is subject to change without notice. While reasonable efforts have been made in the preparation of this document to assure its accuracy, Adaptec, Inc. assumes no liability resulting from errors or omissions in this document, or from the use of the information contained herein. Adaptec reserves the right to make changes in the product design without reservation and without notification to its users.

    Disclaimer: IF THIS PRODUCT DIRECTS YOU TO COPY MATERIALS, YOU MUST HAVE PERMISSION FROM THE COPYRIGHT OWNER OF THE MATERIALS TO AVOID VIOLATING THE LAW WHICH COULD RESULT IN DAMAGES OR OTHER REMEDIES.

    Contents: 1 Getting Started: About This Guide; Getting Online Help; Accessing Adaptec Disk Array Administrator; The Adaptec Disk Array Administrator Screen; Navigating Adaptec …
  • Tuning IBM System X Servers for Performance
    Front cover: Tuning IBM System x Servers for Performance. Identify and eliminate performance bottlenecks in key subsystems. Expert knowledge from inside the IBM performance labs. Covers Windows, Linux, and VMware ESX. David Watts, Alexandre Chabrol, Phillip Dundas, Dustin Fredrickson, Marius Kalmantas, Mario Marroquin, Rajeev Puri, Jose Rodriguez Ruibal, David Zheng. ibm.com/redbooks, International Technical Support Organization, August 2009, SG24-5287-05.

    Note: Before using this information and the product it supports, read the information in "Notices" on page xvii. Sixth Edition (August 2009). This edition applies to IBM System x servers running Windows Server 2008, Windows Server 2003, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and VMware ESX. © Copyright International Business Machines Corporation 1998, 2000, 2002, 2004, 2007, 2009. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP …

    Contents: Notices; Trademarks; Foreword; Preface (The team who wrote this book; Become a published author; Comments welcome). Part 1, Introduction. Chapter 1, Introduction to this book: 1.1 Operating an efficient server - four phases; 1.2 Performance tuning guidelines; 1.3 The System x Performance Lab; 1.4 IBM Center for Microsoft Technologies; 1.5 Linux Technology Center; 1.6 IBM Client Benchmark Centers; 1.7 Understanding the organization of this book. Chapter 2, Understanding server types: 2.1 Server scalability; 2.2 Authentication services; 2.2.1 Windows Server 2008 Active Directory domain controllers …