US007290102B2

(12) United States Patent    (10) Patent No.: US 7,290,102 B2    Lubbers et al.    (45) Date of Patent: Oct. 30, 2007

(54) POINT IN TIME STORAGE COPY

(75) Inventors: Clark E. Lubbers, Colorado Springs, CO (US); James M. Reiser, Colorado Springs, CO (US); Anuja Korgaonkar, Colorado Springs, CO (US); Randy L. Roberson, New Port Richey, FL (US); Robert G. Bean, Monument, CO (US)

(73) Assignee: Hewlett-Packard Development Company, L.P., Houston, TX (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 360 days.

(21) Appl. No.: 11/081,061

(22) Filed: Mar. 15, 2005

(65) Prior Publication Data: US 2005/0160243 A1, Jul. 21, 2005

Related U.S. Application Data

(63) Continuation of application No. 10/080,961, filed on Oct. 22, 2001, now Pat. No. 6,915,397, which is a continuation-in-part of application No. 09/872,597, filed on Jun. 1, 2001, now Pat. No. 6,961,838.

(51) Int. Cl.: G06F 12/16 (2006.01); G06F 17/30 (2006.01)

(52) U.S. Cl.: 711/162; 707/204; 711/206

(58) Field of Classification Search: None. See application file for complete search history.

Primary Examiner: Gary Portka

(57) ABSTRACT

A storage system permits virtual storage of user data by implementing a logical disk mapping structure that provides access to user data stored on physical storage media and methods for generating point-in-time copies, or snapshots, of logical disks. A snapshot logical disk is referred to as a predecessor logical disk and the original logical disk is referred to as a successor logical disk. Creating a snapshot involves creating predecessor logical disk mapping data structures and populating the data structures with metadata that maps the predecessor logical disk to the user data stored on physical media. Logical disks include metadata that indicates whether user information is shared between logical disks. Multiple generations of snapshots may be created, and user data may be shared between these generations. Methods are disclosed for maintaining data accuracy when write I/O operations are directed to a logical disk.

17 Claims, 15 Drawing Sheets

[Front-page representative drawing: a logical disk mapping diagram with legible labels RSS ID, PHYSICAL MEMBER SELECTION, RAIDINFO, STATE / LMAP POINTER entries, RSEG# / RSD POINTER entries, and PSEG# lists (reference numerals 501-505); the graphics are not recoverable from the text extraction.]

[Drawing Sheets 1-15: the figure graphics do not survive text extraction; only the labels noted below are legible.]

FIGS. 1-8 (Sheets 1-6): no legible labels.

FIG. 9 (Sheet 7, flowchart, reference numerals 905-945): CREATE PREDECESSOR PLDMC; QUIESCE WRITE I/O OPERATIONS TO SUCCESSOR LOGICAL DISK; FLUSH SUCCESSOR LOGICAL DISK CACHE DATA; SET SHARE BITS IN SUCCESSOR PLDMC; CREATE L2MAP, LMAP FOR PREDECESSOR LOGICAL DISK; POPULATE PREDECESSOR L2MAP WITH PREDECESSOR LMAP POINTERS; POPULATE PREDECESSOR LMAP WITH SUCCESSOR LMAP CONTENTS; SET SHARE BITS IN SUCCESSOR AND PREDECESSOR LMAPS; UNQUIESCE SUCCESSOR LOGICAL DISK.

FIG. 10 (Sheet 8): "Ss"/"Sp" share-bit pairs.

FIGS. 11a-11c: labels "W1", "CBW", and "Ss"/"Sp" pairs.

FIGS. 12a-12c: labels "W2", "CBW", and "Ss"/"Sp" pairs.

FIGS. 13a-13i: labels "BGCopy", "CBW", and "Ss"/"Sp" pairs.

FIGS. 14a-14c: "Ss"/"Sp" pairs.

FIGS. 15a-15d: labels "BGCopy", "CBW", and "Ss"/"Sp" pairs.

POINT IN TIME STORAGE COPY

RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 10/080,961, entitled SYSTEM AND METHOD FOR POINT IN TIME STORAGE COPY, filed Oct. 22, 2001, now U.S. Pat. No. 6,915,397, which is a continuation-in-part of U.S. patent application Ser. No. 09/872,597, entitled PROCESS FOR FAST, SPACE-EFFICIENT DISK COPIES USING PARALLEL DISTRIBUTED TABLE DRIVEN I/O MAPPING, filed Jun. 1, 2001, now U.S. Pat. No. 6,961,838, the disclosures of which are hereby incorporated by reference in their entirety.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates generally to computer-based information storage systems. More particularly, the present invention relates to a system and method for generating a copy (or copies) of data stored in a computer-based information storage system such as, for example, a RAID storage system.

2. Relevant Background

Recent years have seen a proliferation of computers and storage subsystems. Demand for storage capacity grows by over seventy-five percent each year. Early computer systems relied heavily on direct-attached storage (DAS) consisting of one or more disk drives coupled to a system bus. More recently, network-attached storage (NAS) and storage area network (SAN) technologies are used to provide storage with greater capacity, higher reliability, and higher availability. The present invention is directed primarily at network storage systems that are designed to provide shared data storage that is beyond the ability of a single host computer to efficiently manage.

To this end, mass data storage systems are implemented in networks or fabrics that provide means for communicating data with the storage systems. Host computers or servers are coupled to the network and configured with several disk drives that cumulatively provide more storage capacity or different storage functions (e.g., data protection) than could be implemented by a DAS system. In many cases, dedicated data storage systems implement much larger quantities of data storage than would be practical for a stand-alone computer or workstation. Moreover, a server dedicated to data storage can provide various degrees of redundancy and mirroring to improve access performance, availability and reliability of stored data.

However, because the physical storage disks are ultimately managed by particular servers to which they are directly attached, many of the limitations of DAS are ultimately present in conventional SAN systems. Specifically, a server has limits on how many drives it can manage as well as limits on the rate at which data can be read from and written to the physical disks that it manages. Accordingly, server-managed SAN provides distinct advantages over DAS, but continues to limit the flexibility and impose high management costs on mass storage implementation.

A significant difficulty in providing storage is not in providing the quantity of storage, but in providing that storage capacity in a manner that enables ready, reliable access with simple interfaces. Large capacity, high availability, and high reliability storage architectures typically involve complex topologies of physical storage devices and controllers. By "large capacity" it is meant storage systems having greater capacity than a single mass storage device. High reliability and high availability storage systems refer to systems that spread data across multiple physical storage systems to ameliorate risk of data loss in the event of one or more physical storage failures. Both large capacity and high availability/high reliability systems are implemented, for example, by RAID (redundant array of independent drives) systems.

Storage management tasks, which often fall on an information technology (IT) staff, often extend across multiple systems, multiple rooms within a site, and multiple sites. This physical distribution and interconnection of servers and storage subsystems is complex and expensive to deploy, maintain and manage. Essential tasks such as backing up and restoring data are often difficult and leave the computer system vulnerable to lengthy outages.

Storage consolidation is a concept of growing interest. Storage consolidation refers to various technologies and techniques for implementing mass storage as a unified, largely self-managing utility for an enterprise. By unified it is meant that the storage can be accessed using a common interface without regard to the physical implementation or redundancy configuration. By self-managing it is meant that many basic tasks such as adapting to changes in storage capacity (e.g., adding or removing drives), creating redundancy sets, and the like are performed automatically without need to reconfigure the servers and client machines accessing the consolidated storage.

Computers access mass storage capacity using a file system implemented with the computer's operating system. A file system is the general name given to the logical structures and software routines, usually closely tied to the operating system software, that are used to control access to storage. File systems implement a mapping data structure that associates addresses used by application software to addresses used by the underlying storage layers. While early file systems addressed the storage using physical information about the hard disk(s), modern file systems address logical units (LUNs) that comprise a single drive, a portion of a drive, or more than one drive.

Modern file systems issue commands to a disk controller either directly, in the case of direct-attached storage, or through a network connection, in the case of network file systems. A disk controller is itself a collection of hardware and software routines that translate the file system commands expressed in logical terms into hardware-specific commands expressed in a protocol understood by the physical drives. The controller may address the disks physically; however, more commonly a controller addresses logical block addresses (LBAs). The disk drives themselves include a controller that maps the LBA requests into hardware-specific commands that identify a particular physical location on a storage media that is to be accessed.
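As a purely illustrative aside (not part of the patent), the drive-resident translation from a logical block address to a physical location can be pictured as simple arithmetic over a drive geometry; the geometry values below are assumptions chosen only to make the arithmetic concrete.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy illustration of LBA-to-physical translation as performed inside a
 * drive controller. The geometry (255 heads, 63 sectors per track) is an
 * assumed convention, not the behavior of any particular drive. */
struct chs { uint32_t cylinder; uint32_t head; uint32_t sector; };

static struct chs lba_to_chs(uint64_t lba, uint32_t heads, uint32_t sectors_per_track)
{
    struct chs p;
    p.sector   = (uint32_t)(lba % sectors_per_track) + 1;      /* sectors are conventionally 1-based */
    p.head     = (uint32_t)((lba / sectors_per_track) % heads);
    p.cylinder = (uint32_t)(lba / ((uint64_t)sectors_per_track * heads));
    return p;
}

int main(void)
{
    struct chs p = lba_to_chs(1048576, 255, 63);
    printf("LBA 1048576 -> C/H/S %u/%u/%u\n", p.cylinder, p.head, p.sector);
    return 0;
}
```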
Despite the fact that disks are addressed logically rather than physically, logical addressing does not truly "virtualize" the storage. Presently, a user (i.e., IT manager) is required to have at least some level of knowledge about the physical storage topology in order to implement, manage and use large capacity mass storage and/or to implement high reliability/high availability storage techniques. User awareness refers to the necessity for a user of the mass storage to obtain knowledge of physical storage resources and topology in order to configure controllers to achieve a desired storage performance. In contrast, personal computer technology typically does not require user awareness to connect to storage on a local area network (LAN), as simple configuration utilities allow a user to point to the LAN storage device and connect to it. In such cases, a user can be unaware of the precise physical implementation of the LAN storage, which may be implemented in multiple physical devices and may provide RAID-type data protection.
Hence, even though the storage may appear to an end-user as abstracted from the physical storage devices, in fact the storage is dependent on the physical topology of the storage devices. A need exists for systems, methods and software that effect a true separation between physical storage and the logical view of storage presented to a user. Similarly, a need exists for systems, methods and software that merge storage management functions within the storage itself.

Storage virtualization generally refers to systems that provide transparent abstraction of storage at the block level. In essence, virtualization separates out logical data access from physical data access, allowing users to create virtual disks from pools of storage that are allocated to network-coupled hosts as logical storage when needed. Virtual storage eliminates the physical one-to-one relationship between servers and storage devices. The physical disk devices and distribution of storage capacity become transparent to servers and applications.

Virtualization can be implemented at various levels within a SAN environment. These levels can be used together or independently to maximize the benefits to users. At the server level, virtualization can be implemented through software residing on the server that causes the server to behave as if it is in communication with a device type even though it is actually communicating with a virtual disk. Server-based virtualization has limited interoperability with hardware or software components. As an example of server-based storage virtualization, Compaq offers the Compaq SANworks™ Virtual Replicator.

Compaq VersaStor™ technology is an example of fabric-level virtualization. In fabric-level virtualization, a virtualizing controller is coupled to the SAN fabric such that storage requests made by any host are handled by the controller. The controller maps requests to physical devices coupled to the fabric. Virtualization at the fabric level has advantages of greater interoperability, but is, by itself, an incomplete solution for virtualized storage. The virtualizing controller must continue to deal with the physical storage resources at a drive level. What is needed is a virtualization system that operates at a system level (i.e., within the SAN).

Storage system architecture involves two fundamental tasks: data access and storage allocation. Data is accessed by mapping an address used by the software requesting access to a particular physical location. Hence, data access requires that a data structure or memory representation of the storage system that implements this mapping be available for search, which typically requires that the data structure be loaded into memory of a processor managing the request. For large volumes of storage, this mapping structure can become very large. When the mapping data structure is too large for the processor's memory, it must be paged in and out of memory as needed, which results in a severe performance penalty. A need exists for a storage system architecture that enables a memory representation for large volumes of storage using limited memory so that the entire data structure can be held in memory.

Storage allocation refers to the systems and data structures that associate particular storage resources of a physical storage device (e.g., disks or portions of disks) with a particular purpose or task. Storage is typically allocated in larger quantities, called "chunks" or "clusters", than the smallest quantity of data that can be accessed by a program. Allocation is closely tied to data access because the manner in which storage is allocated determines the size of the data structure required to access the data. Hence, a need exists for a storage allocation system that allocates storage in a manner that provides efficient data structures for accessing the data.

Data security is another important consideration in storage systems. One component of ensuring data security is generating backup copies of information stored on physical media in the storage system. Traditional techniques for generating backup copies of information stored on physical media involved making a redundant copy of the information, usually on a separate storage medium such as, e.g., a magnetic tape or optical disk. These techniques raise multiple issues in large capacity storage, high availability storage systems. Foremost, traditional backup procedures may render the storage system inaccessible during the backup process, which is inconsistent with the goal of maintaining high availability. In addition, traditional backup procedures consume significant storage space, much of which may be wasted. Hence, a need exists for backup procedures that make efficient use of storage space and processing time.

SUMMARY OF THE INVENTION

In one aspect, the present invention addresses these and other needs by providing a storage system adapted to utilize logical disks. Physical storage space is divided into segments, referred to as PSEGs, which may be combined in accordance with desired redundancy rules into a logically addressable data structure referred to as a Redundant Store. A multi-level mapping structure is implemented to relate logically addressable storage space to user data stored on physical media. At one level, a Redundant Store Descriptor (RSD) structure contains metadata identifying the PSEGs on which user data "contained" by the RSD resides. At a higher level, an LMAP structure may include a plurality of entries, each of which has a pointer to an RSD "contained" by the LMAP and metadata describing whether the user data "contained" by the RSD is shared with another logical disk. At an even higher level, an L2MAP corresponds to a logical disk and may include a plurality of pointers to LMAPs "contained" in the logical disk.

When a snapshot operation is executed, the user data for the target logical disk may be operationally "frozen", and a new logical disk may be created. The new logical disk is referred to as a "predecessor" logical disk ("predecessor"), and the original logical disk is referred to as the "successor" logical disk ("successor").

Advantageously, when the snapshot operation is executed, no user data need be copied from the successor logical disk to the predecessor logical disk. Instead, the mapping structures necessary for representing the predecessor logical disk are generated and a sharing relationship is established between the predecessor and the successor. Metadata may be recorded that indicates where user data for the predecessor resides on the successor. User data may be shared between the predecessor and the successor. Nevertheless, both the predecessor and the successor may remain active, i.e., both read and write I/O operations may be directed to the predecessor and successor logical disks. Data management algorithms are implemented to maintain accurate data in both the predecessor and successor logical disks.
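To make the multi-level mapping concrete, the following minimal C sketch models the hierarchy described above. It is illustrative only: the field names, array sizes, and share-bit representation are assumptions made for the sketch, not the formats used by the patented system.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of the multi-level mapping (L2MAP -> LMAP -> RSD -> PSEGs).
 * All names and sizes are assumptions for the example. */

typedef uint32_t pseg_id_t;            /* uniquely identifies a PSEG on physical media */

typedef struct rsd {                   /* Redundant Store Descriptor */
    pseg_id_t pseg[8];                 /* PSEGs holding the user data "contained" by this RSD */
    uint8_t   pseg_count;
    uint8_t   redundancy_rule;         /* rule used to combine the PSEGs into a Redundant Store */
} rsd_t;

typedef struct lmap_entry {
    rsd_t *rsd;                        /* pointer to an RSD "contained" by the LMAP */
    bool   shared;                     /* share bit: data is shared with another logical disk */
} lmap_entry_t;

typedef struct lmap {
    lmap_entry_t entry[1024];          /* plurality of RSD pointers plus sharing metadata */
} lmap_t;

typedef struct l2map {                 /* one L2MAP per logical disk */
    lmap_t *lmap[256];                 /* plurality of LMAP pointers */
} l2map_t;
```

Resolving a logical address then walks L2MAP to LMAP entry to RSD to PSEG; because each level holds only pointers and small flags, the representation of even a large logical disk can remain resident in controller memory, which is the constraint identified in the background discussion.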
tures that associate particular storage resources of a physical In one aspect, the invention a method of creating a storage device (e.g., disks or portions of disks) with a predecessor logical disk that is a snapshot of a successor particular purpose or task. Storage is typically allocated in 65 logical disk. Preferably, the successor logical disk is de?ned larger quantities, called “chunks” or “clusters”, than the by user data stored in a plurality of uniquely identi?able smallest quantity of data that can be accessed by a program. PSEGS and by metadata including an L2MAP having a US 7,290,102 B2 5 6 plurality of LMAP pointers, one or more LMAPs including example, hardWare and softWare RAID controllers that a plurality of RSD pointers, and one or more RDSs having Would manage multiple directly attached disk drives. a plurality of PSEG pointers. The method comprises the In the examples used herein, the computing systems that steps of creating a predecessor PLDMC; creating an LMAP require storage are referred to as hosts. In a typical imple for the predecessor logical disk; populating the LMAP for mentation, a host is any computing system that consumes the predecessor logical disk With RSD pointers from the vast quantities of data storage capacity on its oWn behalf, or successor logical disk; creating an L2MAP for the prede on behalf of systems coupled to the host. For example, a host cessor logical disk; populating the L2MAP for the prede may be a supercomputer processing large databases, a cessor logical disk With the LMAP pointers from the pre transaction processing server maintaining transaction decessor logical disk; setting share bits in the LMAPs for the records, and the like. Alternatively, the host may be a ?le predecessor logical disk and the successor logical disk to server on a local area netWork (LAN) or Wide area netWork indicate that the data is being shared; and setting share bits (WAN) that provides mass storage services for an enterprise. in the successor PLDMC to indicate that the data is being In the past, such a host Would be out?tted With one or more shared. The steps of the method need not be performed in a disk controllers or RAID controllers that Would be con?g particular order. ured to manage multiple directly attached disk drives. The host connects to the virtualiZed SAN in accordance With the BRIEF DESCRIPTION OF THE DRAWINGS present invention With a high-speed connection technology such as a ?bre channel (FC) fabric in the particular FIG. 1 shoWs a logical vieW of a netWorked computer examples. Although the ho st and the connection betWeen the environment in Which the virtualiZed storage system in 20 host and the SAN are important components of the entire accordance With the present invention is implemented; system, neither the host nor the FC fabric are considered FIG. 2 illustrates a physical vieW of a netWorked com components of the SAN itself. puter environment in Which the virtualiZed storage system in The present invention implements a SAN architecture accordance With the present invention is implemented; comprising a group of storage cells, Where each storage cell FIG. 3 illustrates a storage cell shoWn in FIG. 2 in greater 25 comprises a pool of storage devices called a disk group. detail; Each storage cell comprises parallel storage controllers FIG. 4 shoWs a functional block-diagram of components coupled to the disk group. 
The storage controllers coupled to of an alternative embodiment storage cell; the storage devices using a ?bre channel arbitrated loop FIG. 5 depicts data structures implementing an connection, or through a netWork such as a ?bre channel in-memory representation of a storage system in accordance 30 fabric or the like. The storage controllers are also coupled to With the present invention; each other through point-to-point connections to enable FIG. 6 illustrates atomic physical and logical data storage them to cooperatively manage the presentation of storage structures in accordance With the present invention; capacity to computers using the storage capacity. FIG. 7 shoWs a prior art storage system implementing The present invention is illustrated and described in terms multiple types of data protection; 35 of a distributed computing environment such as an enter FIG. 8 shoWs a storage system in accordance With the prise computing system using a private SAN. HoWever, an present invention implementing multiple types of data pro important feature of the present invention is that it is readily tection; scaled upWardly and doWnWardly to meet the needs of a FIG. 9 is a ?owchart illustrating steps in a method for particular application. creating a snapshot logical disk accessible; and 40 FIG. 1 shoWs a logical vieW of an exemplary SAN environment 100 in Which the present invention may be FIG. 10 is a schematic diagram illustrating a plurality of snapshot logical disks and the sharing relationships betWeen implemented. Environment 100 shoWs a storage pool 101 them; comprising an arbitrarily large quantity of storage space FIGS. 11a-11c are schematic diagrams illustrating a Write from Which logical disks (also called logical units or LUNs) 45 102 are allocated. In practice, storage pool 101 Will have operation directed to a logical disk; some ?nite boundaries determined by a particular hardWare FIGS. 12a-12c are schematic diagrams illustrating a Write implementation, hoWever, there are feW theoretical limits to operation directed to a logical disk; the siZe of a storage pool 101. FIGS. 13a-13i are schematic diagrams illustrating aspects Within pool 101 logical device allocation domains of a ?rst exemplary snapclone operation; 50 (LDADs) 103 are de?ned. LDADs correspond to a set of FIGS. 14a-14c are schematic diagrams illustrating aspects physical storage devices from Which LUNs 102 may be of removing a snapclone logical disk from a sharing tree; allocated. LUNs 102 do not span LDADs 103 in the pre and ferred implementations. Any number of LDADs 103 may be FIGS. 15a-15d are schematic diagrams illustrating de?ned for a particular implementation as the LDADs 103 aspects of a second exemplary snapclone operation. 55 operate substantially independently from each other. LUNs 102 have a unique identi?cation Within each LDAD 103 that DETAILED DESCRIPTION is assigned upon creation of a LUN 102. Each LUN 102 is essential a contiguous range of logical addresses that can be NetWork and Device Architecture addressed by host devices 105, 106, 107 and 109 by map The present invention generally involves a storage archi 60 ping requests from the connection protocol used by the hosts tecture that provides virtualiZed data storage at a system to the uniquely identi?ed LUN 102. level, such that virtualiZation is implemented Within a SAN. Some hosts such as host 107 Will provide services of any VirtualiZation in accordance With the present invention is type to other computing or data processing systems. 
Devices implemented in a storage system controller to provide high such as client 104 may access LUNs 102 via a host such as performance, high data availability, fault tolerance, and 65 server 107 to Which they are coupled through a LAN, WAN, ef?cient storage management. In the past, such behaviors or the like. Server 107 might provide ?le services to net Would be implemented at the fabric or server level by, for Work-connected clients, transaction processing services for