Disk Drive Operations and Storage Access Methods

Lecture 2, September 19, 2006

NEU csg389 Information Storage Technologies, Fall 2006 (Jiri Schindler)

Plan for today

• Questions from last time
• On-disk data organization
  – what it's all about
• Storage access methods
  – block-based access
  – file-based access
  – object-based access

Disk Drive Firmware Algorithms

Outline

• Mapping LBNs to physical sectors
  – zones
  – defect management
  – track and cylinder skew
• Bus and buffer management
  – optimizing storage subsystem resources
• Advanced buffer space usage
  – prefetching and caching
  – read/write-on-arrival

How functionality is implemented

• Some of it is in ASIC logic
  – error detection and correction
  – signal/servo processing
  – motor/seek control
  – cache hits (often)
• Some of it is in firmware running on a control processor
  – request processing
  – request queueing and scheduling
  – LBN-to-PBN mapping
• Key considerations: cost and performance
  – optimize common cases
  – keep things simple and space-conscious

Recall the storage device interface

• Linear address space of equal-sized blocks
  – each identified by a logical block number (LBN)

[Figure: a linear array of blocks numbered by LBN (…, 5, 6, 7, …, 12, …, 23, …)]

• Common block size: 512 bytes
• Number of blocks: device capacity / block size

Recall the physical disk storage reality

[Figure: disk geometry showing a cylinder, a track, and a sector]

Physical sectors of a single-surface disk

LBN-to-physical for a single-surface disk

[Figure: LBNs 0-46 laid out in order around the concentric tracks of a single surface]

Extending mapping to a multi-surface disk

Some real numbers for modern disks

• # of platters: 1-4
  – 2-8 surfaces for data
• # of tracks per surface: 10s of 1000s
  – same thing as # of cylinders
• # of sectors per track: 500-900
  – so, 250-450 KB per track
• # of bytes per sector: usually 512
  – can be chosen by the OS for some disks
  – disk manufacturers want to make it bigger
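
A minimal C sketch of the basic mapping (not any drive's actual firmware, and ignoring the zones, defects, and skew that come next): with a fixed number of surfaces and sectors per track, an LBN decomposes into cylinder, surface, and sector by simple division.

```c
#include <stdio.h>

struct phys_loc { unsigned cyl, surface, sector; };

/* Map an LBN to a physical location on an idealized disk with a
 * fixed sectors-per-track count and no zones, defects, or skew. */
static struct phys_loc lbn_to_phys(unsigned lbn,
                                   unsigned surfaces,
                                   unsigned sectors_per_track)
{
    struct phys_loc p;
    unsigned sectors_per_cyl = surfaces * sectors_per_track;

    p.cyl     = lbn / sectors_per_cyl;
    p.surface = (lbn % sectors_per_cyl) / sectors_per_track;
    p.sector  = lbn % sectors_per_track;
    return p;
}

int main(void)
{
    /* hypothetical geometry: 4 surfaces, 600 sectors per track */
    struct phys_loc p = lbn_to_phys(123456, 4, 600);
    printf("cyl=%u surface=%u sector=%u\n", p.cyl, p.surface, p.sector);
    return 0;
}
```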

Clarification of Cylinder Numbering

First Complication: Zones

• Outer tracks are longer than inner ones
  – so they can hold more data
  – benefits: increased capacity and higher bandwidth
• Issues
  – increased bookkeeping for LBN-to-physical mapping
  – more complex signal processing logic, because of variable bit rate timing
• Compromise: zones
  – all tracks in a zone hold the same number of sectors

Constant number of sectors per track

Multiple “zones”

A real zone breakdown

• IBM Ultrastar 18ES (1998)

Zone   Start cylinder   End cylinder   SPT
0      0                377            390
1      378              1263           374
2      1264             2247           364
3      2248             3466           351
4      3466             4504           338
5      4505             5526           325
6      5527             7044           312
7      7045             8761           286
8      8762             9815           273
9      9816             10682          260
10     10683            11473          247
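
With zones, sectors-per-track varies, so the mapping must first walk a zone table like the one above. A sketch, assuming a hypothetical count of 10 data surfaces and showing only the first few zones:

```c
#include <stddef.h>

struct zone { unsigned start_cyl, end_cyl, spt; };

static const struct zone zones[] = {
    {0, 377, 390}, {378, 1263, 374}, {1264, 2247, 364},
    /* ... remaining zones from the table above ... */
};

#define SURFACES 10u   /* assumed number of data surfaces */

/* Return the cylinder that holds the LBN, or -1 if beyond capacity. */
static long lbn_to_cyl(unsigned long lbn)
{
    unsigned long base = 0;   /* first LBN of the current zone */
    for (size_t z = 0; z < sizeof(zones) / sizeof(zones[0]); z++) {
        unsigned long cyls = zones[z].end_cyl - zones[z].start_cyl + 1;
        unsigned long zone_lbns = cyls * SURFACES * zones[z].spt;
        if (lbn < base + zone_lbns) {
            unsigned long off = lbn - base;
            return zones[z].start_cyl + off / (SURFACES * zones[z].spt);
        }
        base += zone_lbns;    /* skip this whole zone and keep looking */
    }
    return -1;
}
```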

Second Complication: Defects

• Portions of the media can become unusable
  – both before installation and during use; the former is MUCH more common than the latter
• Need to set aside physical space as spares
  – simplicity dictates having no holes in the LBN space
  – many different organizations of spare space (e.g., sectors per track, per cylinder, per group of cylinders, per zone)
• Two schemes for using spare space to handle defects
  – remapping: leave everything else alone and just remap the disturbed LBNs
  – slipping: change the mapping to skip over defective regions

One spare sector per track

[Figure: the same single-surface layout with one spare sector reserved on each track; LBNs 0-43 shift accordingly]

Remapping from defective sector to spare

[Figure: the defective sector's LBN (25) is remapped to a spare sector; all other LBNs keep their original locations]

LBN mapping slipped past defective sector

[Figure: LBNs at and after the defective sector are slipped by one position, so the mapping skips over the bad sector]

Some Real Defect Management Schemes

• High-level facts
  – percentage of space: < 1%
  – always slip if possible; much more efficient for streaming data
• One real scheme: Seagate Cheetah 4LP
  – 108 spare sectors every 12 cylinders, located on the last track of the 12-cylinder group, used only for sectors remapped due to defects grown during use
  – many spare sectors on the innermost cylinders, used as a backstop for all slipped sectors

Computing physical location from LBN

• First, check the list of remapped LBNs
  – it usually identifies the exact physical location of the replacement
• If no match, do the steps from before
  – but also account for slipped sectors that affect the desired LBN (see the sketch below)
• There are roughly 10 different defect management schemes in use
  – for any given scheme the computation is fairly straightforward, but discussing them all at once concretely gets complex
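
A sketch of this two-step lookup; the remap list and slip table here are illustrative in-memory arrays, not any drive's real defect-list format:

```c
#include <stddef.h>

struct remap_entry { unsigned long lbn, phys_sector; };

/* illustrative defect data: one grown defect, three slipped sectors */
static const struct remap_entry remap_list[] = { {12345, 9000000} };
static const unsigned long slip_table[] = { 40, 41, 9000 };  /* sorted */

/* Return the physical sector index for an LBN. */
unsigned long lbn_to_sector(unsigned long lbn)
{
    size_t i;

    /* 1. Remapped LBNs point directly at their spare location. */
    for (i = 0; i < sizeof(remap_list) / sizeof(remap_list[0]); i++)
        if (remap_list[i].lbn == lbn)
            return remap_list[i].phys_sector;

    /* 2. Every slipped (skipped-over) sector at or before this position
     *    pushes the mapping one sector further out. */
    unsigned long sector = lbn;
    for (i = 0; i < sizeof(slip_table) / sizeof(slip_table[0]) &&
                slip_table[i] <= sector; i++)
        sector++;
    return sector;
}
```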

When defects “grow” during operation

• First, try ECC
  – it can recover from many problems
• Next, try to read the sector again
  – often, the failure to read the sector is transient
  – cost is a full rotation added to the access time
• Last resort: report failure and remap the sector
  – this means that the stored data has been lost
  – until the next write to this LBN, reads get an error response; new data allows the location change to take effect (see the sketch below)
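
A sketch of that recovery sequence; media_read and remap_sector are hypothetical placeholders standing in for the drive's internal media-access and defect-management routines:

```c
enum read_status { READ_OK, READ_CORRECTED, READ_FAILED };

/* hypothetical internal firmware helpers */
extern enum read_status media_read(unsigned long sector, void *buf);
extern void             remap_sector(unsigned long sector);

#define MAX_RETRIES 3

int read_with_recovery(unsigned long sector, void *buf)
{
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        enum read_status s = media_read(sector, buf);  /* includes ECC check */
        if (s == READ_OK || s == READ_CORRECTED)
            return 0;                   /* data delivered to the host */
        /* transient failure: each retry costs one full rotation */
    }
    remap_sector(sector);   /* last resort: data at this LBN is lost */
    return -1;              /* report error until the LBN is rewritten */
}
```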

Error Recovery Algorithm for READs

[Flowchart: read the sector and run the ECC check (typically tried up to 3 times). If the ECC symbol or retry threshold has not been reached, retry the sector read. Once the retry limit is reached, check whether the error recovery levels are exhausted: if not, increase the error recovery level and try again; if so, replace the block or signal the host to perform the replacement. On success, deliver the data to the host; otherwise signal a data error to the host.]

Third Complication: Skew

• Switching from one track to another takes time
  – sequential transfers would suffer a full rotation
• Solution: skew
  – offset the physical location of the first sector to avoid the extra rotation
  – the skew value is selected from switch-time statistics
• Track skew
  – for switching to the next surface within a cylinder
• Cylinder skew
  – for switching from the last surface of one cylinder to the first surface of the next cylinder

What happens to requests that span tracks?

[Figure: a request that spans two tracks, shown before the transfer, after reading the first part, and after the track switch]

What happens to requests that span tracks?

[Figure: the same request, without track skew]

Sector 12 rotates past the head during the track switch, so a full extra rotation is needed.

Same request with track skew of one sector

[Figure: the same request with a track skew of one sector, shown before the transfer, after reading the first part, and after the track switch]

Track skew prevents the unnecessary rotation.

Examples of Track and Cylinder Skews

        Quantum Atlas 10k            IBM Ultrastar 18ES
Zone    SPT   Track   Cylinder      SPT   Track   Cylinder
1       334   64      101           390   58      102
2       324   62      98            374   56      97
3       306   56      93            364   55      95

(skews given in sectors)

Computing Physical Location from LBN

• Figure out cylno, surfaceno, and sectno
  – using the algorithms indicated previously
• Compute the total skew for the first mapped physical sector on this track
  – totalskew = (cylno * cylskew) + ((surfaceno + cylno * (surfaces - 1)) * trackskew)
• Compute the rotational offset on the given track (sketch below)
  – offset = (totalskew + sectno) % sectspertrack
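
The same arithmetic as a small C helper (all quantities are in sectors):

```c
/* Rotational offset of an LBN's sector, accounting for skew. */
unsigned rotational_offset(unsigned cylno, unsigned surfaceno, unsigned sectno,
                           unsigned surfaces, unsigned sects_per_track,
                           unsigned track_skew, unsigned cyl_skew)
{
    /* Every cylinder boundary crossed adds cyl_skew; every track boundary
     * crossed (surfaces-1 per full cylinder, plus surfaceno within this
     * one) adds track_skew. */
    unsigned total_skew = cylno * cyl_skew
        + (surfaceno + cylno * (surfaces - 1)) * track_skew;
    return (total_skew + sectno) % sects_per_track;
}
```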

Basic On-disk Caching

On-disk RAM

• RAM on disk drive controllers is used for:
  – firmware
  – speed-matching buffer
  – prefetching buffer
  – cache
• Canonical disk drive buffers
  – several fixed-size “segments”, each holding a run of sectors
  – latest thing: variable-size segments
  – down the road: OS-style management

Prefetching and Caching

• Prefetching
  – sequential prefetch is essentially free until the next request arrives (and until the track boundary)
  – note: physically sequential sectors are prefetched; usefulness depends on access patterns
  – example algorithms: prefetch until the buffer is full or the next request arrives; MIN and MAX values for prefetching; if tracks n-1 and n have been READ, prefetch track n+1
• Caching
  – data in buffer segments is retained as a cache
  – most of the benefit comes from prefetching
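
A sketch of one possible prefetch policy in the spirit of these rules; the MIN/MAX bounds and the track-boundary cutoff are illustrative, not a particular drive's algorithm:

```c
#define PREFETCH_MIN 16u    /* always try to stay at least this far ahead */
#define PREFETCH_MAX 256u   /* never hold more than this in the buffer */

/* How many more sectors to prefetch right now, given how many are
 * already buffered ahead of the host's last sequential read. */
unsigned long prefetch_amount(unsigned long buffered,
                              int new_request_pending,
                              unsigned long sectors_left_on_track)
{
    if (buffered >= PREFETCH_MAX)
        return 0;                             /* buffer quota used up */
    if (new_request_pending && buffered >= PREFETCH_MIN)
        return 0;                             /* yield to the new request */
    unsigned long want = PREFETCH_MAX - buffered;
    if (want > sectors_left_on_track)
        want = sectors_left_on_track;         /* stop at the track boundary */
    return want;
}
```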

Disk Drive – Complete System?

[Figure: a disk drive resembles a small computer system: a control processor, buffer/cache RAM, and a bus interface connect to the media over internal buses, much as CPU, cache, memory, and I/O devices do in a host.]

Not really, recall this…

[Figure: the storage subsystem seen from the rest of the system: requests and completions flow from the device driver(s) over the system bus to a bus adapter, across the I/O bus to an I/O controller, and on to the independent disks.]

File Systems

Key FS design issues

• Application interface and system software
• Data organization and naming
• On-disk data placement
• Cache management
• Metadata integrity and crash recovery
• Access control

Starting at the top: what applications see

• At the highest level (in most systems)
  – contents of a file: a sequence of bytes
  – most basic operations: open, close, read, write
• open starts a “session” and returns a “handle”
  – in POSIX (e.g., Linux and various UNIXes), the handle is a process-specific integer called a “file descriptor”
  – the session remembers the current offset into the file
  – for local files, the session also postpones full file deletion
  – the handle is provided with each subsequent operation
• read or write access bytes at some offset in the file
  – the offset can be explicitly provided or remembered by the session
• close ends the session and destroys the handle (see the example below)
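
A minimal POSIX example of this session model (any readable file works; /etc/hostname is used here only as an example path):

```c
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    char buf[128];
    int fd = open("/etc/hostname", O_RDONLY);   /* start a session */
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = read(fd, buf, sizeof(buf));     /* reads at offset 0 ...        */
    ssize_t m = read(fd, buf, sizeof(buf));     /* ... continues at the session's
                                                   remembered offset            */
    printf("first read: %zd bytes, second read: %zd bytes\n", n, m);

    close(fd);                                  /* end session, drop the handle */
    return 0;
}
```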

Sidebar: shared and private sessions

• Two opens of the same file yield independent sessions
  – [Figure: fd1 and fd2 each point to their own open-file object with its own offset, both referring to the same file]
• A session can be shared across handles or processes
  – [Figure: fd1 and fd2 both point to a single open-file object with one shared offset]

Where information resides

[Figure: layering: user-level applications issue system calls; inside the kernel, file descriptors lead to the file system.]

Some associated structures in kernel

[Figure: per-process file-descriptor arrays (fds[0..n]) point into kernel file structures; each file structure holds f_flag, f_count, f_offset, and f_inode. Process A and Process B here are “sharing” one file structure (f_count=2) through different file descriptors.]

Moving data between kernel and application

• Common approach: copy it
  – as in read(fd, buffer, size) or write(fd, buffer, size)
  – simplifies things in two ways: the application knows it can use the memory immediately (and that the corresponding data is in that memory), and the kernel has no hidden coordination with the application (e.g., later changes to buffer do not silently change the file)
• Sometimes better approach: hand it off
  – as in char *buffer = read(fd, size); notice that the buffer containing the data is returned
  – this allows page swapping (via VM) rather than copying
  – downsides: sometimes not much of a performance improvement; makes file caching more difficult; can be confusing for application writers

The uio structure for scatter/gather I/O

[Figure: a struct uio holds uio_offset (the logical position in the file), uio_iovcnt (here 3), and a pointer to an array of struct iovec; each iovec gives the base address and length of one buffer in the process address space.]
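
Applications see the same idea through the POSIX scatter/gather calls: readv() fills several separate buffers in one system call, described by an array of iovecs.

```c
#include <sys/uio.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    char hdr[16], body[64];
    struct iovec iov[2] = {
        { .iov_base = hdr,  .iov_len = sizeof(hdr)  },
        { .iov_base = body, .iov_len = sizeof(body) },
    };

    int fd = open("/etc/hostname", O_RDONLY);   /* example path */
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = readv(fd, iov, 2);   /* one call, two destination buffers */
    printf("readv returned %zd bytes\n", n);
    close(fd);
    return 0;
}
```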

Remember that the FS data lives on disk

[Figure: the same layering (system calls, file descriptors, file system), with the file system sitting on top of the disk.]

From file offsets to LBNs

• File offsets
  – 0 to num_blocks_in_file; an offset into a file is given as a block number
• File system blocks
  – 0 to num_blocks_in_filesystem
• A single FS block may span multiple disk LBNs

[Figure: file offsets 0, 1, 2, …, 19, 20, 21 map to a range of FS blocks (around 98-121), which in turn map into the LBN space (e.g., near LBNs 1658, 1670, 2004); FS block size = 8 LBNs (4 KB).]
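
A sketch of the translation pictured above; the block list is a small in-memory array standing in for the file's on-disk block list, and the file system is assumed to start at LBN fs_start_lbn:

```c
#define FS_BLOCK_SIZE  4096u                    /* 8 LBNs of 512 bytes */
#define LBNS_PER_BLOCK (FS_BLOCK_SIZE / 512u)

/* hypothetical block list for one file: file block i -> FS block */
static const unsigned block_list[] = { 99, 100, 101, 102, 120, 121 };

/* Return the LBN holding the given byte offset of the file.
 * (No bounds checking; this is only a sketch.) */
unsigned long offset_to_lbn(unsigned long byte_offset,
                            unsigned long fs_start_lbn)
{
    unsigned file_block = byte_offset / FS_BLOCK_SIZE;
    unsigned fs_block   = block_list[file_block];
    unsigned within     = (byte_offset % FS_BLOCK_SIZE) / 512u;
    return fs_start_lbn + (unsigned long)fs_block * LBNS_PER_BLOCK + within;
}
```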

Mapping file offsets to disk LBNs

• Issue in question
  – must know which LBNs hold which file's data
• Trivial mapping: just remember the start location
  – then keep the entire file in contiguous LBNs; what happens when it grows?
  – alternately, include a “next pointer” in each “block”; how does one find the location of a particular offset?
• Most common approach: block lists
  – an array with one LBN per block in the file
  – note: the file block size can exceed one logical block; the file system treats groups of logical blocks as a unit

A common approach to recording a block list

[Figure: an inode's block list: 12 direct block pointers (e.g., to LBNs 576, 344, 968); an indirect block whose entries point to data blocks 13..N (e.g., LBNs 632, 1944, 480); and a double-indirect block pointing to further indirect blocks for data blocks N+1, N+2, …, Q+1, … (e.g., LBNs 96, 176, 72).]
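
A sketch of the classic direct/indirect/double-indirect lookup; the inode layout is simplified and read_ptr is a hypothetical helper that reads one pointer out of an on-disk block:

```c
#include <stdint.h>

#define NDIRECT        12u
#define PTRS_PER_BLOCK 1024u     /* 4 KB block / 4-byte pointers */

struct inode_sketch {
    uint32_t direct[NDIRECT];
    uint32_t indirect;
    uint32_t double_indirect;
};

/* hypothetical: read pointer number `index` out of on-disk block `block` */
extern uint32_t read_ptr(uint32_t block, uint32_t index);

/* Map a file block number to the FS block that holds it.
 * (Triple indirection omitted to keep the sketch short.) */
uint32_t fileblock_to_fsblock(const struct inode_sketch *ip, uint32_t fbn)
{
    if (fbn < NDIRECT)
        return ip->direct[fbn];
    fbn -= NDIRECT;
    if (fbn < PTRS_PER_BLOCK)
        return read_ptr(ip->indirect, fbn);
    fbn -= PTRS_PER_BLOCK;
    return read_ptr(read_ptr(ip->double_indirect, fbn / PTRS_PER_BLOCK),
                    fbn % PTRS_PER_BLOCK);
}
```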


• The FS stores other per-file information as well
  – length of the file
  – owner
  – access permissions
  – last modification time
  – …
• Usually kept together with the block list

Supporting multiple file system types

[Figure: system calls and file descriptors now sit on top of a vnode layer; vnode operators dispatch to FS-specific implementations such as LFS, FFS, and NFS, with the local ones backed by the disk.]

Vnode layer: inside kernel

• Want to have multiple file systems at once
  – and possibly of differing types
• Solution: a virtual file system layer
  – adding a level of indirection always seems to help…
• Everything in the kernel interacts with the FS via a virtualized layer of functions
  – these function calls are routed to the appropriate FS-specific implementations once the correct FS type has been identified (see the sketch below)
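
A sketch of the indirection involved: a table of function pointers that each file system type fills in, so kernel code can call through it without knowing the FS type. The names are illustrative, not any particular kernel's vnode interface.

```c
struct vnode;

struct vnode_ops {
    int (*vop_open)(struct vnode *vp, int flags);
    int (*vop_read)(struct vnode *vp, void *buf, long len, long off);
    int (*vop_write)(struct vnode *vp, const void *buf, long len, long off);
    int (*vop_close)(struct vnode *vp);
};

struct vnode {
    int                     v_count;   /* reference count */
    const struct vnode_ops *v_ops;     /* FS-specific implementation */
    void                   *v_data;    /* FS-private data (e.g., in-core inode) */
};

/* Generic kernel code calls through the table, oblivious to the FS type. */
static inline int vn_read(struct vnode *vp, void *buf, long len, long off)
{
    return vp->v_ops->vop_read(vp, buf, len, off);
}
```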

Open file object points to a vnode

[Figure: the user area of the currently running process (uf_ofile[]) points to an open-file object; its f_vnode field points to a vnode structure (one per active file) holding v_count, v_ops (an array of pointers to the FS-specific functions implementing the virtual FS interface), v_vfsp, and v_data (a pointer to an FS-dependent structure such as an inode, an in-memory copy, of course).]

Can also support non-disk FS

[Figure: the vnode layer dispatching to an NFS client (which reaches an NFS server over the network), a PC file system (backed by a floppy), and a 4.2BSD file system (backed by a disk).]

Key FS design issues

• Application interface and system software
• Data organization and naming
• On-disk data placement
• Cache management
• Metadata integrity and crash recovery
• Access control

What makes this so important?

• One of the biggest problems, looking ahead
  – with TBs of data, how does one organize things?
  – how do we ensure we can find what we want later?
• Not nearly as easy as it seems
  – try to find some old piece of paper sometime (e.g., your exam #2 from Calculus 3)
  – think ahead to when you're a lot busier…

Common approach: hierarchy

• Hierarchies are good for dealing with complexity
  – … and data organization is a complex problem
• It works well for moderate-sized data sets
  – easy to identify coarse breakdowns
  – when it gets too big, split it and refine the namespace
• Traversing the directory hierarchy
  – the ‘.’ and ‘..’ entries

[Figure: an example F/S tree: the root directory ‘/’ contains the directories dira, dirb, and dirc, one of which contains a file named wow.]

What's in a directory

• Directories translate file names to inode IDs
  – a directory is just a special file with entries formatted in some way:

  4 bytes: inode number | 2 bytes: record length | 2 bytes: file type and length of name | variable length: name (max. 255 characters), NUL-terminated

  – often, sets of entries are put in sector-sized chunks

A directory block with three entries: <#, FILE, 5, foo.c>, <#, DIR, 3, bar>, <#, DIR, 6, mumble> (inode number, type, name length, name)
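
The same layout as a C struct sketch (field widths follow the slide; real file systems differ in the details):

```c
#include <stdint.h>

#define MAXNAMLEN 255

struct dir_entry {
    uint32_t d_ino;                 /* inode number (0 = unused slot)  */
    uint16_t d_reclen;              /* length of this record, in bytes */
    uint8_t  d_type;                /* file, directory, symlink, ...   */
    uint8_t  d_namlen;              /* length of the name              */
    char     d_name[MAXNAMLEN + 1]; /* NUL-terminated, variable length */
};
```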

Managing namespace: mount/unmount

• One can have many FSs on many devices
  – … but only one namespace
• So, one must combine the FSs into one namespace
  – it starts with a “root file system”: the one that has to be there when the system boots
  – the “mount” operation attaches one FS into the namespace, at a specific point in the overall namespace
  – “unmount” detaches a previously-attached file system

Mounting an FS

[Figure: before mounting, the root FS has ‘/’ containing the directory tomd (with a file junk), and a separate FS has ‘/’ containing directories dira, dirb, dirc and a file wow. After ‘# mount FS /tomd’, the second FS's tree (dira, dirb, dirc, and wow) appears under /tomd in the single namespace.]

How to find the root directory?

• Need enough information to find the key structures
  – allocation structures
  – the inode for the root directory
  – any other defining information
• Common approach
  – use predetermined locations within the file system: known locations of (copies of) superblocks
• Alternate approach
  – some external record

Sidebar: multiple FSs on one disk

• How is this possible?
  – divide the capacity into multiple “partitions”
• How are the partitions remembered?
  – commonly, via a “partition map” at the 2nd LBN
  – each partition map entry specifies the start LBN of the partition and its length (in logical blocks)
• Usually device drivers handle the partition map
  – file system requests are relative to their partition
  – the device driver shifts these requests relative to the partition start (sketch below)
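
A sketch of the shift the device driver performs using one partition-map entry; lengths are in 512-byte logical blocks:

```c
struct partition { unsigned long start_lbn; unsigned long length; };

/* Translate a partition-relative LBN to an absolute disk LBN.
 * Returns -1 if the request falls outside the partition. */
long long partition_to_disk_lbn(const struct partition *p,
                                unsigned long rel_lbn)
{
    if (rel_lbn >= p->length)
        return -1;                    /* beyond the end of the partition */
    return (long long)p->start_lbn + rel_lbn;
}
```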

Difficulty with directory hierarchies

• Can be very difficult to scale to large sizes
  – eventually, the refinements become too fine
  – and they tend to be less distinct
• Problem: what happens when the number of entries in a directory grows too large?
  – think about having to read through all of those entries
  – possible solution: partition into subdirectories again
• Problem: what happens when data objects could fit into any of several subdirectories?
  – think about having to find something specific
  – possible solution: multiple names for such files

On-disk Data Placement

Key FS design issues

• Application interface and system software
• Data organization and naming
• On-disk data placement
• Cache management
• Metadata integrity and crash recovery
• Access control

Fact – seek time depends on distance

                     Quantum Atlas III   Seagate Cheetah 15K.3
year                 1998                2002
seek                 7.8 ms              3.6 ms
single-track seek    0.80 ms             0.20 ms
full-stroke seek     15 ms               6.5 ms

Goal – requests in sequence physically near one another

Fact – positioning time dominates transfer

[Figure: service time (milliseconds) vs. request size (1-256 KB) for the Seagate Cheetah 15K.3 (model ST373453, capacity 73.4 GB), comparing the average-seek and one-cylinder cases. Time to read the entire disk, sequentially or randomly with varying request size: sequential < 20 minutes; random 2 KB requests > 30 hours; random 128 KB requests < 1 hour.]

Goal – fewer, larger requests to amortize positioning costs

Breakdown of disk head time

[Figure: percentage of disk head time spent on seek, rotational latency, and data transfer, for request sizes from 1 KB to 4096 KB.]

File System Allocation

• Two issues
  – keep track of which space is available
  – pick unused blocks for new data
• Simplest solution: a free list
  – maintain a linked list of free blocks, using space in the unused blocks to store the pointers
  – grab a block from this list when a new block is needed; usually, the list is used as a stack
• While simple, this approach rarely yields good performance

File System Allocation (cont.)

• Most common approach: a bitmap
  – a large array of bits, with one bit per allocation unit; one value says “free” and the other says “in use”
  – scan the array when a new block is needed
  – we don't have to take the first “free” block in the array: we can look in particular regions, or look for particular patterns (see the sketch below)
• Even better way (in some cases): a list of free extents
  – maintain a sorted list of “free” extents of space, each holding a contiguous range of free space
  – pull space from a part of a specific free extent: can start at a specific point, or can look for a point with significant room for growth
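
A sketch of bitmap allocation: scan forward from a goal block (for locality) and claim the first free bit; the sizes and helpers here are illustrative.

```c
#include <stdint.h>

#define NBLOCKS 8192u
static uint8_t bitmap[NBLOCKS / 8];     /* bit set = block in use; zeroed = all free */

static int  test_bit(unsigned b) { return bitmap[b / 8] & (1u << (b % 8)); }
static void set_bit(unsigned b)  { bitmap[b / 8] |= (1u << (b % 8)); }

/* Allocate a free block, searching forward from `goal` and wrapping. */
long alloc_block(unsigned goal)
{
    for (unsigned i = 0; i < NBLOCKS; i++) {
        unsigned b = (goal + i) % NBLOCKS;
        if (!test_bit(b)) {
            set_bit(b);
            return (long)b;
        }
    }
    return -1;      /* file system is full */
}
```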

File System Allocation – Summary

• FS performance is (largely) dictated by disk performance
  – and optimization starts with the allocation algorithms
  – as always, there are exceptions to this rule
• Two technology drivers yield two goals
  – closeness (locality): reduce seeks by putting related things close to each other; generally, the benefits can be in the 2x range
  – amortization (large transfers): amortize each positioning delay by accessing lots of data; generally, the benefits can reach into the 10x range

Spatial proximity can yield…

[Figure: placing inodes next to the data blocks they point to (e.g., inode #3 next to data block #20; inode #5 next to data blocks #42 and #44) gives fast file access, and placing directory entries <name, inode> near the inodes and data they reference gives fast directory access.]

Fast File System (1984)

• Source of many still-popular optimizations
• For locality: cylinder groups
  – called allocation groups in many modern file systems

[Figure: default usage of the LBN space, divided into allocation groups; each group contains a superblock copy, a bitmap, inodes, and data blocks.]

• allocate an inode in the cylinder group of its directory
• allocate a file's first data blocks in the cylinder group of its inode

Other ways of enhancing locality

• Disk request scheduling
  – for example, consider all dirty blocks in the file cache
• Write anywhere
  – specifically, writing near the disk head [Wang99]; assumes space is free and the head's location is known
  – a cool idea that nobody currently uses
• Same thing for reads
  – assumes multiple replicas on the disk
  – difficult to keep the metadata consistent

FFS schemes

• To get large transfers
  – larger block size: more data per disk read or write; use with fragments for small-file space efficiency
  – allocate the next block right after the previous one, if possible, by starting the search at the block # just after the previous one
  – fetch more when sequential access is detected, so multiple blocks are read per seek + rotational latency

Other ways of getting large transfers

• Re-allocation
  – to re-establish sequential allocation when it was not feasible at the time of the original allocation
  – can you give an example?
• Pre-allocation
  – to avoid a failure to allocate sequentially later
• Extents (and extent-like structures)
  – as a replacement for block lists
  – as a replacement for bitmaps
  – things to consider: when does this help? when does it hurt performance?

Block-based vs. Extent-based Allocation

[Figure: block-based allocation maps a file to many individually-listed blocks in storage; extent-based allocation describes the same file as a few contiguous extents.]

Sidebar: BSD FFS constants

Parameter    Meaning
MAXBPG       max blocks per file in a cylinder group
MAXCONTIG    max contiguous blocks before a rotdelay gap
MINFREE      min percentage of free space
NSECT        sectors per track
ROTDELAY     rotational delay between contiguous blocks
RPS          revolutions per second
TRACKS       tracks per cylinder
TRACKSKEW    track skew in sectors

• What is their purpose?
  – historical perspective
  – these details are being pushed down

Object-based Access

OIDs

• Generation of a unique ID
• Flat name space (no hierarchy)
• How to remember where things are?
  – divide and conquer
  – employ external applications/databases
• Will discuss in the context of Centera

What's next…

• Lecture: 9/26
  – database structures
  – DB workloads
