
Transaction Processing: Basics - Transactions
• Transaction is execution of program that accesses DB
• Basic operations:
  1. read item(X): Read DB item X into program variable
  2. write item(X): Write program variable into DB item X
• Basic unit of transfer is disk block
• Basic ops consist of lower-level atomic operations:
  – read item(X):
    1. Find address of disk block containing X
    2. Copy block into buffer if not in memory already
    3. Copy data item from buffer to variable
  – write item(X):
    1. Find address of disk block containing X
    2. Copy block into buffer if not in memory already
    3. Copy variable into appropriate location in buffer
    4. Write buffer to disk
• Sample transaction:
    read item(X)
    X = X + A
    write item(X)
    read item(Y)
    Y = Y + X
    write item(Y)
• Problems arise wrt concurrency and recovery
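• As a concrete illustration, a minimal Python sketch of the read item/write item steps above, assuming a toy dictionary-based "disk" and buffer pool (the block ids and the block_of directory are invented for the example):

    # Minimal sketch of read_item/write_item; the "disk" and the buffer pool
    # are dictionaries keyed by block id.
    disk = {"blk1": {"X": 100, "Y": 50}}    # block id -> data items in that block
    buffer_pool = {}                        # block id -> in-memory copy

    def block_of(item):
        return "blk1"                       # toy directory: every item lives in blk1

    def read_item(item):
        blk = block_of(item)                # 1. find address of block containing item
        if blk not in buffer_pool:          # 2. copy block into buffer if not cached
            buffer_pool[blk] = dict(disk[blk])
        return buffer_pool[blk][item]       # 3. copy item from buffer to program variable

    def write_item(item, value):
        blk = block_of(item)                # 1. find address of block containing item
        if blk not in buffer_pool:          # 2. copy block into buffer if not cached
            buffer_pool[blk] = dict(disk[blk])
        buffer_pool[blk][item] = value      # 3. copy variable into the buffer
        disk[blk] = dict(buffer_pool[blk])  # 4. write buffer back to disk

    # Sample transaction from above, with A = 10
    A = 10
    X = read_item("X"); X = X + A; write_item("X", X)
    Y = read_item("Y"); Y = Y + X; write_item("Y", Y)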

1 Transaction Processing: Basics - Concurrency Problems

Lost Update

T1              T2
r(X)
X += A
                r(X)
w(X)
                X += B
                w(X)

Dirty Read

T1              T2
r(X)
X += A
w(X)
                r(X)
abort
                w(X)

Incorrect Summary

T1              T2
r(X)
                Sum = 0
X += A
w(X)
                r(X)
                Sum += X
                r(Y)
r(Y)
Y += B
w(Y)
                Sum += Y

2 Transaction Processing: Basics - Concurrency Problems (2)

Non-repeatable Read

T1              T2
r(X)
r(Y)
                r(X)
                X += A
                w(X)
r(Z)
r(X)

3 Transaction Processing: Basics - Desirable Characteristics
• ACID properties:
  1. Atomicity
     – Transaction is atomic unit of DB processing
     – It executes in its entirety, or not at all
  2. Consistency preservation
     – Transaction takes DB from one consistent state to another
  3. Isolation
     – Effects of transaction are invisible to other transactions until committed
     – Degrees of isolation:
       (a) 0: Does not overwrite the dirty (uncommitted) writes of other transactions
       (b) 1: No lost updates
       (c) 2: No lost updates and no dirty reads
       (d) 3 (true isolation): Repeatable reads
  4. Durability
     – Once committed, changes cannot be lost

4 Transaction Processing: Basics - Recovery
• To enforce consistency, system must ensure that transactions either
  1. Complete successfully, with all changes to DB permanently recorded, or
  2. Have no effect on DB - any changes made are undone
  I.e., either all of its actions have effect, or none of them have effect
• Transaction (redefined): Atomic unit of DB work that is executed in its entirety or not at all
• Types of transactions:
  – Read only
  – Read/write
• Types of failure:
  – Local errors or exceptions detected by transaction - data not found, program condition not met
  – Transaction or system error - overflow, user-enforced abort, logical error
  – Concurrency error
  – System crash
  – Disk failure
  – Physical problems, catastrophes - disk not mounted, fire, theft
• First 4 recoverable
• Recovery Manager is module responsible for recovery
• Following operations monitored by Recovery Manager:
  1. Begin transaction
  2. Read
  3. Write
  4. End transaction
  5. Commit
  6. Rollback (abort)
  7. Undo
  8. Redo

5 Transaction Processing: Basics - Recovery (2)

• Execution of transaction can be represented by state diagram with states active, partially committed, committed, failed, and terminated

• Partially committed state most interesting:
  – System checks whether transaction has interfered with other transactions
  – Checks if failure at this point would preclude recovery
  – If checks OK, commit
• System log (journal) maintains record of transactions for recovery
• Stored on disk, archived on tape
• Types of entries:
  1. [start trans, T]
  2. [write item, T, X, old, new]
  3. [read item, T, X]
  4. [commit, T]
  5. [abort, T]
• Commit point reached when
  1. Transaction completed, and
  2. Effects of all operations recorded in log
• After the commit point, all of the transaction's actions are assumed permanent

6 Transaction Processing: Basics - Recovery (3)

• On failure,
  – Only those transactions not yet committed need be rolled back
  – Committed transactions whose operations are not yet recorded in DB can be redone from log
• Log written to disk when
  – Block is full
  – On commit (force-write)
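• A small Python sketch of redo/undo from the log, assuming log records are tuples and write records carry both old and new values (the sample records are invented for the example):

    # Sketch of restart recovery: redo the writes of committed transactions,
    # undo (restore before-images of) the writes of uncommitted ones.
    log = [
        ("start", "T1"), ("write", "T1", "X", 100, 110), ("commit", "T1"),
        ("start", "T2"), ("write", "T2", "Y", 50, 60),      # T2 never committed
    ]
    db = {"X": 100, "Y": 60}                 # state on disk after the crash

    committed = {rec[1] for rec in log if rec[0] == "commit"}

    for rec in log:                          # forward pass: redo committed writes
        if rec[0] == "write" and rec[1] in committed:
            _, _, item, old, new = rec
            db[item] = new
    for rec in reversed(log):                # backward pass: undo uncommitted writes
        if rec[0] == "write" and rec[1] not in committed:
            _, _, item, old, new = rec
            db[item] = old
    print(db)                                # {'X': 110, 'Y': 50}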

7 Transaction Processing: Schedules - Intro

• Schedule of transactions T1, T2, ..., Tn is an ordering of the operations of the Ti such that the operations of each Ti appear in the schedule in the same relative order as in Ti
• Operations of interest: read (r), write (w), abort (a), commit (c)
• Given 2 transactions
  1. T1: r(x); x = x + a; w(x);
  2. T2: r(x); x = x + b; w(x);
  2 schedules would be
  1. S1: r1(x); w1(x); c1; r2(x); w2(x); c2;
  2. S2: r1(x); r2(x); w2(x); w1(x); c2; c1;
• Conflicting operations are those that
  1. Belong to different transactions
  2. Access the same data item
  3. One of which is a write
  In S1 (and S2), the conflicting operations are r1(x) and w2(x), r2(x) and w1(x), and w1(x) and w2(x)
• Complete schedule is one in which
  1. Every transaction ends with an abort or commit
  2. For any pair of conflicting operations, one must precede the other in the schedule
• Complete schedule imposes no restrictions on order of non-conflicting operations
  – Imposes a partial ordering on operations
• Since DB processing is dynamic, concept of complete schedule is an abstract ideal
• Committed projection of schedule S, C(S), is the set of operations belonging to committed transactions of S
  – Is more practical concept than a complete schedule
• Can classify schedules wrt their recoverability and their serializability
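• A short Python sketch of how conflicting operations can be picked out of a schedule, with each operation encoded as a (transaction, action, item) tuple (the encoding is just for illustration):

    # List the conflicting pairs in a schedule: different transactions,
    # same item, at least one write.
    def conflicting_pairs(schedule):
        pairs = []
        for i, (ti, ai, xi) in enumerate(schedule):
            for tj, aj, xj in schedule[i + 1:]:
                if ti != tj and xi == xj and "w" in (ai, aj):
                    pairs.append(((ti, ai, xi), (tj, aj, xj)))
        return pairs

    S1 = [("T1", "r", "x"), ("T1", "w", "x"), ("T2", "r", "x"), ("T2", "w", "x")]
    print(conflicting_pairs(S1))
    # [(('T1','r','x'), ('T2','w','x')), (('T1','w','x'), ('T2','r','x')),
    #  (('T1','w','x'), ('T2','w','x'))]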

8 Transaction Processing: Schedules - Recoverability

• T2 reads from T1 if T2 reads data items written by T1

– T1 should not abort prior to T2’s read

– There is no T3 that writes the data item between T1's write and T2's read
• Types of schedules wrt recoverability:

1. Recoverable schedule is one in which no transaction T2 commits until all the Ti that write data items read by T2 have committed
   (a) S1: r1(x); w1(x); c1; r2(x); w2(x); c2; (recoverable)
   (b) S2: r1(x); r2(x); w2(x); w1(x); c2; c1; (recoverable)
   (c) S3: r1(x); w1(x); r2(x); w2(x); c2; c1; (not recoverable)
   – For a recoverable schedule, no committed transaction will ever need to be rolled back
2. Schedule avoids cascading rollback if every one of its transactions reads only data items that were written by already-committed transactions

   – Cascading rollback is situation in which T1 aborts, forcing T2 to roll back because it read a data item written by T1
   (a) S3 above experiences cascading rollback if we change c1 to a1
   (b) S1 above avoids cascading rollback
3. Strict schedule is one in which no transaction reads or writes a data item until the last transaction that wrote that item has committed (or aborted)
   – Recovery is simplified to restoring the 'before' images of data items
   (a) S1 above is strict
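• A Python sketch of a recoverability check built on the reads-from relation above (same (transaction, action, item) encoding; abort handling is omitted to keep the sketch short):

    # A schedule is recoverable if no transaction commits before every
    # transaction it read from has committed.
    def reads_from(schedule):
        last_writer = {}                       # item -> transaction that last wrote it
        rf = set()                             # (reader, writer) pairs
        for t, action, item in schedule:
            if action == "w":
                last_writer[item] = t
            elif action == "r" and last_writer.get(item) not in (None, t):
                rf.add((t, last_writer[item]))
        return rf

    def is_recoverable(schedule):
        rf = reads_from(schedule)
        committed = set()
        for t, action, item in schedule:
            if action == "c":
                if any(reader == t and writer not in committed for reader, writer in rf):
                    return False               # t commits before a transaction it read from
                committed.add(t)
        return True

    S3 = [("T1", "r", "x"), ("T1", "w", "x"), ("T2", "r", "x"),
          ("T2", "w", "x"), ("T2", "c", None), ("T1", "c", None)]
    print(is_recoverable(S3))                  # False: T2 read from T1 but commits first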

9 Transaction Processing: Schedules - Serializability
• Serial schedule is one in which transactions are not interleaved
  – Is correct
  – Wastes CPU time
• Non-serial schedule allows interleaving
  – Subject to concurrency problems
• Serializable schedule of n transactions is equivalent to some serial schedule of those n transactions
  – n! possible serial schedules for n transactions
  – m! possible schedules for the m operations in those n transactions
    ∗ 2 disjoint sets of these m! schedules:
      1. Those equivalent to at least one serial schedule
      2. Those not equivalent to any serial schedule

10 Transaction Processing: Schedules - Types of Serializability Equivalence
1. Result equivalence
   • Schedules S1 and S2 are result equivalent if they produce the same result
   • Generally not meaningful
   • Example:
     – Transactions:
       (a) T1: r(x); x = x²; w(x);
       (b) T2: r(x); x -= 5; w(x);
     – Schedules (x = 3 initially; both leave x = 4):
       (a) S1: r1(x); x = x²; w1(x); r2(x); x -= 5; w2(x);
       (b) S2: r2(x); x -= 5; w2(x); r1(x); x = x²; w1(x);
2. Conflict equivalence
   • Schedules S1 and S2 are conflict equivalent if the order of any 2 conflicting operations is the same in each
   • If the orders differ, the schedules could produce different results
     – Transactions:
       (a) T1: r(x);
       (b) T2: r(x); w(x);
     – Schedules:
       (a) S1: r1(x); r2(x); w2(x);
       (b) S2: r2(x); w2(x); r1(x); (not conflict equivalent to S1)
       (c) S3: r2(x); r1(x); w2(x); (conflict equivalent to S1)
   • Schedule is conflict serializable if it is conflict equivalent to some serial schedule

11 Transaction Processing: Schedules - Types of Serializability Equivalence (2)
• Conflict serializable schedule can have its operations reordered to obtain the equivalent serial schedule
  – Transactions:
    (a) T1: r(x); w(x);
    (b) T2: r(x); w(x);
  – Schedules:
    S1: r1(x); x += a; w1(x); r2(x); x += b; w2(x); (serial: T1 then T2)
    S2: r2(x); x += b; w2(x); r1(x); x += a; w1(x); (serial: T2 then T1)
    S3: r1(x); r2(x); x += a; w1(x); x += b; w2(x);

– Transactions:
  (a) T1: r(x); w(x); r(y); w(y);
  (b) T2: r(x); r(z); w(x);
  (c) T3: r(x); r(z); w(y);
– Schedules (two interleavings of T1, T2, T3):
  S1: r(x); w(x); r(x); r(y); r(z); w(x); r(x); r(z); w(y); w(y);
  S2: r(x); r(x); r(x); w(x); r(z); r(z); r(y); w(y); w(y); w(x);

12 Transaction Processing: Schedules - Types of Serializability Equivalence (3)
• To test for conflict serializability:
  (a) For each transaction Ti in S, create a graph node Ti
  (b) For each situation in S where Tj does a read(X) after Ti does a write(X), create an edge Ti → Tj
  (c) For each situation in S where Tj does a write(X) after Ti does a read(X), create an edge Ti → Tj
  (d) For each situation in S where Tj does a write(X) after Ti does a write(X), create an edge Ti → Tj
  (e) S is serializable if the resulting graph contains no cycles
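• A Python sketch of this test: build the precedence graph from the conflicts in the schedule and check it for a cycle with a DFS (same (transaction, action, item) encoding as the earlier sketches):

    def precedence_graph(schedule):
        edges = set()
        for i, (ti, ai, xi) in enumerate(schedule):
            for tj, aj, xj in schedule[i + 1:]:
                if ti != tj and xi == xj and "w" in (ai, aj):
                    edges.add((ti, tj))          # Ti's op precedes Tj's conflicting op
        return edges

    def has_cycle(edges):
        graph = {}
        for u, v in edges:
            graph.setdefault(u, set()).add(v)
        visiting, done = set(), set()
        def dfs(node):
            visiting.add(node)
            for nxt in graph.get(node, ()):
                if nxt in visiting or (nxt not in done and dfs(nxt)):
                    return True
            visiting.discard(node)
            done.add(node)
            return False
        return any(dfs(n) for n in graph if n not in done)

    S = [("T1", "r", "x"), ("T2", "r", "x"), ("T1", "w", "x"), ("T2", "w", "x")]
    print(has_cycle(precedence_graph(S)))        # True -> S is not conflict serializable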

• If S is serializable, an equivalent serial schedule is one in which Ti precedes every Tj for which there is an edge Ti → Tj in the graph
3. View equivalence
   • 2 schedules S1 and S2 are view equivalent if
     (a) S1 and S2 contain the same set of transactions
     (b) If Ti performs ri(X), where either X has been written by Tj in S1 or X has not yet been modified in S1, then the same condition must hold for X when Ti performs ri(X) in S2
     (c) If wk(Y) of Tk is the last write of Y in S1, then wk(Y) of Tk must be the last write of Y in S2
   • Motivation: as long as each read reads the result of the same write in each schedule, the writes must produce the same results
   • Reads see the same 'view' of the DB in both schedules

13 Transaction Processing: Schedules - Types of Serializability Equivalence (4)
• Schedule is view serializable if it is view equivalent to some serial schedule
  – Transactions:
    (a) T1: r(x); w(x); c;
    (b) T2: w(x); c;
    (c) T3: w(x); c;
  – Schedules:
    S1: r1(x); w1(x); c1; w2(x); c2; w3(x); c3; (serial: T1, T2, T3)
    S2: r1(x); w2(x); w1(x); w3(x); c1; c2; c3;

• View and conflict serializability coincide if the constrained write assumption holds on all transactions: any wi(x) in Ti is preceded by ri(x) in Ti, and the value written by wi(x) depends only on the value of x read by ri(x)

• Unconstrained write: wi(x) in Ti is independent of any old value of x
  – Called a blind write
• Every conflict serializable schedule is view serializable, but not vice-versa

14 Transaction Processing: Schedules - Serializability Problems
• Problems:
  1. Cannot determine order of interleaving of transactions prior to execution
  2. If serializability is only tested at end-transaction, may need to abort
  3. No beginning/end of schedule in continuous system
• To deal with problems, use protocols
  – Protocols based on serializability
  – Are sets of rules that ensure serializable schedules

15 Transaction Processing: Concurrency Control - Locks Intro
• Lock is variable associated with data item
• Value determines operations that are allowed on item
• Synchronizes access to data item
• Used to simulate serial schedules
• Several types:
  – Binary
  – Multi-mode

16 Transaction Processing: Concurrency Control - Binary Locks
• Have 2 states: locked, unlocked
• Transaction locks and unlocks data items it accesses
• When locked, only transaction issuing lock has access to data item
• 2 operations:
  1. lock item(x) (l(x))
     Algorithm:
       B: if (lock(x) == 0)
              lock(x) ← 1
          else {
              wait
              goto B
          }
     – wait adds request to a queue
     – When x is unlocked, waiting requests are removed from queue
  2. unlock item(x) (ul(x))
     Algorithm:
       lock(x) ← 0
       if (queue not empty)
           wake up next transaction on queue
• Locks enforce mutual exclusion on data items
• lock item and unlock item must be atomic
• Critical section of transaction is section delimited by lock and unlock
• Lock manager is DBMS module for managing locks
• Transactions must obey following rules:
  1. Transaction T must issue lock on item before read/write of item
  2. T must unlock item after completing reads and writes
  3. T won't issue locks on items that it already holds locks on
  4. T won't issue unlocks on items that it doesn't hold locks on
• Problems:
  – Only allows 1 transaction to access a data item at any 1 time
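• A Python sketch of such a binary lock table, written as a single-threaded simulation in which a request that cannot be granted is queued instead of busy-waiting on "goto B" (names are illustrative):

    lock = {}          # item -> transaction holding the lock (absent means unlocked)
    waiting = {}       # item -> FIFO queue of waiting transactions

    def lock_item(txn, x):
        if x not in lock:
            lock[x] = txn
            return True                        # lock granted
        waiting.setdefault(x, []).append(txn)  # add request to the queue and wait
        return False

    def unlock_item(txn, x):
        assert lock.get(x) == txn
        del lock[x]
        if waiting.get(x):                     # wake up next transaction on the queue
            lock[x] = waiting[x].pop(0)

    lock_item("T1", "X")       # granted
    lock_item("T2", "X")       # queued behind T1
    unlock_item("T1", "X")     # T2 now holds the lock
    print(lock)                # {'X': 'T2'}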

17 Transaction Processing: Concurrency Control - Multi-valued (3-way) Locks
• Multi-valued lock has 3 states: unlocked, read (share)-locked, write (exclusive)-locked
• 3 operations:
  1. read-lock item(X) (rl(X))
     Algorithm:
       B: if (lock(x) == unlocked) {
              lock(x) ← read-locked
              T_num++
          }
          else if (lock(x) == read-locked)
              T_num++
          else {
              wait
              goto B
          }
     – When read-locked, multiple transactions may read an item
     – T_num keeps track of the number of transactions holding read locks on an item
  2. write-lock item(x) (wl(x))
     Algorithm:
       B: if (lock(x) == unlocked)
              lock(x) ← write-locked
          else {
              wait
              goto B
          }
     – When write-locked, transaction has exclusive lock on item

18 Transaction Processing: Concurrency Control - Multi-valued (3-way) Locks (2)
  3. unlock item(X) (ul(X))
     Algorithm:
       if (lock(x) == write-locked) {
           lock(x) ← unlocked
           wake up next transaction on queue
       }
       else {
           T_num--
           if (T_num == 0) {
               lock(x) ← unlocked
               wake up next transaction on queue
           }
       }
• Transactions must obey following rules:
  1. Transaction T must issue read- or write-lock on item before read of item
  2. T must issue write-lock on item before write of item
  3. T must unlock item after completing reads and writes
  4. T won't issue read-locks on items that it already holds locks on
  5. T won't issue write-locks on items that it already holds locks on
  6. T won't issue unlocks on items that it doesn't hold locks on
• Conditions 4 and 5 can be relaxed:
  1. T may downgrade a write-lock to a read-lock on an item it holds
  2. T may upgrade a read-lock to a write-lock on an item it holds if T is the only transaction holding a lock on that item
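• A Python sketch of the three-state lock table, tracking the set of read-lock holders the way T_num counts them above; again a single-threaded simulation, so a blocked request simply returns False:

    locks = {}   # item -> {"state": ..., "readers": set(), "writer": txn or None}

    def entry(x):
        return locks.setdefault(x, {"state": "unlocked", "readers": set(), "writer": None})

    def read_lock(txn, x):
        e = entry(x)
        if e["state"] in ("unlocked", "read-locked"):
            e["state"] = "read-locked"
            e["readers"].add(txn)              # multiple readers may share the item
            return True
        return False                           # write-locked: caller must wait

    def write_lock(txn, x):
        e = entry(x)
        if e["state"] == "unlocked":
            e["state"], e["writer"] = "write-locked", txn
            return True
        return False                           # held in any mode: caller must wait

    def unlock(txn, x):
        e = entry(x)
        if e["state"] == "write-locked" and e["writer"] == txn:
            e["state"], e["writer"] = "unlocked", None
        elif e["state"] == "read-locked":
            e["readers"].discard(txn)
            if not e["readers"]:               # last reader releases the item
                e["state"] = "unlocked"

    print(read_lock("T1", "X"), read_lock("T2", "X"), write_lock("T3", "X"))
    # True True False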

19 Transaction Processing: Concurrency Control - Multi-valued (3-way) Locks (3)

• Problems: – Do not guarantee serializability

T1              T2
rl(y)
r(y)
ul(y)
                rl(x)
                r(x)
                ul(x)
                wl(y)
                r(y)
                w(y)
                ul(y)
wl(x)
r(x)
w(x)
ul(x)

20 Transaction Processing: Concurrency Control - 2-Phase Locking
• 2-phase locking protocol guarantees serializability
• Has 2 stages:
  1. Growing phase - locks are only acquired
  2. Shrinking phase - locks are only released
• Earlier transactions using 2-phase locking:

T1              T2
rl(y)           rl(x)
r(y)            r(x)
wl(x)           wl(y)
ul(y)           ul(x)
r(x)            r(y)
w(x)            w(y)
ul(x)           ul(y)
• Allows upgrades and downgrades:
  – Can upgrade from read- to write-lock only in growing phase
  – Can downgrade from write- to read-lock only in shrinking phase
• Problems:
  – Transaction may be done with an item but must hold its lock because it is still in the growing phase

T1                        T2
rl(x)
r(x)
...  ← done here
                          rl(x)
wl(y)
...
ul(x)

21 Transaction Processing: Concurrency Control - 2-Phase Locking (2)

• Types of 2-phase locking:
  1. Basic (as described)
  2. Conservative
     – All locks acquired before transaction begins
     – If not all available at once, wait until they are
     – Prevents deadlock
  3. Strict
     – No locks released until transaction commits or aborts
     – If transaction aborts, no other transaction will need to be rolled back
     – Produces strict schedules
     – Does not prevent deadlock

22 Transaction Processing: Concurrency Control - Deadlock
• Deadlock is a condition in which 2 or more transactions are each waiting for an item that is locked by another transaction in the group, which is itself waiting for an item held by one of the others

T1              T2
rl(y)
r(y)
                rl(x)
                r(x)
wl(x)
                wl(y)
• 2 ways to handle:
  1. Prevention
  2. Detection

23 Transaction Processing: Concurrency Control - Deadlock Prevention
• Prevention most applicable to situations where
  – Transactions are long
  – Transactions access many data items
  – Many transactions
• Methods:
  1. Conservative 2-phase locking
  2. Ordering data items in DB
     – Not practical
     – Requires programmer to know order
     – Order changes as DB changes
  3. Time stamps
     – Every transaction receives time stamp when it starts
     – Prevention techniques:
       (a) Wait-die
             if (TS(T1) < TS(T2))
                 T1 waits to access item locked by T2
             else
                 T1 aborts and restarts later with original time stamp

         ∗ Young transactions will eventually stop aborting, as they will become oldest
         ∗ Prevents younger transactions from competing with older ones for the same locked items
       (b) Wound-wait
             if (TS(T1) < TS(T2))
                 T2 aborts and restarts later with original time stamp
             else
                 T1 waits
         ∗ Young transactions are preempted
         ∗ Non-deadlocked transactions may be aborted needlessly, multiple times
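• A Python sketch of the two decisions above, where a smaller timestamp means an older transaction; the helper names and return strings are invented for the example:

    def wait_die(ts, requester, holder):
        if ts[requester] < ts[holder]:
            return "wait"                 # older requester may wait for younger holder
        return "abort requester"          # younger requester dies, restarts with same timestamp

    def wound_wait(ts, requester, holder):
        if ts[requester] < ts[holder]:
            return "abort holder"         # older requester wounds (preempts) younger holder
        return "wait"                     # younger requester waits for older holder

    ts = {"T1": 5, "T2": 9}               # T1 started first, so T1 is older
    print(wait_die(ts, "T2", "T1"))       # abort requester (young T2 dies)
    print(wound_wait(ts, "T2", "T1"))     # wait (young T2 waits for old T1)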

24 Transaction Processing: Concurrency Control - Deadlock Prevention (2)
  4. Waiting protocols designed to overcome problems with prevention methods
     (a) Cautious waiting
           T1 blocked by T2:
           if (T2 not blocked)
               T1 waits
           else
               abort T1
       – Guarantees that a cycle never occurs
       – If T1 is waiting on T2, it means that T2 was not blocked when T1 started waiting
       – If T2 later requests an item held by T1, T1 is already blocked, so T2 aborts rather than waits
     (b) No waiting
       – If transaction is blocked, it immediately aborts
       – Restarts after an arbitrary time
       – Results in much needless aborting
  5. Timeouts
     – If transaction waits longer than a predefined time period, abort it

25 Transaction Processing: Concurrency Control - Deadlock Detection
• Detection most applicable to situations where
  – Transactions are short
  – Transactions access few data items
  – Few transactions
• Wait-for graph
  – Have node for each transaction that's active
  – When T1 attempts to lock item locked by T2, create edge from T1 to T2
  – Delete edge when lock dropped
  – Deadlock indicated by cycle
• Issues:
  – When to check
    ∗ Based on number of active transactions
    ∗ Based on time interval
  – Which transaction to abort (victim selection)
    ∗ Avoid those that
      · Are active for long time
      · Perform many updates
      · Restart cyclically
    ∗ Prefer those that
      · Are short-lived
      · Have few updates
      · Are involved in many deadlocks
• Livelock
  – Situation in which transaction is prevented from executing even though other transactions proceed normally
  – Result of unfair waiting scheme for locked items
  – Fair schemes:
    ∗ First come, first served
    ∗ Priority increases with wait time
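• A Python sketch of wait-for graph maintenance and deadlock detection (the DFS cycle test is the same idea used for the precedence graph earlier):

    wait_for = {}                             # waiter -> set of holders it waits on

    def block(waiter, holder):                # lock request blocks: add an edge
        wait_for.setdefault(waiter, set()).add(holder)

    def release(holder):                      # holder releases: its waiters can proceed
        for holders in wait_for.values():
            holders.discard(holder)

    def deadlocked():
        visiting, done = set(), set()
        def dfs(n):
            visiting.add(n)
            for m in wait_for.get(n, ()):
                if m in visiting or (m not in done and dfs(m)):
                    return True
            visiting.discard(n)
            done.add(n)
            return False
        return any(dfs(n) for n in list(wait_for) if n not in done)

    block("T1", "T2"); block("T2", "T1")      # T1 waits on T2 and vice versa
    print(deadlocked())                       # True -> choose a victim and abort it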

26 Transaction Processing: Concurrency Control - Deadlock Detection (2)

• Starvation
  – Situation in which deadlocked transaction continually aborts and never gets chance to complete
  – Fair schemes:
    ∗ Wait-die
    ∗ Wound-wait

27 Transaction Processing: Concurrency Control - Time Stamps
• Time stamps are alternative to locks for concurrency control
• Advantage is that they do not generate deadlock
• Basic concept is that transactions are ordered by their time stamps
  – Represent time stamp of transaction T as TS(T)
  – Algorithm is called time stamp ordering (TO)
  – Produces schedules equivalent to the serial schedule in time stamp order
  – Must ensure that conflicting operations do not violate serializability
• To facilitate, use 2 stamps per data item:
  1. read_TS(X)
     – Represents largest time stamp (most recent) of all transactions T that have successfully read X
     – read_TS(X) = TS(T)
  2. write_TS(X)
     – Represents largest time stamp (most recent) of all transactions T that have successfully written X
     – write_TS(X) = TS(T)

28 Transaction Processing: Concurrency Control - Time Stamps (2)

• Basic TO
  – Algorithm:
1. Transaction T attempts write item(X):

    if ((read_TS(X) > TS(T)) || (write_TS(X) > TS(T))) {
        // a younger transaction has already read or written X
        abort(T); rollback(T);
    }
    else {
        write_item(X);
        write_TS(X) = TS(T);
    }

2. Transaction T attempts read item(X):

    if (write_TS(X) > TS(T)) {
        // a younger transaction has already written X
        abort(T); rollback(T);
    }
    else {
        read_item(X);
        read_TS(X) = max(TS(T), read_TS(X));
    }
  – When conflicting operations are detected in the wrong order, the transaction that issued the later request is aborted
  – Guarantees conflict serializable schedules
  – Note that, like 2PL, it does not allow all possible serializable schedules
  – May result in cyclic restart, and hence starvation

29 Transaction Processing: Concurrency Control - Time Stamps (3)

• Strict TO
  – Ensures strict and conflict serializable schedules
  – Algorithm:
    1. Transaction T attempts write item(X):
         if (write_TS(X) < TS(T)) {
             // sleep until the transaction T' that wrote X aborts or commits
             wait(T');
             write_item(X);
             write_TS(X) = TS(T);
         }
         else {
             write_item(X);
             write_TS(X) = TS(T);
         }
    2. Transaction T attempts read item(X):
         if (write_TS(X) < TS(T)) {
             // sleep until the transaction T' that wrote X aborts or commits
             wait(T');
             read_item(X);
             read_TS(X) = TS(T);
         }
         else {
             read_item(X);
             read_TS(X) = TS(T);
         }
  – T' must essentially lock item X until it commits or aborts
  – Deadlock not possible, since T waits only when TS(T) > TS(T')

30 Transaction Processing: Concurrency Control - Time Stamps (4)

• Thomas's Write Rule
  – Rejects fewer writes than Basic TO
  – Does not enforce conflict serializability
  – Algorithm:
    1. Transaction T attempts write item(X):
         if (read_TS(X) > TS(T)) {
             // a younger transaction has already read X
             abort(T); rollback(T);
         }
         else if (write_TS(X) > TS(T)) {
             // a younger transaction T' has already written X,
             // so T's obsolete write can simply be ignored
             continue;
         }
         else {
             write_item(X);
             write_TS(X) = TS(T);
         }

31 Transaction Processing: Concurrency Control - Multi Version Techniques
• Multi version concurrency control techniques store multiple versions of data items
• To maintain serializability, an appropriate version is used by a transaction
• This allows reads that may be rejected by other protocols
• Requires more storage, but storage may be required for other reasons by DBMS
• Multi Version based on Time Stamp Ordering
  – 2 stamps per data item version:
    1. read_TS(Xi)
       ∗ Represents largest time stamp (most recent) of all transactions T that have successfully read Xi
       ∗ read_TS(Xi) = TS(T)
    2. write_TS(Xi)
       ∗ Represents largest time stamp (most recent) of all transactions T that have successfully written Xi
       ∗ write_TS(Xi) = TS(T)
  – Whenever Xi is written,
    ∗ New version Xi+1 is created with the new value
    ∗ read_TS(Xi+1) = TS(T)
    ∗ write_TS(Xi+1) = TS(T)

32 Transaction Processing: Concurrency Control - Multi Version Techniques (2)
  – Algorithm:
    1. Transaction T attempts write item(X):
         // Let Xi be the version with the highest write_TS such that
         // write_TS(Xi) <= TS(T)
         if (read_TS(Xi) > TS(T)) {
             // T is trying to write a version that should have been read by T',
             // where TS(T') = read_TS(Xi); T' has already read Xi, which was
             // written by some T'' with TS(T'') = write_TS(Xi)
             abort(T); rollback(T);
         }
         else {
             create new version Xj;
             write_item(Xj);
             write_TS(Xj) = TS(T);
             read_TS(Xj) = TS(T);
         }
    2. Transaction T attempts read item(X):
         // Let Xi be the version with the highest write_TS such that
         // write_TS(Xi) <= TS(T)
         read_item(Xi);
         read_TS(Xi) = max(TS(T), read_TS(Xi));
         // A read always succeeds
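• A Python sketch of the version selection above: keep a list of [write_TS, read_TS, value] versions per item and pick the version with the largest write_TS not exceeding TS(T) (the data layout is just for illustration):

    versions = {"X": [[0, 0, 100]]}             # initial version written at timestamp 0

    def pick_version(item, ts):
        usable = [v for v in versions[item] if v[0] <= ts]
        return max(usable, key=lambda v: v[0])  # highest write_TS <= TS(T)

    def read_item(item, ts):
        v = pick_version(item, ts)
        v[1] = max(v[1], ts)                    # bump read_TS of that version
        return v[2]

    def write_item(item, ts, value):
        v = pick_version(item, ts)
        if v[1] > ts:                           # a younger transaction already read Xi
            return "abort"
        versions[item].append([ts, ts, value])  # otherwise create a new version
        return "ok"

    print(read_item("X", 5))        # 100, and that version's read_TS becomes 5
    print(write_item("X", 3, 7))    # 'abort': conflicts with the read at timestamp 5
    print(write_item("X", 8, 9))    # 'ok': new version with write_TS = 8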

33 Transaction Processing: Concurrency Control - Multi Version Techniques (3)

• Multi Version based on 2PL using Certify Locks (MV2PL)
  – Uses 3 lock modes:
    1. read
    2. write
    3. certify
  – Lock compatibility
    ∗ Shows whether a lock can be obtained on an item that is already locked in a given mode
    ∗ Columns represent locks held
    ∗ Rows represent locks requested
    ∗ 2PL table:

              R    W
         R    Y    N
         W    N    N

    ∗ MV2PL table:

              R    W    C
         R    Y    Y    N
         W    Y    N    N
         C    N    N    N

  – Purpose in MV2PL is to allow reads of items that are write locked
  – Use 2 versions of data item:
    ∗ Committed version - value from the last committed write
    ∗ Working version - created when a write lock is obtained
  – Other transactions may continue to read the committed version while the item is write locked

34 Transaction Processing: Concurrency Control - Multi Version Techniques (4)
  – Algorithm:
    1. Transaction T attempts read item(X):
         read_lock(Xc);      // Xc is the committed version
         read_item(Xc);
         unlock(Xc);
         // A read always succeeds
    2. Transaction T attempts write item(X):
         write_lock(X);
         create working version Xw;
         write_item(Xw);
         wait;               // until all transactions reading any of the items
                             // write-locked by T are finished
         for (all items T holds write locks on)
             upgrade to certify locks;
         commit;             // committed versions overwritten by working versions
         release all locks;

  – Deadlock is possible if upgrading from read to write locks is allowed

35 Transaction Processing: Concurrency Control - Validation Control Techniques
• These methods do no checking of the ramifications of operations wrt concurrency while the transaction runs
• Actions are simply applied
• Referred to as optimistic/validation/certification techniques because they proceed in hopes that everything will be all right
• Phases of protocol:
  1. Read phase
     – Transaction can read values of committed items from DB
     – Updates applied to local copies
  2. Validation phase
     – Checks made to ensure serializability is maintained
     – If serializability violated, abort and restart
  3. Write phase
     – Updates applied to DB
• Motivation is that overhead is minimized if all checks are performed at one time
• If little interference among transactions, most will succeed
• If much interference among transactions, many will abort
• Protocol requires
  – Time stamps
  – Read sets of transactions
  – Write sets of transactions

• Transaction Ti passes validation if one of the following holds for each transaction Tj that is either committed or currently in its validation phase:
  – Tj completes its write phase before Ti starts its read phase
  – Ti starts its write phase after Tj completes its write phase, and readset(Ti) ∩ writeset(Tj) = Φ
  – readset(Ti) ∩ writeset(Tj) = Φ and writeset(Ti) ∩ writeset(Tj) = Φ, and Tj completes its read phase before Ti completes its read phase
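• A Python sketch of this validation test, assuming each transaction records its read/write sets and the (logical) times at which its phases start and end (the field names are invented for the example):

    def validates(ti, others):
        # others: transactions that committed or are currently validating
        for tj in others:
            if tj["write_end"] <= ti["read_start"]:
                continue                                        # condition 1
            if (tj["write_end"] <= ti["write_start"]
                    and not (ti["readset"] & tj["writeset"])):
                continue                                        # condition 2
            if (not (ti["readset"] & tj["writeset"])
                    and not (ti["writeset"] & tj["writeset"])
                    and tj["read_end"] <= ti["read_end"]):
                continue                                        # condition 3
            return False                                        # Ti fails validation
        return True

    t1 = {"readset": {"x"}, "writeset": {"x"},
          "read_start": 0, "read_end": 4, "write_start": 5, "write_end": 6}
    t2 = {"readset": {"y"}, "writeset": {"y"},
          "read_start": 1, "read_end": 7, "write_start": 8, "write_end": 9}
    print(validates(t2, [t1]))   # True: t1 finishes writing before t2 starts writing,
                                 # and their read/write sets do not intersect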

36 Transaction Processing: Concurrency Control - Granularity Issues
• Granularity refers to size of data items being accessed
• Typical granularity hierarchy: database → files → disk blocks → records → field values
• Can be a factor in concurrency control
• Coarser granularity results in
  – Lower overhead
  – Less concurrency
• Ideal granularity wrt concurrency is dependent on the transactions

37 Transaction Processing: Concurrency Control - Granularity Issues (2)

• Multiple Granularity Level Locking (2PL)
  – Consider 2 cases
    1. Case 1:
       ∗ T1 needs to write many records in a file
       ∗ Acquires write lock on file - more efficient than acquiring locks on individual records
       ∗ T2 wants to read one record from same file
       ∗ Simple to determine that the record is locked
    2. Case 2:
       ∗ T2 wants to read one record from same file
       ∗ Acquires read lock on record
       ∗ T1 wants write lock on same file
       ∗ Not so simple to determine that the record (and file) is locked against T1
  – To facilitate above, use an intention lock
    ∗ Purpose of intention lock is to signal at a high level the types of locks held at lower levels
    ∗ Types of intention locks:
      1. Intention-shared (IS): Shared locks will be requested for some descendants
      2. Intention-exclusive (IX): Exclusive locks will be requested for some descendants
      3. Shared-intention-exclusive (SIX): Current node has shared lock, exclusive locks will be requested for some descendants
    ∗ Compatibility table:

              IS   IX   S    SIX  X
        IS    Y    Y    Y    Y    N
        IX    Y    Y    N    N    N
        S     Y    N    Y    N    N
        SIX   Y    N    N    N    N
        X     N    N    N    N    N

38 Transaction Processing: Concurrency Control - Granularity Issues (3)
  – Multiple Granularity Locking Protocol (MGL):
    ∗ Compatibility table cannot be violated
    ∗ Root of tree must always be locked first
    ∗ Node N can be locked by transaction T in S or IS mode only if parent is locked by T in IX or IS mode
    ∗ Node N can be locked by transaction T in X, IX, or SIX mode only if parent is locked by T in IX or SIX mode
    ∗ T can lock a new node only if it has not unlocked any nodes
    ∗ T can unlock N only if no children of N are currently locked by T
  – MGL incurs less overhead than other approaches for transactions that access a wide variety of granularities
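• A Python sketch of checking a lock request against the compatibility table above (rows = requested mode, columns = modes already held on the node):

    MODES = ["IS", "IX", "S", "SIX", "X"]
    COMPAT = {                               # requested mode -> {held mode: compatible?}
        "IS":  dict(zip(MODES, [True,  True,  True,  True,  False])),
        "IX":  dict(zip(MODES, [True,  True,  False, False, False])),
        "S":   dict(zip(MODES, [True,  False, True,  False, False])),
        "SIX": dict(zip(MODES, [True,  False, False, False, False])),
        "X":   dict(zip(MODES, [False, False, False, False, False])),
    }

    def can_grant(requested, held_modes):
        return all(COMPAT[requested][h] for h in held_modes)

    print(can_grant("IX", ["IS"]))           # True: intention locks coexist
    print(can_grant("S", ["IX"]))            # False: S conflicts with IX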
