Unit V: Database Transaction Management
Mr. Rajkumar V. Panchal
Summary
This document presents an overview of database transaction management, including concepts like ACID properties, transaction states, and concurrency control. It includes examples and explanations to illustrate these concepts.
Unit V Database Transaction Management
Database System Concepts, 6th Ed. ©Silberschatz, Korth and Sudarshan. See www.db-book.com for conditions on re-use.

Unit V content details as per syllabus (by lecture):
1. Introduction to Database Transaction, Transaction states, ACID properties
2. Concept of Schedule, Serial Schedule. Serializability: Conflict and View
3. Cascaded Aborts, Recoverable and Non-recoverable Schedules
4. Concurrency Control: Lock-based, Time-stamp based, Deadlock handling
5. Recovery methods: Shadow-Paging and Log-Based Recovery, Checkpoints
6. Log-Based Recovery: Deferred Database Modifications and Immediate Database Modifications
Course Outcomes – CO5: Apply ACID properties for transaction management and concurrency control.
Mr. Rajkumar V. Panchal

Transaction Concept
A transaction is a unit of program execution that accesses and possibly updates various data items. E.g., a transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)
Two main issues to deal with:
– Failures of various kinds, such as hardware failures and system crashes
– Concurrent execution of multiple transactions

Example of Fund Transfer
Consider the transaction above, transferring $50 from account A to account B (steps 1–6).
Atomicity requirement – if the transaction fails after step 3 and before step 6, money will be "lost", leading to an inconsistent database state. The failure could be due to software or hardware. The system should ensure that updates of a partially executed transaction are not reflected in the database.
Durability requirement – once the user has been notified that the transaction has completed (i.e., the transfer of the $50 has taken place), the updates to the database by the transaction must persist even if there are software or hardware failures.
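As a sketch only (Python standing in for a real transaction manager; the `transfer` helper and the account dictionary are illustrative, not part of any DBMS), the six steps with the atomicity requirement enforced by snapshot-and-rollback:

```python
# Minimal sketch of the $50 transfer: either all six steps take
# effect or none do. A snapshot of the state taken before the
# transaction stands in for the recovery machinery of a real system.

def transfer(db, src, dst, amount):
    """Apply the read/write steps; roll back to the snapshot on failure."""
    snapshot = dict(db)          # state before the transaction started
    try:
        a = db[src]              # 1. read(A)
        a = a - amount           # 2. A := A - amount
        if a < 0:
            raise ValueError("insufficient funds")
        db[src] = a              # 3. write(A)
        b = db[dst]              # 4. read(B)
        b = b + amount           # 5. B := B + amount
        db[dst] = b              # 6. write(B)
    except Exception:
        db.clear()
        db.update(snapshot)      # undo partial updates: atomicity
        raise
```

A successful run preserves the sum A + B (the consistency requirement); a failed run leaves the database exactly as it was.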
Consistency requirement in the above example:
– The sum of A and B is unchanged by the execution of the transaction.
In general, consistency requirements include:
– Explicitly specified integrity constraints, such as primary keys and foreign keys
– Implicit integrity constraints, e.g., the sum of balances of all accounts minus the sum of loan amounts must equal the value of cash-in-hand
– A transaction must see a consistent database.
– During transaction execution the database may be temporarily inconsistent.
– When the transaction completes successfully, the database must be consistent. Erroneous transaction logic can lead to inconsistency.

Example of Fund Transfer (Cont.)
Isolation requirement — if between steps 3 and 6, another transaction T2 is allowed to access the partially updated database, it will see an inconsistent database (the sum A + B will be less than it should be).

T1                      T2
1. read(A)
2. A := A – 50
3. write(A)
                        read(A), read(B), print(A+B)
4. read(B)
5. B := B + 50
6. write(B)

Isolation can be ensured trivially by running transactions serially – that is, one after the other. However, executing multiple transactions concurrently has significant benefits, as we will see later.

ACID Properties
A transaction is a unit of program execution that accesses and possibly updates various data items. To preserve the integrity of data, the database system must ensure:
Atomicity. Either all operations of the transaction are properly reflected in the database or none are.
Consistency. Execution of a transaction in isolation preserves the consistency of the database.
Isolation. Although multiple transactions may execute concurrently, each transaction must be unaware of other concurrently executing transactions. Intermediate transaction results must be hidden from other concurrently executed transactions.
– That is, for every pair of transactions Ti and Tj, it appears to Ti that either Tj finished execution before Ti started, or Tj started execution after Ti finished.
Durability.
After a transaction completes successfully, the changes it has made to the database persist, even if there are system failures.

Transaction State
Active – the initial state; the transaction stays in this state while it is executing.
Partially committed – after the final statement has been executed.
Failed – after the discovery that normal execution can no longer proceed.
Aborted – after the transaction has been rolled back and the database restored to its state prior to the start of the transaction. Two options after it has been aborted:
– Restart the transaction (can be done only if there is no internal logical error)
– Kill the transaction
Committed – after successful completion.

Concurrent Executions
Multiple transactions are allowed to run concurrently in the system. Advantages are:
– Increased processor and disk utilization, leading to better transaction throughput. E.g., one transaction can be using the CPU while another is reading from or writing to the disk.
– Reduced average response time for transactions: short transactions need not wait behind long ones.
Concurrency control schemes – mechanisms to achieve isolation; that is, to control the interaction among the concurrent transactions in order to prevent them from destroying the consistency of the database.

Schedules
Schedule – a sequence of instructions that specifies the chronological order in which instructions of concurrent transactions are executed.
– A schedule for a set of transactions must consist of all instructions of those transactions.
– It must preserve the order in which the instructions appear in each individual transaction.
A transaction that successfully completes its execution will have a commit instruction as the last statement.
– By default, a transaction is assumed to execute a commit instruction as its last step.
A transaction that fails to successfully complete its execution will have an abort instruction as the last statement.

Schedule 1
Let T1 transfer $50 from A to B, and T2 transfer 10% of the balance from A to B. A serial schedule in which T1 is followed by T2.

Schedule 2
A serial schedule where T2 is followed by T1.

Schedule 3
Let T1 and T2 be the transactions defined previously. The following schedule is not a serial schedule, but it is equivalent to Schedule 1. In Schedules 1, 2 and 3, the sum A + B is preserved.

Schedule 4
The following concurrent schedule does not preserve the value of (A + B).

Serializability
Basic assumption – each transaction preserves database consistency. Thus, serial execution of a set of transactions preserves database consistency. A (possibly concurrent) schedule is serializable if it is equivalent to a serial schedule. Different forms of schedule equivalence give rise to the notions of:
1. Conflict serializability
2. View serializability

Simplified view of transactions:
– We ignore operations other than read and write instructions.
– We assume that transactions may perform arbitrary computations on data in local buffers in between reads and writes.
– Our simplified schedules consist of only read and write instructions.

Conflicting Instructions
Instructions li and lj of transactions Ti and Tj respectively conflict if and only if there exists some item Q accessed by both li and lj, and at least one of these instructions wrote Q.
1. li = read(Q), lj = read(Q). li and lj don't conflict.
2. li = read(Q), lj = write(Q). They conflict.
3. li = write(Q), lj = read(Q). They conflict.
4. li = write(Q), lj = write(Q). They conflict.
Intuitively, a conflict between li and lj forces a (logical) temporal order between them.
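The four cases above reduce to a one-line test. A sketch (the tuple encoding of an operation is an assumption of this sketch, not standard notation):

```python
# Two operations conflict iff they belong to different transactions,
# access the same item, and at least one of them is a write.

def conflicts(op1, op2):
    """op = (transaction_id, action, item), action in {'r', 'w'}."""
    t1, a1, q1 = op1
    t2, a2, q2 = op2
    return t1 != t2 and q1 == q2 and "w" in (a1, a2)
```

Only the read-read case (and any pair touching different items) is conflict-free.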
If li and lj are consecutive in a schedule and they do not conflict, their results would remain the same even if they had been interchanged in the schedule.

Conflict Serializability
If a schedule S can be transformed into a schedule S' by a series of swaps of non-conflicting instructions, we say that S and S' are conflict equivalent. We say that a schedule S is conflict serializable if it is conflict equivalent to a serial schedule.

Conflict Serializability (Cont.)
Schedule 3 can be transformed into Schedule 6, a serial schedule where T2 follows T1, by a series of swaps of non-conflicting instructions. Therefore Schedule 3 is conflict serializable.

Conflict Serializability (Cont.)
Example of a schedule that is not conflict serializable: we are unable to swap instructions in the above schedule to obtain either the serial schedule < T3, T4 >, or the serial schedule < T4, T3 >.

View Serializability
Let S and S' be two schedules with the same set of transactions. S and S' are view equivalent if the following three conditions are met, for each data item Q:
1. If in schedule S transaction Ti reads the initial value of Q, then in schedule S' transaction Ti must also read the initial value of Q.
2. If in schedule S transaction Ti executes read(Q), and that value was produced by transaction Tj (if any), then in schedule S' transaction Ti must also read the value of Q that was produced by the same write(Q) operation of transaction Tj.
3. The transaction (if any) that performs the final write(Q) operation in schedule S must also perform the final write(Q) operation in schedule S'.
As can be seen, view equivalence is also based purely on reads and writes alone.

View Serializability (Cont.)
A schedule S is view serializable if it is view equivalent to a serial schedule. Every conflict serializable schedule is also view serializable. Below is a schedule which is view serializable but not conflict serializable. What serial schedule is the above equivalent to?
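The three conditions can be sketched as a checker. This is a simplification: it compares the multiset of reads-from triples plus final writers, which matches the three conditions exactly when no transaction reads the same item twice; the schedule encoding and helper names are illustrative assumptions.

```python
# View-equivalence sketch: condition 1 is captured by the "<initial>"
# source marker, condition 2 by the reads-from triples, and
# condition 3 by the final-writer map.

def view_info(schedule):
    """schedule: list of (txn, action, item), action in {'r', 'w'}."""
    last_writer = {}    # item -> transaction that wrote it most recently
    reads = []          # (reader, item, source writer or "<initial>")
    for txn, action, item in schedule:
        if action == "r":
            reads.append((txn, item, last_writer.get(item, "<initial>")))
        else:
            last_writer[item] = txn
    return sorted(reads), last_writer

def view_equivalent(s1, s2):
    return view_info(s1) == view_info(s2)
```

Applied to the classic blind-write example, the interleaved schedule is view equivalent to the serial schedule < T27, T28, T29 > even though it is not conflict serializable.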
Every view serializable schedule that is not conflict serializable has blind writes.

Other Notions of Serializability
The schedule below produces the same outcome as the serial schedule < T1, T5 >, yet is not conflict equivalent or view equivalent to it. Determining such equivalence requires analysis of operations other than read and write.

Testing for Serializability
Consider some schedule of a set of transactions T1, T2, ..., Tn.
Precedence graph — a directed graph where the vertices are the transactions (names). We draw an arc from Ti to Tj if the two transactions conflict, and Ti accessed the data item on which the conflict arose earlier. We may label the arc by the item that was accessed.
Example of a precedence graph.

Test for Conflict Serializability
A schedule is conflict serializable if and only if its precedence graph is acyclic. Cycle-detection algorithms exist which take order n² time, where n is the number of vertices in the graph.
– (Better algorithms take order n + e, where e is the number of edges.)
If the precedence graph is acyclic, the serializability order can be obtained by a topological sorting of the graph.
– This is a linear order consistent with the partial order of the graph.
– For example, a serializability order for Schedule A would be T5, T1, T3, T2, T4. Are there others?

Test for View Serializability
The precedence graph test for conflict serializability cannot be used directly to test for view serializability.
– An extension to test for view serializability has cost exponential in the size of the precedence graph.
The problem of checking if a schedule is view serializable falls in the class of NP-complete problems.
– Thus, the existence of an efficient algorithm is extremely unlikely.
However, practical algorithms that just check some sufficient conditions for view serializability can still be used.

Recoverable Schedules
We need to address the effect of transaction failures on concurrently running transactions.
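The precedence-graph test above can be sketched in Python: build the graph from read/write conflicts, then topologically sort it; a cycle means the schedule is not conflict serializable. The tuple encoding of a schedule and the helper names are assumptions of this sketch.

```python
from collections import defaultdict

def precedence_graph(schedule):
    """schedule: list of (txn, action, item); returns adjacency sets."""
    edges = defaultdict(set)
    for i, (ti, ai, qi) in enumerate(schedule):
        for tj, aj, qj in schedule[i + 1:]:
            if ti != tj and qi == qj and "w" in (ai, aj):
                edges[ti].add(tj)   # Ti accessed the item earlier
    return edges

def serial_order(schedule):
    """Topological sort of the precedence graph; None if it has a
    cycle, i.e. the schedule is not conflict serializable."""
    edges = precedence_graph(schedule)
    txns = {t for t, _, _ in schedule}
    indeg = {t: 0 for t in txns}
    for t in edges:
        for u in edges[t]:
            indeg[u] += 1
    ready = [t for t in txns if indeg[t] == 0]
    order = []
    while ready:
        t = ready.pop()
        order.append(t)
        for u in edges[t]:
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    return order if len(order) == len(txns) else None
```

A Schedule 3-style interleaving yields the serial order T1, T2; a schedule with conflicts in both directions yields a cycle and `None`.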
Recoverable schedule — if a transaction Tj reads a data item previously written by a transaction Ti, then the commit operation of Ti must appear before the commit operation of Tj. The following schedule (Schedule 11) is not recoverable: if T8 should abort, T9 would have read (and possibly shown to the user) an inconsistent database state. Hence, the database must ensure that schedules are recoverable.

Cascading Rollbacks
Cascading rollback – a single transaction failure leads to a series of transaction rollbacks. Consider the following schedule where none of the transactions has yet committed (so the schedule is recoverable): if T10 fails, T11 and T12 must also be rolled back. This can lead to the undoing of a significant amount of work.

Cascadeless Schedules
Cascadeless schedules — cascading rollbacks cannot occur:
– For each pair of transactions Ti and Tj such that Tj reads a data item previously written by Ti, the commit operation of Ti appears before the read operation of Tj.
Every cascadeless schedule is also recoverable. It is desirable to restrict the schedules to those that are cascadeless.

Concurrency Control
A database must provide a mechanism that will ensure that all possible schedules are:
– either conflict or view serializable, and
– recoverable, and preferably cascadeless.
A policy in which only one transaction can execute at a time generates serial schedules, but provides a poor degree of concurrency.
– Are serial schedules recoverable/cascadeless?
Testing a schedule for serializability after it has executed is a little too late! Goal – to develop concurrency control protocols that will assure serializability.

Concurrency Control (Cont.)
Concurrency-control schemes trade off between the amount of concurrency they allow and the amount of overhead that they incur. Some schemes allow only conflict-serializable schedules to be generated, while others allow view-serializable schedules that are not conflict-serializable.

Concurrency Control vs.
Serializability Tests
Concurrency-control protocols allow concurrent schedules, but ensure that the schedules are conflict/view serializable, and are recoverable and cascadeless. Concurrency control protocols (generally) do not examine the precedence graph as it is being created.
– Instead, a protocol imposes a discipline that avoids non-serializable schedules.
Tests for serializability help us understand why a concurrency control protocol is correct.

Concurrency Control
Database System Concepts, 7th Ed. ©Silberschatz, Korth and Sudarshan. See www.db-book.com for conditions on re-use.

Lock-Based Protocols
A lock is a mechanism to control concurrent access to a data item. Data items can be locked in two modes:
1. Exclusive (X) mode. The data item can be both read and written. An X-lock is requested using the lock-X instruction.
2. Shared (S) mode. The data item can only be read. An S-lock is requested using the lock-S instruction.
Lock requests are made to the concurrency-control manager. A transaction can proceed only after the request is granted.

Lock-Based Protocols (Cont.)
Lock-compatibility matrix: a transaction may be granted a lock on an item if the requested lock is compatible with locks already held on the item by other transactions. Any number of transactions can hold shared locks on an item, but if any transaction holds an exclusive lock on the item, no other transaction may hold any lock on the item.

Schedule With Lock Grants
Grants are omitted in the rest of the chapter.
– Assume a grant happens just before the next instruction following the lock request.
This schedule is not serializable (why?). A locking protocol is a set of rules followed by all transactions while requesting and releasing locks. Locking protocols enforce serializability by restricting the set of possible schedules.

Deadlock
Consider the partial schedule: neither T3 nor T4 can make progress — executing lock-S(B) causes T4 to wait for T3 to release its lock on B, while executing lock-X(A) causes T3 to wait for T4 to release its lock on A.
Such a situation is called a deadlock.
– To handle a deadlock, one of T3 or T4 must be rolled back and its locks released.

Deadlock (Cont.)
The potential for deadlock exists in most locking protocols. Deadlocks are a necessary evil. Starvation is also possible if the concurrency control manager is badly designed. For example:
– A transaction may be waiting for an X-lock on an item, while a sequence of other transactions request and are granted an S-lock on the same item.
– The same transaction is repeatedly rolled back due to deadlocks.
The concurrency control manager can be designed to prevent starvation.

The Two-Phase Locking Protocol
A protocol which ensures conflict-serializable schedules.
Phase 1: Growing Phase
– The transaction may obtain locks.
– The transaction may not release locks.
Phase 2: Shrinking Phase
– The transaction may release locks.
– The transaction may not obtain locks.
The protocol assures serializability. It can be proved that the transactions can be serialized in the order of their lock points (i.e., the point where a transaction acquired its final lock).

The Two-Phase Locking Protocol (Cont.)
Two-phase locking does not ensure freedom from deadlocks. Extensions to basic two-phase locking are needed to ensure recoverability and freedom from cascading rollback:
– Strict two-phase locking: a transaction must hold all its exclusive locks till it commits/aborts. This ensures recoverability and avoids cascading rollbacks.
– Rigorous two-phase locking: a transaction must hold all locks till commit/abort. Transactions can be serialized in the order in which they commit.
Most databases implement rigorous two-phase locking, but refer to it as simply two-phase locking.

The Two-Phase Locking Protocol (Cont.)
Two-phase locking is not a necessary condition for serializability.
– There are conflict serializable schedules that cannot be obtained if the two-phase locking protocol is used.
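The two-phase property itself is easy to state as a predicate over a transaction's lock/unlock sequence. A minimal sketch (the helper name is illustrative):

```python
# Two-phase check: once a transaction releases any lock, it may not
# acquire another. ops is the transaction's sequence of "lock" /
# "unlock" actions, in order.

def is_two_phase(ops):
    shrinking = False
    for op in ops:
        if op == "unlock":
            shrinking = True        # shrinking phase has begun
        elif shrinking:             # a lock request after an unlock
            return False
    return True
```

Strict and rigorous 2PL are special cases in which all (exclusive, respectively all) unlocks happen at commit/abort.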
In the absence of extra information (e.g., ordering of access to data), two-phase locking is necessary for conflict serializability in the following sense:
– Given a transaction Ti that does not follow two-phase locking, we can find a transaction Tj that uses two-phase locking, and a schedule for Ti and Tj that is not conflict serializable.

Locking Protocols
Given a locking protocol (such as 2PL):
– A schedule S is legal under a locking protocol if it can be generated by a set of transactions that follow the protocol.
– A protocol ensures serializability if all legal schedules under that protocol are serializable.

Lock Conversions
Two-phase locking protocol with lock conversions:
– Growing Phase:
  – can acquire a lock-S on an item
  – can acquire a lock-X on an item
  – can convert a lock-S to a lock-X (upgrade)
– Shrinking Phase:
  – can release a lock-S
  – can release a lock-X
  – can convert a lock-X to a lock-S (downgrade)
This protocol ensures serializability.

Automatic Acquisition of Locks
A transaction Ti issues the standard read/write instruction, without explicit locking calls. The operation read(D) is processed as:

if Ti has a lock on D
  then read(D)
  else begin
    if necessary wait until no other transaction has a lock-X on D
    grant Ti a lock-S on D;
    read(D)
  end

Automatic Acquisition of Locks (Cont.)
The operation write(D) is processed as:

if Ti has a lock-X on D
  then write(D)
  else begin
    if necessary wait until no other trans.
has any lock on D,
    if Ti has a lock-S on D
      then upgrade lock on D to lock-X
      else grant Ti a lock-X on D
    write(D)
  end;

All locks are released after commit or abort.

Implementation of Locking
A lock manager can be implemented as a separate process. Transactions can send lock and unlock requests as messages. The lock manager replies to a lock request by sending a lock grant message (or a message asking the transaction to roll back, in case of a deadlock).
– The requesting transaction waits until its request is answered.
The lock manager maintains an in-memory data structure called a lock table to record granted locks and pending requests.

Lock Table
Dark rectangles indicate granted locks; light colored ones indicate waiting requests. The lock table also records the type of lock granted or requested. A new request is added to the end of the queue of requests for the data item, and granted if it is compatible with all earlier locks. Unlock requests result in the request being deleted, and later requests are checked to see if they can now be granted. If a transaction aborts, all waiting or granted requests of the transaction are deleted.
– The lock manager may keep a list of locks held by each transaction, to implement this efficiently.

Graph-Based Protocols
Graph-based protocols are an alternative to two-phase locking. They impose a partial ordering → on the set D = {d1, d2, ..., dh} of all data items.
– If di → dj, then any transaction accessing both di and dj must access di before accessing dj.
– This implies that the set D may now be viewed as a directed acyclic graph, called a database graph.
The tree protocol is a simple kind of graph protocol.

Tree Protocol
Only exclusive locks are allowed. The first lock by Ti may be on any data item. Subsequently, a data item Q can be locked by Ti only if the parent of Q is currently locked by Ti. Data items may be unlocked at any time. A data item that has been locked and unlocked by Ti cannot subsequently be relocked by Ti.
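The automatic lock acquisition rules above can be sketched as a toy lock manager. This is a single-threaded simulation: "waiting" is modeled by returning False instead of blocking, and the class and method names are illustrative, not from any real DBMS.

```python
# Toy S/X lock manager: grant a request only if it is compatible with
# the locks other transactions already hold on the item.

class LockManager:
    def __init__(self):
        self.locks = {}   # item -> {txn: mode}

    def acquire(self, txn, item, mode):
        held = self.locks.setdefault(item, {})
        current = held.get(txn)
        if current == "X" or current == mode:
            return True                    # existing lock is strong enough
        others = {t: m for t, m in held.items() if t != txn}
        if mode == "S":
            if any(m == "X" for m in others.values()):
                return False               # would wait for the X holder
        else:  # mode == "X", possibly an S -> X upgrade
            if others:
                return False               # would wait for any holder
        held[txn] = mode
        return True

    def release_all(self, txn):
        """Release every lock at commit/abort (strict-2PL style)."""
        for held in self.locks.values():
            held.pop(txn, None)
```

Shared locks coexist; an upgrade to X must wait until the transaction is the only holder, exactly as in the write(D) pseudocode above.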
The tree protocol ensures conflict serializability as well as freedom from deadlock. Unlocking may occur earlier in the tree-locking protocol than in the two-phase locking protocol.
– Shorter waiting times, and an increase in concurrency.
– The protocol is deadlock-free, so no rollbacks are required.
Drawbacks:
– The protocol does not guarantee recoverability or cascade freedom; commit dependencies need to be introduced to ensure recoverability.
– Transactions may have to lock data items that they do not access: increased locking overhead, additional waiting time, and a potential decrease in concurrency.
Schedules not possible under two-phase locking are possible under the tree protocol, and vice versa.

Deadlock Handling
A system is deadlocked if there is a set of transactions such that every transaction in the set is waiting for another transaction in the set.
Deadlock prevention protocols ensure that the system will never enter into a deadlock state. Some prevention strategies:
– Require that each transaction locks all its data items before it begins execution (pre-declaration).
– Impose a partial ordering of all data items and require that a transaction can lock data items only in the order specified by the partial order (graph-based protocol).

More Deadlock Prevention Strategies
wait-die scheme — non-preemptive
– An older transaction may wait for a younger one to release a data item.
– Younger transactions never wait for older ones; they are rolled back instead.
– A transaction may die several times before acquiring a lock.
wound-wait scheme — preemptive
– An older transaction wounds (forces rollback of) a younger transaction instead of waiting for it.
– Younger transactions may wait for older ones.
– Fewer rollbacks than the wait-die scheme.
In both schemes, a rolled back transaction is restarted with its original timestamp.
– This ensures that older transactions have precedence over newer ones, and starvation is thus avoided.
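The two schemes can be sketched as pure decision functions. Timestamps are assumed to be assigned at transaction start, so a smaller timestamp means an older transaction; the function names are illustrative.

```python
# wait-die (non-preemptive): only an older requester may wait;
# a younger requester is rolled back ("dies").
def wait_die(requester_ts, holder_ts):
    return "wait" if requester_ts < holder_ts else "die"

# wound-wait (preemptive): an older requester forces the younger
# holder to roll back ("wounds" it); a younger requester waits.
def wound_wait(requester_ts, holder_ts):
    return "wound holder" if requester_ts < holder_ts else "wait"
```

In both schemes waiting only ever happens in one direction of the age order, so no cycle of waits (and hence no deadlock) can form.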
Timeout-Based Schemes:
– A transaction waits for a lock only for a specified amount of time. After that, the wait times out and the transaction is rolled back.
– Ensures that deadlocks get resolved by timeout if they occur.
– Simple to implement.
– But it may roll back a transaction unnecessarily in the absence of deadlock, and it is difficult to determine a good value for the timeout interval.
– Starvation is also possible.

Deadlock Detection
Wait-for graph:
– Vertices: transactions.
– Edge Ti → Tj: Ti is waiting for a lock held in a conflicting mode by Tj.
The system is in a deadlock state if and only if the wait-for graph has a cycle. Invoke a deadlock-detection algorithm periodically to look for cycles.
Wait-for graph without a cycle; wait-for graph with a cycle.

Deadlock Recovery
When a deadlock is detected:
– Some transaction will have to be rolled back (made a victim) to break the deadlock cycle. Select as victim the transaction that will incur minimum cost.
– Rollback – determine how far to roll back the transaction:
  Total rollback: abort the transaction and then restart it.
  Partial rollback: roll back the victim transaction only as far as necessary to release the locks that another transaction in the cycle is waiting for.
Starvation can happen (why?)
– One solution: the oldest transaction in the deadlock set is never chosen as victim.

Recovery System

Failure Classification
Transaction failure:
– Logical errors: the transaction cannot complete due to some internal error condition.
– System errors: the database system must terminate an active transaction due to an error condition (e.g., deadlock).
System crash: a power failure or other hardware or software failure causes the system to crash.
– Fail-stop assumption: non-volatile storage contents are assumed to not be corrupted by a system crash. Database systems have numerous integrity checks to prevent corruption of disk data.
Disk failure: a head crash or similar disk failure destroys all or part of disk storage.
– Destruction is assumed to be detectable: disk drives use checksums to detect failures.

Summary of Protocols / Schemes
ACID: Atomicity, Consistency, Isolation, Durability.
Isolation:
○ Lock based
○ 2-Phase Locking
○ Time Stamp
○ Validation based
Atomicity:
○ Shadow-Page
○ Log Based Recovery (Immediate, Deferred)

Recovery and Atomicity
To ensure atomicity despite failures, we study log-based recovery mechanisms: we first output information describing the modifications to stable storage, without modifying the database itself. We also study shadow-paging (the shadow-copy technique).

Shadow Paging
Shadow paging is an alternative to log-based recovery; this scheme is useful if transactions execute serially. Idea: maintain two page tables during the lifetime of a transaction — the current page table and the shadow page table. The database is considered to be made up of pages. Pages are mapped into physical blocks of storage; the page table maps pages onto disk blocks. Store the shadow page table in nonvolatile storage, so that the state of the database prior to transaction execution may be recovered.
– The shadow page table is never modified during execution.

Shadow Paging (Cont.)
To start with, both page tables are identical. Only the current page table is used for data item accesses during execution of the transaction. Whenever any page is about to be written for the first time:
– A copy of this page is made onto an unused page.
– The current page table is then made to point to the copy.
– The update is performed on the copy.

Shadow Paging (Cont.)
To commit a transaction:
1. Flush all modified pages in main memory to disk.
2. Output the current page table to disk.
3.
Make the current page table the new shadow page table, as follows:
– Keep a pointer to the shadow page table at a fixed (known) location on disk.
– To make the current page table the new shadow page table, simply update the pointer to point to the current page table on disk.
Once the pointer to the shadow page table has been written, the transaction is committed. No recovery is needed after a crash — new transactions can start right away, using the shadow page table. Pages not pointed to from the current/shadow page table should be freed (garbage collected).

Shadow Paging (Cont.)
Advantages:
– Requires fewer disk accesses to perform an operation.
– Recovery from a crash is inexpensive and quite fast; there is no need for operations like undo and redo.
Disadvantages:
– It is difficult to keep related pages close together on disk.
– During the commit operation, the blocks superseded by the shadow page table's copies have to be returned to the collection of free blocks; otherwise they become inaccessible.
– Decreases execution speed.
– It is difficult to extend this technique to allow multiple transactions to execute concurrently.

Log-Based Recovery
A log is a sequence of log records. The records keep information about update activities on the database.
– The log is kept on stable storage.
When transaction Ti starts, it registers itself by writing a <Ti start> log record. Before Ti executes write(X), a log record <Ti, X, V1, V2> is written, where V1 is the value of X before the write (the old value), and V2 is the value to be written to X (the new value). When Ti finishes its last statement, the log record <Ti commit> is written.
Two approaches using logs:
– Immediate database modification
– Deferred database modification
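As an illustration, a minimal recovery pass over such a log can be sketched in Python: redo the updates of committed transactions going forward, then undo uncommitted ones going backward. The tuple-based log encoding and the `recover` helper are assumptions of this sketch, not part of any real DBMS.

```python
# Log records mirror <Ti start>, <Ti, X, V1, V2>, <Ti commit> as
# ("start", Ti), ("update", Ti, X, V1, V2), ("commit", Ti).

def recover(log, db):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    # redo phase: forward, re-apply updates of committed transactions
    for rec in log:
        if rec[0] == "update" and rec[1] in committed:
            _, txn, item, old, new = rec
            db[item] = new
    # undo phase: backward, restore old values of uncommitted transactions
    for rec in reversed(log):
        if rec[0] == "update" and rec[1] not in committed:
            _, txn, item, old, new = rec
            db[item] = old
    return db
```

The old value V1 is what makes undo possible and the new value V2 is what makes redo possible, which is why both are logged before the item itself is written.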
Immediate Database Modification
The immediate-modification scheme allows updates of an uncommitted transaction to be made to the buffer, or the disk itself, before the transaction commits. The update log record must be written before the database item is written.
– We assume that the log record is output directly to stable storage.
Output of updated blocks to disk can take place at any time before or after transaction commit. The order in which blocks are output can be different from the order in which they are written.
The deferred-modification scheme performs updates to buffer/disk only at the time of transaction commit.
– Simplifies some aspects of recovery.
– But has the overhead of storing a local copy.

Transaction Commit
A transaction is said to have committed when its commit log record is output to stable storage.
– All previous log records of the transaction must have been output already.
Writes performed by a transaction may still be in the buffer when the transaction commits, and may be output later.

Immediate Database Modification Example
(Table omitted: the example traces the log, the writes A = 950, B = 2050, C = 600, and the block outputs; BC is output before T1 commits, while BA is output after T0 commits. Note: BX denotes the block containing X.)

Undo and Redo Operations
– undo(Ti) restores the value of all data items updated by Ti to their old values, going backwards from the last log record for Ti. Each time a data item X is restored to its old value V, a special log record is written out. When the undo of a transaction is complete, a log record is written out.
– redo(Ti) sets the value of all data items updated by Ti to the new values, going forward from the first log record for Ti. No logging is done in this case.

Checkpoints
Redoing/undoing all transactions recorded in the log can be very slow.
– Processing the entire log is time-consuming if the system has run for a long time.
– We might unnecessarily redo transactions which have already output their updates to the database.
Streamline the recovery procedure by periodically performing checkpointing:
1. Output all log records currently residing in main memory onto stable storage.
2. Output all modified buffer blocks to the disk.
3. Write a log record <checkpoint L> onto stable storage, where L is a list of all transactions active at the time of the checkpoint.
4. All updates are stopped while doing checkpointing.

Checkpoints (Cont.)
During recovery we need to consider only the most recent transaction Ti that started before the checkpoint, and transactions that started after Ti.
– Scan backwards from the end of the log to find the most recent <checkpoint L> record.
– Only transactions that are in L or started after the checkpoint need to be redone or undone.
– Transactions that committed or aborted before the checkpoint already have all their updates output to stable storage.
Some earlier part of the log may be needed for undo operations:
– Continue scanning backwards till a <Ti start> record is found for every transaction Ti in L.
– Parts of the log prior to the earliest <Ti start> record above are not needed for recovery, and can be erased whenever desired.

Example of Checkpoints
T1 can be ignored (its updates were already output to disk due to the checkpoint). T2 and T3 are redone. T4 is undone.

References
1. https://db-book.com/
2. https://www.geeksforgeeks.org/codds-rules-in-dbms
3. https://anandgharu.wordpress.com/dbms/

Thank you