Concurrency and Synchronization

Summary

This document provides an overview of concurrency and synchronization concepts. It explores synchronization primitives such as mutexes and semaphores and explains how they are used in concurrent programming. It also discusses deadlock and starvation, along with solutions and examples for preventing them, and covers concurrency control mechanisms in database systems, including transaction management and isolation levels.

Full Transcript


Let's learn: Concurrency and Synchronization

TABLE OF CONTENTS
01 Introduction to Concurrency
02 Synchronization Primitives
03 Deadlock and Starvation
04 Concurrency Control

INTRODUCTION TO CONCURRENCY

Concurrency refers to the execution of multiple tasks or processes simultaneously or in overlapping time periods, often improving efficiency and performance. It allows different parts of a program to execute independently or interact with each other while running at the same time.

EXAMPLE

Consider a web server that handles multiple client requests. Without concurrency, the server would have to process one request at a time, making each client wait until the previous one is fully handled. With concurrency, the server can handle multiple requests simultaneously by:
1. Creating separate threads for each request.
2. Processing parts of a request while waiting for database responses or other resources.
3. Using asynchronous I/O operations so the server is not blocked while waiting for external data.

KEY PARTS OF CONCURRENCY

Processes: Independent programs that can run concurrently. Each process has its own memory space and system resources.
Threads: Smaller units of a process that share the same memory space.
Asynchronous Programming: A programming model that allows tasks to run independently of the main program flow.
Synchronization: Mechanisms used to control the access of multiple threads or processes to shared resources.
Concurrency Control: Techniques used in database systems to ensure that transactions are processed reliably and concurrently.

IMPORTANCE OF CONCURRENCY IN MODERN COMPUTING

Improved performance, responsiveness, fault tolerance, parallel problem solving, and scalability.

SYNCHRONIZATION PRIMITIVES

Synchronization primitives are essential mechanisms in concurrent programming that allow safe access to shared resources by multiple threads or processes. Two key synchronization primitives are mutexes and semaphores, each designed to solve different concurrency challenges.

MUTEXES

A mutex (short for mutual exclusion) is a synchronization primitive used to prevent multiple threads from accessing a shared resource simultaneously. Its primary purpose is to ensure that only one thread can access critical sections of code or shared data at a time, preventing race conditions.

HOW MUTEXES WORK

A mutex operates by providing a lock mechanism. When a thread needs to access a shared resource, it must first acquire the mutex (lock it). If the mutex is already locked by another thread, the requesting thread is forced to wait until the mutex is released. Once the mutex is released (unlocked), the next waiting thread can acquire it and proceed with its task.

USE CASES AND EXAMPLES

Preventing race conditions: If multiple threads need to update a shared variable, using a mutex ensures that only one thread can modify the variable at a time.
File I/O operations: When threads are writing to a file, using a mutex ensures data integrity by allowing one thread to write at a time.
(A short Python sketch that uses both a mutex and a semaphore appears after the mutex/semaphore comparison below.)

SEMAPHORES

A semaphore is another synchronization primitive, but unlike a mutex, it can allow multiple threads to access a resource simultaneously up to a specified limit. A semaphore is typically used to control access to a resource pool or to signal between threads.

There are two types of semaphores:

Counting semaphores: These can take non-negative integer values and are used to manage access to a finite number of identical resources. The semaphore's count represents the number of available resources. When a thread acquires a resource, the count is decremented; when it releases a resource, the count is incremented.
Binary semaphores: These behave similarly to mutexes, as they can only take the values 0 or 1, representing whether a resource is available or not. Binary semaphores are useful for signaling between threads and are often employed for thread synchronization.

HOW SEMAPHORES WORK

Semaphores work by maintaining a counter. When a thread wants to access a shared resource, it decrements the counter. If the counter is greater than or equal to zero, the thread can proceed; if the counter is negative, the thread must wait until another thread increments the counter, signaling that a resource is available.

USE CASES AND EXAMPLES

Resource pool management: Semaphores are ideal for managing access to a limited pool of resources, such as a connection pool or a limited number of file handles.
Producer-consumer problems: Semaphores help synchronize producer and consumer threads, ensuring that a producer can produce data only when space is available and a consumer can consume data only when it is present.

MUTEXES VS. SEMAPHORES

Mutexes are strictly used for mutual exclusion, ensuring that only one thread can access a resource at a time. Semaphores can allow multiple threads to access shared resources simultaneously (counting semaphores) or signal between threads (binary semaphores).
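To make the two primitives concrete, here is a minimal Python sketch (not part of the original slides) that uses both: a counting semaphore caps how many threads may use a small connection pool at once, while a mutex (threading.Lock) protects a shared counter of completed requests. The pool size, thread count, and names such as handle_request are illustrative assumptions.

```python
import threading
import time

POOL_SIZE = 3                                  # assumed size of the resource pool
pool_slots = threading.Semaphore(POOL_SIZE)    # counting semaphore: at most 3 concurrent users
counter_lock = threading.Lock()                # mutex: protects the shared counter
completed = 0                                  # shared state updated by every worker

def handle_request(request_id: int) -> None:
    global completed
    with pool_slots:                 # decrement the semaphore; block if no slot is free
        time.sleep(0.1)              # simulate work that uses the pooled resource
    with counter_lock:               # only one thread at a time may update the counter
        completed += 1

threads = [threading.Thread(target=handle_request, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"completed requests: {completed}")      # always 10: the lock prevents lost updates
```

If the lock is removed, the final count can fall below 10 on some interpreters because `completed += 1` is not atomic; that lost update is exactly the race condition a mutex prevents.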
DEADLOCK AND STARVATION

WHAT IS DEADLOCK?

Deadlock is a situation in which each process in a set holds a resource while waiting for a resource held by another process in the same set, so none of them can proceed.

FOUR CONDITIONS FOR DEADLOCK

Mutual Exclusion: A resource can be used by only one process at a time; if another process requests the same resource, it must wait until the resource is released by the process currently using it.
Hold and Wait: A process holds at least one resource while waiting to acquire another resource held by a different process.
No Preemption: A resource cannot be forcibly taken from the process holding it; the process must release the resource voluntarily once it has completed its task.
Circular Wait: A circular chain of processes exists in which each process waits for a resource held by the next process in the chain.

THREE GENERAL APPROACHES TO HANDLING DEADLOCK

01 Deadlock Prevention
02 Deadlock Avoidance
03 Deadlock Detection and Recovery

1. DEADLOCK PREVENTION
Break one of the four conditions: eliminate mutual exclusion, eliminate hold and wait, eliminate no preemption, or eliminate circular wait (a lock-ordering sketch follows this section).

2. DEADLOCK AVOIDANCE
Resource Allocation Graph (RAG), Banker's Algorithm.

3. DEADLOCK DETECTION AND RECOVERY
Detection: Resource Allocation Graph (RAG), Wait-For Graph (WFG).
Recovery: process termination, resource preemption, rollback.
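The sketch below (not from the original slides; the lock names and sleep calls are invented for illustration) shows how circular wait can arise with two locks acquired in opposite orders, and how imposing a single global lock order prevents it.

```python
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

# Deadlock-prone pattern: each thread holds one lock while waiting for the other
# (hold and wait + circular wait). These two functions are defined only to show
# the hazard; running them together can hang the program.
def worker_1_bad():
    with lock_a:
        time.sleep(0.1)
        with lock_b:
            pass

def worker_2_bad():
    with lock_b:
        time.sleep(0.1)
        with lock_a:
            pass

# Deadlock-free pattern: every thread acquires the locks in the same global order
# (lock_a before lock_b), which breaks the circular-wait condition.
def worker_ordered():
    with lock_a:
        with lock_b:
            time.sleep(0.1)

threads = [threading.Thread(target=worker_ordered) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("finished without deadlock")
```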
WHAT IS STARVATION?

Starvation is a situation in which a low-priority process is perpetually denied the resources it needs because higher-priority processes keep consuming them.

CAUSES OF STARVATION

High-Priority Processes: In priority-based scheduling algorithms, processes are assigned priorities and the scheduler favors higher-priority processes, so low-priority processes may wait indefinitely.
Priority Inversion: A low-priority process holding a resource needed by a high-priority process can cause the high-priority process to wait indefinitely.
Insufficient Resources: If the system does not have enough resources to meet the demands of all processes, some processes might be starved of the resources they need.
Circular Waiting: In a deadlock scenario, multiple processes are stuck waiting for each other, each holding a resource that another process needs; this can starve every process involved in the deadlock.
Limited Access: In multi-user systems, resource quotas might be set for different users or processes, limiting what some of them can obtain.
Resource Contention: When the system is overloaded with too many processes or tasks, the OS may struggle to allocate sufficient resources to all of them, potentially starving some processes.

SOLUTIONS TO PREVENT STARVATION

Aging: Gradually increase the priority of processes that have been waiting for a long time, so that lower-priority processes eventually get a chance to access resources (a small sketch follows this list).
Priority Inheritance: Temporarily raise the priority of a low-priority process holding a resource needed by a high-priority process, so the high-priority process can acquire the resource and avoid being starved.
Round Robin Scheduling: Fairly allocate time slices to each process, ensuring that every process gets a chance to execute and reducing the risk of starvation.
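A rough sketch of aging, assuming a simple priority scheduler in which every waiting process gains one unit of effective priority per scheduling round; the process names and boost rate are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    base_priority: int   # higher number = higher priority
    waiting_time: int = 0

AGING_BOOST = 1  # assumed: +1 effective priority per round spent waiting

def effective_priority(p: Process) -> int:
    # Aging: the longer a process waits, the higher its effective priority.
    return p.base_priority + AGING_BOOST * p.waiting_time

def schedule_next(ready_queue: list[Process]) -> Process:
    chosen = max(ready_queue, key=effective_priority)
    for p in ready_queue:
        if p is not chosen:
            p.waiting_time += 1   # everyone who was not picked keeps aging
    chosen.waiting_time = 0
    return chosen

queue = [Process("low", 1), Process("high", 10)]
order = [schedule_next(queue).name for _ in range(12)]
print(order)  # "low" is picked once its accumulated wait catches up to the priority gap
```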
CONCURRENCY CONTROL MECHANISMS

Concurrency control mechanisms are a set of techniques used to ensure that multiple transactions or operations can be executed simultaneously (concurrently) in a database or system without leading to inconsistency, data loss, or deadlocks. They ensure that transactions are isolated and executed in such a way that their results are correct and do not interfere with each other, maintaining data integrity.

TYPES OF CONCURRENCY CONTROL MECHANISMS

1. Lock-based Concurrency Control: Locks are used to synchronize access to resources; transactions must acquire locks before they can read or modify data. There are two main types of locks:
Shared locks (S-lock): Allow multiple transactions to read the data but not modify it.
Exclusive locks (X-lock): Allow only one transaction to both read and modify the data.

2. Two-Phase Locking (2PL): A widely used lock-based mechanism that operates in two phases:
Growing phase: Transactions can acquire locks but not release them.
Shrinking phase: Transactions can release locks but not acquire new ones.

3. Timestamp-based Concurrency Control: Each transaction is assigned a unique timestamp when it begins. Timestamps order the transactions to ensure consistency, with earlier timestamps getting priority. If a conflict arises, the system allows or rejects a transaction based on its timestamp, ensuring serializability.

4. Optimistic Concurrency Control (OCC): This technique assumes that conflicts between transactions are rare. Instead of locking data upfront, it allows transactions to execute without restrictions during processing. At the end of the transaction, during the validation phase, the system checks whether any conflicts have occurred. If conflicts are found, the transaction is rolled back and restarted; otherwise, it is committed. (A small sketch of this validate-then-commit loop appears after the isolation levels below.)

5. Multiversion Concurrency Control (MVCC): MVCC allows multiple versions of data to exist at the same time, enabling readers and writers to work without blocking each other. Readers access the old versions of the data (a snapshot) that were valid when they started, while writers work on the latest version and create a new version of the data. It is commonly used in databases such as PostgreSQL and Oracle. (A snapshot-read sketch also appears at the end of this section.)

TRANSACTION MANAGEMENT

Transaction management refers to the control and coordination of transactions to ensure data integrity, consistency, and isolation in a concurrent computing environment, especially in database systems.

A-C-I-D

Atomicity: Ensures that all operations within a transaction are completed successfully; if any operation fails, the entire transaction is aborted and the database is left unchanged.
Consistency: Guarantees that a transaction transforms the database from one valid state to another, maintaining data integrity.
Isolation: Ensures that concurrent transactions do not interfere with each other; changes made by one transaction are not visible to others until the transaction is committed.
Durability: Once a transaction is committed, its changes are permanent and survive system failures.

CONSISTENCY AND ISOLATION LEVELS

Consistency ensures that a transaction takes the database from one valid state to another: a transaction is consistent if it starts in a consistent state (where all integrity rules are satisfied) and ends in a consistent state after its completion.
Isolation is the degree to which the operations of one transaction are isolated from those of other concurrent transactions. It prevents conflicts and anomalies such as dirty reads, non-repeatable reads, and phantom reads.

EXAMPLES OF ISOLATION LEVELS

Read Uncommitted: The lowest isolation level. Transactions are allowed to read uncommitted changes made by other transactions, which can lead to dirty reads.
Read Committed: A transaction can only read data that has been committed by other transactions, avoiding dirty reads. However, non-repeatable reads are still possible (data can change between reads within the same transaction).
Repeatable Read: A transaction is guaranteed to see the same data if it reads the same rows multiple times. This level prevents non-repeatable reads but does not prevent phantom reads (the appearance of new rows in the dataset due to concurrent inserts).
Serializable: The highest isolation level, which ensures complete isolation between transactions. It guarantees that transactions produce the same result as if they had been run sequentially, preventing all inconsistencies: dirty reads, non-repeatable reads, and phantom reads.
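The sketch below illustrates the optimistic validate-then-commit loop mentioned under OCC above. It is a toy, single-record illustration under assumed names (VersionedRecord, try_commit), not the implementation of any particular database.

```python
import threading

class VersionedRecord:
    """A single record guarded by optimistic concurrency control."""
    def __init__(self, value: int):
        self.value = value
        self.version = 0
        self._commit_lock = threading.Lock()   # used only at commit time, not during the transaction

    def read(self):
        # Read phase: remember which version the transaction saw.
        return self.value, self.version

    def try_commit(self, seen_version: int, new_value: int) -> bool:
        # Validation + write phase: commit only if nobody else committed in between.
        with self._commit_lock:
            if self.version != seen_version:
                return False            # conflict detected: caller must roll back and retry
            self.value = new_value
            self.version += 1
            return True

def add_ten(record: VersionedRecord) -> None:
    while True:                         # optimistic retry loop
        value, version = record.read()
        new_value = value + 10          # do the "transaction" work without holding locks
        if record.try_commit(version, new_value):
            return

record = VersionedRecord(0)
threads = [threading.Thread(target=add_ten, args=(record,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(record.value)   # 80: any conflicting attempt is detected and retried, so no update is lost
```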

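And a toy sketch of the MVCC idea described above: writers append timestamped versions instead of overwriting, and a reader keeps seeing the snapshot that was current when it started. The class and method names are assumptions for illustration, not the actual mechanism of PostgreSQL or Oracle.

```python
import itertools

class MVCCStore:
    """Toy multiversion store: one key, many timestamped versions."""
    def __init__(self):
        self._clock = itertools.count(1)   # monotonically increasing commit timestamps
        self.versions = [(0, None)]        # list of (commit_ts, value), oldest first

    def write(self, value):
        # Writers never overwrite: they append a new version with a fresh timestamp.
        ts = next(self._clock)
        self.versions.append((ts, value))
        return ts

    def snapshot(self):
        # A reader's snapshot is simply the latest committed timestamp at start time.
        return self.versions[-1][0]

    def read(self, snapshot_ts):
        # Readers see the newest version committed at or before their snapshot.
        visible = [v for ts, v in self.versions if ts <= snapshot_ts]
        return visible[-1]

store = MVCCStore()
store.write("v1")
snap = store.snapshot()     # reader starts here and keeps seeing "v1"
store.write("v2")           # a later writer does not block or disturb the reader
print(store.read(snap))              # -> v1 (the reader's consistent snapshot)
print(store.read(store.snapshot()))  # -> v2 (a new reader sees the latest version)
```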
Take a break for now. Next week, after the reports, we will have an individual recitation, with a 15-item quiz during the recitation.