Chapter 6: Introduction to Synchronization
Prepared by Dr Babar Shah

Objectives
- To present the concept of process synchronization.
- To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data.
- To present both software and hardware solutions to the critical-section problem.
- To examine several classical process-synchronization problems.
- To explore several tools that are used to solve process-synchronization problems.

Concurrency and Data Consistency
- Processes within a system may be independent or cooperating.
- Reasons for cooperating processes: data sharing, computation speedup, modularity.
- Processes can execute concurrently and may be interrupted at any time, partially completing execution.
- Concurrent access to shared data may result in data inconsistency.
- Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.

Concurrency and Data Consistency (Contd.)
- The order of process/thread execution is non-deterministic:
  - A multiprocessing system may contain multiple processors, so cooperating threads/processes can execute simultaneously.
  - In a multiprogramming system, thread/process execution can be interleaved because of time-slicing.
- Operations often consist of multiple, visible steps.
- Goal: ensure that your concurrent program works under ALL possible interleavings.

Race Condition
- A race condition exists when the outcome depends on the particular order in which instructions execute.
- Solutions must guarantee that whatever the order in which cooperating processes/threads execute, the final result is correct.
- To guard against race conditions, use SYNCHRONIZATION: synchronize the processes/threads.
- Synchronization is the coordination of the activities of two or more processes/threads, which usually need to:
  - Compete for resources in a mutually exclusive manner (i.e., one at a time).
  - Cooperate in sequencing or ordering specific events in their individual activities.

Critical Section Problem
- Race condition: concurrent access to shared data where the final outcome depends on the order of execution.
- Critical section: the section of code where shared data is accessed.
- Each process has a critical-section segment of code, in which it may be changing common variables, updating a table, or writing a file (see example 1, the account variable, and example 2, the counter variable; a sketch of the counter race follows below).
- When one process is in its critical section, no other process may be in its critical section.
- The critical-section problem is to design a protocol that the processes can use to synchronize their activity so as to cooperatively share data.
- Entry section: code that requests permission to enter the critical section.
- Exit section: code that runs after the process exits its critical section.
- Remainder section: the remaining code.
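To make the counter example concrete, here is a minimal sketch of the race, assuming a POSIX-threads environment; the counter variable and the worker function name are illustrative, not from the slides.

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                      /* shared data */

    void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++)
            counter++;                     /* read-modify-write: several visible steps, not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* often less than 2000000: updates are lost */
        return 0;
    }

Because the two threads' counter++ steps can interleave, runs of this program typically print different, too-small totals; guarding counter++ with one of the tools below (a semaphore, Peterson's algorithm, or a lock) makes the result deterministic.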
Solution to the Critical-Section Problem
A solution must satisfy three requirements:
1. Mutual exclusion: if process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: if no process is executing in its critical section and some processes wish to enter theirs, then the selection of the process that enters next cannot be postponed indefinitely (absence of deadlock).
3. Bounded waiting: a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted (absence of starvation).

Synchronization Tools
1. Software solutions: semaphores, Peterson's algorithm, monitors.
2. Hardware solutions: disabling interrupts, atomic hardware instructions.

Software Solution to the Critical Section: Semaphores
- A semaphore is a variable (an abstract data type) used to control access to a common resource by multiple threads and to avoid critical-section problems in a concurrent system such as a multitasking operating system.
- A semaphore is an integer variable shared among multiple processes.
- The main aim of using a semaphore is process synchronization and access control for a common resource in a concurrent environment.

Software Solution to the Critical Section: Semaphores (Contd.)
- OS designers build software tools to solve the critical-section problem.
- Semaphores were introduced by Dijkstra in 1965.
- What is a semaphore? It is a variable S with a non-negative integer value on which only three operations are defined: initialize S, increment S, and decrement S.
- S can be accessed only via two indivisible (atomic) operations: wait() (sometimes called acquire) and signal() (sometimes called release), originally named P() and V().
- These operations are usually implemented via hardware atomic instructions.
- Think of a semaphore as a class with S as a variable and two methods, wait() and signal().

Operations on Semaphores
- Semaphores are implemented in the operating-system kernel, so the wait and signal operations are implemented as system calls.
- Let S be a semaphore whose integer value represents the amount of some resource available.
- Definition of the wait() operation:

      wait(S) {
          while (S <= 0)
              ;        /* busy wait */
          S--;
      }

- Definition of the signal() operation:

      signal(S) {
          S++;
      }

Types of Semaphore
- Counting semaphore: an integer value that can range over an unrestricted domain; it can be used to solve synchronization problems such as resource allocation.
- Binary semaphore: an integer value restricted to 0 or 1; it is simpler to implement and provides mutual exclusion, so it can be used to solve the critical-section problem.

Software Solution to the Critical Section: Peterson's Solution
- Peterson's algorithm synchronizes two processes. It uses two variables:
  - int turn
  - boolean flag[2]
- Initially both flags are false. When a process wants to execute its critical section, it sets its own flag to true and sets turn to the index of the other process. This declares that the process wants to execute but will let the other process run first.
- The process busy-waits until the other process has finished its own critical section. After this, the current process enters its critical section and (in the example) adds or removes a random number from the shared buffer.
- After completing the critical section, it sets its own flag back to false, indicating that it no longer wishes to enter. (A sketch of the algorithm follows the next slide.)

Software Solution to the Critical Section: Peterson's Solution (Contd.)
- Peterson's solution allows the two processes to share and access a resource without conflict.
- Every process gets a chance to execute.
- It is simple to implement and uses simple but powerful logic.
- It can be used on any hardware, since it is a purely software solution executed in user mode.
- It prevents the possibility of deadlock.
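The description above can be captured in a few lines of C. This is an illustrative sketch only: the names enter_region and leave_region are not from the slides, and on real hardware the plain loads and stores shown here would need atomic operations or memory barriers to prevent reordering.

    #include <stdbool.h>

    /* Shared state for two processes/threads with ids 0 and 1. */
    volatile bool flag[2] = { false, false };  /* flag[i]: process i wants to enter */
    volatile int  turn = 0;                    /* which process should yield */

    void enter_region(int self) {              /* entry section for process 'self' */
        int other = 1 - self;
        flag[self] = true;                     /* declare interest */
        turn = other;                          /* let the other process go first */
        while (flag[other] && turn == other)
            ;                                  /* busy wait */
    }

    void leave_region(int self) {              /* exit section */
        flag[self] = false;                    /* no longer interested */
    }

Each process brackets its critical section with enter_region(self) and leave_region(self); the turn variable breaks the tie when both flags are true, which is what gives mutual exclusion, progress, and bounded waiting.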
Solution to the Critical Section: Hardware Solution
- On a uniprocessor there is a single CPU, so only one process can execute at a time; a running process keeps the CPU until it invokes an OS service or is interrupted.
- One hardware solution: prevent a process from being interrupted, which guarantees mutual exclusion (a kind of lock and unlock).
  - Not a wise solution: it lets user processes disable and enable interrupts.
  - It does not work on multiprocessors.
- Modern machines instead provide special atomic hardware instructions (atomic = non-interruptible).
- The hardware-based solution to the critical-section problem is based on a simple tool: a lock. Before entering its critical section a process must acquire the lock, and it must release the lock when it exits the critical section. Using a lock also prevents the race condition.

Solution to the Critical-Section Problem Using Locks

      while True:
          acquire lock
          critical section
          release lock
          remainder section

Mutex Locks
- Protect a critical section by first calling acquire() (wait, to obtain the lock) and then release() (signal, to give the lock back).
- A Boolean variable indicates whether the lock is available or not.
- Calls to acquire() and release() must be atomic.
- This solution requires busy waiting, so this kind of lock is also called a spinlock. (A sketch built on an atomic test-and-set style instruction is given at the end of the chapter.)

Semaphore Implementation
- The implementation must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.
- Thus the implementation itself becomes a critical-section problem, with the wait and signal code placed inside a critical section.
- This can introduce busy waiting in the critical-section implementation, but the implementation code is short, and there is little busy waiting if the critical section is rarely occupied.
- Note that applications may spend a lot of time in critical sections, so busy waiting is not a good general solution.

Textbook:
- A. Silberschatz, Operating System Concepts

References:
- W. Stallings, Operating Systems: Internals and Design Principles, 9/E
- Andrew Tanenbaum and Herbert Bos, Modern Operating Systems, 4/E
- https://www.cs.unc.edu/~porter/courses/cse306/s13/slides/
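As the closing sketch referenced on the mutex-lock slide, here is one way a spinlock can be built on a test-and-set style atomic instruction, using C11's atomic_flag; this is not code from the slides, and the function names are illustrative.

    #include <stdatomic.h>

    /* Spinlock state: the flag is set while some thread holds the lock. */
    static atomic_flag lock_held = ATOMIC_FLAG_INIT;

    /* Entry section: atomic_flag_test_and_set atomically sets the flag and
       returns its previous value, so the loop spins until this thread is the
       one that changed it from clear to set. */
    static void spin_acquire(void) {
        while (atomic_flag_test_and_set(&lock_held))
            ;                 /* busy wait -- hence "spinlock" */
    }

    /* Exit section: clearing the flag releases the lock. */
    static void spin_release(void) {
        atomic_flag_clear(&lock_held);
    }

A thread calls spin_acquire() before its critical section and spin_release() after it, matching the acquire/release pattern on the "Using Locks" slide; because the test and the set happen as one non-interruptible step, two threads can never both observe the lock as free.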