Process Synchronization.pdf

Full Transcript


Process Synchronization
Operating Systems 1 (CSC141), Lecture 10
Prof. Jelili O. Oyelade, Mr. Ogbu H.O., Mr. Okuoyo, Miss Nathaniel

Outline
• Process Synchronization
• Race Condition
• Critical-Section Problem
• Hardware Support for Synchronization

Process Synchronization
• A system typically consists of several threads running either concurrently or in parallel. Threads often share user data. Meanwhile, the operating system continuously updates various data structures to support multiple threads.
• A race condition exists when access to shared data is not controlled, possibly resulting in corrupt data values.
• Process synchronization involves using tools that control access to shared data to avoid race conditions.
• These tools must be used carefully, as their incorrect use can result in poor system performance, including deadlock.
• Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through shared memory or message passing.
• Concurrent access to shared data may result in data inconsistency.

Race Condition
• A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, is called a race condition.
• In Figure 1, two processes, P0 and P1, are creating child processes using the system call fork().
• There is a race condition on the kernel variable next_available_pid, which represents the value of the next available process identifier.
• Unless mutual exclusion is provided, it is possible that the same process identifier number could be assigned to two separate processes.

Figure 1: Race condition when assigning a pid.

The Critical-Section Problem
• Consider a system consisting of n processes {P0, P1, ..., Pn-1}.
• Each process has a segment of code, called a critical section, in which the process may be accessing and updating data that is shared with at least one other process.
• The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section. That is, no two processes are executing in their critical sections at the same time.
• The critical-section problem is to design a protocol that the processes can use to synchronize their activity so as to cooperatively share data.
• Each process must request permission to enter its critical section. The section of code implementing this request is the entry section.
• The critical section may be followed by an exit section. The remaining code is the remainder section.

A solution to the critical-section problem must satisfy three requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

Figure: General structure of a typical process (see the sketch below).
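As a rough illustration of the structure in the figure above, the following C outline uses placeholder stubs (entry_section(), critical_section(), exit_section(), remainder_section()); these names are not defined in the lecture, and a real solution would fill the entry and exit sections with one of the synchronization tools introduced below. Requirement 3 of the critical-section problem follows after the sketch.

#include <stdbool.h>

/* Placeholder stubs for the four code regions of a typical process.
   A real solution fills entry_section()/exit_section() with a
   synchronization tool such as a mutex lock or a semaphore. */
static void entry_section(void)     { /* request permission to enter */ }
static void critical_section(void)  { /* access and update shared data */ }
static void exit_section(void)      { /* announce that the section is free */ }
static void remainder_section(void) { /* all other work */ }

int main(void)
{
    while (true) {               /* each process repeats this pattern */
        entry_section();
        critical_section();      /* at most one process may execute here */
        exit_section();
        remainder_section();
    }
}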
3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

• The critical-section problem could be solved simply in a single-core environment if we could prevent interrupts from occurring while a shared variable was being modified.

Synchronization Tools
• Software-based solutions such as Peterson's solution are not guaranteed to work on modern computer architectures.
• As such, three major hardware instructions provide support for solving the critical-section problem.
• These primitive operations can be used directly as synchronization tools, or they can be used to form the foundation of more abstract synchronization mechanisms.

Synchronization Tools 1: Mutex Locks
• A mutex lock is used to protect critical sections and thus prevent race conditions. That is, a process must acquire the lock before entering a critical section; it releases the lock when it exits the critical section.
• The acquire() function acquires the lock, and the release() function releases the lock, as illustrated in Figure 2.

Figure 2: Solution to the critical-section problem using mutex locks.

• A mutex lock has a Boolean variable whose value indicates whether the lock is available or not.
• If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable.
• A process that attempts to acquire an unavailable lock is blocked until the lock is released.

Figure: Definition of acquire() and release().

• Calls to either acquire() or release() must be performed atomically. Thus, mutex locks can be implemented using the compare-and-swap (CAS) operation.

Disadvantage of Mutex Locks
• The main disadvantage of this implementation of the mutex lock is that it requires busy waiting: while a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire().
• A mutex lock implemented this way is called a spinlock, a mechanism suited to locks that are held only for short durations.
• The advantage of spinlocks is that no context switch is required when a process must wait on a lock.
• In certain circumstances on multicore systems, spinlocks are the preferable choice for locking.

Synchronization Tools 2: Semaphores
• A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().

Figure: Definition of the wait() and signal() operations.

• All modifications to the integer value of the semaphore in the wait() and signal() operations must be executed atomically. That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value.
• In addition, in the case of wait(S), the testing of the integer value of S (S <= 0), as well as its possible modification (S--), must be executed without interruption.
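As a rough illustration of the two tools above, the first sketch below builds a spinlock-style mutex whose acquire() busy-waits on an atomic compare-and-swap (CAS). The names mutex_t, mutex_init, acquire, and release are illustrative choices, not code given in the lecture.

#include <stdbool.h>
#include <stdatomic.h>

typedef struct {
    atomic_bool available;   /* true when the lock is free */
} mutex_t;

void mutex_init(mutex_t *m)
{
    atomic_store(&m->available, true);
}

void acquire(mutex_t *m)
{
    bool expected = true;
    /* Spin until we atomically flip available from true to false.
       On failure, CAS writes the current value into expected, so reset it. */
    while (!atomic_compare_exchange_weak(&m->available, &expected, false)) {
        expected = true;
    }
}

void release(mutex_t *m)
{
    atomic_store(&m->available, true);   /* mark the lock as free again */
}

The second sketch spells out the classic busy-waiting logic of the wait() and signal() operations described above. wait_sem and signal_sem are placeholder names (chosen to avoid the POSIX wait()), and plain C like this does not by itself provide the atomicity the definition requires; it only illustrates the logic.

typedef struct {
    volatile int value;   /* volatile avoids a trivially optimized loop,
                             but does NOT provide the required atomicity */
} semaphore_t;

void wait_sem(semaphore_t *s)      /* wait(S), sometimes called P() */
{
    while (s->value <= 0)
        ;                          /* busy wait until the value is positive */
    s->value--;
}

void signal_sem(semaphore_t *s)    /* signal(S), sometimes called V() */
{
    s->value++;
}

In both sketches a process would bracket its critical section with acquire()/release() or wait_sem()/signal_sem(), matching the entry and exit sections of the general process structure shown earlier.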
