Full Transcript

Chapter 6: Concurrency, Inter-Process Communication and Synchronization

Overview

In this chapter, we will talk about concurrent programming!
1. Concurrency
2. Synchronization mechanisms
3. Inter-process communication
4. Deadlocks

Concurrency

Overview

Allowing processes to communicate and work together is an important feature provided by operating systems. For example:
- Shell scripts connected by pipes use each other's outputs as inputs
- Multi-threaded programs share data and synchronize themselves when they are done

To provide these features, operating systems need to solve three main problems:
1. How can processes pass information to each other?
2. How can we make sure that threads do not get in each other's way?
3. How can we ensure that threads can synchronize their behavior?

Let's start with some definitions about concurrent systems.

Concurrency vs. Parallelism

Definition: Concurrency is the ability for multiple programs to be executed simultaneously on a system. This can be on a single CPU core, with context switching between threads.

Definition: Parallelism is the ability to use multiple CPU cores at the same time. This is only possible on multi-core or distributed systems that can physically execute multiple threads at the same time.

Race Conditions

When processes execute concurrently, they may use shared resources, which could be memory areas, files, devices, etc. Depending on the operations performed on these resources, some situations may lead to undefined behaviors or bugs.

Definition: A race condition happens when the behavior of a concurrent system or program depends on the order of operations and leads to unexpected results.

Race conditions may happen with:
- Parallelism, i.e., two CPU cores use a shared variable at the same time
- Concurrency, i.e., a context switch happens at the point that makes some operation inconsistent

Race Conditions: Example

In a multi-threaded web server, worker threads increment a global counter every time they process a packet, to measure the bandwidth of the web server.

    1 unsigned int nr_req;
    2
    3 void worker(void) {
    4     do {
    5         process_packet();
    6         nr_req++; // equivalent to: read nr_req; add 1; write nr_req
    7     } while (1);
    8 }

Scenario 1: Context switch happens after line 6

    Worker 1:
        ; nr_req = 0
        mov reg, nr_req
        add reg, reg, 1
        mov nr_req, reg
        ; nr_req = 1
    ; context switch
    Worker 2:
        mov reg, nr_req
        add reg, reg, 1
        mov nr_req, reg
        ; nr_req = 2

Everything works as expected.

Scenario 2: Context switch happens during line 6

    Worker 1:
        ; nr_req = 0
        mov reg, nr_req
    ; context switch
    Worker 2:
        mov reg, nr_req
        add reg, reg, 1
        mov nr_req, reg
        ; nr_req = 1
    ; context switch
    Worker 1:
        add reg, reg, 1
        mov nr_req, reg
        ; nr_req = 1

We only see one increment instead of two in the end!
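To reproduce the lost update outside the slides, here is a minimal, self-contained sketch of the same race with two POSIX threads. The iteration count and names are illustrative choices, not from the lecture:

    #include <pthread.h>
    #include <stdio.h>

    static unsigned int nr_req; /* shared counter, deliberately unprotected */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            nr_req++; /* read-modify-write, not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* Expected 2000000; lost updates usually make it smaller. */
        printf("nr_req = %u\n", nr_req);
        return 0;
    }

Compiled with gcc -pthread race.c, this typically prints a value below 2000000 on a multi-core machine, precisely because of the interleaving shown in Scenario 2.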
Critical Sections

To avoid race conditions, we can make sure that only one thread at a time has access to the shared resource.

Definition: A critical section is a portion of code that should be accessed by at most one thread at a time to avoid race conditions.

In these critical sections, we need to enforce mutual exclusion. To enforce this, we need to be able to:
1. Allow one and only one thread at a time into a critical section
2. Make any other thread that wants to access an already occupied critical section wait until the section is available

How can we enforce this mutual exclusion in a critical section?

Supporting Concurrency

Earlier, we said that we needed three things to properly support concurrency:
1. How can processes pass information to each other?
2. How can we make sure that threads do not get in each other's way?
3. How can we ensure that threads can synchronize their behavior?

Let's start with synchronization mechanisms to solve 2. and 3.

Synchronization Mechanisms

Mutex: Mutual Exclusion Locks

A mutual exclusion lock, or mutex, is an object that enables mutual exclusion on critical sections. A mutex is a synchronization object that provides two primitives:
- lock(): If the lock is available, take it and enter the critical section; else wait for it to be available
- unlock(): Make the lock available again and leave the critical section

Usage:
- When entering the critical section, call lock() on the mutex (and wait if needed)
- When exiting the critical section, call unlock() on the mutex

Invariant 1: Only one thread at a time can hold the mutex.
Invariant 2: Only the lock owner can unlock it.

Mutex: Internals

A mutex is just a single variable that can take two possible values, 0 or 1, representing the two states unlocked and locked, respectively. However, a normal variable would not work, because of the race condition that occurs if two threads modify the variable simultaneously. We need an atomic variable, so that the read-increment-write operation is performed with the guarantee that another thread cannot also do the same concurrently. This is made possible by special atomic instructions provided by the CPU instruction set:
- x86 provides CMPXCHG (compare-exchange) and LOCK INC/DEC (atomic increment/decrement)
- Arm/RISC-V provide LL/SC (load-link/store-conditional) semantics

We won't detail how all these work now. We can discuss it at the end if we have time.

The primitives are implemented as follows:

    mutex_lock:
        mov reg, 1            ; write 1 into reg
        cmpxchg 0, mutex, reg ; if mutex = 0, swap reg and mutex (this sets the mutex to 1 if successful)
        jz ok                 ; if the mutex was unlocked, jump to the end, you have the lock
        call yield            ; else, yield the CPU (could also block)
        jmp mutex_lock        ; when scheduled again, retry
    ok:
        ret                   ; return to calling function

    mutex_unlock:
        mov mutex, 0          ; set the mutex to 0
        ret                   ; return to calling function

We should also check that we own the lock to allow the unlock()!

Mutex: POSIX API

Header: #include <pthread.h>

Creation/destruction:
    int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr);
    int pthread_mutex_destroy(pthread_mutex_t *mutex);

Primitives:
    int pthread_mutex_lock(pthread_mutex_t *mutex);
    int pthread_mutex_trylock(pthread_mutex_t *mutex);
    int pthread_mutex_unlock(pthread_mutex_t *mutex);
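As a concrete illustration, here is the web server counter from earlier, fixed with a POSIX mutex. This is a minimal sketch: process_packet() stands in for the slide's packet handling and is assumed to be defined elsewhere.

    #include <pthread.h>

    extern void process_packet(void); /* from the earlier example, assumed defined elsewhere */

    static unsigned int nr_req;
    static pthread_mutex_t nr_req_lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg)
    {
        while (1) {
            process_packet();
            pthread_mutex_lock(&nr_req_lock);   /* enter the critical section */
            nr_req++;                           /* at most one thread executes this at a time */
            pthread_mutex_unlock(&nr_req_lock); /* leave the critical section */
        }
        return NULL;
    }

Whichever worker reaches pthread_mutex_lock() second simply blocks until the increment is published, so the interleaving of Scenario 2 can no longer occur.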
Mutex: Limitations

Mutexes are not a silver bullet for all race conditions:
- You may need to let more than one process into a critical section, e.g., you have multiple resources to protect, and they can be used independently by multiple threads
- You may need to unlock a thread without owning the lock

Let's see an example: a parking lot management system! A counter of free spots (e.g., "Free: 12") is displayed at the entrance, cars wait at the entry barrier when the lot is full, and exiting cars free up spots. Could we protect the counter with a mutex lock? Impossible, since exiting cars cannot unlock() the mutex that was taken by a car waiting in line!

Semaphore

A semaphore is a synchronization object similar to a mutex, with the difference that it can allow more than one thread into the critical section. A semaphore provides:
- A resource counter, e.g., the number of free parking spots
- A waiting mechanism, e.g., a red traffic light
- A signalling mechanism, e.g., a green traffic light triggered by exits

Semaphore: Internals

Semaphores are the basic block of most synchronization primitives. They are built around an atomic integer K ≥ 0 and provide two primitives:
- wait(): wait until K > 0, then decrement K by 1
- signal(): increment K by 1

Alternate names: wait() is also called P()/down()/acquire(); signal() is also called V()/up()/release()/post().

Waiting threads are stored in a wait queue, so that they can be woken up later. The atomic counter K is manipulated with atomic instructions only, in the same way as for mutexes. A mutex is a semaphore initialized with K = 1 that allows the signal() method only for the lock owner.

Let's go back to our parking example!

Semaphore: The Parking Lot Strikes Back

Let's revisit our parking lot management system from earlier, with semaphores! The counter of free spots is now managed by a semaphore: entering cars wait() on it, and exiting cars signal() it. Assigning a specific spot to each car is managed by application logic. This could be done by having an array of mutexes (one per parking spot), or an array of atomic integers, or an array of integers protected by a single mutex, etc.

Semaphore: The Producer-Consumer Pattern

Another example is the frequent programming pattern known as producer-consumer. One thread, the producer, produces a resource, while another thread, the consumer, consumes it. This creates a pipeline of data between both threads, e.g., pipes in shell scripts.

Example: A video decoding program with two threads:
- Thread 1: Reads the video file from disk to memory by chunks
- Thread 2: Decodes the data from memory to display it by chunks

    semaphore_t sem;

    int producer(void) {
        while (1) {
            read_chunk(file, buf);
            signal(sem);
        }
    }

    int consumer(void) {
        while (1) {
            wait(sem);
            decode_chunk(buf);
        }
    }

We also need a way to tell the consumer where to read in the buffer. This was omitted here to simplify the example.

Semaphore: Limitations

Some characteristics of semaphores are implementation-dependent:

Wakeup order: If multiple threads are waiting on a semaphore, which one should be woken up first? Multiple strategies are possible:
▪ Wake up threads in the order they started to wait
▪ Randomly wake up a waiting thread
▪ Wake up threads according to their priority, e.g., nice value

Freedom from starvation: A thread waiting on a semaphore will eventually be woken up, i.e., a thread cannot wait forever on a semaphore. This is heavily dependent on the wakeup order strategy. Among the three strategies above, only the first one guarantees freedom from starvation. However, the other strategies can implement special code paths to avoid starvation while maintaining their overall strategy.

Lock ordering: Programming with semaphores (or mutexes) is extremely error-prone and hard to debug. For example, if we have two locks A and B and two threads that take the locks in different orders, e.g., thread 1 takes A then B while thread 2 takes B then A, we might end up with both threads waiting for each other, forever.
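To make the parking lot concrete, here is a small sketch using the POSIX semaphore API described on the next slide. The spot count and the function names around the semaphore (parking_init, car_enters, car_exits) are illustrative, not from the lecture:

    #include <semaphore.h>

    #define NR_SPOTS 12

    static sem_t free_spots; /* counts free parking spots */

    void parking_init(void)
    {
        sem_init(&free_spots, 0, NR_SPOTS); /* K = number of free spots */
    }

    void car_enters(void)
    {
        sem_wait(&free_spots); /* blocks at the barrier while the lot is full */
        /* ... application logic assigns an actual spot here ... */
    }

    void car_exits(void)
    {
        /* ... application logic releases the spot here ... */
        sem_post(&free_spots); /* any exiting car may signal: no ownership needed */
    }

Unlike the mutex attempt, sem_post() works here precisely because semaphores have no notion of an owner.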
Semaphore: POSIX API

Header: #include <semaphore.h>

Creation/destruction:
    int sem_init(sem_t *sem, int pshared, unsigned int value);
    int sem_destroy(sem_t *sem);

Primitives:
    int sem_wait(sem_t *sem);
    int sem_trywait(sem_t *sem);
    int sem_timedwait(sem_t *sem, const struct timespec *abs_timeout);
    int sem_post(sem_t *sem);
    int sem_getvalue(sem_t *sem, int *sval);

Conditions

In some cases, threads in a critical section need to wait for some event. However, if they simply blocked, they would still hold the mutex and prevent other threads from progressing. This is even more critical if the event the thread waits for can only happen if another thread gets the mutex.

To allow this, we can use a condition, a synchronization object that enables threads to synchronize on a condition being met. When a thread waits on a condition, it also releases the mutex. When the condition becomes true, the thread is woken up with the mutex held again automatically. Conditions usually provide two primitives:
- wait(mutex): Wait until the condition becomes true, releasing the mutex in the meantime
- signal(): Wake up a thread waiting on the condition

Conditions: Producer-Consumer Example

Let's take our producer-consumer example again:

    int producer(void) {
        while (1) {
            lock(mutex);
            while (buf_full(buf)) // re-checked after wakeup: wakeups may be spurious
                cond_wait(buf_not_full_cond, mutex);
            add_item(item, buf);
            cond_signal(buf_not_empty_cond);
            unlock(mutex);
        }
    }

    int consumer(void) {
        while (1) {
            lock(mutex);
            while (buf_empty(buf)) // re-checked after wakeup: wakeups may be spurious
                cond_wait(buf_not_empty_cond, mutex);
            get_item(item, buf);
            cond_signal(buf_not_full_cond);
            unlock(mutex);
        }
    }

Conditions: POSIX API

Header: #include <pthread.h>

Creation/destruction:
    int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *attr);
    int pthread_cond_destroy(pthread_cond_t *cond);

Primitives:
    int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
    int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex, const struct timespec *abstime);
    int pthread_cond_signal(pthread_cond_t *cond);
    int pthread_cond_broadcast(pthread_cond_t *cond);
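Here is the same pattern written against the real pthread API, as a minimal sketch of a bounded buffer of integers. The buffer size and helper names are illustrative; items are handed out LIFO just to keep the sketch short:

    #include <pthread.h>

    #define BUF_SIZE 16

    static int buf[BUF_SIZE];
    static int count; /* number of items currently in buf */
    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

    void put_item(int item)
    {
        pthread_mutex_lock(&mutex);
        while (count == BUF_SIZE)                 /* buffer full: wait, releasing the mutex */
            pthread_cond_wait(&not_full, &mutex);
        buf[count++] = item;
        pthread_cond_signal(&not_empty);          /* wake up a waiting consumer, if any */
        pthread_mutex_unlock(&mutex);
    }

    int get_item(void)
    {
        int item;

        pthread_mutex_lock(&mutex);
        while (count == 0)                        /* buffer empty: wait, releasing the mutex */
            pthread_cond_wait(&not_empty, &mutex);
        item = buf[--count];
        pthread_cond_signal(&not_full);           /* wake up a waiting producer, if any */
        pthread_mutex_unlock(&mutex);
        return item;
    }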
Barriers

If we need multiple threads (usually more than two) to synchronize, mutexes and semaphores might not be enough. For example, an application divided into phases would require all threads to complete phase i before starting phase i + 1. To this end, we can use a barrier, a synchronization point that blocks threads that wait on it until all expected threads have reached it. It is initialized with a strictly positive integer, the number of threads to wait for, and provides a wait() primitive.

Example: We need to synchronize five threads. The barrier will block them until five threads wait on it; it will then unlock all the waiting threads.

Barriers: POSIX API

Header: #include <pthread.h>

Creation/destruction:
    int pthread_barrier_init(pthread_barrier_t *barrier, const pthread_barrierattr_t *attr, unsigned count);
    int pthread_barrier_destroy(pthread_barrier_t *barrier);

Primitives:
    int pthread_barrier_wait(pthread_barrier_t *barrier);

Readers-Writer Locks

Another useful synchronization object is the readers-writer lock. It is pretty frequent that some data structures are often read but rarely modified. In this case, readers can operate in parallel without causing any problem. However, when the structure is modified, the writer needs to be in mutual exclusion. Readers-writer locks allow N readers in parallel, and 1 writer in mutual exclusion. They provide this asymmetric behavior with the following primitives:
- read_lock(): Used by readers to enter the critical section. Upon entry, readers are guaranteed that no one will modify the protected data structure. If a writer is in the critical section, wait until they exit it. If readers are in the critical section, you can still enter it.
- read_unlock(): Used by readers before exiting the critical section.
- write_lock(): Used by writers. Upon entry, the writer is guaranteed to be alone in the critical section. If a reader or a writer is in the critical section, wait until they all exit.
- write_unlock(): Used by writers before exiting the critical section.

Example: A linked list protected by a readers-writer lock can be iterated over by as many readers as needed (the list is not modified, so iteration doesn't break). Whenever we need to modify the list, the writer will wait for all readers to exit the critical section, take the lock (preventing new readers), modify the list, then release the lock to allow readers again.

Readers-Writer Locks: POSIX API

Header: #include <pthread.h>

Creation/destruction:
    int pthread_rwlock_init(pthread_rwlock_t *rwlock, const pthread_rwlockattr_t *attr);
    int pthread_rwlock_destroy(pthread_rwlock_t *rwlock);

Primitives:
    int pthread_rwlock_rdlock(pthread_rwlock_t *rwlock);
    int pthread_rwlock_tryrdlock(pthread_rwlock_t *rwlock);
    int pthread_rwlock_wrlock(pthread_rwlock_t *rwlock);
    int pthread_rwlock_trywrlock(pthread_rwlock_t *rwlock);
    int pthread_rwlock_unlock(pthread_rwlock_t *rwlock);
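A minimal sketch of the read-mostly pattern with this API; the configuration structure and function names are illustrative:

    #include <pthread.h>

    struct config {
        int verbosity;
        int max_conns;
    };

    static struct config cfg;
    static pthread_rwlock_t cfg_lock = PTHREAD_RWLOCK_INITIALIZER;

    int get_max_conns(void)
    {
        int val;

        pthread_rwlock_rdlock(&cfg_lock); /* many readers may hold this at once */
        val = cfg.max_conns;
        pthread_rwlock_unlock(&cfg_lock);
        return val;
    }

    void set_max_conns(int val)
    {
        pthread_rwlock_wrlock(&cfg_lock); /* waits for all readers, then excludes everyone */
        cfg.max_conns = val;
        pthread_rwlock_unlock(&cfg_lock);
    }

Frequent get_max_conns() calls proceed in parallel; only the rare set_max_conns() serializes access.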
Read-Copy-Update

One last synchronization mechanism is slightly different from the rest. All the previous mechanisms lead to some threads being blocked in order to wait for the critical section to become available. This has a performance cost that can sometimes be avoided. The Read-Copy-Update (RCU) mechanism allows lock-free access and modification of shared data structures. This is made possible by letting old readers use the old version of the data structure while new readers see the updated version.

Let's see how to achieve this on a linked list:
1. We have a linked list with four elements.
2. We create a new element to insert, fully initialized.
3. We insert it by atomically changing a single pointer from the first element.
4. Old readers have missed the new element; new readers will see it.
5. Our new list has five elements.
6. We want to delete the third element. This can be done by atomically changing a single pointer between the second and third/fourth elements.
7. We still have a consistent list, with our removed element still visible to old readers.

Read-Copy-Update: Grace Period

One issue we have here is when deleting an element. The RCU mechanism decouples the removal of the element from the data structure and its reclamation, i.e., freeing the element from memory. However, there might be old readers accessing this element, even after its removal from the data structure! To allow safe reclamation, we need to wait long enough to have no more readers on the removed element. RCU implementations usually provide an API so that readers call functions around the critical section (similar to lock/unlock). They track how many readers are in the critical section, and determine when the grace period, i.e., the duration until the removed element can be reclaimed, ends.

RCU is not very common in user programs, but it is extensively used in kernel code. Linux uses it in most subsystems for highly parallel data structures.

Priority Inversion

Synchronization is tightly coupled with scheduling, as waiting threads might be blocked by the OS. If the scheduler uses a priority-based policy, i.e., high priority threads always execute before low priority ones, and threads use synchronization mechanisms, e.g., share a lock, we can run into the following problem.

Let's assume three threads with different priorities: T_high, T_medium and T_low. T_high and T_low share a mutex to use a device, and we only have a single CPU core.
1. T_low owns the lock, preventing T_high from continuing its execution.
2. T_medium keeps running, preventing T_low from releasing the lock.

We have a priority inversion: a high priority thread is prevented from running because a low priority thread cannot release the lock.

Preventing Priority Inversion

There are various ways of solving priority inversion problems:
1. Disabling interrupts in critical sections: This prevents preemption of low priority threads while they hold locks. However, we cannot trust user programs to enable interrupts again later. This also "breaks" the rationale of the scheduling algorithm if a higher priority thread is runnable.
2. Priority ceiling: Assign a priority to the mutex itself, and give that priority to any thread that holds it. If the mutex priority is high enough, i.e., higher than the priority of any thread that needs the mutex, we won't have priority inversions.
3. Priority inheritance: When a high priority thread must wait for a mutex, its priority is given to the mutex holder until it releases the mutex. This allows the low priority thread to run and release the mutex quickly, while maintaining the priorities in the system. Linux uses priority inheritance for its priority-based real-time schedulers.
4. Random boosting: Randomly give priority boosts to mutex holders to give them a chance to complete the critical section and release the lock. Windows uses random boosting to prevent priority inversions.
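POSIX exposes priority inheritance through mutex attributes. A minimal sketch, assuming the system supports the _POSIX_THREAD_PRIO_INHERIT option (error handling omitted):

    #include <pthread.h>

    pthread_mutex_t device_lock;

    void init_device_lock(void)
    {
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        /* The holder of this mutex inherits the priority of the
         * highest-priority thread blocked on it. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&device_lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }

The related PTHREAD_PRIO_PROTECT protocol, combined with pthread_mutexattr_setprioceiling(), implements the priority ceiling strategy instead.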
Supporting Concurrency

Earlier, we said that we needed three things to properly support concurrency:
1. How can processes pass information to each other?
2. How can we make sure that threads do not get in each other's way?
3. How can we ensure that threads can synchronize their behavior?

Now that we have solved 2. and 3., let's have a look at inter-process communication to solve 1.

Inter-Process Communication

Communicating Between Processes

With the previous examples (producer-consumer, parking lot, etc.), we only discussed how to synchronize threads. We glossed over the communication means between them.

Threads in the same process share an address space!
⟹ They use the same memory and can easily communicate through buffers and variables

For processes, it is a bit more complex, as they are isolated from each other.
⟹ The operating system needs to provide communication channels/mechanisms!

We will now see a few ways of communicating between processes:
- Message passing
- Shared memory
- Signals

Message Passing

Message passing is a communication mechanism between processes that relies on two primitives: send and receive. These primitives are usually implemented as system calls with the following interface:
- send(destination, &message): Send message to destination. The message will be copied into kernel memory until it is consumed.
- receive(source, &message): Receive a message from source and store it in message. If a message is available, it is copied into the caller's address space; else, the caller can block until a message arrives. source may also be a generic value that means "any source".

The communication channel (identified by source and destination above) might be represented by various types of objects depending on the OS. They might refer to:
- A local object for inter-process communication on the local machine
- A distant object for over-the-network communication

UNIX systems use file descriptors that refer to a special file called a socket.

Message Passing: Producer-Consumer Example

Let's revisit our producer-consumer example using message passing!

    void producer(void) {
        struct item it;

        while (1) {
            it = generate_data();
            send(channel, &it);
        }
    }

    void consumer(void) {
        struct item it;

        while (1) {
            receive(channel, &it);
            do_something(it);
        }
    }

Message passing embeds synchronization semantics, taking care of both communication and synchronization!

Message Passing: Buffering

Since messages are copied between address spaces, they end up in kernel memory until they are received. Two buffering schemes can be used:
- No buffering: The kernel does not offer an intermediate buffer. The sending process blocks until the message is received.
- Buffering with N elements: The kernel provides a limited-size buffer. Senders can send messages without being blocked as long as the buffer is not full. When the buffer is full, senders are blocked until some messages are consumed by receivers.
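As one concrete instance of buffered message passing, POSIX systems provide message queues. A minimal sketch follows; the queue name and sizes are illustrative, error handling is omitted, and the receiver assumes the sender has already created the queue:

    #include <mqueue.h>
    #include <fcntl.h>

    #define QUEUE_NAME "/demo_queue"

    void sender(void)
    {
        /* Kernel-side buffer of at most 8 messages of 64 bytes each. */
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
        mqd_t q = mq_open(QUEUE_NAME, O_CREAT | O_WRONLY, 0600, &attr);

        /* Blocks if 8 messages are already queued (buffer full). */
        mq_send(q, "hello", 6, 0);
        mq_close(q);
    }

    void receiver(void)
    {
        char buf[64];
        mqd_t q = mq_open(QUEUE_NAME, O_RDONLY);

        /* Blocks until a message is available. */
        mq_receive(q, buf, sizeof(buf), NULL);
        mq_close(q);
    }

On Linux, this is linked with -lrt, the queue lives in kernel memory, and the blocking behavior matches the "buffering with N elements" scheme above.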
Message Passing: Limitations

Networking considerations

Message passing can implement communication locally or across the network transparently. However, over the network, we might suffer data losses on the way, as well as increased security risks.
- Data loss: To ensure that a message has arrived, the receiver can send an acknowledgement message to the sender. If a sender doesn't receive it, it can assume the message was lost and send it again. Since acknowledgements can also be lost (or delayed), messages usually carry a sequence identifier to detect multiple receptions of identical messages.
- Authentication: To avoid communicating with undesired participants over the network, we also need a proper way of identifying processes unambiguously, to make sure we are not communicating with an impostor.

Thus, when the OS provides message passing, it usually transparently provides these features through different network protocols. We won't detail that here, but you'll get familiar with it in the Data Communication course next semester.

Performance

To enable communication across address spaces and synchronization, message passing interfaces suffer from a non-negligible performance cost:
- Multiple copies: Messages are copied twice: from the sender to the kernel, and from the kernel to the receiver.
- System calls: We need to perform system calls to send and receive messages. This incurs the cost of switching between user and supervisor modes.

Sockets

In the UNIX world, sockets are special files that can be bound to a network interface. They provide a message passing interface, with an API tailored for network communications. A socket is created with the socket() function, which returns a file descriptor. This file descriptor is then used to interact with the socket using the API functions:
- bind(): Associate the socket with an IP address and port number
- listen() (server side): Start listening for incoming connections
- connect() (client side): Connect the socket to a remote server (IP address and port)
- accept() (server side): Accept incoming connections from a client
- send()/recv(): Used to communicate over the socket
- select()/poll(): Check the status of a socket (can it be read from/written to?)

UNIX Pipes

UNIX pipes are another message passing API provided by UNIX systems. Pipes connect file descriptors of multiple processes, so that one process writes into the pipe and the other one reads from it. Thus, pipes are unidirectional. Pipes are implemented as a memory buffer in kernel memory, to which calls to read()/write() are redirected. Reads block when the pipe is empty, and writes block when the pipe is full.

For example, running the following shell line creates a pipeline of the three programs A, B and C, where A's standard output feeds B's standard input, and B's standard output feeds C's standard input:

    A | B | C

Pipes can also be created programmatically using the POSIX functions pipe() or mkfifo().
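Here is a minimal sketch of a pipe created programmatically, with a parent process sending a message to its child (error handling omitted):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fds[2]; /* fds[0]: read end, fds[1]: write end */
        char buf[32];

        pipe(fds);

        if (fork() == 0) {  /* child: the reader */
            close(fds[1]);  /* close the unused write end */
            ssize_t n = read(fds[0], buf, sizeof(buf)); /* blocks until data arrives */
            printf("child read %zd bytes: %s\n", n, buf);
            return 0;
        }

        close(fds[0]);      /* parent: the writer, closes the unused read end */
        write(fds[1], "hello", 6);
        close(fds[1]);
        wait(NULL);
        return 0;
    }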
Shared Memory

Another way for processes to communicate is to directly share memory. We already briefly discussed this when we mentioned how processes are forked. Thanks to virtual memory, multiple processes can have their respective virtual pages point to the same physical page frames. For example, processes A and B can each have a virtual page pointing to the same page frame, even if the virtual addresses differ. Both processes can then use this memory normally, and they will both see each other's modifications. Thus, they need to use the previously described synchronization mechanisms to avoid race conditions.

Shared Memory: POSIX API

Creating shared memory between multiple processes is done in three steps on POSIX systems:

1. Each process needs to create/open a shared memory object, represented by a file descriptor, with the following function:

    int shm_open(const char *name, int oflag, mode_t mode);

Only one process needs to create this file; the others can open it using the path of this file. Note that this is not a regular file! It is not located on any storage device, but fully in memory!

2. Set the size of the shared memory region with:

    int ftruncate(int fd, off_t length);

Usually, only the process that creates the shared memory region does this.

3. Map the shared memory region into each process' virtual address space with:

    void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);

The returned address can then be used as a pointer to the start of the shared memory region. The shared memory is then accessed like any private memory, albeit usually in conjunction with synchronization mechanisms.

Memory-Mapped Files

More generally, the same idea can be applied to any regular file, not just shared memory objects! This can be achieved by using the mmap() function on any open regular file. Depending on the flags used, the level of sharing can vary, due to the existence of the page cache.

Signals

One last communication mechanism, slightly different from the previous ones, is signals. A signal is an asynchronous notification sent between processes, with a signal number specifying the reason for the signal. When a signal is received, the process executes a signal handler, and then returns to where it was previously. Signals can be seen as a form of software interrupts, fully managed by the OS kernel, not the hardware interrupt controller.

You can send signals to a process with:

    int kill(pid_t pid, int sig_nr);

Signal numbers encode the reason for the signal, e.g., KILL, STOP, INT, SEGV, etc. On the receiver side, signal handlers can be redefined with the signal()/sigaction() functions. Similar to interrupts, most signals can be masked, except a few non-maskable signals.

Synchronicity: Unlike hardware interrupts, signals are not synchronous! The kernel triggers the jump to the signal handler whenever it wants to, usually while exiting the kernel due to an interrupt or a system call.

Signal count: Note that a process cannot know how many times it received a signal before handling it, as pending signals are only represented by a bit in a bitmask.
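A minimal sketch of redefining a handler with sigaction(), here for SIGINT; the handler body and flag variable are illustrative:

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint;

    static void on_sigint(int sig_nr)
    {
        got_sigint = 1; /* only async-signal-safe operations belong in a handler */
    }

    int main(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_sigint;
        sigaction(SIGINT, &sa, NULL); /* redefine the handler for SIGINT */

        while (!got_sigint)
            pause(); /* sleep until any signal arrives */
        write(1, "got SIGINT\n", 11);
        return 0;
    }

Note the sig_atomic_t flag: since the handler runs asynchronously, the main loop and the handler communicate through this single, atomically-written variable.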
Deadlocks

Granting Access to Shared Resources

Shared resources may need to be accessed by only one thread at a time, in mutual exclusion. We can classify resources into two types:
- Preemptable resources can be taken away from the owning process without breaking the application logic or the system. For example, memory is preemptable, as it can be taken away from a process through swapping.
- Nonpreemptable resources, on the other hand, cannot be taken away as easily. For example, if a process owns a printer while it is printing, the OS cannot take the printer away without breaking both the process' application logic and the printer's behavior: we would get a broken printout, mixing data from the previous and the new owner of the resource.

When multiple resources are involved, things get messier. Let's see an example posed by Dijkstra in 1965: the dining philosophers.

Dining Philosophers Problem

Problem: Five philosophers are seated around a circular table, alternating between thinking and eating. Each one has a plate filled with food in front of them. There is also one fork between each pair of plates. Unfortunately, the food is so slippery that a philosopher needs two forks to eat.

First simple solution:

    struct mutex forks[5];

    void philosopher(int id) {
        struct mutex *left_fork = &forks[id % 5];
        struct mutex *right_fork = &forks[(id + 1) % 5];

        think();            // think for a random amount of time
        lock(left_fork);    // take the fork on my left
        lock(right_fork);   // take the fork on my right
        eat();
        unlock(right_fork); // put back the fork on my right
        unlock(left_fork);  // put back the fork on my left
    }

What happens if all philosophers take their left fork at the same time?

Deadlocks

If all philosophers hold their left fork, none can take a right fork, and the program stops progressing. This is called a deadlock.

Definition: A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause.

In our dining philosophers code, each philosopher is waiting for another philosopher to release their left fork.

Conditions for a deadlock:
1. Mutual exclusion condition. Each resource is either currently assigned to exactly one process or available.
2. Hold-and-wait condition. Processes currently holding resources that were granted earlier can request new resources.
3. No-preemption condition. Resources previously granted cannot be forcibly taken away from a process. They must be explicitly released by the process holding them.
4. Circular wait condition. There exists a circular chain of two or more processes that are each waiting for a resource held by the next member of the chain.

All four conditions must be present for a deadlock to occur! Now, let's see how we can prevent deadlocks.

The Ostrich Algorithm

The first strategy is to stick your head in the sand and ignore the existence of the problem. While this technically doesn't solve the deadlock problem, it still makes sense in some cases. Let's imagine a system where a deadlock statistically occurs once a year. If this system can recover by restarting, and you need to restart it every month for maintenance (e.g., updates) anyway, why bother solving this deadlock? This is even more true since eliminating the deadlock will most likely have a performance cost. Is it worth reducing the performance of the whole system for a problem that rarely occurs and can be solved by restarting?

Detect and Recover

One way to detect potential deadlocks is to build a dependency graph of all resources. At run time, whenever a thread acquires a resource, we add this acquisition to the graph. If we detect a cycle in the graph, a deadlock can occur! The Linux kernel implements such a detector, lockdep, to help developers detect potential deadlocks. Since this tool has a performance cost, it is usually enabled only during the development and testing phases.

Recovery heavily depends on the nature of the resources and on the program itself. It might be achieved by forcing the preemption of a resource, killing a process to break the cycle, or rolling back to a previous state and retrying.
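To make such a cycle concrete, here is the two-lock ordering mistake from the semaphore limitations discussion, written with POSIX mutexes. This is an illustrative sketch: the sleep() calls only widen the race window so that the deadlock shows up reliably when you run it.

    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *thread1(void *arg)
    {
        pthread_mutex_lock(&lock_a);
        sleep(1);                     /* give thread 2 time to grab lock_b */
        pthread_mutex_lock(&lock_b);  /* blocks: thread 2 holds lock_b, waits for lock_a */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    static void *thread2(void *arg)
    {
        pthread_mutex_lock(&lock_b);
        sleep(1);
        pthread_mutex_lock(&lock_a);  /* blocks: thread 1 holds lock_a, waits for lock_b */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL); /* typically never returns: the threads deadlock */
        pthread_join(t2, NULL);
        return 0;
    }

All four deadlock conditions hold here; a lockdep-style detector would flag the A-then-B / B-then-A cycle.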
Deadlock Prevention

Deadlocks are defined by the four conditions we discussed earlier. To prevent deadlocks, one can break one of these conditions, leading to a scenario where a deadlock cannot happen.
1. Breaking the mutual exclusion condition. Removing situations where resources need to be in mutual exclusion is one possibility. For example, having a daemon manage a resource can achieve this: all users submit jobs, and the daemon picks the jobs and runs them on the shared resource. Thus, the resource is always owned only by the daemon.
2. Breaking the hold-and-wait condition. If processes acquire all the necessary resources at once when starting, we break this condition. In practice, this is difficult to achieve, as resources might not be known in advance.
3. Breaking the no-preemption condition. Allowing forced resource preemption breaks this condition. However, in practice, this is rarely possible, as it might also break the system/resource.
4. Breaking the circular wait condition. This condition can be broken either by allowing only one resource to be held at a time, or by enforcing a strict ordering on resource acquisition.

Other Types of Deadlocks

Protocol deadlocks: Another type of deadlock is not directly tied to resources, but to communication protocols. Here, there are no resources; processes are blocked waiting for another process to trigger an event. For example, imagine two processes A and B communicating through message passing. If A waits for a request from B, and B waits for a request from A, then both A and B are blocked by each other. These deadlocks are solved during the system design phase, with robust protocols. Again, you will tackle this in more detail in the Data Communication course next semester.

Livelocks: Livelocks are similar to deadlocks, except that the processes involved are not blocked on a resource, but actively working without making progress. This can happen when using active polling, for example.

Back To Our Starving Philosophers

In our previous naive solution, our philosophers suffered from starvation.

Definition: Starvation is the situation that occurs when one or more threads fail to make progress, even when no food is involved in the programs.

Let's see a first working solution by breaking the circular wait condition!

    struct mutex forks[5];

    void take_forks(int id) {
        // we always take the forks in the order of their ids
        int fork1 = id;
        int fork2 = (id + 1) % 5;

        lock(&forks[min(fork1, fork2)]);
        lock(&forks[max(fork1, fork2)]);
    }

    void put_forks(int id) {
        unlock(&forks[id]);
        unlock(&forks[(id + 1) % 5]);
    }

    void philosopher(int id) {
        think();        // think for a random amount of time
        take_forks(id);
        eat();
        put_forks(id);
    }

Everyone takes their left fork first, except the last philosopher (id = 4), who takes their right fork first. This way, this philosopher doesn't prevent the first philosopher (id = 0) from taking both forks, thus breaking the circular wait condition.
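For completeness, here is a self-contained pthread version of this ordered-locking solution — a sketch, not the lecture's reference implementation; the round count and sleep duration are arbitrary:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define N 5

    static pthread_mutex_t forks[N];

    static int min(int a, int b) { return a < b ? a : b; }
    static int max(int a, int b) { return a > b ? a : b; }

    static void *philosopher(void *arg)
    {
        int id = *(int *)arg;
        int fork1 = id;
        int fork2 = (id + 1) % N;

        for (int round = 0; round < 3; round++) {
            usleep(1000);                                  /* think */
            pthread_mutex_lock(&forks[min(fork1, fork2)]); /* lower-numbered fork first */
            pthread_mutex_lock(&forks[max(fork1, fork2)]);
            printf("philosopher %d eats\n", id);           /* eat */
            pthread_mutex_unlock(&forks[fork2]);
            pthread_mutex_unlock(&forks[fork1]);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[N];
        int ids[N];

        for (int i = 0; i < N; i++)
            pthread_mutex_init(&forks[i], NULL);
        for (int i = 0; i < N; i++) {
            ids[i] = i;
            pthread_create(&t[i], NULL, philosopher, &ids[i]);
        }
        for (int i = 0; i < N; i++)
            pthread_join(t[i], NULL);
        return 0;
    }

No interleaving of these five threads can produce a circular wait, since fork mutexes are always acquired in increasing index order.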
