Deadlocks and Memory Management

Summary

This document discusses deadlocks in operating systems: their causes, modelling with resource-allocation graphs, detection, recovery, prevention, and avoidance strategies. It then covers memory management, including paging, swapping, virtual memory, demand paging, and page replacement algorithms.

Deadlocks

A deadlock is a set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set. A process in an operating system uses a resource in the following way:

- Requests the resource
- Uses the resource
- Releases the resource

Deadlock can arise only if the following four conditions hold simultaneously:

- Mutual Exclusion: One or more resources are non-sharable (only one process can use them at a time).
- Hold and Wait: A process is holding at least one resource while waiting to acquire additional resources held by other processes.
- No Pre-emption: A resource cannot be taken from a process unless the process voluntarily releases it.
- Circular Wait: A set of processes are waiting for each other in a circular chain.

Deadlock Modeling

Resource-Allocation Graph: In some cases, deadlocks can be understood more clearly through resource-allocation graphs, which have the following properties:

- A set of resource categories, { R1, R2, R3, ..., RN }, which appear as square nodes on the graph. Dots inside a resource node indicate specific instances of the resource (e.g., two dots might represent two laser printers).
- A set of processes, { P1, P2, P3, ..., PN }.
- Request edges: directed arcs from Pi to Rj, indicating that process Pi has requested Rj and is currently waiting for that resource to become available.
- Assignment edges: directed arcs from Rj to Pi, indicating that resource Rj has been allocated to process Pi and that Pi is currently holding Rj.

The structure of a resource-allocation graph helps determine the possibility of deadlock:

- No cycles = no deadlock: If the graph has no cycles, deadlock cannot occur.
- Cycles = potential deadlock: If a cycle exists, deadlock might occur. However:
  - If every resource type in the cycle has only one instance, deadlock is certain.
  - If some resource types in the cycle have multiple instances, deadlock is not guaranteed.

Deadlock Detection and Recovery

Deadlock detection is the process of actually determining that a deadlock exists and identifying the processes and resources involved.

- If all resources have a single instance, detection reduces to running a cycle-checking algorithm on the resource-allocation graph; in this case a cycle is both a necessary and a sufficient condition for deadlock (a sketch of such a cycle check appears at the end of this section). For example, if resources R1 and R2 each have a single instance and the graph contains the cycle R1 → P1 → R2 → P2 → R1, deadlock is confirmed.

Recovery: Process Termination

There are two basic approaches, both of which recover the resources allocated to the terminated processes:

- Terminate all processes involved in the deadlock. This definitely resolves the deadlock, but at the expense of terminating more processes than strictly necessary.
- Terminate processes one by one until the deadlock is broken. This is more conservative, but requires re-running deadlock detection after each termination.

In the latter case, many factors can go into deciding which process to terminate next:

- Process priority and how long it has been running.
- Resources held by the process (ease of preemption and restoration).
- Resources still needed for completion.
- Number of processes that would have to be terminated.
- Whether the process is interactive or batch.
- Whether the process has made non-restorable changes to any resource.
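For the single-instance case, the cycle check mentioned above is ordinary cycle detection on a directed graph. Here is a minimal sketch in Python; the adjacency-dict representation and the node names (R1, P1, ...) are illustrative assumptions, not a specific OS implementation:

```python
# Minimal sketch: cycle detection in a resource-allocation graph.
# The graph is a dict mapping each node (process or resource) to the
# nodes it points to: request edges (P -> R) and assignment edges
# (R -> P) together. All names here are hypothetical.

def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2           # unvisited / on stack / done
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, ()):
            if color.get(succ, WHITE) == GRAY:    # back edge -> cycle
                return True
            if color.get(succ, WHITE) == WHITE and dfs(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# Example: the single-instance cycle R1 -> P1 -> R2 -> P2 -> R1.
rag = {"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": ["R1"]}
print(has_cycle(rag))   # True: with single instances, deadlock is confirmed
```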
Resource Preemption

When pre-empting resources to relieve deadlock, there are three important issues to address:

- Selecting a victim: Deciding which resources to pre-empt from which processes involves many of the same decision criteria outlined above.
- Rollback: Ideally, a preempted process is rolled back to a safe state from before the resource was allocated. However, since determining such a state can be difficult, the simplest safe approach is to abort the process and restart it from the beginning.
- Starvation: Occurs when a process is continually preempted and never gets to hold its resources long enough to finish. One solution is a priority scheme in which a process's priority is raised each time it is preempted; eventually its priority becomes high enough that it is no longer chosen as a victim.

Deadlock Avoidance

- Deadlock avoidance anticipates deadlock before it happens, using advance knowledge of resource usage: available resources, current allocations, and future requests and releases.
- Most avoidance algorithms require each process to declare in advance the maximum resources it may need.
- Based on this information, the system can decide whether a process should wait for a resource in order to avoid a circular wait.
- A system is in a safe state if there is some order in which all processes can run to completion, even if each requests its declared maximum. If the system is in a safe state, the goal is to keep it there and never move to an unsafe state.
- Deadlock avoidance algorithms refuse a resource allocation if granting it would lead to an unsafe state (a sketch of this safety check follows below).
- These algorithms may result in low resource utilization, since allocations are sometimes delayed unnecessarily.
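The best-known avoidance algorithm is the Banker's algorithm, whose core is exactly this safe-state test. A minimal sketch in Python; the matrices and the example numbers are hypothetical:

```python
# Minimal sketch of the safe-state test used by avoidance algorithms
# (the core of the Banker's algorithm). "available", "allocated", and
# "maximum" are hypothetical example data; need = maximum - allocated.

def is_safe(available, allocated, maximum):
    n = len(allocated)                       # number of processes
    m = len(available)                       # number of resource types
    work = available[:]                      # resources free right now
    finish = [False] * n
    need = [[maximum[i][j] - allocated[i][j] for j in range(m)]
            for i in range(n)]

    # Repeatedly find a process whose remaining need fits in `work`,
    # let it run to completion, and reclaim its allocation.
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocated[i][j]
                finish[i] = True
                progress = True
    return all(finish)                       # safe iff every process can finish

# Example: 3 processes, 2 resource types.
print(is_safe(available=[3, 2],
              allocated=[[0, 1], [2, 0], [1, 1]],
              maximum=[[4, 2], [3, 1], [2, 2]]))   # True -> safe state
```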
Deadlock Prevention

We can prevent deadlock by eliminating any one of the four conditions under which deadlock can occur:

- Mutual Exclusion: Make resources sharable wherever possible (e.g., spooling printer output), so that processes do not need exclusive access.
- Hold and Wait: Require a process to request all of its resources at once, so it never holds some resources while waiting for others.
- No Pre-emption: Allow the system to take resources away from a process instead of waiting for the process to release them.
- Circular Wait: Impose a global ordering on resource types and require processes to request resources in that order, so a circular chain of waiting cannot form.

Memory Management

- Memory is the part of a computer that stores data.
- The part of the operating system that manages memory is called the memory manager.
- The computer can change only data that is in main memory (RAM). Therefore, every program we execute and every file we access must be copied from a storage device into main memory.
- The main tasks of the memory manager are to keep track of which parts of memory are in use and which are free, to decide which locations to allocate to a process, and to deallocate main memory when the process completes.
- Another task is to swap data between main memory and disk.

Memory Management – Dynamic Loading and Linking

- All programs are loaded into main memory for execution. Sometimes a complete program is loaded into memory, but sometimes a routine of the program is loaded only when it is actually called. This mechanism is called Dynamic Loading, and it improves performance by not loading code that is never used.
- Also, one program often depends on others. Rather than loading every dependency up front, the system links a dependent module to the main executing program only when it is required. This mechanism is known as Dynamic Linking.

Memory Management – Paging and Swapping

In computer operating systems, paging is a memory-management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages.

Swapping: A process needs to be in memory to execute, but sometimes there is not enough main memory to hold all the currently active processes in a timesharing system, so excess processes are kept on disk and brought in to run dynamically. Swapping is the act of bringing a process into main memory, running it for a while, and then putting it back on disk.

Monoprogramming without Swapping or Paging

- One process is in memory at all times and has the entire memory available to it.
- Only one process at a time can be running, so this scheme is not practical for multiprogramming.
- Used on older operating systems such as MS-DOS.
- As soon as the user types a command, the operating system copies the requested program from disk to memory and executes it.
- When the process finishes, the operating system displays a prompt character and waits for a new command; when it receives one, it loads the new program into memory, overwriting the first one.

Multiprogramming with Fixed Partition

In a multiprogramming environment, several programs reside in primary memory at a time, and the CPU passes control rapidly between them. One way to support multiprogramming is to divide main memory into several partitions, each allocated to a single process. There are two types of memory partitioning:

- Static partitioning: Memory is divided into a number of partitions whose sizes are set at the beginning and remain fixed thereafter.
- Dynamic partitioning: The size and number of partitions are decided at runtime by the operating system.

Memory Management Schemes Based on Contiguous Allocation

Fixed partitioning divides memory into fixed-size partitions, each holding one program, so the number of programs in memory is limited by the partition count. When a program ends, its partition becomes available for the next program in the queue (a small simulation of this scheme follows below).
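To make the fixed-partition scheme concrete, here is a minimal simulation in Python; the partition sizes and job names are made-up example values, not taken from the notes:

```python
# Minimal sketch of static (fixed) partitioning: each partition holds
# at most one job, and a job fits only in a partition at least as
# large as the job. Sizes in KB are hypothetical example values.

PARTITIONS = [100, 500, 200, 300]            # sizes fixed at boot time
occupied = [None] * len(PARTITIONS)          # job currently in each slot

def load_job(name, size):
    """Place a job in the first free partition big enough for it."""
    for i, part in enumerate(PARTITIONS):
        if occupied[i] is None and size <= part:
            occupied[i] = name
            waste = part - size              # internal fragmentation
            print(f"{name} ({size} KB) -> partition {i} "
                  f"({part} KB, {waste} KB wasted)")
            return True
    print(f"{name} ({size} KB) must wait: no free partition fits")
    return False

# Job D must wait even though 200 KB is free: the partition is too small.
for job, size in [("A", 90), ("B", 450), ("C", 250), ("D", 250)]:
    load_job(job, size)
```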
Virtual Memory

- An operating system's virtual memory feature enables a computer to address more memory than is physically installed on the system. The extra capacity comes from a portion of the hard drive configured to act as an extension of the computer's RAM; the combined abstraction presented to programs is known as virtual memory.
- Large programs can store themselves in the form of pages in virtual memory while executing, and only the required pages or portions of processes are loaded into main memory.

In practice, most processes never need all their pages at once, for the following reasons:

- Error-handling code is not needed unless that specific error occurs, and some errors are quite rare.
- Arrays are often sized for worst-case scenarios, and only a small fraction of each array is actually used in practice.
- Certain features of certain programs are rarely used.

Benefits of Having Virtual Memory

Virtual memory allows large programs to be written, since the virtual address space is much larger than physical memory. It reduces I/O, enabling faster and easier process swapping. With virtual memory, each program occupies less physical space, allowing more programs to run simultaneously and increasing CPU utilization and throughput.

Demand Paging

- The basic idea behind demand paging is that when a process is swapped in, its pages are not all swapped in at once. Rather, they are swapped in only when the process needs them (on demand).
- The CPU contains a Memory Management Unit (MMU) whose job is to translate virtual addresses into physical addresses.
- The relation between virtual addresses and physical memory addresses is given by the page table.

Page Table

- A page table is the data structure used by a virtual memory system to store the mapping between virtual addresses and physical addresses.
- When a process requests data in memory, the operating system maps the virtual address provided by the process to the physical address in actual memory where the data resides.
- Each mapping is known as a page table entry (PTE).

Page Replacement Algorithms

Page replacement algorithms determine which page to evict during a page fault. A page fault occurs when a program accesses a page that is not loaded in physical memory, prompting the OS to replace a resident page with the required one. A page fault is a type of interrupt.

- First In, First Out (FIFO): FIFO is the simplest page replacement algorithm. Pages are kept in a queue, with the oldest page at the front; when a replacement is needed, the oldest page is evicted. FIFO suffers from Belady's anomaly: increasing the number of page frames can sometimes result in more page faults rather than fewer. This counterintuitive behavior highlights FIFO's weakness in certain scenarios (a demonstration follows this list).
- Least Recently Used (LRU): The page that has not been used for the longest time in main memory is the one selected for replacement. Conceptually it can be implemented by keeping a list of pages ordered by recency of use, but updating that list on every memory reference makes exact LRU expensive in practice.
- Not Recently Used (NRU): To let the operating system collect useful page-usage statistics, most computers with virtual memory have two status bits, R and M, associated with each page: R is set whenever the page is referenced (read or written), and M is set when the page is written to (modified). The bits are kept in each page table entry. The R bits are cleared periodically; on a page fault, the pages are grouped into classes by their (R, M) values, and NRU evicts a page at random from the lowest-numbered non-empty class.
- Second Chance: The Second-Chance algorithm looks for an old page that has not been referenced recently. If every page has been referenced, it degenerates into FIFO: the OS keeps moving pages to the end of the list, clearing their R bits each time, until it cycles back to a page whose R bit is clear; that page is evicted, so the algorithm always terminates.
- Clock: Second Chance can be inefficient because of the constant movement of pages in the list. A more efficient variant arranges the page frames in a circular list with a "clock hand". When a page fault occurs, the page pointed to by the hand is inspected: if its R bit is 0, the page is evicted and replaced; if R is 1, the bit is cleared and the hand advances to the next page.
- Working Set: The working set of a process is the subset of pages it is actively using during execution. If the entire working set fits in memory, the process runs efficiently with few page faults. If memory is too small to hold the working set, the process experiences many page faults and runs slowly, because loading a page from disk takes on the order of 10 milliseconds while executing an instruction takes nanoseconds. A program that causes page faults every few instructions is said to be thrashing.
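Belady's anomaly is easy to reproduce. Below is a minimal FIFO sketch in Python, run on the classic textbook reference string (the string and frame counts are standard example values, not from these notes):

```python
# Minimal sketch: FIFO page replacement, used to reproduce Belady's
# anomaly on the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5.

from collections import deque

def fifo_faults(references, num_frames):
    frames = deque()                 # oldest page at the left
    faults = 0
    for page in references:
        if page not in frames:       # page fault: page not resident
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()     # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: more frames, yet more faults
```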
Design Issues for Paging Systems

- Local vs. Global Allocation: If a local algorithm is used and a process's working set grows, thrashing will result even if there are plenty of free page frames. If a global algorithm is used, each process can be started with a number of page frames proportional to its size.
- PFF (Page Fault Frequency) Algorithm: Tells the system when to increase or decrease a process's page allocation, but says nothing about which page to replace on a fault.
- Load Control: Thrashing occurs when the combined working sets of all processes exceed memory capacity, even with optimal algorithms. A key indicator is the PFF algorithm showing some processes needing more memory while none need less. The solution is to swap some processes out to disk, redistributing their page frames to the remaining processes.
- Page Size: Internal fragmentation occurs when data does not fully occupy its last allocated page, leaving unused space. Larger page sizes, such as 32 KB, waste more space per program than smaller sizes such as 16 KB or 4 KB, so smaller pages generally reduce internal fragmentation (at the cost of larger page tables).
- Separate Instruction and Data Spaces: Most computers use a single address space for both program and data, which works well if the space is large enough. When it is too small, fitting everything in becomes a challenge; one solution is to provide separate address spaces for instructions (I-space) and data (D-space).
- Shared Pages: In large multiprogramming systems, sharing pages is efficient when multiple users run the same program. Program-text pages can be shared easily, but data pages are more complex to share.
- Shared Libraries: Modern systems use numerous large libraries, such as I/O and graphics libraries, shared by many processes. Statically binding these libraries into every executable would make programs enormously bloated.
- Mapped Files: A process can use a system call to map a file into its virtual address space, so the file's pages are brought in on demand rather than up front. Mapped files offer an alternative I/O model, allowing a file to be accessed as a memory array instead of through read/write operations. If multiple processes map the same file, they can communicate via shared memory (see the sketch after this list).
- Cleaning Policy: Paging works best when plenty of free page frames are available at page-fault time. If all frames are full, a page must be written to disk before a new one can be loaded. To manage this, a paging daemon runs in the background, periodically freeing page frames by evicting pages according to a page replacement algorithm.
- Virtual Memory Interface: In advanced systems, programmers can control memory mapping directly for better performance. They can share memory regions between processes by naming them, allowing one process to map the region into another process's address space. In addition, instead of copying message data, processes can pass messages by unmapping and remapping pages, reducing the cost of data transfer.
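As an illustration of mapped files, here is a minimal sketch using Python's standard mmap module; the filename and contents are made-up examples:

```python
# Minimal sketch of a memory-mapped file: the file's pages are brought
# in on demand, and the file is accessed like a byte array instead of
# via explicit read/write calls. "example.dat" is a hypothetical name.

import mmap

with open("example.dat", "w+b") as f:
    f.write(b"hello, mapped world")
    f.flush()
    with mmap.mmap(f.fileno(), 0) as mapped:   # map the whole file
        print(mapped[0:5])                     # read like an array: b'hello'
        mapped[0:5] = b"HELLO"                 # write through the mapping
        mapped.flush()                         # push dirty pages to disk

print(open("example.dat", "rb").read())        # b'HELLO, mapped world'
```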
