
Operating System Chapter-3 Memory Management

1. List out Memory allocation methods.

1. Contiguous Memory Allocation
   - Single Process Monitor
   - Multiprogramming with Fixed partitioning
   - Multiprogramming with Dynamic partitioning
2. Non-Contiguous Memory Allocation
   - Paging
   - Segmentation

2. Explain Contiguous memory allocation method.

This is a simple and old method of allocation. It is not used in modern operating systems.
Each process occupies a block of contiguous memory locations in main memory; the entire process is kept together in one contiguous section of memory.
When a process is brought into memory, memory is searched to find a chunk of free space large enough to hold the process. Once such a chunk is found, the required memory is allocated.
If a contiguous memory space of the required size is not available in main memory, the process is made to wait until contiguous space of the required size becomes available.
The logical address space is not divided into any partitions, and the physical address space is also contiguous, without any gaps.

[Figure: logical address space of process P (0000-0799) mapped to contiguous physical addresses 6000-6799]

Advantages: It is easy to implement and understand.
Disadvantages: It has poor memory utilization.

3. Explain Non-Contiguous memory allocation method.

This method is used by most modern operating systems.
The logical address space of a process is divided into partitions, and each partition is allocated a contiguous chunk of free memory. The physical address space is therefore not contiguous.
For example, if the logical address space of a process is divided into three partitions (A, B and C), each partition is allocated a separate chunk of memory in physical memory.

[Figure: logical address space with partitions A, B and C placed in separate, non-contiguous chunks of physical memory]

Advantages: Better memory utilization.
Disadvantages: It is more complex to implement.

4. Explain Multiprogramming with Fixed (static) Partitions.

This method allows multiple processes to execute simultaneously. Memory is shared among the operating system and the various simultaneously running processes. Multiprogramming increases CPU utilization.
Here, memory is divided into fixed-size partitions whose sizes are decided in advance and do not change at run time. Partition sizes can be equal or unequal; generally, unequally sized partitions are used for better memory utilization.
Each partition can accommodate exactly one process, i.e. only a single process can be placed in one partition.
Whenever a program needs to be loaded into memory, a free partition big enough to hold the program is found and allocated to that program. If no free partition of the required size is available, the process has to wait and is put in a queue. Two arrangements are possible:
1. Using multiple input queues (one per partition)
2. Using a single input queue

[Figure: memory divided into fixed partitions 1-4 above the operating system]

5. Explain Multiprogramming with Dynamic Partitions.

This method also allows multiple processes to execute simultaneously. Memory is shared among the operating system and the various simultaneously running processes.
Memory is not divided into any fixed-size partitions, and the number of partitions is not fixed. A process is allocated exactly as much memory as it requires.
Whenever a process enters the system, a chunk of free memory big enough to fit the process is found and allocated (a first-fit search sketch follows this question). If enough free memory is not available to fit the process, the process waits until the required memory becomes available.

[Figure: dynamic partitioning - OS (8M), Process 1 (20M) and Process 2 (14M) allocated from the free 56M region]

Advantages:
1. Better utilization of memory.
2. There is no internal fragmentation.
3. The degree of multiprogramming is not fixed here.
Disadvantages:
1. External fragmentation is possible.
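As a rough illustration of how a hole can be chosen under dynamic partitioning, the following C sketch walks a list of free holes and picks the first one large enough (first-fit). The hole list, its sizes and the splitting behaviour are illustrative assumptions, not part of any particular operating system.

```c
#include <stdio.h>
#include <stddef.h>

/* Illustrative free-hole descriptor for dynamic partitioning (assumed layout). */
struct hole {
    size_t start;   /* starting address of the free chunk */
    size_t size;    /* size of the free chunk in bytes    */
};

/* First-fit: return the index of the first hole that can hold `request`,
 * shrink that hole by the allocated amount, and report the chosen address.
 * Returns -1 when no hole is big enough (the process would have to wait). */
static long first_fit(struct hole *holes, int count, size_t request, size_t *addr)
{
    for (int i = 0; i < count; i++) {
        if (holes[i].size >= request) {
            *addr = holes[i].start;        /* allocate from the front of the hole   */
            holes[i].start += request;     /* the rest stays behind as a smaller hole */
            holes[i].size  -= request;
            return i;
        }
    }
    return -1;                             /* the scattered holes may still add up to
                                              enough space: external fragmentation   */
}

int main(void)
{
    struct hole holes[] = { { 8u * 1024, 14u * 1024 }, { 40u * 1024, 56u * 1024 } };
    size_t where;

    if (first_fit(holes, 2, 20u * 1024, &where) >= 0)
        printf("20K process placed at address %zu\n", where);  /* second hole is used */
    else
        printf("process must wait: no hole is large enough\n");
    return 0;
}
```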
6. What is Fragmentation? Explain in brief.

Fragmentation refers to unused memory that cannot be allocated to any process: free memory is available, but it cannot be used.
Fragmentation is an unwanted problem in the operating system. As processes are loaded into and removed from memory, the free memory space becomes broken into small pieces (holes). New arriving processes cannot be assigned to these small blocks, so the blocks stay unused, resulting in inefficient memory use.
There are two different kinds of fragmentation:
1. External fragmentation
2. Internal fragmentation

1. External fragmentation
It refers to the wastage of free memory between partitions, caused by scattered non-contiguous free space. This is a severe problem in the contiguous memory allocation method with dynamic partitioning. Memory allocation and de-allocation operations eventually result in small holes in memory, but because memory must be allocated contiguously here, these holes cannot be used. This type of memory wastage is called external fragmentation.

2. Internal fragmentation
It refers to the wastage of free memory within a partition, caused by the difference between the size of the partition and the size of the process loaded into it. For example, loading a 14M process into a fixed 16M partition wastes 2M inside that partition.

7. Explain Memory Relocation and Protection.

Memory Relocation
A process can be loaded into any partition of main memory, i.e. at any location in main memory. Addresses in the logical address space and the physical address space will therefore not be the same. A logical address specifies the location of an instruction or data item within the process address space, while a physical address specifies the actual location in main memory. Physical addresses are required to actually fetch information, so whenever there is a reference to a logical address, it must be converted to a physical address. This problem is called memory relocation.

Memory Protection
Multiprogramming allows more than one process to run simultaneously, and memory is shared among the various processes as well as the operating system. All concurrent processes are in memory at the same time, so they should not have access to unauthorized information of other processes: each process should read and write only data belonging to that process. This problem is called memory protection.
Here a pair of registers is used: the limit register and the base register. The limit register stores the size of the process, and the base register stores the starting location of the process in main memory. Whenever a process is loaded into main memory, its starting location and size are stored in these two registers respectively. A CPU-generated logical address starts from 0 and goes up to the size of the process; a sketch of the resulting check and translation is given below.
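A minimal sketch of relocation and protection with a base/limit register pair, assuming the hardware simply compares against the limit and adds the base. The register values and the fault message are illustrative assumptions.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Base/limit registers loaded by the OS when the process is dispatched
 * (the values here are illustrative). */
static uint32_t base_reg  = 6000;   /* starting physical address of the process */
static uint32_t limit_reg = 800;    /* size of the process in bytes             */

/* Translate a CPU-generated logical address. Protection: any logical address
 * outside [0, limit) is rejected; relocation: valid addresses are offset by base. */
static bool translate(uint32_t logical, uint32_t *physical)
{
    if (logical >= limit_reg) {
        fprintf(stderr, "protection fault: logical address %u out of range\n", logical);
        return false;                 /* the OS would trap and terminate the process */
    }
    *physical = base_reg + logical;   /* relocation */
    return true;
}

int main(void)
{
    uint32_t phys;
    if (translate(123, &phys))
        printf("logical 123 -> physical %u\n", phys);   /* prints 6123 */
    translate(900, &phys);                               /* triggers the protection check */
    return 0;
}
```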
8. Explain paging.

Paging is a non-contiguous allocation method. The logical address space of a process is divided into fixed-size blocks called pages, and physical memory is divided into blocks of the same size called frames. Each page is loaded into some free frame, so the memory used by one process need not be contiguous.

Logical Address (represented in bits): an address generated by the CPU.
Logical Address Space (represented in words or bytes): the set of all logical addresses generated by a program.
Physical Address (represented in bits): an address actually available on the memory unit.
Physical Address Space (represented in words or bytes): the set of all physical addresses corresponding to the logical addresses.

An address generated by the CPU is divided into:
- Page number (p): the number of bits required to represent the pages in the logical address space, i.e. which page is being referenced.
- Page offset (d): the number of bits required to represent a particular word in a page, i.e. the word number within the page.

A physical address is divided into:
- Frame number (f): the number of bits required to represent the frames of the physical address space.
- Frame offset (d): the number of bits required to represent a particular word in a frame, i.e. the word number within the frame.

A table called the page table is used to implement paging. When a process is to be executed, its pages are moved into free frames in physical memory, and the frame number where each page is stored is kept in the page table. During process execution the CPU generates a logical address to access an instruction or data item at a particular location; the page number is used to index the page table, and the frame number found there indicates the actual frame in physical memory in which the page is stored. Combining that frame number with the offset gives the physical address (see the sketch below).
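A minimal sketch of the paged address translation described above, assuming a 1 KB page size and a small in-memory page table; the page-table contents are made-up values for illustration.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 1024u                 /* assumed page/frame size: 1 KB */

/* Assumed page table for a small process: page_table[p] holds the frame number
 * in which page p is stored (the values are illustrative). */
static const uint32_t page_table[] = { 5, 2, 7, 0 };

/* Split the logical address into page number and offset, look the page up in
 * the page table, and rebuild the physical address from frame number + offset. */
static uint32_t translate(uint32_t logical)
{
    uint32_t p = logical / PAGE_SIZE;   /* page number */
    uint32_t d = logical % PAGE_SIZE;   /* page offset */
    uint32_t f = page_table[p];         /* frame number from the page table */
    return f * PAGE_SIZE + d;           /* physical address */
}

int main(void)
{
    uint32_t logical = 2 * PAGE_SIZE + 100;       /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    /* page 2 is stored in frame 7, so this prints 7*1024 + 100 = 7268 */
    return 0;
}
```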
9. Explain Segmentation.

In operating systems, segmentation is a memory management technique in which memory is divided into variable-size parts. Each part is known as a segment, which can be allocated to a process. The details about each segment are stored in a table called the segment table, which is itself stored in one (or more) of the segments.

The segment table contains mainly two pieces of information about each segment:
1. Base: the base address of the segment.
2. Limit: the length of the segment.

The CPU generates a logical address which contains two parts:
1. Segment number
2. Offset

The operating system translates such a logical address into a physical address during execution of the program. The segment number is used to index the segment table, and the limit of that segment is compared with the offset. If the offset is less than the limit, the address is valid; otherwise an error is raised because the address is invalid. For a valid address, the base address of the segment is added to the offset to obtain the physical address of the actual word in main memory.

Advantages of Segmentation
1. No internal fragmentation.
2. The average segment size is larger than the typical page size, so fewer table entries are needed.
3. Less overhead.
4. It is easier to relocate segments than an entire address space.
5. The segment table is smaller than the page table used in paging.

Disadvantages
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. It requires costly memory management algorithms.

10. Write a short note on Translation Look-aside Buffer.

A translation look-aside buffer (TLB) is a memory cache that is used to reduce the time taken to access a user memory location. It is part of the chip's memory management unit (MMU). The TLB stores recent translations of virtual memory to physical memory and can be called an address-translation cache: it avoids having to access the page table in main memory again and again.

The TLB is a memory cache that is closer to the CPU, and the time taken by the CPU to access the TLB is less than the time taken to access main memory. In other words, the TLB is faster and smaller than main memory, but cheaper and bigger than the registers. The TLB relies on locality of reference, which means that it contains only the entries for those pages that are frequently accessed by the CPU. In a TLB, the mapping is done with the help of tags and keys.

A TLB hit is the condition where the desired entry is found in the TLB; in that case the CPU simply accesses the actual location in main memory. However, if the entry is not found in the TLB (a TLB miss), the CPU has to access the page table in main memory first and then access the actual frame in main memory. Therefore, in the case of a TLB hit the effective access time is lower than in the case of a TLB miss. For example, with hit ratio h, TLB access time t and memory access time m, the effective access time is roughly h(t + m) + (1 - h)(t + 2m). A small lookup sketch follows.
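A minimal sketch of a TLB lookup in front of the page table, assuming a tiny fully-associative TLB held as an array of (page, frame) pairs; the sizes, contents and the trivial replacement step are illustrative assumptions.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE   1024u
#define TLB_ENTRIES 4

/* Illustrative TLB: each entry caches one page -> frame translation. */
struct tlb_entry { uint32_t page; uint32_t frame; int valid; };
static struct tlb_entry tlb[TLB_ENTRIES] = { { 2, 7, 1 }, { 0, 5, 1 } };

/* Illustrative page table kept in main memory. */
static const uint32_t page_table[] = { 5, 2, 7, 0 };

static uint32_t translate(uint32_t logical)
{
    uint32_t p = logical / PAGE_SIZE, d = logical % PAGE_SIZE;

    /* TLB hit: translation found in the cache, no page-table access needed. */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].page == p) {
            printf("TLB hit for page %u\n", p);
            return tlb[i].frame * PAGE_SIZE + d;
        }

    /* TLB miss: consult the page table in main memory, then cache the result
     * (entry 0 is simply overwritten here instead of using a real replacement policy). */
    printf("TLB miss for page %u\n", p);
    uint32_t f = page_table[p];
    tlb[0] = (struct tlb_entry){ p, f, 1 };
    return f * PAGE_SIZE + d;
}

int main(void)
{
    printf("physical = %u\n", translate(2 * PAGE_SIZE + 100));  /* hit  */
    printf("physical = %u\n", translate(1 * PAGE_SIZE + 8));    /* miss */
    return 0;
}
```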
11. What is swapping?

Swapping is a memory management scheme in which a process can be temporarily swapped out of main memory to secondary memory so that main memory can be made available for other processes. It is used to improve main memory utilization. In secondary memory, the place where a swapped-out process is stored is called swap space.

The purpose of swapping in an operating system is to access data present on the hard disk and bring it into RAM so that application programs can use it. The thing to remember is that swapping is used only when the data is not present in RAM. Although swapping affects the performance of the system, it helps to run larger processes and more than one process at a time. This is why swapping is also known as a technique for memory compaction.

The concept of swapping is divided into two operations: swap-in and swap-out.
Swap-out is the method of removing a process from RAM and adding it to the hard disk.
Swap-in is the method of removing a process from the hard disk and putting it back into main memory (RAM).

Advantages of Swapping
1. It helps the CPU to manage multiple processes within a single main memory.
2. It helps to create and use virtual memory.
3. Swapping allows the CPU to perform multiple tasks simultaneously; therefore, processes do not have to wait very long before they are executed.
4. It improves main memory utilization.

Disadvantages of Swapping
1. If the computer system loses power during substantial swapping activity, the user may lose all information related to the program.
2. If the swapping algorithm is not good, it can increase the number of page faults and decrease overall processing performance.

12. What is Virtual Memory? Explain Demand paging.

Virtual memory is a technique that allows a process to execute even though it is only partially loaded in main memory. Virtual memory is a storage scheme that gives the user the illusion of having a very big main memory. This is done by treating a part of secondary memory as if it were main memory. In this scheme, the user can load processes bigger than the available main memory, under the illusion that enough memory is available to load the whole process. Instead of loading one big process in main memory, the operating system loads different parts of more than one process in main memory. By doing this, the degree of multiprogramming is increased and therefore CPU utilization also increases.

Demand Paging
A demand paging system is similar to paging with swapping. Processes are kept on secondary storage, i.e. on disk. To execute a process, it is swapped into memory; but rather than swapping the entire process into memory, only the pages that are required for execution are swapped in.

The page table includes a valid-invalid bit for each page entry. If it is set to valid, it indicates that the page is currently present in memory; if it is set to invalid, it indicates that the page is not yet loaded in memory. When a process starts its execution, this bit is set to invalid for all entries in its page table. When a page is loaded into memory, its corresponding frame number is entered in the page table and the validity bit is set to valid.

If a referenced page is not present in main memory, there is a miss, called a page miss or page fault, and the CPU has to fetch the missed page from secondary memory. If the number of page faults is very high, the effective access time of the system becomes very high.

With each page table entry a valid-invalid bit is associated (1 indicates the page is in memory, 0 indicates it is not). Initially the valid-invalid bit is set to 0 for all entries. If the bit is set to "valid", the associated page is both legal and in memory. If the bit is set to "invalid", the page is either not a valid part of the address space, or it is valid but currently resides on the disk rather than in memory. For the pages that are brought into memory, the page table is set up as usual; for the pages that are not currently in memory, the page table entry is either simply marked invalid or contains the address of the page on the disk. A simple page-fault handling sketch is shown below.
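A minimal sketch of the valid-invalid bit check during demand paging: if the bit is 0, the reference causes a page fault, the page is (notionally) read from disk into a free frame, and the entry is marked valid. The load_page_from_disk and pick_free_frame helpers are hypothetical placeholders for work the OS and disk driver would actually do.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 1024u
#define NUM_PAGES 4

/* One page-table entry: frame number plus the valid-invalid bit. */
struct pte { uint32_t frame; int valid; };
static struct pte page_table[NUM_PAGES];   /* all bits start invalid (0) */

/* Hypothetical helpers standing in for the OS and disk driver. */
static uint32_t pick_free_frame(void) { static uint32_t next = 3; return next++; }
static void load_page_from_disk(uint32_t page, uint32_t frame)
{
    printf("page fault: reading page %u from disk into frame %u\n", page, frame);
}

static uint32_t access(uint32_t logical)
{
    uint32_t p = logical / PAGE_SIZE, d = logical % PAGE_SIZE;

    if (!page_table[p].valid) {                  /* page fault: page not in memory */
        uint32_t f = pick_free_frame();
        load_page_from_disk(p, f);
        page_table[p].frame = f;                 /* record where the page now lives */
        page_table[p].valid = 1;                 /* mark the entry valid            */
    }
    return page_table[p].frame * PAGE_SIZE + d;  /* normal translation afterwards   */
}

int main(void)
{
    printf("physical = %u\n", access(2 * PAGE_SIZE + 10));  /* first touch: page fault */
    printf("physical = %u\n", access(2 * PAGE_SIZE + 20));  /* second touch: no fault  */
    return 0;
}
```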
