Module II - Operating System PDF

Summary

This document details concepts related to operating systems, specifically focusing on process scheduling. It covers long-term, short-term, and medium-term scheduling, along with preemptive and non-preemptive execution strategies. The document outlines criteria for CPU scheduling, such as utilization, throughput, and turnaround time.

Full Transcript

**Scheduling**

Scheduling is done to finish the work on time. The operating system schedules every computer resource before providing it to a process. Scheduling is the process of allotting the CPU to the processes present in the ready queue; it is also referred to as **process scheduling**. The operating system schedules processes so that the CPU always has one process to execute. This reduces the CPU's idle time and increases its utilization.

The part of the OS that allots computer resources to processes is termed the **scheduler**. Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. The scheduler uses various scheduling algorithms to decide which process to allot to the CPU. Schedulers are of three types:

- Long-Term Scheduler
- Short-Term Scheduler
- Medium-Term Scheduler

**Job Scheduling:** Job scheduling is the mechanism that selects which process is to be brought into the ready queue.

**CPU Scheduling:** CPU scheduling is the procedure of deciding which process will own the CPU while another process is suspended. The main function of CPU scheduling is to ensure that whenever the CPU is idle, the OS has selected one of the processes available in the ready queue.

Difference Between Job Scheduling and CPU Scheduling
----------------------------------------------------

| | Job Scheduling | CPU Scheduling |
|---|---|---|
| **Definition** | Selects which process is to be brought into the ready queue. | Selects which process is to be executed next and allocates the CPU to that process. |
| **Synonyms** | Also known as long-term scheduling. | Also known as short-term scheduling. |
| **Processed By** | Done by the long-term scheduler (job scheduler). | Done by the short-term scheduler (CPU scheduler). |
| **Process State Transition** | The process moves from the new state to the ready state. | The process moves from the ready state to the running state. |
| **Multiprogramming** | More control over the degree of multiprogramming. | Less control over the degree of multiprogramming. |

**CPU Scheduling** is a method that allows one process to use the CPU while another process is delayed (on standby) due to the unavailability of a resource such as I/O, thus making full use of the CPU. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer. Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue for execution. The selection is done by the short-term (CPU) scheduler, which chooses among the processes in memory that are ready to execute and assigns the CPU to one of them.
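As a sketch of the two state transitions in the table above, the following Python fragment (all names and the `max_degree` parameter are illustrative, not from the source) shows a long-term scheduler admitting processes from the new state into the ready queue, and a short-term scheduler dispatching one of them to the running state:

```python
from collections import deque

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = "new"  # new -> ready -> running

new_jobs = deque(Process(pid) for pid in range(1, 4))
ready_queue = deque()

def long_term_schedule(max_degree=2):
    """Job scheduling: move processes from new to ready,
    which also bounds the degree of multiprogramming."""
    while new_jobs and len(ready_queue) < max_degree:
        proc = new_jobs.popleft()
        proc.state = "ready"
        ready_queue.append(proc)

def short_term_schedule():
    """CPU scheduling: pick the next ready process and give it the CPU."""
    if not ready_queue:
        return None
    proc = ready_queue.popleft()
    proc.state = "running"
    return proc

long_term_schedule()
running = short_term_schedule()
print(f"P{running.pid} is {running.state}")  # P1 is running
```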
There are mainly two types of CPU scheduling methods:

- In **preemptive** scheduling, the CPU is allocated to a process for only a limited period of time, after which it is taken back and assigned to another process (the next in execution). If the preempted process has not yet completed its execution, it is placed back in the ready state, where it remains until it gets a chance to execute again.
- In **non-preemptive** scheduling, new processes are executed only after the current process has completed its execution. A process being executed by the CPU is not interrupted until it completes. Once the process has finished, the processor picks the next process from the **ready queue** (the queue in which all processes that are ready for execution are stored).

**CPU Scheduling Criteria**

The criteria the scheduler takes into consideration while scheduling processes are CPU utilization, throughput, turnaround time, waiting time, and response time:

- **Maximum CPU utilization**: keep the CPU as busy as possible, and allocation of the CPU should be fair.
- **Maximum throughput**: the number of processes that complete their execution per time unit should be maximized.
- **Minimum turnaround time**: the time taken by a process to finish execution should be the least.
- **Minimum waiting time**: a process should not starve in the ready queue.
- **Minimum response time**: the time at which a process produces its first response should be as small as possible.

**CPU Scheduling Algorithms**

There are various algorithms used by the operating system to schedule processes on the processor in an efficient way. The purpose of a scheduling algorithm is:

1. Maximum CPU utilization
2. Fair allocation of CPU
3. Maximum throughput
4. Minimum turnaround time
5. Minimum waiting time
6. Minimum response time

1. **First Come First Serve (FCFS)**
   - The process that requests the CPU first is allocated the CPU first; the ready queue is managed as a simple FIFO queue.
   - FCFS is non-preemptive: once a process gets the CPU, it runs to completion.
   - It is simple to implement but can suffer from the convoy effect, where short processes wait behind a long one.

2. **Round Robin**
   - Each process in the ready queue gets the CPU for a fixed time slice, the time quantum; when the quantum expires, the process is preempted and moved to the back of the ready queue (a simulation contrasting FCFS and Round Robin follows this list).
   - The higher the time quantum, the higher the response time in the system.
   - The lower the time quantum, the higher the context switching overhead in the system.
   - Deciding a perfect time quantum is really a very difficult task.

3. **Shortest Job First (SJF)**
   - The CPU is allocated to the process with the smallest CPU burst time.
   - In its basic form it is non-preemptive, and it gives the minimum average waiting time for a given set of processes.
   - Burst times must be estimated in advance, and long processes risk starvation if short processes keep arriving.

4. **Shortest Remaining Time First (SRTF)**
   - SRTF is the preemptive version of SJF: the process with the smallest remaining execution time runs next.
   - The SRTF algorithm makes the processing of jobs faster than the SJF algorithm, provided its overhead charges are not counted.
   - Context switches happen far more often in SRTF than in SJF and consume valuable CPU time.
   - Like shortest job first, it has the potential for process starvation: long processes may be held off indefinitely if short processes are continually added.

5. **Priority Scheduling**
   - Schedules tasks based on priority.
   - When higher-priority work arrives while a lower-priority task is executing, the higher-priority work preempts it, and the lower-priority task is suspended until the higher-priority execution is complete.
   - The lower the number assigned, the higher the priority level of the process.
   - The average waiting time is less than with FCFS, and the algorithm is not very complex.
   - One of the most common demerits of the preemptive priority scheduling algorithm is the starvation problem: a low-priority process may have to wait a very long time before it is scheduled onto the CPU.
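To make the criteria and the first two algorithms concrete, here is a minimal simulation sketch in Python (the workload and quantum values are illustrative, not from the source). It runs the same three processes under FCFS and under Round Robin and reports the average waiting and turnaround times defined above.

```python
from collections import deque

# Illustrative workload: (pid, arrival_time, burst_time)
procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]

def averages(finish, procs):
    """turnaround = finish - arrival; waiting = turnaround - burst."""
    tats = [finish[pid] - arrival for pid, arrival, _ in procs]
    waits = [tat - burst for tat, (_, _, burst) in zip(tats, procs)]
    return sum(waits) / len(waits), sum(tats) / len(tats)

def fcfs(procs):
    """Non-preemptive: run processes to completion in arrival order."""
    time, finish = 0, {}
    for pid, arrival, burst in sorted(procs, key=lambda p: p[1]):
        time = max(time, arrival) + burst
        finish[pid] = time
    return averages(finish, procs)

def round_robin(procs, quantum=2):
    """Preemptive: each process runs for at most `quantum` time units."""
    arrivals = sorted(procs, key=lambda p: p[1])
    remaining = {pid: burst for pid, _, burst in procs}
    queue, time, i, finish = deque(), 0, 0, {}
    while remaining:
        while i < len(arrivals) and arrivals[i][1] <= time:
            queue.append(arrivals[i][0]); i += 1
        if not queue:                 # CPU idle until the next arrival
            time = arrivals[i][1]
            continue
        pid = queue.popleft()
        run = min(quantum, remaining[pid])
        time += run
        remaining[pid] -= run
        while i < len(arrivals) and arrivals[i][1] <= time:
            queue.append(arrivals[i][0]); i += 1
        if remaining[pid] == 0:
            del remaining[pid]
            finish[pid] = time
        else:
            queue.append(pid)         # preempted: back of the ready queue
    return averages(finish, procs)

print("FCFS   (avg wait, avg TAT):", fcfs(procs))
print("RR q=2 (avg wait, avg TAT):", round_robin(procs))
```

On this workload FCFS gives the lower average waiting time, while Round Robin trades extra context switches for better responsiveness, matching the quantum trade-offs listed above.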
Process Hierarchy
-----------------

In a computer system we are required to run many processes at a time, and some processes need to create other processes during their execution. When a process creates another process, the parent and child processes become associated with each other in certain ways, and the child process can in turn create further processes if required. This parent-child structure of processes forms a hierarchy, called the Process Hierarchy. Following are the key elements of a process hierarchy in an operating system:

1. **Parent-Child Relationship:** In a process hierarchy, each process except the root process has a parent process from which it was created. The parent process is responsible for creating, managing, and controlling its child processes. This creates a hierarchical relationship in which processes form a tree-like structure.

2. **Root Process:** The root process is the top-level process in the hierarchy and has no parent process. It is usually the first process created when the operating system starts. All other processes are descendants of the root process.

3. **Child Processes:** Each process can create one or more child processes. When a process creates a child process, the child inherits certain attributes and resources from its parent, such as the memory space, file descriptors, and environment variables.

4. **Process Group:** Processes within the same process hierarchy can be organized into process groups. A process group is a collection of related processes that can be managed and controlled collectively. Process groups enable operations such as signaling and process control across multiple processes.

5. **Process Tree:** The process hierarchy forms a tree-like structure, often referred to as the process tree. The root process is at the top of the tree, and child processes branch out from their parent processes. The process tree represents the relationships and dependencies between processes within the system.

6. **Process ID (PID):** Each process in the system is assigned a unique process identifier (PID) that distinguishes it from other processes. PIDs are used for process management and identification, allowing the operating system to track and manipulate processes.

7. **Process Termination:** When a process terminates, whether voluntarily, due to an error, or on completion, its resources are released and it is removed from the process hierarchy. If a parent process terminates before its child processes, the orphaned children may be adopted by a system-defined parent process.

The process hierarchy in an operating system facilitates process management, resource allocation, and control. It allows for the organization and coordination of processes, enabling the execution of complex applications and multitasking. The hierarchical structure helps establish relationships and dependencies between processes, ensuring orderly execution and efficient resource utilization.
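To see the parent-child relationship and PIDs in practice, here is a minimal POSIX-only Python sketch (not from the source) using `os.fork()`; the child inherits the parent's memory image and file descriptors, and the parent waits for it so the child is never orphaned:

```python
import os

pid = os.fork()  # duplicate the calling process; returns 0 in the child
if pid == 0:
    # Child process: report its own PID and its parent's PID.
    print(f"child:  PID={os.getpid()}, parent PID={os.getppid()}")
    os._exit(0)
else:
    # Parent process: wait for the child to terminate (reap it).
    os.waitpid(pid, 0)
    print(f"parent: PID={os.getpid()} created child PID={pid}")
```

**Memory Management**

**What is memory management?** Memory management is allocating, freeing, and re-organizing memory in a computer system to optimize the available memory or to make more memory available. It keeps track of every memory location (whether it is free or occupied).

- Memory management is the functionality of an operating system that handles or manages primary memory and moves processes back and forth between main memory and disk during execution.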
- Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free.
- It checks how much memory is to be allocated to processes.
- It decides which process will get memory at what time.
- It tracks whenever some memory gets freed or unallocated and updates the status accordingly.

**Memory allocation**

When a program or process is to be executed, it needs some space in memory, so some part of the memory has to be allotted to the process according to its requirements. This is called memory allocation.

**Memory allocation schemes**

1. **Contiguous memory allocation:** each process is given a single continuous block of main memory.

   A. **Fixed Size Partitioning:** each process is allotted a fixed-size contiguous block in main memory.
   B. **Variable Size Partitioning:** each process is allotted space depending upon its requirements; there is no defined fixed-size block.

   Advantages:

   - **Efficient memory utilization** − contiguous memory allocation is efficient in terms of memory utilization, as there is no internal fragmentation within a process's allocated memory block.
   - **Simple and easy to manage** − the operating system can quickly allocate and deallocate memory to processes by assigning contiguous blocks.
   - **Fast access** − since memory is allocated in contiguous blocks, access to memory is faster than in other memory management techniques.

   Disadvantages:

   - **External fragmentation** − one of the main disadvantages of contiguous memory allocation is external fragmentation, which occurs when small gaps of free memory are scattered throughout the memory space.
   - **Limited memory capacity** − contiguous allocation is limited by the size of the memory blocks available on the system, which may limit the total amount of memory that can be allocated to a process.
   - **Difficulty in sharing memory** − this technique makes it difficult to share memory between multiple processes, as each process is assigned a contiguous block that cannot be shared with other processes.
   - **Lack of flexibility** − the operating system can only allocate and deallocate memory in contiguous blocks.

2. **Non-contiguous memory allocation:** a process is divided into parts that can be placed at different locations in main memory, as in paging and segmentation.

**Memory management techniques**

Paging, swapping, segmentation, and compaction are the four main memory management techniques in modern computers. Swapping is the best technique for memory management because it provides the most efficient use of system resources.

- **Paging**
  - Paging is the memory management technique in which secondary memory is divided into fixed-size blocks called pages, and main memory is divided into fixed-size blocks called frames.
  - A frame has the same size as a page.
  - Processes initially reside in secondary memory and are shifted to main memory (RAM) when required.
  - Each process is divided into parts, where the size of each part is the same as the page size.
  - One page of a process is stored in one of the memory frames.
  - Paging follows non-contiguous memory allocation: the pages of a process can be stored at different locations in main memory.

  **Advantages of paging:**
  - Pages reduce external fragmentation.
  - Simple to implement.
  - Memory efficient.
  - Due to the equal size of pages and frames, swapping becomes very easy.
  - It is used for faster access to data.

  A short sketch of how a page table translates addresses follows this list.
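The following minimal Python sketch (page size, page table, and addresses are illustrative, not from the source) shows the translation that paging performs: a virtual address is split into a page number and an offset, and the page table maps the page to its frame.

```python
PAGE_SIZE = 4096  # bytes per page and per frame (illustrative)

# Page table for one process: page number -> frame number.
# Consecutive pages land in non-contiguous frames, as described above.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr: int) -> int:
    """Split the virtual address into (page, offset) and relocate
    the page to its frame; the offset within the page is unchanged."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError(f"page fault: page {page} is not in memory")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # page 1, offset 0x234 -> frame 2 -> 0x2234
```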
- **Swapping**
  - Swapping temporarily moves a process from main memory to secondary storage (the backing store) and later brings it back into main memory so its execution can continue.
  - It lets the system keep more processes than would otherwise fit in main memory, at the cost of the time needed to move process images to and from disk.

- **Segmentation**
  - Segmentation divides a process into variable-size parts called segments, each corresponding to a logical unit of the program such as the code, data, or stack.
  - A segment table records the base address and limit (length) of each segment and is used to translate logical addresses into physical addresses.
  - Like paging, segmentation is a non-contiguous allocation technique, but because segments vary in size it can suffer from external fragmentation.

- **Compaction**
  - Compaction is a technique to collect all the free memory present in the form of fragments into one large chunk of free memory, which can be used to run other processes.
  - In memory management, swapping creates multiple fragments in the memory because of the processes moving in and out.
  - Compaction refers to combining all the empty fragments together.
  - Compaction helps to solve the problem of fragmentation, but it requires too much CPU time.
  - It does this by moving all the processes towards one end of the memory and all the available free space towards the other end, so that the free space becomes contiguous.
  - Compaction is used by many modern operating systems, such as Windows, Linux, and Mac OS X.

**Fragmentation**

As processes are loaded into and removed from memory, the free memory space is broken into small pieces that cannot be allocated to incoming processes because of their small size, so those blocks remain unused. This is called **fragmentation**, and it results in inefficient use of memory. Basically, there are two types of fragmentation:

- **Internal Fragmentation**: memory wasted inside an allocated block. A process is given a block larger than it needs (as in fixed-size partitioning or paging), and the unused space inside the block cannot be allocated to any other process.
- **External Fragmentation**: free memory scattered in small non-contiguous pieces between allocated blocks (as in variable-size partitioning or segmentation). The total free memory may be enough for a request, yet no single contiguous block is large enough.

**Memory mapping** is the process by which program code is able to run wherever the operating system loads it. A computer program is compiled to a set of instructions that assume it is the only program running on a virtual computer; all references to locations of code and in-memory data are based on this fixed view of the hardware on which it runs. In fact the code may be loaded anywhere in the real address space supported by the word length (e.g., 32 or 64 bits). The low-level kernel operation of the computer maps out the memory architecture in pages and needs to translate the virtual addresses of those memory pages to physical addresses. It may need to swap pages in and out of cache and, where cache memory is limited, out to disk.

Most modern operating systems provide virtual memory. With virtual memory, the memory addresses used in a program are not the same as the physical addresses used in hardware (*). One advantage of this is that each process can have its own address space, independent of other processes. Another advantage is that files can be operated on in the same way as memory; this is done by mapping a file into the process's address space, as sketched below. There are many types of mapping; the most common are file-backed mappings (shared or private) and anonymous mappings.

(*) Note: hardware provides the mechanisms needed to do this; virtual addresses are understood by the hardware (chipset), but not by the RAM chips.
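To make file mapping concrete, here is a minimal sketch using Python's standard `mmap` module (the file name and contents are illustrative, not from the source): once the mapping is created, the file's bytes can be read and written exactly like a memory buffer.

```python
import mmap

# Create a small file to map (illustrative name and contents).
with open("demo.bin", "wb") as f:
    f.write(b"hello memory mapping")

with open("demo.bin", "r+b") as f:
    # Map the whole file (length 0) into this process's address space.
    with mmap.mmap(f.fileno(), 0) as mm:
        print(mm[:5])           # read file bytes like a memory buffer
        mm[0:5] = b"HELLO"      # write through memory...
        mm.flush()              # ...and push the dirty pages to the file

with open("demo.bin", "rb") as f:
    print(f.read())             # b'HELLO memory mapping'
```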
