systemcalls and process Copy.pptx

Full Transcript


FUNCTIONS OF OPERATING SYSTEM
An operating system (OS) is a program that acts as an interface between the system hardware and the user, and it handles all the interaction between software and hardware. All the working of a computer system depends on the OS at the base level. It performs functions such as managing memory, managing processes, and mediating between hardware and software. Let us look at the functions of an operating system.

Nahida Nazir, 09/28/2024

1. Memory Management
This is the management of the main (primary) memory. Whatever program is executed has to be present in main memory, a fast storage area that the CPU can access directly. When a program completes, its memory region is released and can be used by other programs. Since more than one program can therefore be present at a time, the memory must be managed. The operating system:
- Allocates and deallocates memory.
- Keeps a record of which part of primary memory is used by whom, and how much.
- Distributes memory among processes. In multiprogramming, the operating system decides which processes acquire memory, when, and how much they get.

2. Processor Management/Scheduling
Every piece of software that runs on a computer, whether in the background or the foreground, is a process. The processor is the execution unit in which a program operates. The operating system determines the status of the processor and of processes, selects a job, allocates the processor to a process, and deallocates the processor after the process completes. When more than one process runs on the system, the OS decides how and when each process will use the CPU; hence this is also called CPU scheduling. The OS:
- Allocates and deallocates the processor to processes.
- Keeps a record of CPU status.
Algorithms used for CPU scheduling include First Come First Serve (FCFS) and Shortest Job First (SJF).

Purpose of CPU scheduling:
- Proper utilization of the CPU: the OS makes sure the CPU stays as busy as possible.
- Fairness: every process should get a chance to use the processor, so the OS makes sure processes get fair processor time.
- Increasing the efficiency of the system.

3. Device Management
An operating system regulates device access using drivers. Processes may require devices for their use; this management is done by the OS. The OS:
- Allocates and deallocates devices to different processes.
- Keeps records of the devices.
- Decides which process can use which device, and for how long.

4. File Management
The operating system manages file allocation and deallocation. It specifies which process receives a file and for how long, and it keeps track of information, location, usage, status, and so on. These groupings of resources are referred to as file systems. The files on a system are stored in different directories. The OS:
- Keeps records of the status and locations of files.
- Allocates and deallocates resources.
- Decides who gets the resources.

SYSTEM CALLS
A system call is the mechanism that provides the interface between a process and the operating system. It is the programmatic method by which a program requests a service from the kernel of the OS.

NEED FOR SYSTEM CALLS
- Reading from and writing to files demand system calls.
- Creating or deleting files requires system calls.
- System calls are used for the creation and management of new processes.
- Network connections need system calls for sending and receiving packets.
- Access to hardware devices such as a scanner or printer needs a system call.

TYPES OF SYSTEM CALL
Process control, file management, device management, information maintenance, communications.

IMPORTANT SYSTEM CALLS USED IN OS
wait(): makes a process wait for the completion of another process. When a parent process creates a child process, it may need to wait for the child process to finish before continuing its own execution.
fork(): processes use this system call to create a new process that is a copy of themselves. The parent process creates a child process, and the parent can then suspend itself (by calling wait()) until the child finishes executing.
exec(): runs an executable file in the context of an already running process, replacing the previous executable. The original process identifier remains, as no new process is built, but the stack, data, and heap of the process are replaced by those of the new program.
kill(): used by the OS to send a signal to a process, typically a termination signal that urges the process to exit. However, a kill system call does not necessarily kill the process; the signal sent can have various meanings.

PROCESS
A process is basically a program in execution. The OS helps you create, schedule, and terminate the processes used by the CPU. A process created by another process is called a child process.

PROCESS STATES
New, Ready, Running, Waiting, Blocked, Terminated.

STATES
New - The process is in the stage of being created.
Ready - The process has all the resources it needs to run, but the CPU is not currently working on its instructions.
Running - The CPU is working on this process's instructions.
Waiting - The process cannot run at the moment because it is waiting for some resource to become available or for some event to occur, for example keyboard input, a disk access request, inter-process messages, a timer to go off, or a child process to finish.
Terminated - The process has completed.

DIAGRAM OF PROCESS STATE

COMPONENTS OF A PROCESS
A program can be divided into four pieces when put into memory to become a process: stack, heap, text, and data.
Stack - Temporary data such as function parameters, return addresses, and local variables are stored in the process stack.
Heap - The memory that is dynamically allocated to the process during its execution.
Text - The program code, together with the current activity reflected by the value of the program counter and the contents of the processor's registers.
Data - The global and static variables are included in this section.

DIFFERENT SECTIONS
The text section comprises the compiled program code, read in from non-volatile storage when the program is launched. The data section stores global and static variables, allocated and initialized prior to executing main. The heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.

CONTINUED
The stack is used for local variables. Space on the stack is reserved for local variables when they are declared (at function entrance or elsewhere, depending on the language), and the space is freed when the variables go out of scope. Note that the stack is also used for function return values, and the exact mechanisms of stack management may be language-specific.
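The fork() and wait() calls described in the system-calls section can be exercised from Python, which exposes them through the os module. A minimal sketch, assuming a POSIX system (os.fork is not available on Windows); the helper name run_in_child is illustrative:

```python
import os

def run_in_child(exit_code):
    """Fork a child that exits with exit_code; the parent wait()s
    for it and returns the code the child exited with."""
    pid = os.fork()              # child starts as a copy of the parent
    if pid == 0:
        # Child branch: terminate immediately with the given code.
        os._exit(exit_code)
    # Parent branch: block until this child finishes.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

Note that fork() alone does not suspend the parent; it is the explicit wait()/waitpid() call that blocks the parent until the child exits.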
Note that the stack and the heap start at opposite ends of the process's free space and grow towards each other. If they should ever meet, then either a stack overflow error occurs, or a call to new or malloc fails due to insufficient available memory.

PROCESS IN MEMORY

PROCESS CONTROL BLOCK
- Process state - running, waiting, etc., as discussed above.
- Process ID, and parent process ID.
- CPU registers and program counter - these need to be saved and restored when swapping processes in and out of the CPU.
- CPU-scheduling information - such as priority information and pointers to scheduling queues.
- Memory-management information - e.g. page tables or segment tables.
- Accounting information - user and kernel CPU time consumed, account numbers, limits, etc.
- I/O status information - devices allocated, open file tables, etc.

OPERATIONS ON A PROCESS
The user can perform the following operations on a process in the operating system:
1. Process creation
2. Process scheduling or dispatching
3. Blocking
4. Preemption
5. Termination

Process creation
Process creation is the initial step of process execution: a new process is created for execution.

Process scheduling/dispatching
Scheduling or dispatching refers to the event where the OS moves a process from the ready state to the running state. It is done by the system when there are free resources, or when a process of higher priority than the running one arrives.

Blocking
In a process blocking operation, the system puts the process into the waiting state while it waits for input/output. When a task is blocked, it is unable to execute until the task holding the shared resource has finished using it. Examples of shared resources are the CPU, network and network interfaces, memory, and disk.
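The PCB fields and the state transitions above can be sketched as a small structure. The field names and the transition table are illustrative, not any particular kernel's layout:

```python
from dataclasses import dataclass, field

# Legal moves in the process-state diagram discussed above.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},
    "terminated": set(),
}

@dataclass
class PCB:
    pid: int
    ppid: int                                       # parent process ID
    state: str = "new"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU-scheduling info
    page_table: dict = field(default_factory=dict)  # memory-management info
    open_files: list = field(default_factory=list)  # I/O status info

    def move_to(self, new_state):
        """Refuse transitions the state diagram does not allow."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state
```

For example, new -> ready -> running -> waiting is a legal path, while waiting -> terminated is rejected, mirroring the diagram.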
Preemption
Preemption is the ability of the operating system to preempt a currently scheduled task in favour of a higher-priority task. The resource being scheduled can be the processor or I/O.

Termination
Ending a process is known as process termination. Events that may lead to process termination include:
1. One process terminating another process.
2. A problem in the hardware.
3. The process finishing its execution.
4. The operating system terminating the process due to service errors.

MCQs
Which of the following is not a process state in an operating system? A) Running B) Ready C) Sleeping D) Terminated
What is the state of a process that is ready to run but waiting for CPU allocation? A) Running B) Ready C) Blocked D) Terminated
In which process state is a process currently being executed by the CPU? A) Waiting B) Ready C) Running D) Suspended
What does a process in the 'Blocked' or 'Waiting' state indicate? A) The process is executing instructions B) The process is waiting for CPU time C) The process is waiting for an event or resource D) The process has completed execution
If a process is in the 'Suspended' state, what does this indicate? A) The process is running normally. B) The process is temporarily halted and can be resumed later. C) The process has been terminated permanently. D) The process is waiting for an I/O operation.
Which action causes a process to transition from the "Ready" state to the "Running" state? A) Scheduling B) I/O completion C) Timeout D) Interrupt

PROCESS SCHEDULING
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

The two main objectives of the process scheduling system are to keep the CPU busy at all times and to deliver "acceptable" response times for all programs, particularly interactive ones. The process scheduler must meet these objectives by implementing suitable policies for swapping processes in and out of the CPU. (Note that these objectives can conflict: every time the system steps in to swap processes, it spends CPU time doing so, which is thereby "lost" from useful productive work.)

CATEGORIES OF SCHEDULING
1. Non-preemptive: the resource cannot be taken from a process until the process completes execution. A switch occurs only when the running process terminates or moves to a waiting state.
2. Preemptive: the OS allocates resources to a process for a fixed amount of time. A process may be switched from the running state to the ready state, or from the waiting state to the ready state, because the CPU may give priority to another process and replace the running process with one of higher priority.

SCHEDULING QUEUES
All PCBs (Process Control Blocks) are kept in process scheduling queues by the OS. Each processing state has its own queue, and the PCBs of all processes in the same execution state are placed in the same queue. When a process's status changes, its PCB is unlinked from its present queue and moved to the queue of its next state.

All processes are stored in the job queue. Processes in the ready state are placed in the ready queue.
Processes waiting for a device to become available or to deliver data are placed in device queues. There is generally a separate device queue for each device. Other queues may also be created and used as needed.

TYPES
Job queue - keeps all the processes in the system.
Ready queue - keeps all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue.
Device queues - the processes blocked due to the unavailability of an I/O device constitute this queue.

SCHEDULERS
Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. A process migrates between the various scheduling queues throughout its lifetime; selecting processes from these queues is carried out by a scheduler. Schedulers are of three types: long-term, short-term, and medium-term.

LONG-TERM SCHEDULER
The long-term scheduler, also known as the job scheduler, runs less frequently. It decides which programs get into the job queue; from the job queue, it selects processes and loads them into memory for execution. Its primary aim is to maintain a good degree of multiprogramming.

SHORT-TERM SCHEDULER
Also known as the CPU scheduler, it runs very frequently. Its primary aim is to enhance CPU performance and increase the process execution rate.

MEDIUM-TERM SCHEDULER
This scheduler removes processes from memory (and from active contention for the CPU), and thus reduces the degree of multiprogramming.
At some later time, the process can be reintroduced into memory and its execution can be continued where it left off. This scheme is called swapping: the process is swapped out, and is later swapped in, by the medium-term scheduler.

DISPATCHER
A dispatcher is a special program which comes into play after the scheduler. When the scheduler completes its job of selecting a process, the dispatcher takes that process to the desired state/queue. The dispatcher is the module that gives a process control of the CPU after it has been selected by the short-term scheduler. This function involves:
- Switching context
- Switching to user mode
- Jumping to the proper location in the user program to restart that program

DIFFERENCE BETWEEN SCHEDULER AND DISPATCHER
Consider a situation where various processes are residing in the ready queue waiting to be executed. The CPU cannot execute all of these processes simultaneously, so the operating system has to choose a particular process on the basis of the scheduling algorithm used. This procedure of selecting a process from among the candidates is done by the scheduler. Once the scheduler has selected a process from the queue, the dispatcher comes into the picture: it is the dispatcher that takes the process from the ready queue and moves it into the running state. The scheduler therefore gives the dispatcher an ordered list of processes, which the dispatcher moves onto the CPU over time.

CPU SCHEDULING
CPU scheduling is the process of determining which process will own the CPU for execution while another process is on hold.
The main task of CPU scheduling is to make sure that whenever the CPU is idle, the OS selects one of the processes available in the ready queue for execution.

PREEMPTIVE SCHEDULING
In preemptive scheduling, tasks are mostly assigned priorities. Sometimes it is important to run a higher-priority task before a lower-priority task, even if the lower-priority task is still running: the lower-priority task is held for some time and resumes when the higher-priority task finishes its execution.

NON-PREEMPTIVE SCHEDULING
In this method, once the CPU has been allocated to a specific process, that process keeps the CPU busy until it releases the CPU, either by switching context or by terminating. It is the only method that can be used on hardware platforms that lack the special hardware (for example, a timer) required by preemptive scheduling.

WHEN IS SCHEDULING PREEMPTIVE OR NON-PREEMPTIVE?
Scheduling decisions take place when:
1. A process switches from the running state to the waiting state.
2. A process switches from the running state to the ready state.
3. A process switches from the waiting state to the ready state.
4. A process finishes its execution and terminates.
If scheduling takes place only under circumstances 1 and 4, the scheme is non-preemptive; otherwise it is preemptive.

TERMINOLOGY
Burst time/execution time: the time required by the process to complete execution; also called running time.
Arrival time: when a process enters the ready state.
Finish time: when a process completes and exits the system.
Multiprogramming: the number of programs that can be present in memory at the same time.
CPU/I/O burst cycle: characterizes process execution, which alternates between CPU activity and I/O activity. CPU bursts are usually shorter than I/O bursts.
SCHEDULING CRITERIA

Maximize:
CPU utilization: the operating system needs to keep the CPU as busy as possible. Utilization can range from 0 to 100 percent; for a real-time system it may range from around 40 percent on a lightly loaded system to 90 percent on a heavily loaded one.
Throughput: the number of processes that finish their execution per unit time. When the CPU is busy executing processes, work is being done; the work completed per unit time is the throughput.

Minimize:
Waiting time: the amount of time a process spends waiting in the ready queue.
Response time: the time from when a request is submitted until the first response is produced.
Turnaround time: the amount of time taken to execute a specific process, i.e. the total time spent waiting to get into memory, waiting in the ready queue, and executing on the CPU. It is the period from process submission to completion.

TYPES OF CPU SCHEDULING ALGORITHM
There are mainly six types of process scheduling algorithms:
First Come First Serve (FCFS)
Shortest-Job-First (SJF) scheduling
Shortest Remaining Time
Priority scheduling
Round Robin scheduling
Multilevel Queue scheduling

FIRST COME FIRST SERVE
First Come First Serve is the full form of FCFS. It is the simplest CPU scheduling algorithm: the process that requests the CPU first is allocated the CPU first. This scheduling method can be managed with a FIFO queue.
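A minimal FCFS sketch, with arrival and burst times assumed purely for illustration; it applies the formulas turnaround = exit time - arrival time and waiting = turnaround - burst time:

```python
def fcfs(processes):
    """Non-preemptive FCFS. processes: list of (name, arrival, burst),
    served in order of arrival. Returns {name: (turnaround, waiting)}."""
    clock = 0
    result = {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival)   # CPU idles until the job arrives
        clock += burst                # runs to completion once dispatched
        turnaround = clock - arrival  # exit time - arrival time
        result[name] = (turnaround, turnaround - burst)
    return result

# Assumed workload: P1 arrives at 0 (burst 4), P2 at 1 (burst 3), P3 at 2 (burst 1).
schedule = fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)])
```

With this workload, P1 finishes at 4 (waiting 0), P2 at 7 (waiting 3), and P3 at 8 (waiting 5), illustrating how a job's waiting time grows with the bursts queued ahead of it.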
NUMERICAL: FCFS

FCFS is non-preemptive: a process holds the CPU until termination or an I/O request.

FCFS WITH DIFFERENT ARRIVAL TIMES
Turnaround time = Exit time - Arrival time
Waiting time = Turnaround time - Burst time

SOLVE THE PROBLEM: FCFS

CHARACTERISTICS OF THE FCFS METHOD
- It is non-preemptive.
- Jobs are always executed on a first-come, first-served basis.
- It is easy to implement and use.
- However, this method performs poorly, and the average wait time is quite high.

SHORTEST-JOB-FIRST (SJF) SCHEDULING ALGORITHM
Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time: always assign the CPU to the process that has the smallest next CPU burst. FCFS breaks the tie if two processes have the same next CPU burst length. A better term is the Shortest-Next-CPU-Burst (SNCB) algorithm.

Advantages (these points describe FCFS):
- It is simple and easy to understand.
- It can be easily implemented using a queue data structure.
- It does not lead to starvation.
Disadvantages:
- It does not consider the priority or burst time of the processes.
- It suffers from the convoy effect.

Convoy effect
In the convoy effect, processes with higher burst times arrive before processes with smaller burst times. The smaller processes then have to wait a long time for the longer processes to release the CPU.
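The convoy effect can be made concrete with a short sketch. Burst times of 24, 3, and 3 (all arriving at time 0) are assumed for illustration: running the long job first inflates the average wait, while serving the shortest jobs first (the SJF order) shrinks it.

```python
def avg_waiting(bursts):
    """Average waiting time when jobs that all arrive at time 0
    run non-preemptively in the given order."""
    clock, total_wait = 0, 0
    for burst in bursts:
        total_wait += clock   # this job waited until `clock` to start
        clock += burst
    return total_wait / len(bursts)

convoy_order = avg_waiting([24, 3, 3])   # long job first
sjf_order    = avg_waiting([3, 3, 24])   # shortest jobs first
```

With these bursts, the convoy order averages (0 + 24 + 27) / 3 = 17 time units of waiting, while the SJF order averages (0 + 3 + 6) / 3 = 3.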
SJF is optimal: it gives the minimum average waiting time for a given set of processes. The difficulty is knowing the length of the next CPU request.

SJF NON-PREEMPTIVE EXAMPLE WITH ARRIVAL TIME 0

The SJF algorithm cannot be implemented exactly at the level of short-term CPU scheduling, because there is no way to know the exact length of a process's next CPU burst. Instead, we estimate it using the lengths of past bursts, for example taking the next burst to be an average of past bursts.

SJF WITHOUT PREEMPTION
Turnaround time = Exit time - Arrival time
Waiting time = Turnaround time - Burst time
Average turnaround time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 units
Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 units

SJF WITH PREEMPTION
SJF with preemption is called Shortest Remaining Time First.

PRIORITY SCHEDULING
The SJF algorithm is a special case of the general priority scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority, and vice versa.

As an example, consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, ..., P5, with the length of the CPU burst given in milliseconds:

GANTT CHART
The average waiting time is 8.2 milliseconds.

Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity or quantities to compute the priority of a process.
For example, time limits, memory requirements, the number of open files, and the ratio of average I/O burst to average CPU burst have been used in computing priorities. External priorities are set by criteria outside the operating system, such as the importance of the process, the type and amount of funds being paid for computer use, the department sponsoring the work, and other, often political, factors.

Priority scheduling can be either preemptive or non-preemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A non-preemptive priority scheduling algorithm will simply put the new process at the head of the ready queue. A major problem with priority scheduling algorithms is indefinite blocking, or starvation.

A solution to the problem of indefinite blocking of low-priority processes is aging: the technique of gradually increasing the priority of processes that wait in the system for a long time. For example, if priorities range from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1 every 15 minutes.

ROUND ROBIN
The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes. A small unit of time, called a time quantum or time slice, is defined; a time quantum is generally from 10 to 100 milliseconds in length. The ready queue is treated as a circular queue: the CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to one time quantum.

To implement RR scheduling, we keep the ready queue as a FIFO queue of processes.
New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.

Turnaround time (TAT) = Completion time - Arrival time
Waiting time = TAT - Burst time (BT), equivalently TAT = Burst time + Waiting time

IMPLEMENT RR WITH A TIME QUANTUM OF 4 MS
If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in the queue, process P2. Process P2 does not need 4 milliseconds, so it quits before its time quantum expires. The CPU is then given to the next process, process P3. Once each process has received one time quantum, the CPU is returned to process P1 for an additional time quantum.

GANTT CHART
Thus, the average waiting time is 17/3 = 5.66 milliseconds.

MULTILEVEL QUEUE SCHEDULING
Each algorithm suits a different kind of process. In a general system, some processes require scheduling using a priority algorithm; some want to stay in the system (interactive processes), while others are background processes whose execution can be delayed. The scheduling algorithms used between queues and within each queue may differ between systems; a round-robin method with varying time quanta is typically utilized. Several scheduling schemes are designed for circumstances where the processes can be readily separated into groups. Two sorts of processes that require different scheduling algorithms, because they have different response times and resource requirements, are foreground (interactive) processes and background (batch) processes. Foreground processes take priority over background processes.
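The round-robin walk-through above (quantum 4 ms, average waiting time 17/3 ms) can be reproduced with a small simulation. Burst times of 24, 3, and 3 ms are assumed, consistent with P1 needing another 20 ms after its first quantum:

```python
from collections import deque

def round_robin(bursts, quantum):
    """RR with all processes arriving at time 0.
    bursts: {name: burst time}; returns {name: waiting time}."""
    remaining = dict(bursts)
    ready = deque(bursts)            # FIFO ready queue
    clock, finish = 0, {}
    while ready:
        name = ready.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = clock
        else:
            ready.append(name)       # preempted: back to the tail
    # waiting = turnaround - burst; every arrival time is 0 here
    return {n: finish[n] - bursts[n] for n in bursts}

waits = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
```

This yields waits of 6, 4, and 7 ms for P1, P2, and P3 respectively, and (6 + 4 + 7) / 3 = 17/3 ≈ 5.66 ms, matching the slide's figure.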
The ready queue can, for example, be partitioned into seven different queues using the multilevel queue scheduling technique. Processes are permanently assigned to one queue based on some property such as memory size, process priority, or process type. Each queue has its own scheduling method: some queues are used for foreground processes, while others are used for background processes. The foreground queue may be scheduled using a round-robin method, and the background queue using an FCFS strategy.

MULTILEVEL FEEDBACK QUEUE
In plain multilevel queue scheduling, there is a different queue for foreground and background operations, but processes do not move between queues or change their foreground or background nature; this type of organization benefits from low scheduling overhead but is inflexible. The multilevel feedback queue instead allows a process to move between queues, and the strategy favours interactive and I/O-bound operations. If a process consumes too much processor time, it is moved to a lower-priority queue; a process waiting too long in a lower-priority queue may be shifted to a higher-priority queue. This form of aging prevents starvation. The parameters of the multilevel feedback queue scheduler are as follows:
1. The scheduling algorithm for every queue in the system.
2. The number of queues in the system.
3. The method for determining when a process should be demoted to a lower-priority queue.
4. The method for determining when a process should be upgraded to a higher-priority queue.
5. The method for determining which queue a process will enter when it requires service.

MULTILEVEL QUEUE EXAMPLE (Q1 scheduled with RR, Q2 with SJF)
Process | Arrival time | Burst time | Queue
P1      | 0            | 4          | 1
P2      | 0            | 3          | 1
P3      | 0            | 8          | 2
P4      | 8            | 5          | 2

MULTIPROCESSOR SCHEDULING
Multiprocessor scheduling focuses on designing the scheduling function for a system with more than one processor. Multiple CPUs share the load (load sharing) in multiprocessor scheduling so that various processes run simultaneously. In general, multiprocessor scheduling is complex compared with single-processor scheduling. When the processors are identical, any process can run on any processor at any time.

The multiple CPUs in the system are in close communication and share a common bus, memory, and other peripheral devices, so the system is said to be tightly coupled. Such systems are used to process bulk amounts of data, for example in satellite and weather-forecasting applications. The processors may be homogeneous (identical CPUs) or heterogeneous (different kinds of CPUs). There may be special scheduling constraints, such as a device connected via a private bus to only one processor.

There is no policy or rule that can be declared the best scheduling solution for a system with a single processor, and likewise there is no best scheduling solution for a system with multiple processors.
APPROACHES TO MULTIPLE-PROCESSOR SCHEDULING

In one approach, all scheduling decisions and I/O processing are handled by a single processor, called the master server, while the other processors execute only user code. This is simple and reduces the need for data sharing. This scenario is called asymmetric multiprocessing. A second approach uses symmetric multiprocessing (SMP), where each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute.

PROCESSOR AFFINITY

Processor affinity means a process has an affinity for the processor on which it is currently running. When a process runs on a specific processor, there are certain effects on the cache memory: the data most recently accessed by the process populate the cache for that processor, so successive memory accesses by the process are often satisfied from cache. If the process migrates to another processor, the contents of the cache on the first processor must be invalidated and the cache on the second processor must be repopulated. Because of the high cost of invalidating and repopulating caches, most SMP (symmetric multiprocessing) systems try to avoid migrating processes between processors and instead keep a process running on the same processor. This is known as processor affinity. There are two types of processor affinity:

1. Soft affinity: the operating system has a policy of attempting to keep a process running on the same processor, but does not guarantee that it will do so.
2. Hard affinity: the process can specify a subset of processors on which it may run.
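The symmetric approach described above, where every processor self-schedules by pulling work from one common ready queue, can be sketched with threads standing in for processors. The job list and the squaring "work" are illustrative assumptions.

```python
import queue
import threading

def smp_run(jobs, nworkers=4):
    """Symmetric multiprocessing sketch: every 'processor' (a thread here)
    self-schedules by pulling the next job from one common ready queue."""
    ready = queue.Queue()
    for job in jobs:
        ready.put(job)
    results, lock = [], threading.Lock()

    def cpu():
        while True:
            try:
                job = ready.get_nowait()   # self-scheduling: grab the next job
            except queue.Empty:
                return                     # ready queue drained, processor idles
            with lock:
                results.append(job * job)  # squaring stands in for real work

    workers = [threading.Thread(target=cpu) for _ in range(nworkers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sorted(results)                 # completion order is nondeterministic

print(smp_run([1, 2, 3, 4, 5]))            # → [1, 4, 9, 16, 25]
```

Because the "processors" race for jobs, the completion order varies from run to run; sorting the results makes the outcome deterministic, which is exactly why real SMP schedulers must tolerate nondeterministic interleavings.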
Some systems, such as Linux, implement soft affinity but also provide system calls like sched_setaffinity() that support hard affinity.

LOAD BALANCING

Load balancing keeps the workload evenly distributed across all processors in an SMP system. It is necessary only on systems where each processor has its own private queue of processes eligible to execute; on systems with a common run queue it is unnecessary, because once a processor becomes idle it immediately extracts a runnable process from the common run queue.

REAL-TIME SCHEDULING

Real-time systems within operating systems are specialized for tasks demanding immediate urgency, often linked to event control or reaction. Real-time tasks fall into two primary categories, soft real-time tasks and hard real-time tasks, which possess distinct attributes and demands that shape their scheduling and management.

1. Real-time systems are like super-fast computers for urgent tasks.
2. Two types: hard (exact timing) and soft (flexible timing) tasks.
3. Special algorithms ensure tasks finish on time.
4. Different task types: hard (critical) and soft (more flexible).
5. The scheduler assigns priorities, ensures deadlines, and controls tasks.
6. Factors: scheduler priority, algorithms, resource allocation, etc.
7. Scheduling algorithms: FCFS, Round Robin, Priority, Deadline.
8. Choose an algorithm based on task needs, resources, timing, etc.

1. Hard real-time tasks: these are like urgent missions. They must be done exactly on time, or something really bad could happen.
2. Soft real-time tasks: these are important too, but it is okay if they are a bit late sometimes; nothing terrible will happen.

Real-time systems need special ways of organizing tasks, ways that focus on finishing tasks on time instead of using resources perfectly.
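The sched_setaffinity() call mentioned above is exposed in Python through thin wrappers in the os module. A minimal sketch, assuming a Linux host (these functions do not exist on other platforms, hence the guard), pins the current process to one CPU and then restores the original mask:

```python
import os

# os.sched_getaffinity()/os.sched_setaffinity() wrap the Linux
# sched_setaffinity() system call; they are absent on other platforms.
# Pinning to a single CPU here is purely for demonstration.
if hasattr(os, "sched_getaffinity"):
    allowed = os.sched_getaffinity(0)        # CPUs this process may run on
    print("current affinity:", allowed)
    os.sched_setaffinity(0, {min(allowed)})  # hard affinity: pin to one CPU
    print("pinned to:", os.sched_getaffinity(0))
    os.sched_setaffinity(0, allowed)         # restore the original mask
else:
    print("sched_(get|set)affinity not available on this platform")
```

Pid 0 means "the calling process"; passing another pid lets a tool set affinity for a different process, which is how utilities such as taskset work.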
1. Priority list: tasks are put in order based on how important they are, and the most important task goes first.
2. Meeting deadlines: these methods help tasks finish on time, so nothing bad happens.
3. Predictable and safe: they make sure tasks happen when they should, keeping everything predictable and safe.

THREAD SCHEDULING

A thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler. Multiple tasks of an application (for example, updating the display, fetching data, spell checking) can run simultaneously with the help of threads. In most operating systems, threads are a component of a process but are more lightweight to create than processes.

Linux operates in two modes: user mode and kernel mode. Kernel threads run in kernel mode and are the lightweight unit of kernel scheduling; at least one kernel thread exists within each process. If, on the other hand, threads are implemented by userspace libraries, they are called user threads. The kernel is unaware of them, so they are managed and scheduled in userspace.

Thread scheduling is a fundamental aspect of multitasking operating systems, where multiple threads of execution compete for CPU time. Threads are smaller units of execution within a process, and the scheduler determines which thread to execute next based on various scheduling algorithms and priorities. Here are some key points about thread scheduling:

1. Thread States: Threads typically exist in several states, including running, ready, blocked, and terminated. The scheduler decides when to transition threads between these states based on events like I/O completion, thread yielding, or thread termination.
2.
Scheduling Policies: Different operating systems may implement different scheduling policies such as First-Come-First-Served (FCFS), Round Robin, Shortest Job First (SJF), Priority-Based Scheduling, and Multilevel Feedback Queues (MLFQ). Each policy has its own advantages and trade-offs in terms of fairness, throughput, response time, and starvation avoidance.
3. Priority Scheduling: Many schedulers assign priorities to threads, and higher-priority threads are given preference in execution. This ensures that critical tasks are completed promptly. However, priority inversion can occur, and avoidance mechanisms such as priority inheritance may be implemented to prevent this issue.
4. Thread Affinity: Some schedulers allow developers to specify thread affinity, that is, the preferred CPU core or set of cores on which a thread should execute. This can help in optimizing cache usage and reducing context-switching overhead.
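The priority-based policy above can be sketched with a min-heap: the ready task with the highest priority (lowest number) is always dispatched next, and ties fall back to arrival order. The task names and priority values here are made up for illustration, and the sketch deliberately ignores preemption and priority inversion.

```python
import heapq

def priority_schedule(tasks):
    """Non-preemptive priority scheduling: always dispatch the ready task with
    the highest priority (lowest number); ties go to the earlier arrival."""
    # The arrival index i acts as the tie-breaker inside the heap entries.
    heap = [(prio, i, name) for i, (name, prio) in enumerate(tasks)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)   # best (priority, arrival) pair
        order.append(name)
    return order

# Hypothetical workload: names and priority values are invented.
print(priority_schedule([("logger", 3), ("ui", 1), ("backup", 5), ("net", 1)]))
# → ['ui', 'net', 'logger', 'backup']
```

Note how "ui" and "net" share priority 1 but "ui" runs first because it arrived earlier, the FCFS tie-breaking many real schedulers use among equal-priority threads.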
