Operating Systems CPU Scheduling (2024-2025)
The Egyptian E-Learning University
2025
Dr. Wafaa Samy, Dr. Hanaa Eissa
Summary
These lecture notes cover CPU scheduling in operating systems, including the FCFS and SJF algorithms and scheduling criteria such as throughput, turnaround time, waiting time, and response time, with worked examples.
Full Transcript
Year: 2024-2025 Fall Semester
Operating Systems
Dr. Wafaa Samy, Dr. Hanaa Eissa
Based on: Operating System Concepts – 10th Edition, Silberschatz, Galvin and Gagne ©2018. Modified by Dr. Wafaa Samy.

Chapter 5: CPU Scheduling (Part 1)

Outline
- Basic Concepts
- Scheduling Criteria
- Scheduling Algorithms
- Multi-Processor Scheduling
- Algorithm Evaluation

Basic Concepts
In a system with a single CPU core, only one process can run at a time; the other processes must wait until the core is free and can be rescheduled. The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
A process executes until it must wait, for example for the completion of an I/O request. In a non-multiprogrammed system, the CPU then simply sits idle: all this waiting time is wasted, and no useful work is accomplished. With multiprogramming, we try to use this time productively. Several processes are kept in memory at one time. Every time one process has to wait, the operating system takes the CPU away from that process and gives it to another process, and this pattern continues. On a multicore system, this concept of keeping the CPU busy is extended to all processing cores on the system.
CPU–I/O Burst Cycle
Maximum CPU utilization is obtained with multiprogramming. The success of CPU scheduling depends on an observed property of processes, the CPU–I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait, and processes alternate between these two states. Process execution begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution. The CPU burst distribution is of main concern.
(Figure: alternating sequence of CPU and I/O bursts.)

Histogram of CPU-burst Times
The durations of CPU bursts have been measured extensively. Although they vary greatly from process to process and from computer to computer, they tend to have a frequency curve characterized by a large number of short CPU bursts and a small number of long CPU bursts. An I/O-bound program typically has many short CPU bursts; a CPU-bound program might have a few long CPU bursts. This distribution can be important when implementing a CPU-scheduling algorithm.
(Figure: histogram of CPU-burst times, showing a large number of short bursts and a small number of longer bursts.)

CPU Scheduler
The CPU scheduler selects from among the processes in the ready queue and allocates a CPU core to one of them. The queue may be ordered in various ways: a ready queue can be implemented as a first-in, first-out (FIFO) queue, a priority queue, a tree, or simply an unordered linked list. The records in the queues are generally the process control blocks (PCBs) of the processes.
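The FIFO ready queue of PCBs described above can be sketched in a few lines of Python. This is a minimal illustration, not a real kernel structure; the PCB fields shown are simplified placeholders.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class PCB:
    # Simplified process control block: a real PCB also holds saved
    # registers, memory-management information, accounting data, etc.
    pid: int
    state: str = "ready"

ready_queue = deque()             # FIFO ready queue of PCBs
for pid in (1, 2, 3):
    ready_queue.append(PCB(pid))  # a newly ready process joins the tail

def schedule(queue):
    """Select the next process to run: the PCB at the head of the queue."""
    pcb = queue.popleft()
    pcb.state = "running"
    return pcb

running = schedule(ready_queue)
print(running.pid)  # process 1 entered first, so it is dispatched first
```

A priority queue or tree would replace the `deque` with a structure ordered by some key, but the scheduler's job stays the same: pick one PCB and hand it a core.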
CPU scheduling decisions may take place when a process:
1. Switches from the running to the waiting state (e.g., as the result of an I/O request, or an invocation of wait() for the termination of a child process).
2. Switches from the running to the ready state (e.g., when an interrupt occurs).
3. Switches from the waiting to the ready state (e.g., at completion of I/O).
4. Terminates.
For situations 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. For situations 2 and 3, however, there is a choice.

Preemptive and Nonpreemptive Scheduling
When scheduling takes place only under circumstances 1 and 4, the scheduling scheme is nonpreemptive (or cooperative): once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. Otherwise, the scheduling is preemptive.

Preemptive Scheduling and Race Conditions
Unfortunately, preemptive scheduling can result in race conditions when data are shared among several processes:
1. Consider the case of two processes that share data.
2. While one process is updating the data, it is preempted so that the second process can run.
3. The second process then tries to read the data, which are in an inconsistent state.
This issue is explored in detail in Chapter 6. Operating-system kernels can be designed as either nonpreemptive or preemptive. Most modern operating systems are fully preemptive when running in kernel mode; virtually all of them, including Windows, macOS, Linux, and UNIX, use preemptive scheduling algorithms.
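The lost-update race sketched in the three steps above can be made concrete by forcing the interleaving by hand. In this sketch the "preemption" is simulated by the order of the statements rather than by a real scheduler, so the outcome is deterministic.

```python
# Shared data: a counter that two processes both want to increment.
shared = 0

# Process A begins its read-modify-write update: it reads the value...
a_read = shared        # A sees 0

# ...but is preempted before writing back. Process B now runs its
# entire update to completion.
shared = shared + 1    # B writes 1

# A resumes with its stale copy and overwrites B's result.
shared = a_read + 1    # A writes 1: B's increment is lost

print(shared)  # 1, even though two increments ran; the data is inconsistent
```

With nonpreemptive scheduling, A's update could not have been interrupted mid-way, so this particular interleaving could not occur.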
Dispatcher
Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher module gives control of the CPU to the process selected by the CPU scheduler; this function involves:
1. Context switch: switching context from one process to another.
2. Switching to user mode.
3. Jumping to the proper location in the user program to restart (resume) that program.
The dispatcher should be as fast as possible, since it is invoked during every context switch. Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.

Scheduling Criteria
Different CPU-scheduling algorithms have different properties, and many criteria have been suggested for comparing them:
1. CPU utilization – we want to keep the CPU as busy as possible. CPU utilization can range from 0 to 100 percent.
2. Throughput – the number of processes that complete their execution per time unit.
3. Turnaround time – the amount of time to execute a particular process: the interval from the time of submission of a process to the time of its completion. Turnaround time is the sum of the periods spent waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time – the amount of time a process has been waiting in the ready queue (i.e., the sum of the periods spent waiting in the ready queue).
5. Response time – the amount of time from when a request was submitted until the first response is produced. It is the time it takes to start responding, not the time it takes to output the response.
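Turnaround and waiting time follow directly from the definitions above. A small sketch, using a made-up process for illustration, and assuming the process does no I/O in the measured interval (so waiting time is simply turnaround time minus the CPU burst):

```python
def turnaround(arrival, completion):
    # Turnaround time: interval from submission to completion.
    return completion - arrival

def waiting(arrival, completion, burst):
    # With no I/O, time not spent executing was spent in the ready queue.
    return turnaround(arrival, completion) - burst

# Hypothetical process: arrives at t=2, needs 5 ms of CPU, finishes at t=12.
t = turnaround(2, 12)   # 10 ms spent in the system
w = waiting(2, 12, 5)   # 5 ms of that spent waiting in the ready queue
print(t, w)
```

If the process also did I/O, that I/O time would be part of turnaround time but not of waiting time, per definitions 3 and 4 above.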
Scheduling Algorithm Optimization Criteria
It is desirable to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time, and response time. In most cases, we optimize the average measure. Under some circumstances, however, we prefer to optimize the minimum or maximum values rather than the average: for example, to guarantee that all users get good service, we may want to minimize the maximum response time.

Scheduling Algorithms
There are many different CPU scheduling algorithms:
- First-Come, First-Served (FCFS) Scheduling
- Shortest-Job-First (SJF) Scheduling
- Round Robin (RR) Scheduling
- Priority Scheduling
- Multilevel Queue Scheduling
- Multilevel Feedback Queue Scheduling
Note: Although most modern CPU architectures have multiple processing cores, we describe these scheduling algorithms in the context of a single available processing core: a single CPU with a single core, so the system is capable of running only one process at a time.

First-Come, First-Served (FCFS) Scheduling
Under the FCFS algorithm, the process that requests the CPU first is allocated the CPU first. The FCFS scheduling algorithm is nonpreemptive: once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O. The implementation of the FCFS policy is easily managed with a FIFO queue: when a process enters the ready queue, its PCB is linked onto the tail of the queue.
When the CPU is free, it is allocated to the process at the head of the queue, and the running process is then removed from the queue.
Advantage: FCFS is the simplest CPU-scheduling algorithm.
Disadvantage: the average waiting time under the FCFS policy is often quite long. The FCFS algorithm is particularly troublesome for interactive systems, where it is important that each process get a share of the CPU at regular intervals; it would be disastrous to allow one process to keep the CPU for an extended period. Short jobs get stuck behind long jobs.

First-Come, First-Served (FCFS) Scheduling: Example
Process  Burst Time (ms)
P1       24
P2       3
P3       3
Suppose the processes all arrive at time 0 in the order P1, P2, P3. The Gantt chart for the schedule is:
  P1 (0-24) | P2 (24-27) | P3 (27-30)
Waiting time: P1 = 0, P2 = 24, P3 = 27.
Average waiting time = (0 + 24 + 27)/3 = 17 milliseconds.
Turnaround time: P1 = 24, P2 = 27, P3 = 30.
Average turnaround time = (24 + 27 + 30)/3 = 27 milliseconds.

FCFS Scheduling (Cont.)
Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:
  P2 (0-3) | P3 (3-6) | P1 (6-30)
Waiting time: P1 = 6, P2 = 0, P3 = 3.
Average waiting time = (6 + 0 + 3)/3 = 3 milliseconds.
Turnaround time: P1 = 30, P2 = 3, P3 = 6.
Average turnaround time = (30 + 3 + 6)/3 = 13 milliseconds.
This is much better than the previous case, which suffered from the convoy effect: short processes stuck behind a long process. Consider one CPU-bound and many I/O-bound processes.

FCFS Scheduling (Cont.)
Consider the performance of FCFS scheduling in a dynamic situation. Assume we have one CPU-bound process and many I/O-bound processes.
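The two orderings above can be checked with a short FCFS simulation. A minimal sketch, assuming all processes arrive at time 0 and are served in list order:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, all arriving at time 0,
    served in the order given."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each process waits for all earlier bursts
        clock += burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3):
w1 = fcfs_waiting_times([24, 3, 3])
print(w1, sum(w1) / 3)   # [0, 24, 27] -> average 17.0

# Order P2, P3, P1:
w2 = fcfs_waiting_times([3, 3, 24])
print(w2, sum(w2) / 3)   # [0, 3, 6] -> average 3.0
```

The only difference between the two runs is the position of the long burst, which is exactly the convoy effect the text describes.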
The following scenario may result:
1. The CPU-bound process gets and holds the CPU. During this time, all the other processes finish their I/O and move into the ready queue to wait for the CPU; the I/O devices sit idle.
2. The CPU-bound process finishes its CPU burst and moves to an I/O device. All the I/O-bound processes, which have short CPU bursts, execute quickly and move back to the I/O queues; now the CPU sits idle.
3. The CPU-bound process then moves back to the ready queue and is allocated the CPU. Again, all the I/O-bound processes end up waiting in the ready queue until the CPU-bound process is done.
There is a convoy effect as all the other processes wait for the one big process to get off the CPU. This effect results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.

FCFS Scheduling: Example
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5
FCFS Gantt chart:
  P1 (0-8) | P2 (8-12) | P3 (12-21) | P4 (21-26)
Average waiting time = ((0-0) + (8-1) + (12-2) + (21-3))/4 = (0 + 7 + 10 + 18)/4 = 35/4 = 8.75 milliseconds.

Shortest-Job-First (SJF) Scheduling
SJF associates with each process the length of its next CPU burst, and uses these lengths to schedule the process with the shortest time:
1. When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
2. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
A more appropriate term for this scheduling method would be the shortest-next-CPU-burst algorithm, because scheduling depends on the length of the next CPU burst of a process rather than on its total length; but the most common term is SJF.
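When processes have different arrival times, as in the example above, the only change to the FCFS computation is that the CPU may sit idle until the next process arrives. A sketch, with the processes given in arrival order:

```python
def fcfs_average_wait(procs):
    """procs: list of (arrival, burst) pairs in arrival order.
    Returns the average waiting time under FCFS."""
    clock, total_wait = 0, 0
    for arrival, burst in procs:
        start = max(clock, arrival)    # CPU may be idle until the arrival
        total_wait += start - arrival  # time spent in the ready queue
        clock = start + burst
    return total_wait / len(procs)

# P1..P4 from the example above:
avg = fcfs_average_wait([(0, 8), (1, 4), (2, 9), (3, 5)])
print(avg)  # 8.75
```

Each term `start - arrival` matches one term of the hand calculation, e.g. 8 - 1 = 7 for P2.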
Example of SJF
Process  Arrival Time  Burst Time
P1       0.0           6
P2       2.0           8
P3       4.0           7
P4       5.0           3
SJF scheduling Gantt chart (note: the waiting times below treat all four processes as available at time 0):
  P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24)
Waiting time: P1 = 3, P2 = 16, P3 = 9, P4 = 0.
Average waiting time = (3 + 16 + 9 + 0)/4 = 7 milliseconds.
If we were using the FCFS scheduling scheme, the average waiting time would be 10.25 milliseconds.
Exercise: solve this example using the FCFS algorithm.

Shortest-Job-First (SJF) Scheduling (Cont.)
Advantage: SJF is optimal; it gives the minimum average waiting time for a given set of processes.
Disadvantage: starvation; long-running jobs may starve if too many short jobs keep arriving. The real difficulty with SJF is knowing the length of the next CPU request, which makes it difficult to implement: how do you know how long a job will take? To determine the length of the next CPU burst, we could ask the user, or we could estimate it.
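The SJF example can be checked with a short simulation. A minimal sketch, assuming (as in the waiting times above) that all processes are available at time 0, with ties broken in FCFS order because Python's sort is stable:

```python
def sjf_waiting_times(bursts):
    """Nonpreemptive SJF with every process available at time 0.
    bursts: {name: next CPU burst length}. Returns {name: waiting time}."""
    clock, waits = 0, {}
    # Sort by burst length; the stable sort preserves FCFS order on ties.
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = clock   # process waits until all shorter bursts finish
        clock += burst
    return waits

w = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(w)                    # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(w.values()) / 4)  # 7.0
```

Swapping the `sorted(...)` for plain iteration over the dictionary gives the FCFS schedule, which reproduces the 10.25 ms figure quoted above.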
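One standard way to estimate the next CPU burst (developed further in the textbook) is an exponential average of the measured lengths of previous bursts. A sketch, with made-up burst measurements and an initial guess:

```python
def next_burst_estimate(prev_estimate, measured_burst, alpha=0.5):
    # Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n,
    # where t_n is the most recent measured burst and tau_n the old estimate.
    # alpha weights recent history against the accumulated past.
    return alpha * measured_burst + (1 - alpha) * prev_estimate

tau = 10.0                # initial guess for the first burst (an assumption)
for t in (6, 4, 6, 4):    # measured CPU bursts as the process runs
    tau = next_burst_estimate(tau, t)
print(tau)  # 5.0: the estimate has converged toward the recent bursts
```

With alpha = 0, recent history is ignored entirely; with alpha = 1, only the most recent burst counts.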