CPU Scheduling 24CSCI03I PDF
Document Details
The British University in Egypt
Wessam El-Behaidy
Summary
These lecture notes cover CPU scheduling, a fundamental operating system function. Almost all computer resources are scheduled before use. The document explores different scheduling algorithms and their characteristics.
Full Transcript
OPERATING SYSTEMS 24CSCI03I
By Wessam El-Behaidy, Associate Professor, Computer Science
Main Reference: Operating System Concepts, Abraham Silberschatz, 10th Edition

CPU Scheduling

Basic Concepts
▪ Scheduling is a fundamental operating-system function. Almost all computer resources are scheduled before use.

Reminder
We have seen in a previous lecture that:
▪ With multiprogramming, several processes are loaded into memory at the same time
▪ Processes pass through several states (running, waiting, ready, etc.) during their lifetimes (see the process-state diagram)
▪ At any given time, a maximum of one process (per CPU core) can be in execution
▪ The process scheduler (or CPU scheduler) selects among available processes for next execution on a given CPU core
  Ready queue: set of all processes residing in main memory, ready and waiting to execute
  Wait queue: set of processes waiting for an event
  Processes migrate among the various queues as they change state.

Process Execution Cycle
▪ Process execution consists of a cycle of CPU execution (CPU burst) followed by I/O wait (I/O burst)
  An I/O-bound process has many short CPU bursts. A CPU-bound process might have a few long CPU bursts.
▪ The final CPU burst ends with a system request to terminate execution
▪ The distribution of CPU burst durations is of main concern for CPU-scheduling decisions.

CPU-scheduling Components
Two components are involved in the CPU-scheduling function:
1. CPU scheduler (short-term scheduler): selects a process from the processes in memory that are ready to execute and allocates the CPU to that process.
2. Dispatcher: the module that gives control of the CPU's core to the process selected by the CPU scheduler.

Dispatcher
▪ The dispatcher module gives control of the CPU core to the process selected by the CPU scheduler; this involves:
  Switching context
  Switching to user mode
  Jumping to the proper location in the user program to be resumed
▪ Dispatch latency – the time it takes for the dispatcher to stop one process and start another running
▪ The dispatcher needs to be as fast as possible

CPU Scheduler
▪ The CPU scheduler selects from among the processes in the ready queue and allocates a CPU core to one of them
  The ready queue is not necessarily a first-in, first-out (FIFO) queue.
  The records in the queues are generally references to the process control blocks (PCBs) of the processes.
▪ CPU scheduling decisions may take place when a process:
  1. Switches from running to waiting state
  2. Switches from running to ready state (for example, when an interrupt occurs)
  3. Switches from waiting to ready
  4. Terminates
▪ For situations 1 and 4, a new process (if one exists in the ready queue) must be selected for execution. The scheduling scheme is nonpreemptive.
  Nonpreemptive scheduling: the process keeps the CPU until it releases it either by terminating or by switching to the waiting state.
▪ For situations 2 and 3, however, the scheduling scheme is preemptive.
  Virtually all modern operating systems, including Windows, macOS, Linux, and UNIX, use preemptive scheduling algorithms.

Preemptive Scheduling
▪ Can result in race conditions (a synchronization problem) when data are shared among several processes.
▪ Consider the case of two processes that share data. While one process is updating the data, it is preempted so that the second process can run. The second process then tries to read the data, which are in an inconsistent state.
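The lost-update problem described above is easy to demonstrate. The sketch below is illustrative only and not part of the lecture: it uses two Python threads and a shared counter to stand in for two processes sharing data, and the shortened switch interval simply makes preemption in the middle of the read-modify-write more likely.

import sys
import threading

sys.setswitchinterval(1e-6)    # make preemption between bytecodes more frequent

counter = 0                    # shared data

def worker(iterations):
    global counter
    for _ in range(iterations):
        counter += 1           # not atomic: read, add, then write back

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# May print less than 200000: an increment is lost whenever a preemption
# lands between one thread's read and its write-back.
print(counter)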
Scheduling Criteria
▪ Criteria by which a scheduling algorithm is judged to be best:
  CPU utilization – keep the CPU as busy as possible – to be maximized
  Throughput – number of processes that complete their execution per time unit – to be maximized (dependent on the process length)
  Turnaround time – amount of time to execute a particular process – to be minimized
    (the sum of the periods spent waiting in the ready queue, executing on the CPU, and doing I/O)
  Waiting time – amount of time a process has been waiting in the ready queue – to be minimized
  Response time – amount of time from request submission until the first response is produced – to be minimized
    (a process can produce some output fairly early and continue computing new results while previous results are being output to the user)

CPU Scheduling Algorithms
▪ The problem ➔ which of the processes in the ready queue is to be allocated the CPU's core.
▪ Simplifying assumptions:
  Single processing core
  One process at a time
  One thread per process
  Processes are independent
  Measure of comparison is the average waiting time
▪ Several scheduling policies (or "algorithms") exist, with different trade-offs. Let's look at the main ones:
  First-Come, First-Served (FCFS)
  Shortest-Job-First (SJF)
  Shortest-Remaining-Time-First (SRT)
  Round Robin (RR)
  Priority Scheduling
  Priority Scheduling with Round-Robin
  Multilevel Queue Scheduling
  Multilevel Feedback Queue Scheduling

First-Come, First-Served (FCFS) Scheduling
▪ The simplest CPU-scheduling algorithm. Implemented as a FIFO queue. It is nonpreemptive.
▪ EX:
  Process   Burst Time
  P1        24
  P2        3
  P3        3
▪ Suppose the processes arrive in the order: P1, P2, P3. The Gantt chart of the resulting schedule is P1 (0–24), P2 (24–27), P3 (27–30).
▪ Waiting time for P1 = 0; P2 = 24; P3 = 27
▪ Average waiting time: (0 + 24 + 27)/3 = 17
▪ Suppose instead the processes arrive in the order: P2, P3, P1.
▪ Waiting time for P1 = 6; P2 = 0; P3 = 3
▪ Average waiting time: (6 + 0 + 3)/3 = 3
▪ Thus, the average waiting time under an FCFS policy is generally not minimal and may vary.
▪ Assume we have one CPU-bound process and many I/O-bound processes; the following scenario may result:
  When the CPU-bound process gets the CPU, all the other processes finish their I/O and move into the ready queue; the I/O devices are idle.
  The CPU-bound process finishes its CPU burst and moves to an I/O device. All the I/O-bound processes execute quickly and move back to the I/O queues. The CPU sits idle.
▪ There is a convoy effect, as all the other processes wait for the one big process to get off the CPU. This effect results in lower CPU and device utilization.
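The FCFS arithmetic above can be checked mechanically. The helper below is an assumed sketch (not from the slides); it computes FCFS waiting times for processes that all arrive at time 0 and reproduces both orderings of the example.

def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, given CPU bursts in arrival order
    (all processes assumed to arrive at time 0)."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # a process waits for everything queued before it
        elapsed += burst
    return waits

print(fcfs_waiting_times([24, 3, 3]))   # [0, 24, 27] -> average 17
print(fcfs_waiting_times([3, 3, 24]))   # [0, 3, 6]   -> average 3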
Shortest-Job-First (SJF) Scheduling
▪ Associate with each process the length of its next CPU burst
  Use these lengths to schedule the process with the shortest time
  If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
▪ A more appropriate term for this scheduling method would be shortest-next-CPU-burst scheduling.
▪ EX (the waiting times below treat all processes as available at time 0):
  Process   Arrival Time   Burst Time
  P1        0.0            6
  P2        2.0            8
  P3        4.0            7
  P4        5.0            3
▪ SJF scheduling Gantt chart: P4 (0–3), P1 (3–9), P3 (9–16), P2 (16–24)
▪ Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
▪ If we were using the FCFS scheduling scheme, the average waiting time would be 10.25 milliseconds.

Shortest-Remaining-Time-First (SRT) Scheduling
▪ SRT is a preemptive variant of SJF for processes with different arrival times.
▪ EX (remaining burst time shown at each scheduling decision point):
  Process   Arrival Time   Burst Time   Remaining at t=1   t=2   t=3   t=5
  P1        0              8            7                  7     7     7
  P2        1              4            –                  3     2     0
  P3        2              9            –                  –     9     9
  P4        3              5            –                  –     –     5
▪ Gantt chart: P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26)
▪ Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)] / 4 = 26/4 = 6.5
▪ The SJF and SRT algorithms are optimal – they give the minimum average waiting time. But the difficulty is knowing the length of the next CPU burst.

Round Robin (RR)
▪ Similar to FCFS scheduling, but preemption is added
▪ Each process gets a small unit of CPU time called the time quantum q, or time slice (usually 10–100 milliseconds).
▪ After this time has elapsed, the process is preempted and added to the end of the ready queue. A timer interrupts every time quantum to schedule the next process.
▪ If there are n processes in the ready queue and the time quantum is q, then:
  Each process gets 1/n of the CPU time, in chunks of at most q time units at once.
  No process waits more than (n − 1)q time units.

Example of RR with Time Quantum = 4
  Process   Burst Time
  P1        24
  P2        3
  P3        3
▪ Gantt chart: P1 (0–4), P2 (4–7), P3 (7–10), P1 (10–30)
▪ P1 waits for 6 milliseconds (10 − 4), P2 waits for 4 milliseconds, P3 waits for 7 milliseconds.
▪ The average waiting time is 17/3 = 5.66 milliseconds.
▪ Higher average waiting time than SJF, but better response.

Round Robin (RR) Performance
▪ Performance depends heavily on the size of q:
  q large ➔ behaves like FIFO (FCFS)
  q small ➔ the context-switch overhead is too high
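The round-robin example above can likewise be verified with a short simulation. The sketch below is illustrative (it assumes every process arrives at time 0); it replays the ready queue with a fixed time quantum and accumulates each process's time spent waiting.

from collections import deque

def rr_waiting_times(bursts, quantum):
    """Waiting time of each process under round robin, all arriving at time 0."""
    remaining = list(bursts)
    waits = [0] * len(bursts)
    last_left_cpu = [0] * len(bursts)        # time each process last left the CPU
    ready = deque(range(len(bursts)))
    clock = 0
    while ready:
        i = ready.popleft()
        waits[i] += clock - last_left_cpu[i] # time spent in the ready queue
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        last_left_cpu[i] = clock
        if remaining[i] > 0:
            ready.append(i)                  # preempted: back to the end of the queue
    return waits

print(rr_waiting_times([24, 3, 3], 4))       # [6, 4, 7] -> average 17/3 ≈ 5.66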
Priority Scheduling
▪ A priority number (an integer) is associated with each process
▪ The CPU is allocated to the process with the highest priority (assume the smallest integer means the highest priority)
▪ Can be preemptive or nonpreemptive
▪ SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time

Example: Non-preemptive Priority Scheduling
  Process   Burst Time   Priority
  P1        10           3
  P2        1            1
  P3        2            4
  P4        1            5
  P5        5            2
▪ Priority scheduling Gantt chart: P2 (0–1), P5 (1–6), P1 (6–16), P3 (16–18), P4 (18–19)
▪ Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2

Example: Preemptive Priority Scheduling
  Process   Arrival Time   Burst Time   Priority
  P1        0              10           3
  P2        1              1            1
  P3        2              2            4
  P4        3              3            2
▪ Gantt chart: P1 (0–1), P2 (1–2), P1 (2–3), P4 (3–6), P1 (6–14), P3 (14–16)
▪ Average waiting time = (4 + 0 + (14 − 2) + 0)/4 = 4

Example: Non-Preemptive Priority Scheduling
  (same processes, arrival times, bursts, and priorities as above)
▪ Gantt chart: P1 (0–10), P2 (10–11), P4 (11–14), P3 (14–16)
▪ Average waiting time = (0 + (10 − 1) + (14 − 2) + (11 − 3))/4 = 29/4 = 7.25

Priority Scheduling
▪ Problem: starvation – low-priority processes may never execute
  Solution: aging – as time progresses, increase the priority of the process
▪ Equal-priority processes can be scheduled in FCFS order, or using round-robin:

Priority Scheduling with Round-Robin
  Process   Burst Time   Priority
  P1        4            3
  P2        5            2
  P3        8            2
  P4        7            1
  P5        3            3
▪ Gantt chart with time quantum = 2

Multilevel Queue Scheduling
▪ If all processes are in the same queue, an O(n) search may be necessary to determine the highest-priority process.
▪ In multilevel queue scheduling, the ready queue consists of multiple queues.
  Each queue has a distinct priority.
  Each queue might have its own scheduling algorithm.
▪ Queue priority may be a fixed number (as in the previous figure) or may be based upon process type (as in the figure below).
▪ It is also possible to time-slice CPU time among the queues. Ex: with foreground–background queues, the foreground (interactive) queue can be given 80 percent of the CPU time for RR scheduling among its processes, while the background (batch) queue receives 20 percent of the CPU to give to its processes on an FCFS basis.

Multilevel Feedback Queue
▪ A process can move between the various queues.
▪ A multilevel-feedback-queue scheduler is defined by:
  The scheduling algorithm for each queue
  The method used to determine when to upgrade or demote a process
▪ Example with three queues:
  Q0 – RR with time quantum 8 milliseconds
  Q1 – RR with time quantum 16 milliseconds
  Q2 – FCFS
▪ A process in Q0 gains the CPU for 8 ms ➔ if it does not finish, it is moved to queue Q1.
  At Q1 a process is served in RR for 16 ms ➔ if it still does not complete, it is moved to queue Q2.
▪ It is the most general CPU-scheduling algorithm. But it is also the most complex algorithm, since it requires some means to select between the different choices.

Reading List
▪ You should study from books, not slides! Reading material for this lecture:
  Silberschatz, Galvin, Gagne. Operating System Concepts, Tenth Edition: Chapter 5: CPU Scheduling (Sec. 5.1 to 5.3)
▪ Credits: Some of the material in these slides is reused (with modifications) from the official slides of the book Operating System Concepts, Tenth Edition, as permitted by their copyright note.

Thank You
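As a final check on the non-preemptive priority example above (average waiting time 8.2), here is a minimal sketch. It is illustrative only, assuming all five processes arrive at time 0 and that a smaller priority number means a higher priority.

def priority_waiting_times(procs):
    """Non-preemptive priority scheduling with all processes arriving at time 0.
    procs: list of (name, burst, priority); smaller priority value = higher priority."""
    waits, clock = {}, 0
    for name, burst, _prio in sorted(procs, key=lambda p: p[2]):
        waits[name] = clock                  # waits until every higher-priority burst is done
        clock += burst
    return waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
w = priority_waiting_times(procs)
print(w)                                     # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(w.values()) / len(w))              # 8.2, matching the example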